
BIG-IP® Global Traffic Manager™

Concepts Guide

version 11.0

MAN-0346-00
Product Version
This guide applies to product version 11.0 of the BIG-IP® Global Traffic Manager™.

Publication Date
This guide was published on August 11, 2011.

Legal Notices
Copyright
Copyright 2011, F5 Networks, Inc. All rights reserved.
F5 Networks, Inc. (F5) believes the information it furnishes to be accurate and reliable. However, F5
assumes no responsibility for the use of this information, nor any infringement of patents or other rights of
third parties which may result from its use. No license is granted by implication or otherwise under any
patent, copyright, or other intellectual property right of F5 except as specifically described by applicable
user licenses. F5 reserves the right to change specifications at any time without notice.

Trademarks
3DNS, Access Policy Manager, Acopia, Acopia Networks, Advanced Client Authentication, Advanced
Routing, APM, Application Security Manager, ARX, AskF5, ASM, BIG-IP, Cloud Extender,
CloudFucious, CMP, Data Manager, DevCentral, DevCentral [DESIGN], DNS Express, DSC, DSI, Edge
Client, Edge Gateway, Edge Portal, EM, Enterprise Manager, F5, F5 [DESIGN], F5 Management Pack, F5
Networks, F5 World, Fast Application Proxy, Fast Cache, FirePass, Global Traffic Manager, GTM, IBR,
Intelligent Browser Referencing, Intelligent Compression, IPv6 Gateway, iApps, iControl, iHealth,
iQuery, iRules, iRules OnDemand, iSession, IT agility. Your way., L7 Rate Shaping, LC, Link Controller,
Local Traffic Manager, LTM, Message Security Module, MSM, Netcelera, OneConnect, Packet Velocity,
Protocol Security Module, PSM, Real Traffic Policy Builder, ScaleN, SSL Acceleration, StrongBox,
SuperVIP, SYN Check, TCP Express, TDR, TMOS, Traffic Management Operating System,
TrafficShield, Transparent Data Reduction, VIPRION, vCMP, WA, WAN Optimization Manager,
WANJet, WebAccelerator, WOM, and ZoneRunner, are trademarks or service marks of F5 Networks, Inc.,
in the U.S. and other countries, and may not be used without F5's express written consent.
All other product and company names herein may be trademarks of their respective owners.

Patents
This product may be protected by U.S. Patents 6,374,300; 6,473,802; 6,970,733; 7,047,301; 7,707,289.
This list is believed to be current as of August 11, 2011.

Export Regulation Notice


This product may include cryptographic software. Under the Export Administration Act, the United States
government may consider it a criminal offense to export this product from the United States.

RF Interference Warning
This is a Class A product. In a domestic environment this product may cause radio interference, in which
case the user may be required to take adequate measures.

FCC Compliance
This equipment has been tested and found to comply with the limits for a Class A digital device pursuant
to Part 15 of FCC rules. These limits are designed to provide reasonable protection against harmful
interference when the equipment is operated in a commercial environment. This unit generates, uses, and
can radiate radio frequency energy and, if not installed and used in accordance with the instruction manual,
may cause harmful interference to radio communications. Operation of this equipment in a residential area
is likely to cause harmful interference, in which case the user, at his own expense, will be required to take
whatever measures may be required to correct the interference.

BIG-IP® Global Traffic Manager™ Concepts Guide i


Any modifications to this device, unless expressly approved by the manufacturer, can void the user's
authority to operate this equipment under part 15 of the FCC rules.

Canadian Regulatory Compliance


This Class A digital apparatus complies with Canadian ICES-003.

Standards Compliance
This product conforms to the IEC, European Union, ANSI/UL and Canadian CSA standards applicable to
Information Technology products at the time of manufacture.

Acknowledgments
This product includes software developed by Gabriel Forté.
This product includes software developed by Bill Paul.
This product includes software developed by Jonathan Stone.
This product includes software developed by Manuel Bouyer.
This product includes software developed by Paul Richards.
This product includes software developed by the NetBSD Foundation, Inc. and its contributors.
This product includes software developed by the Politecnico di Torino, and its contributors.
This product includes software developed by the Swedish Institute of Computer Science and its
contributors.
This product includes software developed by the University of California, Berkeley and its contributors.
This product includes software developed by the Computer Systems Engineering Group at the Lawrence
Berkeley Laboratory.
This product includes software developed by Christopher G. Demetriou for the NetBSD Project.
This product includes software developed by Adam Glass.
This product includes software developed by Christian E. Hopps.
This product includes software developed by Dean Huxley.
This product includes software developed by John Kohl.
This product includes software developed by Paul Kranenburg.
This product includes software developed by Terrence R. Lambert.
This product includes software developed by Philip A. Nelson.
This product includes software developed by Herb Peyerl.
This product includes software developed by Jochen Pohl for the NetBSD Project.
This product includes software developed by Chris Provenzano.
This product includes software developed by Theo de Raadt.
This product includes software developed by David Muir Sharnoff.
This product includes software developed by SigmaSoft, Th. Lockert.
This product includes software developed for the NetBSD Project by Jason R. Thorpe.
This product includes software developed by Jason R. Thorpe for And Communications,
http://www.and.com.
This product includes software developed for the NetBSD Project by Frank Van der Linden.
This product includes software developed for the NetBSD Project by John M. Vinopal.
This product includes software developed by Christos Zoulas.
This product includes software developed by the University of Vermont and State Agricultural College and
Garrett A. Wollman.
In the following statement, "This software" refers to the Mitsumi CD-ROM driver: This software was
developed by Holger Veit and Brian Moore for use with "386BSD" and similar operating systems.
"Similar operating systems" includes mainly non-profit oriented systems for research and education,
including but not restricted to "NetBSD," "FreeBSD," "Mach" (by CMU).
This product includes software developed by the Apache Group for use in the Apache HTTP server project
(http://www.apache.org/).
This product includes software licensed from Richard H. Porter under the GNU Library General Public
License (© 1998, Red Hat Software), www.gnu.org/copyleft/lgpl.html.

This product includes the standard version of Perl software licensed under the Perl Artistic License (©
1997, 1998 Tom Christiansen and Nathan Torkington). All rights reserved. You may find the most current
standard version of Perl at http://www.perl.com.
This product includes software developed by Jared Minch.
This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit
(http://www.openssl.org/).
This product includes cryptographic software written by Eric Young (eay@cryptsoft.com).
This product contains software based on oprofile, which is protected under the GNU Public License.
This product includes RRDtool software developed by Tobi Oetiker (http://www.rrdtool.com/index.html)
and licensed under the GNU General Public License.
This product contains software licensed from Dr. Brian Gladman under the GNU General Public License
(GPL).
This product includes software developed by the Apache Software Foundation <http://www.apache.org/>.
This product includes Hypersonic SQL.
This product contains software developed by the Regents of the University of California, Sun
Microsystems, Inc., Scriptics Corporation, and others.
This product includes software developed by the Internet Software Consortium.
This product includes software developed by Nominum, Inc. (http://www.nominum.com).
This product contains software developed by Broadcom Corporation, which is protected under the GNU
Public License.
This product contains software developed by MaxMind LLC, and is protected under the GNU Lesser
General Public License, as published by the Free Software Foundation.
This product includes the GeoPoint Database developed by Quova, Inc. and its contributors.
This product includes software developed by Balazs Scheidler <bazsi@balabit.hu>, which is protected
under the GNU Public License.
This product includes software developed by NLnet Labs and its contributors.
This product includes software written by Steffen Beyer and licensed under the Perl Artistic License and
the GPL.
This product includes software written by Makamaka Hannyaharamitu © 2007-2008.

Table of Contents

1
Overview
Global Traffic Manager .................................................................................................................. 1-1
Security features .................................................................................................................... 1-1
Local Traffic Manager resources ....................................................................................... 1-2
Internet protocol and network management support ................................................. 1-2
The Configuration utility .............................................................................................................. 1-3
The Traffic Management Shell (tmsh) ........................................................................................ 1-4

2
Components
Introduction ..................................................................................................................................... 2-1
Physical network components .................................................................................................... 2-2
Data centers ........................................................................................................................... 2-2
Servers ..................................................................................................................................... 2-2
Links ......................................................................................................................................... 2-2
Virtual servers ........................................................................................................................ 2-3
Logical network components ...................................................................................................... 2-4
Listeners .................................................................................................................................. 2-4
Pools ......................................................................................................................................... 2-4
Wide IPs .................................................................................................................................. 2-4
Distributed applications ....................................................................................................... 2-5

3
Setup and Configuration
Introduction ..................................................................................................................................... 3-1
Network Topology ........................................................................................................................ 3-2
Redundant system configuration ................................................................................................ 3-3
System communications ............................................................................................................... 3-4
Synchronization .............................................................................................................................. 3-6
Synchronization groups ....................................................................................................... 3-6
DNS zone file synchronization .......................................................................................... 3-7
Global monitor settings ................................................................................................................ 3-8
Heartbeat interval ................................................................................................................. 3-8
Synchronous monitor queries ............................................................................................ 3-9
Disabled resources ............................................................................................................... 3-9
Domain validation ........................................................................................................................ 3-10

4
Listeners
Introduction ..................................................................................................................................... 4-1
Node mode ............................................................................................................................ 4-1
Bridge or Router mode ....................................................................................................... 4-1
Wildcard listener .................................................................................................................. 4-1
Listeners and Network Traffic .................................................................................................... 4-3
Listeners and VLANs ..................................................................................................................... 4-4

5
The Physical Network
Introduction ..................................................................................................................................... 5-1
Data centers .................................................................................................................................... 5-2
Servers .............................................................................................................................................. 5-3

Global Traffic Manager systems ......................................................................................... 5-3
Local Traffic Manager systems ........................................................................................... 5-4
Third-party load balancing servers .................................................................................... 5-4
Third-party host servers ..................................................................................................... 5-5
Monitors and servers ........................................................................................................... 5-5
Availability thresholds .......................................................................................................... 5-5
Server thresholds .................................................................................................................. 5-6
Virtual server thresholds ..................................................................................................... 5-6
Virtual servers ................................................................................................................................. 5-8
Links .................................................................................................................................................. 5-9
Links and monitors ............................................................................................................... 5-9
Link weighting and billing properties ................................................................................ 5-9

6
The Logical Network
Introduction ..................................................................................................................................... 6-1
Pools .................................................................................................................................................. 6-2
Virtual servers and Ratio mode load balancing .............................................................. 6-3
Canonical pool names .......................................................................................................... 6-3
Wide IPs ........................................................................................................................................... 6-5
Wildcard characters in wide IP names ............................................................................. 6-5
Wide IPs and pools ............................................................................................................... 6-6
Incorporating iRules ............................................................................................................. 6-7
NoError response for IPv6 resolution ............................................................................ 6-7
Distributed applications ................................................................................................................ 6-8
Dependencies for distributed applications ...................................................................... 6-8
Distributed application traffic ............................................................................................. 6-9
Persistent connections ....................................................................................................... 6-10

7
Load Balancing
About load balancing and Global Traffic Manager .................................................................. 7-1
Static load balancing modes ......................................................................................................... 7-3
Drop Packet mode ............................................................................................................... 7-3
Fallback IP mode .................................................................................................................... 7-4
Global Availability mode ...................................................................................................... 7-4
None mode ............................................................................................................................ 7-4
Ratio mode ............................................................................................................................. 7-5
Return to DNS mode .......................................................................................................... 7-5
Round Robin mode ............................................................................................................... 7-5
Static Persist mode ............................................................................................................... 7-5
Topology mode ..................................................................................................................... 7-6
Dynamic load balancing modes ................................................................................................... 7-6
Completion Rate mode ....................................................................................................... 7-6
CPU mode .............................................................................................................................. 7-6
Hops mode ............................................................................................................................. 7-7
Kilobyte/Second mode ......................................................................................................... 7-7
Least Connections mode .................................................................................................... 7-7
Packet Rate mode ................................................................................................................. 7-7
Quality of Service mode ...................................................................................................... 7-7
Round Trip Times mode ..................................................................................................... 7-8
Virtual Server Score mode ................................................................................................. 7-8
VS Capacity mode ................................................................................................................. 7-8
Dynamic Ratio option .......................................................................................................... 7-9

Fallback load balancing method ................................................................................................. 7-10
Additional load balancing options ............................................................................................. 7-11

8
Connections
Connection management ............................................................................................................. 8-1
Resource health .............................................................................................................................. 8-2
Resource availability ...................................................................................................................... 8-3
Limit settings .......................................................................................................................... 8-3
Monitor availability requirements ..................................................................................... 8-3
Virtual server dependency .................................................................................................. 8-4
Restoration of availability ............................................................................................................. 8-5
Persistent connections .................................................................................................................. 8-6
Drain persistent requests option ...................................................................................... 8-6
Last resort pool .............................................................................................................................. 8-7

9
Topologies
Introduction ..................................................................................................................................... 9-1
IP geolocation data updates ......................................................................................................... 9-2
Topology records ........................................................................................................................... 9-3
Topology load balancing ............................................................................................................... 9-4
Longest Match load balancing option ............................................................................... 9-4

10
DNSSEC Keys and Zones
About DNSSEC ............................................................................................................................ 10-1
DNSSEC keys and zones ............................................................................................................ 10-1
Automatic key rollover ...................................................................................................... 10-1
DNSSEC resource records .............................................................................................. 10-3

11
Health and Performance Monitors
Introduction ................................................................................................................................... 11-1
Monitor types ...................................................................................................................... 11-2
Pre-configured and custom monitors ............................................................................ 11-2
Special configuration considerations ........................................................................................ 11-5
Monitor destinations .......................................................................................................... 11-5
Transparent and reverse modes ..................................................................................... 11-5
Virtual server status ........................................................................................................... 11-7
Monitors and resources ............................................................................................................. 11-7
Monitor associations .......................................................................................................... 11-8

12
Statistics
Introduction ................................................................................................................................... 12-1
Statistics access ............................................................................................................................. 12-2
Status Summary screen ............................................................................................................... 12-2
Types of statistics ......................................................................................................................... 12-3
Distributed application statistics ..................................................................................... 12-3
Wide IP statistics ................................................................................................................. 12-5
Pool statistics ....................................................................................................................... 12-6

Data center statistics ......................................................................................... 12-7
Link statistics ........................................................................................................ 12-8
Server statistics .................................................................................................................... 12-9
Virtual server statistics ....................................................................................................12-11
Paths statistics ....................................................................................................................12-12
Local DNS statistics ..........................................................................................................12-13
Persistence records ...................................................................................................................12-15

13
Metric Collection
Introduction ................................................................................................................................... 13-1
About metrics ............................................................................................................................... 13-2
Probes and local DNS servers .................................................................................................. 13-3
TTL and timer values .................................................................................................................. 13-5

14
Performance Data
Introduction ................................................................................................................................... 14-1
Performance data graphs ............................................................................................................ 14-1
Performance graph ............................................................................................................. 14-1
Request Breakdown graph ................................................................................................ 14-1

15
iRules
Introduction ................................................................................................................................... 15-1
What is an iRule? .......................................................................................................................... 15-2
Event-based traffic management ............................................................................................... 15-3
Event declarations ............................................................................................................... 15-3

16
ZoneRunner
ZoneRunner utility ....................................................................................................................... 16-1
ZoneRunner tasks ............................................................................................................... 16-1
Zone files ....................................................................................................................................... 16-2
Types of zone files .............................................................................................................. 16-2
Zone file import .................................................................................................................. 16-2
Resource records ......................................................................................................................... 16-4
Types of resource records ............................................................................................... 16-4
Views ............................................................................................................................................... 16-6
Named.conf ................................................................................................................................... 16-7

A
big3d Agent
Introduction .....................................................................................................................................A-1
Metrics ..............................................................................................................................................A-2
Data collection with the big3d agent ................................................................................A-3
Data collection and broadcast sequence .........................................................................A-3
Communications ............................................................................................................................A-5
iQuery and the big3d agent ................................................................................................A-5
iQuery and firewalls .............................................................................................................A-6
Communications between Global Traffic Managers, big3d agents, and
local DNS servers .................................................................................................................A-7

B
Probes
Introduction ..................................................................................................................................... B-1
About iQuery .................................................................................................................................. B-2
Probe responsibility ....................................................................................................................... B-3
Probes and the big3d agent .......................................................................................................... B-5
LDNS probes .................................................................................................................................. B-7
Probes and log entries .................................................................................................................. B-9
Probe information in the log file ........................................................................................ B-9

Glossary

Index

BIG-IP® Global Traffic Manager™ Concepts Guide xi


1
Overview

• Global Traffic Manager

• The Configuration utility

• The Traffic Management Shell (tmsh)



Global Traffic Manager


You can use BIG-IP® Global Traffic Manager™ to monitor the availability
and performance of global resources and use that information to manage
network traffic patterns. Global Traffic Manager uses load balancing
algorithms, topology-based routing, and iRules® to control and distribute
traffic according to specific policies.
Global Traffic Manager is one of several products which compose the
BIG-IP® product family. All products in the BIG-IP product family run on
the powerful Traffic Management Operating System®, commonly referred
to as TMOS®.
Global Traffic Manager provides a variety of features that meet special
needs. For example, with this product you can:
• Ensure wide-area persistence by maintaining a mapping between a local
domain name system (DNS) server (LDNS) and a virtual server in a wide
IP pool
• Direct local clients to local servers for globally-distributed sites using
Topology mode load balancing
• Change the load balancing configuration according to current traffic
patterns or time of day
• Customize load balancing modes
• Set up global load balancing among Local Traffic Manager™ systems
and other load balancing hosts
• Monitor real-time network conditions
• Configure a content delivery network (CDN) using a CDN provider
• Guarantee multiple port availability for e-commerce sites

Security features
Global Traffic Manager offers a variety of security features that can help
prevent hostile attacks on your site or equipment.
◆ Secure administrative connections
Global Traffic Manager supports Secure Shell (SSH) administrative
connections for remote administration from the command line. The web
server, which hosts the web-based Configuration utility, supports SSL
connections as well as user authentication.
◆ Secure iQuery communications
Global Traffic Manager supports web certificate authentication for
BIG-IP iQuery® protocol communications between itself and other
systems running the big3d agent.
◆ TCP wrappers
Global Traffic Manager supports the use of TCP wrappers to provide an
extra layer of security for network connections.


Local Traffic Manager resources


If you use Global Traffic Manager in conjunction with a Local Traffic
Manager system, it is important to understand the following network
resources. Although you do not manage these resources directly through
Global Traffic Manager, understanding their role in your network
configuration can assist you in optimizing your network’s availability and
performance.
◆ Self IP address
A self IP address is an IP address that you define on a VLAN of a
BIG-IP system. Note that this concept does not apply to the management
IP address of a BIG-IP system or to IP addresses on other devices.
◆ Node
A node is a logical object on the BIG-IP system that identifies the IP
address of a physical resource on the network, such as a web server. You
define a node object in Local Traffic Manager.

Internet protocol and network management support


In addition to the standard DNS and DNSSEC protocols, the Global Traffic
Manager supports the BIG-IP iQuery protocol, which is used for collecting
dynamic load balancing information. Global Traffic Manager also supports
administrative protocols, such as Simple Network Management Protocol
(SNMP), and Simple Mail Transfer Protocol (SMTP) (outbound only), for
performance monitoring and notification of system events. For
administrative purposes, you can use SSH, RSH, Telnet, and FTP. The
Configuration utility supports HTTPS, for secure web browser connections
using SSL, as well as standard HTTP connections.
You can use the proprietary SNMP agent to monitor status and current
traffic flow using popular network management tools. This agent provides
detailed data such as current connections being handled by each virtual
server.


The Configuration utility


The Configuration utility is a browser-based graphical user interface that
you use to configure and monitor Global Traffic Manager. Using the
Configuration utility, you can define the load balancing configuration along
with the network setup, including data centers, synchronization groups, and
servers used for load balancing and path probing. In addition, you can
configure advanced features, such as Topology mode settings and SNMP
agents. The Configuration utility also monitors network traffic, current
connections, load balancing statistics, performance metrics, and the
operating system itself. The Welcome screen of the Configuration utility
provides convenient access to downloads such as the SNMP MIB, and
documentation for third-party applications, such as ZebOS®.
For the most current list of the supported browsers for the Configuration
utility, see the current release note on the AskF5™ Knowledge Base web
site, https://support.f5.com.


The Traffic Management Shell (tmsh)


The Traffic Management Shell (tmsh) is a utility that you can use to
configure Global Traffic Manager from the command line. Using tmsh, you
can set up your network and configure local and global traffic management.
In addition, you can configure advanced features, such as Topology mode
settings and SNMP agents. You can also use tmsh to display information
about performance, load balancing decisions, network traffic, and the
operating system itself. For information about using tmsh to configure the
system, see the tmsh man pages.

2
Components

• Introduction

• Physical network components

• Logical network components



Introduction
For the BIG-IP® Global Traffic Manager™ system to operate effectively,
you need to define the components that make up the segments of your
network. These components include physical components, such as data
centers and servers, as well as logical components, such as wide IPs,
addresses, and pools. By defining these components, you essentially build a
network map that Global Traffic Manager can use to direct Domain Name
System (DNS) traffic to the best available resource.
The most basic configuration of Global Traffic Manager includes:
• A listener that is a specific virtual server that identifies network traffic
for global traffic management
• A data center that contains at least one server
• A server that contains at least one resource or virtual server

After this basic configuration is complete, Global Traffic Manager has enough information available to begin directing DNS traffic. You can increase the system’s capabilities by adding additional network components.
The components that you define in Global Traffic Manager can be divided
into two basic categories:
• Physical components
• Logical components
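The relationships among these components can be sketched informally. The following is a minimal model of the basic configuration described above; the class and field names are illustrative only, not the product's internal representation:

```python
from dataclasses import dataclass, field

# Hypothetical model of the minimal Global Traffic Manager configuration:
# a listener, a data center containing one server, and a server containing
# one virtual server. All names and addresses are illustrative.

@dataclass
class VirtualServer:
    ip: str
    port: int

@dataclass
class Server:
    name: str
    virtual_servers: list = field(default_factory=list)

@dataclass
class DataCenter:
    name: str
    servers: list = field(default_factory=list)

@dataclass
class Listener:
    ip: str
    port: int = 53  # the DNS query port

# Minimal configuration: one listener, one data center, one server,
# one virtual server.
vs = VirtualServer("10.0.0.10", 80)
web = Server("web-host", [vs])
nyc = DataCenter("New York", [web])
listener = Listener("192.0.2.1")
```
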


Physical network components


Several components that you can configure on the Global Traffic Manager system have a direct correlation to a physical location or device on the
network. These components include:
• Data centers
• Servers
• Links
• Virtual servers

Data centers
Data centers are the top level of your physical network setup. You must
configure one data center for each physical location in your global network.
When you create a data center in Global Traffic Manager, you define the
servers (Global Traffic Manager systems, Local Traffic Manager™ systems,
Link Controller™ systems, hosts, and routers) that reside at that location.
A data center can contain any type of server. For example, one data center
can contain a Global Traffic Manager system and a host, while another
might contain two Global Traffic Manager systems and eight Local Traffic
Manager systems.

Servers
A server is a physical device on which you can configure one or more
virtual servers. The servers that you define can include both BIG-IP systems
and third-party servers, such as Local Traffic Manager systems and systems
running Microsoft® Windows® 2000 Server.
One server that you must define is Global Traffic Manager. This places the
system on the network map.

Links
A link is a logical representation of a physical device (router) that connects
your network to the Internet. You can assign multiple links to each data
center by logically attaching links to a collection of servers in order to
manage access to your data sources. Configuring links is optional, although
they are very useful when determining resource availability.


Virtual servers
Servers, excluding Global Traffic Manager systems and Link Controller
systems, contain at least one virtual server. A virtual server, in the context
of Global Traffic Manager, is a combination of an IP address and a port
number that points to a resource that provides access to an application or
data source on your network. In the case of host servers, this IP address and
port number likely point to the resource itself. With load balancing systems,
such as Local Traffic Manager, these virtual servers are often proxies that
allow the load balancing server to manage the resource request across a
multitude of resources. Virtual servers are the ultimate destination for
connection requests.


Logical network components


In addition to the physical components of your network, Global Traffic
Manager also handles DNS traffic over logical components. Logical
network components consist of network elements that may not represent a
physical location or device. These components include:
• Listeners
• Pools
• Wide IPs
• Distributed applications

Listeners
To communicate with the rest of your network, you must configure Global
Traffic Manager so that it can correctly identify the resolution requests for
which it is responsible. A listener is an object that monitors the network for
DNS queries, and thus is critical for global traffic management. The listener
instructs the system to monitor the network traffic destined for a specific IP
address.
In most installations, when you define a listener for Global Traffic Manager,
you use the IP address of Global Traffic Manager; however, there are many
different ways you can configure listeners so that the system handles DNS
traffic correctly.

Pools
A pool is a collection of virtual servers that can reside on multiple network
servers. When you define the virtual servers to which Global Traffic
Manager directs DNS traffic, you combine those virtual servers into pools.
You can then configure Global Traffic Manager to direct traffic to a specific
virtual server within a pool, using a specific load balancing method.
As a pool member, a resource can have a different set of options than it has as a standalone virtual server. When you add a virtual server to a pool, it becomes a pool member to which you can apply monitors, iRules®, and other configuration options.

Wide IPs
One of the most common logical components you create in Global Traffic
Manager is a wide IP. A wide IP maps a fully-qualified domain name to one
or more pools of virtual servers that host the domain’s content.


When an LDNS requests a connection to a specific domain name, the wide IP definition specifies which pools of virtual servers are eligible to answer the request, and which load balancing modes to use in choosing a pool.
Global Traffic Manager then load balances the request across the virtual
servers within that pool to resolve the request.
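The resolution flow described above can be sketched as follows. This is an illustrative model only: the first-pool selection and round-robin member rotation stand in for the configurable load balancing modes, and all names and addresses are hypothetical.

```python
import itertools

# Illustrative sketch of a wide IP mapping an FQDN to pools of virtual
# servers. Pool and member selection here are simple stand-ins for the
# product's configurable load balancing modes.

wide_ips = {
    "www.example.com": {
        "pools": [
            {"members": ["10.0.0.10", "10.0.0.11"]},
            {"members": ["10.1.0.10"]},
        ],
    },
}

_rr = {}  # per-FQDN round-robin state

def resolve(fqdn):
    """Return one virtual server address for an eligible pool, or None."""
    wip = wide_ips.get(fqdn)
    if wip is None:
        return None  # not a configured wide IP
    pool = wip["pools"][0]          # stand-in for pool-level load balancing
    cycle = _rr.setdefault(fqdn, itertools.cycle(pool["members"]))
    return next(cycle)              # stand-in for member-level load balancing
```
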

Distributed applications
A distributed application is a collection of one or more wide IPs, data
centers, and links that serve as a single application to a web site visitor. A
distributed application is the highest-level component that Global Traffic
Manager supports. You can configure Global Traffic Manager so that the
availability of distributed applications is dependent on a specific data center,
link, or server. For example, if the New York data center goes offline, this
information causes the wide IP and its corresponding distributed application
to become unavailable. Consequently, the system does not send resolution
requests to any of the distributed application resources, until the entire
application becomes available again.

3
Setup and Configuration

• Introduction

• Network Topology

• Redundant system configuration

• System communications

• Synchronization

• Global monitor settings

• Domain validation

Introduction
When you install a BIG-IP® Global Traffic Manager™ system on the
network, the actions you take to integrate it into the network fall into two
categories: setup tasks and configuration tasks.
Setup tasks are tasks that apply either to Global Traffic Manager itself, or
universally to all other components that you configure later, such as servers,
data centers, and wide IPs. Examples of setup tasks include running the
Setup utility. This utility guides you through licensing the product, assigning
an IP address to the management port of the system, assigning self IP
addresses, enabling high-availability, and configuring the passwords for the
root and administrator accounts.
Configuration tasks are tasks in which you define how you want Global
Traffic Manager to manage global traffic, such as load balancing methods,
pools and pool members, and iRules®. These tasks affect specific aspects of
how you want the system to manage Domain Name System (DNS) traffic.


Network Topology
Global Traffic Manager is designed to manage DNS traffic as it moves from
outside the network, to the appropriate resource, and back again. The
management capabilities of the system require that it has an accurate
definition of the sections of the network over which it has jurisdiction. You
must define network elements such as data centers, servers (including
BIG-IP systems), and virtual servers in Global Traffic Manager. Defining
these elements is similar to drawing a network diagram; you include all of
the relevant components in such a diagram in order to have an accurate
depiction of how the system works as a whole.

Note

In existing version 9.x systems, by default, the IP addresses of the system servers are in the default route domain.

As part of specifying this network topology, you configure Global Traffic Manager itself. You specify the role of Global Traffic Manager within the
network, as well as what interactions it can and cannot have with other
network components. Without this configuration, many of the capabilities of
Global Traffic Manager cannot operate effectively. Additionally, you can
define a Global Traffic Manager redundant system configuration for high
availability.


Redundant system configuration


A redundant system configuration is a set of two Global Traffic Manager
systems: one operating as the active unit, the other operating as the standby
unit. If the active unit goes offline, the standby unit immediately assumes
responsibility for managing DNS traffic. The new active unit remains active
until another event occurs that causes the unit to go offline, or until you
manually reset the status of each unit.
Global Traffic Manager supports two methods of checking the status of the
peer system in a redundant system configuration:
◆ Hardware-based failover
In a redundant system configuration that has been set up with
hardware-based failover, the two units in the system are connected to
each other directly using a failover cable attached to the serial ports. The
standby unit checks the status of the active unit once every second using
this serial link.
◆ Network-based failover
In a redundant system configuration that has been set up with
network-based failover, the two units in the system communicate with
each other across an Ethernet network instead of across a dedicated
failover serial cable. Using the Ethernet connection, the standby unit
checks the status of the active unit once every second.
In a network-based failover configuration, if a client queries a failed
Global Traffic Manager, and does not receive an answer, the client
automatically re-issues the request (after five seconds), and the standby
unit, functioning as the active unit, responds.
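The client-side retry behavior described above can be sketched as a UDP send-and-retry loop. This is an illustrative sketch, not a DNS implementation; the five-second timeout mirrors the behavior described in this section, and the payload is arbitrary.

```python
import socket

# Sketch of a resolver's retry behavior: send a UDP datagram, and if no
# answer arrives within the timeout, re-issue the request so the newly
# active unit can respond.

def query_with_retry(payload, address, timeout=5.0, retries=1):
    """Send payload over UDP; retry on timeout, as a resolver would."""
    for attempt in range(retries + 1):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            sock.sendto(payload, address)
            data, _ = sock.recvfrom(512)  # classic DNS/UDP message limit
            return data
        except socket.timeout:
            continue  # active unit did not answer; re-issue the request
        finally:
            sock.close()
    return None
```
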

Note that when you configure a Global Traffic Manager redundant system configuration that uses network-based failover, you must manually enable high availability on both Global Traffic Manager systems.


System communications
Before Global Traffic Manager can operate as an integrated component
within your network, you must first establish how it can communicate with
other external systems. An external system is any server with which Global
Traffic Manager must exchange information to perform its functions. In
general, system communications are established for the purpose of:
• Communicating with other BIG-IP systems
• Communicating with third-party systems

When Global Traffic Manager communicates with other BIG-IP systems, such as Local Traffic Manager™ systems or Link Controller™ systems, it
uses a proprietary protocol called iQuery® to send and receive information.
If Global Traffic Manager is communicating with another BIG-IP system, it
uses the big3d utility to handle the communication traffic. If Global Traffic
Manager is instead communicating with another Global Traffic Manager, it
uses a different utility, called gtmd, which is designed for that purpose.
Part of the process when establishing communications between Global
Traffic Manager and other BIG-IP systems is to open port 22 and port 4353
between the two systems. Port 22 allows Global Traffic Manager to copy the
newest version of the big3d utility to existing systems, while iQuery
requires port 4353 for its normal communications.
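A quick way to confirm that these ports are open between two systems is a simple TCP connection test. The following sketch assumes nothing about the product itself; the host you pass is a placeholder for a peer BIG-IP system.

```python
import socket

# Sketch of verifying reachability of the ports that iQuery communication
# depends on: port 22 for secure copying of the big3d utility, and port
# 4353 for iQuery itself.

REQUIRED_PORTS = (22, 4353)

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_peer(host):
    """Map each required port to its reachability."""
    return {port: port_open(host, port) for port in REQUIRED_PORTS}
```
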
In order for other BIG-IP systems to communicate with Global Traffic
Manager, F5 Networks recommends that you update the big3d utility on
older BIG-IP systems by running the big3d_install script from Global
Traffic Manager. For more information about running the big3d_install
script, see big3d agent installation, on page A-3, and SOL8195 on
AskF5.com.

Note

Global Traffic Manager supports web certificate authentication for iQuery communications between itself and other systems running the big3d agent.

Table 3.1 lists the requirements for each communication component between Global Traffic Manager and other BIG-IP systems.

Communication component    Requirements
-----------------------    ------------------------------------------------------------
Ports                      Port 22, for secure file copying of entities like big3d.
                           Port 4353, for iQuery communication.
Utilities                  big3d, for Global Traffic Manager to BIG-IP system
                           communication.
Protocols                  iQuery

Table 3.1 Requirements for communication components (BIG-IP system)


When Global Traffic Manager communicates with third-party systems, whether that system is a load balancing server or a host, it can use SNMP to send and receive information.
Table 3.2 lists the requirements for each communication component between the big3d agent and other external systems.

Communication component    Requirements
-----------------------    ------------
Ports                      Port 161
Protocols                  SNMP

Table 3.2 Requirements for communication components (third-party systems)


Synchronization
The primary goal of Global Traffic Manager is to ensure that name
resolution requests are sent to the best available resource on the network.
Consequently, it is typical for multiple Global Traffic Manager systems to
reside in several locations within a network. For example, a standard
installation might include a Global Traffic Manager system at each data
center within an organization.
When an LDNS submits a name resolution request, you cannot control to
which Global Traffic Manager the request is sent. As a result, you often
want multiple Global Traffic Manager systems to share the same
configuration values, and maintain those configurations over time.
In network configurations that contain more than one Global Traffic
Manager, synchronization means that each Global Traffic Manager
regularly compares the timestamps of its configuration files with the
timestamps of configuration files on other Global Traffic Manager systems.
If Global Traffic Manager determines that its configuration files are older
than those on another system, it acquires the newer files and begins using
them to load balance name resolution requests. With synchronization, you
can change settings on one system and have that change distributed to all
other systems.
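The newest-configuration-wins comparison described above can be sketched as follows. The dictionary fields are illustrative stand-ins for configuration files and their timestamps, not the actual configuration format.

```python
# Sketch of synchronization: each system compares configuration timestamps
# and adopts the newer settings from its peers.

def newest_config(local, peers):
    """Return the configuration with the latest timestamp.

    local and each peer are dicts with a numeric 'timestamp' (e.g. epoch
    seconds) and arbitrary 'settings'.
    """
    candidates = [local] + list(peers)
    return max(candidates, key=lambda cfg: cfg["timestamp"])

def synchronize(local, peers):
    """Adopt the newest settings, mirroring group synchronization."""
    winner = newest_config(local, peers)
    if winner is not local:
        local["settings"] = dict(winner["settings"])
        local["timestamp"] = winner["timestamp"]
    return local
```
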

Synchronization groups
You can separate the Global Traffic Manager systems on your network into
separate groups, called synchronization groups. A synchronization group is
a collection of multiple Global Traffic Manager systems that share and
synchronize configuration settings. These groups are identified by a
synchronization group name, and only systems that share this name also share configuration settings. These synchronization groups allow you to
customize the synchronization behavior. For example, Global Traffic
Manager systems residing in data centers in Europe might belong to one
synchronization group, while the systems in North America belong to
another group.
Initially, when you enable synchronization for Global Traffic Manager, the
system belongs to a synchronization group called default. However, you
can create new groups at any time to customize the synchronization process,
ensuring that only certain sets of Global Traffic Manager systems share
configuration values.
To illustrate how synchronization groups work, consider the fictional
company, SiteRequest. SiteRequest has decided to add a new data center in
Los Angeles. As part of bringing this data center online, SiteRequest has
decided that it wants the Global Traffic Manager systems installed in New
York and in Los Angeles to share configurations, and the Paris and Tokyo
data centers to share configurations. This setup exists because SiteRequest’s
network optimization processes require slightly different settings within the United States than in the rest of the world. To accommodate this new network configuration, SiteRequest enables synchronization for the New York and Los Angeles data centers, and assigns them a synchronization group name
of United States. The remaining data centers are also synchronized, but
with a group name of Rest Of World. As a result, a configuration change
made to the Global Traffic Manager system in Paris automatically modifies
the Global Traffic Manager system in Tokyo.

DNS zone file synchronization


During synchronization operations, Global Traffic Manager verifies that it
has the latest configuration files available and, if it does not, Global Traffic
Manager downloads the newer files from the appropriate system. You can
expand the definition of the configuration files to include the DNS zone files
used to respond to name resolution requests by using the Synchronize DNS
Zone Files setting. This setting is enabled by default.
It is important to note that when Global Traffic Manager is a member of a
synchronization group, the configuration of each Global Traffic Manager in
the group automatically synchronizes with the group member that has the
newest user configuration set (UCS). Therefore, if you roll back the
configuration of a member of the synchronization group to a UCS that
contains DNS configuration files that are dated earlier than the same file on
another system in the group, the system that you roll back synchronizes with
that other system, effectively losing the configuration to which it was rolled
back. You can stop the automatic synchronization of the DNS files by
clearing the Synchronize DNS Zone Files box on the system before you
roll it back to an earlier configuration.


Global monitor settings


As you employ Global Traffic Manager to load balance DNS traffic across
different network resources, you must acquire information on these
resources. You acquire this information by applying monitors to each
resource. A monitor is a component of Global Traffic Manager that tests to
see if a given resource responds as expected. These tests can range from
verifying that a connection to the resource is available, to conducting a
database query. Global Traffic Manager uses the information it gathers from
monitors not only to inform you of what resources are available, but to
determine which resource is the best candidate to handle incoming DNS
requests.
In most cases, you apply specific monitors to resources, depending on the
type of resource and its importance. However, the following Global Traffic
Manager settings affect all monitors:
• Heartbeat Interval
Indicates how often Global Traffic Manager communicates with other
BIG-IP systems on the network.
• Maximum Synchronous Monitor Requests
Indicates how many monitors can query a resource at any given time.
• Monitor Disabled Objects
Indicates whether monitors continue to check the availability of a
resource that you disabled through Global Traffic Manager.

While monitors supply information you need to ensure that network traffic
moves efficiently across the network, they do so at the cost of increasing
that network traffic. These settings allow you to control this increase.

Heartbeat interval
In daily operations, Global Traffic Manager frequently acquires much of its
network data from other BIG-IP systems that you employ, such as Local
Traffic Manager systems. For example, the Local Traffic Manager system
monitors the resources it manages. When Global Traffic Manager requires
this same information for load balancing DNS requests, it can query Local
Traffic Manager, instead of each resource itself. This process ensures that
the system efficiently acquires the information it needs.
Because Global Traffic Manager queries other BIG-IP systems to gather
information, you can configure the frequency at which these queries occur,
by configuring the Heartbeat Interval setting. Based on the value you
specify for this setting, Global Traffic Manager queries other BIG-IP

systems more or less often. F5 Networks recommends the default value of 10 seconds for this setting; however, you can configure this setting to best
suit the configuration of your network.

Tip
F5 Networks recommends that, when configuring resource monitors, you
ensure that the frequency at which the monitor attempts to query a resource
is greater than the value of the Heartbeat Interval setting. Otherwise, the
monitor might acquire out-of-date data during a query.
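The check suggested in this tip can be expressed directly. In this sketch, the monitor names and intervals are hypothetical; the heartbeat default of 10 seconds matches the recommendation above.

```python
# Sketch of the tip above: flag any monitor whose probe interval is not
# greater than the global Heartbeat Interval, since such a monitor could
# query before the data has been refreshed.

HEARTBEAT_INTERVAL = 10  # seconds; the recommended default

def stale_monitors(monitors, heartbeat=HEARTBEAT_INTERVAL):
    """Return names of monitors whose interval is <= the heartbeat."""
    return [name for name, interval in monitors.items()
            if interval <= heartbeat]
```
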

Synchronous monitor queries


Another aspect of resource monitoring that you want to control is how many
monitors can query a resource at any given time. Network resources often
serve many different functions at the same time and it is likely you want
more than one monitor checking the availability of these resources in
different ways. You might monitor a single resource, for example, to verify
that the connection to the resource is available, that you can reach a specific
HTML page on that resource, and that a database query returns an expected
result. If this resource is used in more than one context, you might have
many more monitors assigned to it, each one performing an important check
to ensure the availability of the resource.
While these monitors are helpful in determining availability, it is equally
helpful to control how many monitors can query a resource at any given
time. This control ensures that monitor requests are more evenly distributed
during a given period of time.

Disabled resources
One of the ways a given network resource can become unavailable during
the load balancing of DNS traffic is when you manually disable the
resource. You might disable a resource because you are upgrading the server
on which it resides, or because you are modifying the resource itself and
need to remove it temporarily from service.
You can control whether Global Traffic Manager monitors these disabled
resources. In some network configurations, for example, you might want to
continue monitoring these resources when you put them offline.

Note

By default, the Monitor Disabled Objects setting is disabled for Global Traffic Manager. F5 Networks recommends that you enable it only if you are certain you want Global Traffic Manager to continue monitoring resources that you have manually disabled.


Domain validation
Global Traffic Manager handles traffic using DNS and BIND to translate
domain names into IP addresses. By configuring the Domain Validation
setting, you can specify which domain names Global Traffic Manager
recognizes. You can configure the system so that it accepts all domain
names, or you can restrict the use of certain characters in domain names.
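A character-restriction policy of this kind can be sketched with a regular expression. The pattern below encodes the conventional letters, digits, and hyphen rule for each label; the actual Domain Validation setting is configurable and may enforce a different policy.

```python
import re

# Sketch of restricting the characters allowed in domain names, in the
# spirit of the Domain Validation setting. Each dot-separated label may
# contain letters, digits, and hyphens, must not start or end with a
# hyphen, and is limited to 63 characters.

_LABEL = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def is_valid_domain(name):
    """Check each label of a domain name against the allowed-character rule."""
    if not name or len(name) > 253:
        return False
    return all(_LABEL.match(label) for label in name.rstrip(".").split("."))
```
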

4
Listeners

• Introduction

• Listeners and Network Traffic

• Listeners and VLANs



Introduction
Before you can fully configure Global Traffic Manager™ to handle name
resolution requests, you must determine how you want the system to
integrate with the existing network. Specifically, you must identify what
network traffic you want Global Traffic Manager to handle and how. In
general, the system performs global traffic management in two ways: Node
mode and Bridge or Router mode.

Node mode
Typically, when you add a Global Traffic Manager system to your network,
you want the system to respond to at least a subset of your incoming DNS
requests. You can configure the system to direct the requests to the wide IPs
that are configured on Global Traffic Manager; however, you can also
configure the system to respond to DNS requests for other network
resources that are not associated with a wide IP, such as other DNS servers.
When Global Traffic Manager receives traffic, processes it locally, and
sends the appropriate Domain Name System (DNS) response back to the
querying server, it is operating in Node mode. In this situation, you create a
listener that corresponds to an IP address on the system. If Global Traffic
Manager operates as a standalone unit, this IP address is the self IP address
of the system. If Global Traffic Manager is part of a redundant system
configuration for high availability purposes, this IP address is the floating IP
address that belongs to both systems.

Bridge or Router mode


Another common way to use Global Traffic Manager is to integrate it with
the existing DNS servers. In this scenario, Global Traffic Manager handles
any traffic related to the wide IPs you assign to it, while forwarding other
DNS requests either to another part of the network or another DNS server.
When forwarding traffic in this manner, Global Traffic Manager is
operating in Bridge or Router mode, depending on how the traffic was
initially sent to the system. In this configuration, you assign to Global
Traffic Manager a listener that corresponds to the IP address of the DNS
server to which you want to forward traffic.
You can create multiple listeners to forward network traffic. The number of
listeners you create is based on your network configuration and the ultimate
destination to which you want to send specific DNS requests.

Wildcard listener
In some cases, you might want Global Traffic Manager to handle the traffic
coming into your network, regardless of the destination IP address of the
given DNS request. In this configuration, Global Traffic Manager continues

BIG-IP® Global Traffic ManagerTM Concepts Guide 4-1


Chapter 4

to process and respond to requests for the wide IPs that you configure, but is
also responsible for forwarding additional DNS requests to other network
resources, such as DNS servers. To accomplish this type of configuration,
you create a wildcard listener.

Listeners

Listeners and Network Traffic


To control how Global Traffic Manager handles network traffic, you
configure one or more listeners. A listener is a specialized resource to which
you assign a specific IP address and port 53, the DNS query port. When
traffic is sent to that IP address, the listener alerts Global Traffic Manager,
allowing it to either handle the traffic locally or forward the traffic to the
appropriate resource.
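To make the dispatch behavior concrete, here is a minimal sketch (not F5's implementation) of how a set of listeners might route incoming DNS traffic by destination IP address. The listener addresses and action labels are hypothetical examples.

```python
# Illustrative sketch only: listeners keyed by destination IP address.
# Addresses and action names are hypothetical, not F5 configuration.

DNS_PORT = 53

listeners = {
    "192.168.10.5": "process_locally",   # self IP address: Node mode
    "10.2.5.37": "forward_to_dns",       # existing DNS server: Router mode
}
WILDCARD_ACTION = "forward_to_network"   # wildcard listener, if configured

def dispatch(dest_ip, dest_port):
    """Return the action for a packet addressed to dest_ip:dest_port."""
    if dest_port != DNS_PORT:
        return "ignore"                  # listeners answer only on port 53
    return listeners.get(dest_ip, WILDCARD_ACTION)
```

In this sketch, a packet sent to the self IP address is processed locally, a packet sent to the existing DNS server's address is forwarded there, and anything else falls through to the wildcard listener.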

Tip
If you are familiar with Local Traffic Manager™, it might be helpful to
consider a listener as a specialized type of virtual server that is responsible
for handling traffic for Global Traffic Manager.

Note

If you configure user accounts on Local Traffic Manager, you can assign
listeners, like other virtual servers, to specific partitions. However, because
listeners play an important role in global traffic management, F5 Networks
recommends that you assign all listeners to partition Common.

You control how Global Traffic Manager responds to network traffic on a per-listener basis. For example, a single Global Traffic Manager can be the
authoritative server for one domain, while forwarding other requests to a
separate DNS server. Regardless of how many listeners you configure, the
system manages and responds to requests for the wide IPs that are
configured on it.
To further illustrate how you configure listeners to control how Global
Traffic Manager responds to DNS traffic, consider the fictional company
SiteRequest. At this company, Global Traffic Manager is being integrated
into a network with the following characteristics:
• A DNS server already exists at IP address 10.2.5.37.
• There are two VLANs, named external and guests.
• There are two wide IPs: www.siterequest.com and
downloads.siterequest.com.

After being integrated into the network, Global Traffic Manager is responsible for the following actions:
• Managing and responding to requests for the wide IPs
• Forwarding other DNS traffic to the existing DNS server
• Forwarding any traffic from the guests VLAN to the rest of the network

To implement this configuration, Global Traffic Manager requires three listeners:
• A listener with an IP address that is the same as the self IP address of
Global Traffic Manager. This listener allows the system to manage DNS
traffic that pertains to its wide IPs.

• A listener with an IP address of 10.2.5.37, the IP address of the existing DNS server. This listener allows the system to forward incoming traffic to the existing DNS server.
• A wildcard listener enabled on the guests VLAN. This listener allows
Global Traffic Manager to forward traffic sent from the guests VLAN to
the rest of the network.

As you can see from this example, the role that Global Traffic Manager
plays in managing DNS traffic varies depending on the listener through
which the traffic arrives. As a result, Global Traffic Manager becomes a
flexible system for managing DNS traffic in a variety of ways.

Listeners and VLANs


On BIG-IP systems, you can create one or more VLANs and assign specific
interfaces to the VLANs of your choice. By default, each BIG-IP system
includes at least two VLANs, named internal and external. However, you
can create as many VLANs as the needs of your network demand.
When you assign listeners to Global Traffic Manager, you must take into
account the VLANs that are configured on the system. For example, a
listener that forwards traffic to another DNS server might only be
appropriate for a specific VLAN, while a wildcard listener might be
applicable to all VLANs. You can configure a listener to be applicable to all
VLANs, or enabled only on specific VLANs.

5
The Physical Network

• Introduction

• Data centers

• Servers

• Virtual servers

• Links

Introduction
The components that make up Global Traffic Manager™ can be divided into
two categories: logical network components and physical network components. Logical network components are abstractions of network
resources, such as virtual servers. Physical network components have a
direct correlation with one or more physical entities on the network. This
chapter deals with the physical components of Global Traffic Manager, and
describes how to use Global Traffic Manager to define the following
physical network components that make up your network:
• Data centers
• Servers
• Virtual servers
• Links

Data centers
A data center defines the servers and links that share the same subnet on the
network. All resources on your network, whether physical or logical, are
associated in some way with a data center. Global Traffic Manager
consolidates the paths and metrics data collected from servers, virtual
servers, and links into the data center, and uses that data to conduct load
balancing operations.
Depending on your router configuration, the following data center
configurations are available:
• One data center in one physical location
• One data center that includes servers in multiple physical locations
• Multiple data centers in one physical location

For example, the fictional company SiteRequest has a network operation center in New York, which contains two subnets: 192.168.11.0/24 and
192.168.22.0/24. Because there are two subnets, the IT team needs to create
two data centers: New York 1 and New York 2, within Global Traffic
Manager.
On the opposite side of the country, SiteRequest has three operational
centers, but they all share the same subnet of 192.168.33.0/24. Therefore,
the IT team needs to create only a single data center: West Coast.
When you create a data center, it is enabled by default. You can disable a
data center manually, which allows you to temporarily remove it from
global traffic management load balancing operations; for example, during a
maintenance period. When the maintenance period ends, you can re-enable
the data center.
The resources associated with a data center are available only when the data
center is also available, based on the metrics collected by Global Traffic
Manager.

Servers
A server defines a specific physical system on the network. Within Global
Traffic Manager, servers are not only physical entities that you can
configure and modify as needed; servers also contain the virtual servers that
are the ultimate destinations of name resolution requests. When you
configure a server on Global Traffic Manager, unless the server is either a
Global Traffic Manager system or a Link Controller™ system, the server
must contain at least one virtual server.
Global Traffic Manager supports three types of servers:
◆ BIG-IP systems
A BIG-IP® system can be a Global Traffic Manager system, a Local
Traffic Manager™ system, a Link Controller system, or a VIPRION®
system.
◆ Third-party load balancing systems
A third-party load balancing system is any system, other than a BIG-IP
system, that supports and manages virtual servers on the network.
◆ Third-party host servers
A third-party host system is any server on the network that does not
support virtual servers.

At a minimum, the following servers must be defined on Global Traffic Manager:
• The Global Traffic Manager system itself
• A managed server (either a load balancing server or a host)

Global Traffic Manager systems


Global Traffic Manager systems are load balancing servers that are part of
your physical network. First, configure the settings of Global Traffic
Manager itself. Next, add other Global Traffic Manager systems to the
configuration.
If the Global Traffic Manager system that you are configuring has multiple links (that
is, multiple network devices that connect it to the Internet), you can add the
self IP addresses of these devices to the system. After you configure these
systems, the agents and other utilities, such as the big3d agent, can gather
and analyze network traffic path and metrics information.
After you configure the additional servers and links, you can synchronize
the settings of a specific Global Traffic Manager to other Global Traffic
Managers on the physical network.

Important
You must use a self IP address when you define Global Traffic Manager.
You cannot use the management IP address.

Local Traffic Manager systems


Local Traffic Manager systems are load balancing servers that manage
virtual servers on the network. There are two standard configurations for
Local Traffic Manager:
• A stand-alone system on the network
• A component module residing on the same hardware as Global Traffic
Manager

Regardless of whether Local Traffic Manager shares the same hardware as Global Traffic Manager, you should ensure that you have the following
information available before you define the system.
• The self IP addresses and translations of the Local Traffic Manager
system’s interfaces
Note: When you define Local Traffic Manager, you must use a self IP
address. You cannot use a management IP address.
• The IP address and service name or port number of each virtual server
managed by Local Traffic Manager, unless you want to use
auto-configuration to discover the virtual servers on the Local Traffic
Manager system
Note: If your installation of Global Traffic Manager resides on the same
system as a Local Traffic Manager system, you define only one BIG-IP
server. This server entry represents both Global Traffic Manager and
Local Traffic Manager modules.

Third-party load balancing servers


In addition to BIG-IP systems, Global Traffic Manager can interact with
other load balancing servers to determine availability and performance
metrics for load balancing connection requests.
Global Traffic Manager supports the following load balancing servers:
• Alteon® Ace Director
• Cisco® CSS
• Cisco® LocalDirector v2
• Cisco® LocalDirector v3
• Cisco® SLB
• Extreme
• Foundry® ServerIron
• Radware WSD

Note

If your network uses a load balancing server that is not found on this list,
you can use the Generic Load Balancer option.

Third-party host servers


Another server type that you might include as part of your network is a host.
A host is an individual network resource, such as a web page or a database,
that is not a part of the BIG-IP product family and does not provide load
balancing capabilities for the resources it supports.
Global Traffic Manager supports host servers running the following
systems:
• CacheFlow®
• NetApp™
• Sun® Oracle® Solaris™
• Windows® 2000 Server
Note that you can monitor a Windows Vista® Enterprise Edition-based
server using a Windows 2000 Server-based computer.
• Windows® Server 2003
• Windows® NT 4.0

Note

If your network uses a host server that is not on this list, you can use the
Generic Host option.

Monitors and servers


Each server that you add to Global Traffic Manager, whether it is a BIG-IP
system, a third-party load balancing server, or a host server, has a variety of
monitors available. You can assign these monitors to track specific data, and
use that data to determine load balancing or other actions.

Availability thresholds
When you set thresholds for availability, Global Traffic Manager can detect
when a managed server is low on resources, and redirect the traffic to
another server. Setting limits can help eliminate any negative impact on a
server's performance of tasks that may be time critical, require high
bandwidth, or put high demand on system resources. The system resources
vary depending on the monitors you assign to the server.
You can specify thresholds for the following components:
• Servers
• Virtual servers
• Pools
• Pool members

Server thresholds
When you configure a server, you can set limits for specific elements
depending upon whether the server is part of the BIG-IP product family,
such as Local Traffic Manager, or another server type. If the server is part of
the BIG-IP product family, you can base thresholds on:
• Bits per second
• Packets per second
• Current connections

If the server is not part of the BIG-IP product family, such as a generic host
server, you can base thresholds on:
• CPU
• Memory
• Bits
• Packets
• Current connections

If a server meets or exceeds its limits, both the server and the virtual servers
it manages are marked as unavailable for load balancing. You can quickly
review the availability of any of your servers or virtual servers on the
Statistics screens.
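The meets-or-exceeds rule above can be sketched as a simple check. This is an illustrative sketch only, assuming a generic mapping of metric names to measured values and configured limits; the metric names are examples, not the system's actual identifiers.

```python
# Hedged sketch of the availability-threshold rule: a server is marked
# unavailable if ANY measured metric meets or exceeds its configured
# limit. Metric names below are illustrative examples.

def is_available(metrics, limits):
    """Return False if any metric meets or exceeds its limit."""
    return all(
        metrics.get(name, 0) < limit
        for name, limit in limits.items()
    )
```

For example, with a bits-per-second limit of 1000, a measured value of exactly 1000 marks the server unavailable, because the threshold is "meets or exceeds."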

Virtual server thresholds


When you configure a virtual server, you can set thresholds for:
• Bits per second
• Packets per second
• Current connections

Pool thresholds
When you configure a pool, you can set thresholds for:
• Bits per second
• Packets per second
• Current connections

If a pool meets or exceeds its limits, both the pool and the pool members it
manages are marked as unavailable for load balancing. You can quickly
review the availability of any of your pools or pool members on the
Statistics screens.

Pool member thresholds


When you configure a pool member, you can set thresholds for:
• Bits per second
• Packets per second
• Current connections

Virtual servers
Servers, excluding Global Traffic Manager systems and Link Controller
systems, contain at least one virtual server. A virtual server, in the context
of Global Traffic Manager, is a specific IP address and port number that
points to a resource on the network. In the case of host servers, this IP
address and port number likely point to the resource itself. With load
balancing systems, such as Local Traffic Manager, these virtual servers are
often proxies that allow the load balancing server to manage the resource
request across a multitude of resources.
You can add virtual servers in two ways:
• Automatically, through the use of the discovery feature.
• Manually, through the properties screens of the given server.

Links
A link defines a physical connection to the Internet that is associated with
one or more routers on the network. Global Traffic Manager tracks the
performance of links, which in turn can dictate the overall availability of a
given pool, data center, wide IP, or distributed application.
To configure the links that you want Global Traffic Manager to load
balance, you add a link entry, and then associate one or more routers with
that entry. You can also configure monitors to check certain metrics
associated with a link, and modify how the system load balances network
traffic across links.

Links and monitors


After you configure a link, you can assign monitors that track specific data
to the link. The system can use this data to manage global traffic.

Link weighting and billing properties


You can configure how the system manages and distributes traffic for a
given link on the properties screen for the link, using these settings:
◆ Ratio Weighting
If you have links of varying bandwidth sizes, and you want to load
balance the traffic to the controller based on a ratio, you can select the
Ratio option from the Weighting list. You use this configuration to
avoid oversaturating a smaller link with too much traffic.
◆ Price Weighting
If you pay varying fees for the bandwidth usage associated with the links,
you can select the Price (Dynamic Ratio) option from the Weighting
list. You use this configuration to direct traffic over the least expensive
link first and to avoid the costs associated with exceeding a prepaid
bandwidth.
◆ Duplex Billing
If your ISP uses duplex billing, you can configure the Duplex
Billing setting so that the statistics and billing report screens accurately
reflect the bandwidth usage for the link.

Important
You can use either the Ratio or Price (Dynamic Ratio) weighting option to
load balance the traffic through all of your links. You must use the same
weighting option for all links.
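Price weighting, as described above, directs traffic over the least expensive link first. A minimal sketch of that selection, assuming hypothetical per-link cost figures rather than actual billing data:

```python
# Illustrative sketch of Price (Dynamic Ratio) link selection: traffic
# goes to the least expensive available link first. Link names and
# costs are hypothetical examples.

def cheapest_link(link_costs):
    """Return the name of the link with the lowest cost."""
    return min(link_costs, key=link_costs.get)
```

A real deployment would also factor in prepaid bandwidth limits before falling back to a costlier link; this sketch shows only the ordering principle.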

6
The Logical Network

• Introduction

• Pools

• Wide IPs

• Distributed applications

Introduction
After you define the physical components of your network, such as data
centers, servers, and links, you can configure Global Traffic Manager™
with the logical components of your network. Logical components are
abstractions of network resources, such as virtual servers. Unlike physical
components, the logical network can often span multiple physical devices,
or encompass a subsection of a single device.
Through Global Traffic Manager, you define three primary types of logical
network components:
• Pools
• Wide IPs
• Distributed applications

To better understand the interactions between pools, wide IPs, and data
centers, consider the fictional company of SiteRequest. SiteRequest is an
online application repository. Currently, its presence on the World Wide
Web consists of a main site, www.siterequest.com; a download area,
downloads.siterequest.com; and a search area, search.siterequest.com.
These three fully-qualified domain names (FQDNs), www.siterequest.com,
downloads.siterequest.com, and search.siterequest.com, are wide IPs.
Each of these wide IPs contains several pools of virtual servers. For example,
www.siterequest.com contains two pools of virtual servers: poolMain, and
poolBackup. When Global Traffic Manager receives a connection request
for www.siterequest.com, it applies its load balancing logic to select the
appropriate pool to handle the request.
After Global Traffic Manager selects a pool, it then load balances the
request to the appropriate virtual server. For example, poolMain contains
three virtual servers: 192.168.3.10:80, 192.168.4.20:80, and
192.168.5.30:80. Global Traffic Manager responds to the system that made
the connection request with the selected virtual server. At this point, Global
Traffic Manager steps out of the communication, and the system requesting
the resource communicates directly with the virtual server.
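The two-tier resolution described above can be sketched briefly. This is a simplified illustration, not F5's implementation: it reuses the fictional SiteRequest names and addresses, and stands in a plain random choice for the system's configurable load balancing logic.

```python
# Hedged sketch of tiered load balancing: pick a pool for the wide IP,
# then pick a virtual server within that pool. Names and addresses are
# the fictional SiteRequest examples; selection logic is simplified.
import random

wide_ips = {
    "www.siterequest.com": {
        "poolMain": ["192.168.3.10:80", "192.168.4.20:80", "192.168.5.30:80"],
        "poolBackup": ["192.168.6.40:80"],
    }
}

def resolve(name, rng=random):
    """Two-tier selection: wide IP -> pool -> virtual server."""
    pools = wide_ips[name]
    pool_name = rng.choice(sorted(pools))          # tier 1: choose a pool
    virtual_server = rng.choice(pools[pool_name])  # tier 2: choose a member
    return virtual_server
```

The returned virtual server is what the querying system receives; from that point on, the client communicates with the virtual server directly.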

Note

If a virtual server is managed by a load balancing server that is not in the BIG-IP® product family, the IP address and port number of the virtual
server often point to a proxy on which the load balancing server listens for
connection requests. In that case, the load balancing server remains in the
communication directing the connection to the appropriate resource.

For administration purposes, the wide IPs downloads.siterequest.com and search.siterequest.com are added to a single distributed application,
siterequest_download_store. This configuration provides the IT staff the
ability to track the performance of the distributed application, as
performance has an immediate impact on the users that visit the web sites.

Pools
A pool represents one or more virtual servers that share a common role on
the network. A virtual server, in the context of Global Traffic Manager, is a
combination of IP address and port number that points to a specific resource
on the network.
Global Traffic Manager considers any virtual servers that you add to a pool
to be pool members. A pool member is a virtual server that has specific
attributes that pertain to the virtual server only in the context of that pool.
Through this differentiation, you can customize settings, such as thresholds,
dependencies, and health monitors, for a given virtual server on a per-pool
basis.
As an example of the difference between pool members and virtual servers,
consider the fictional company SiteRequest. In the London data center, the
IT team has a virtual server that acts as a proxy for a Local Traffic
Manager™ system. This virtual server is the main resource for name
resolution requests for the company’s main web page that originate from
Europe. This same virtual server is the backup resource for name resolution
requests that originate from the United States. Because these are two
distinctly different roles, the virtual server is a pool member in two different
pools. This configuration allows the IT team to customize the virtual server
for each pool to which it belongs, without modifying the actual virtual
server itself.
Before you can add virtual servers to Global Traffic Manager, you must
define a server that represents a physical component of your network. Then
you can add virtual servers to the server, and group the virtual servers in
pools.
When you create a pool, you name it and add at least one virtual server as a
member of the pool. You can also assign specific load balancing methods, a
fallback IP address, and one or more health monitors to the pool. You assign
a fallback IP address in the event that the load balancing methods you assign
to the pool fail to return a valid virtual server. The health monitors that you
assign to the pool use various methods to determine if the virtual servers
within the pool are available.
Certain load balancing methods within Global Traffic Manager select virtual
servers based on the order in which they are listed in the pool. For example,
the load balancing method, Global Availability, instructs Global Traffic
Manager to select the first virtual server in the pool until it reaches capacity
or goes offline, at which point it selects the next virtual server until the first virtual server becomes available again.
If you use a load balancing method that selects virtual servers based on the
order in which they are listed in the pool, you may want to change the order
in which the virtual servers are listed in the Member List. When you
organize your virtual servers in conjunction with these load balancing
methods, you can ensure that your most robust virtual server always
receives resolution requests, while the other virtual servers act as backups in
case the primary virtual server becomes unavailable.
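Ordered selection in the style of the Global Availability method can be sketched as a first-available scan over the member list. This is a simplified illustration under that assumption, not F5's implementation.

```python
# Hedged sketch of order-based selection: the first listed member that
# is currently available receives the resolution request.

def first_available(members, is_available):
    """Return the first available member in list order, or None."""
    for member in members:
        if is_available(member):
            return member
    return None
```

Because selection always restarts from the top of the list, the most robust virtual server should be listed first; the remaining members act purely as ordered backups.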

Virtual servers and Ratio mode load balancing


One of the load balancing methods that Global Traffic Manager supports is
the Ratio mode. This mode instructs the system to load balance network
requests based on the weights assigned to a specific resource. If you use the
Ratio mode to load balance across virtual servers in a pool, you must assign
weights to the virtual servers. A weight is a value assigned to a resource,
such as a pool, that Global Traffic Manager uses to determine the frequency
at which the resource receives connection requests. Global Traffic Manager
selects a resource based on the weight of that resource as a percentage of the
total of all weights in that resource group.
To illustrate the use of weights in connection load balancing, consider the
fictional company SiteRequest. One of SiteRequest’s wide IPs,
www.siterequest.com, contains a pool labeled poolMain. This pool uses
the Ratio load balancing mode and contains three virtual servers, with the
following weight assignments:
• Virtual server 1: weight 50
• Virtual server 2: weight 25
• Virtual server 3: weight 25
Notice that the total of all the weights in this pool is 100. Each time Global
Traffic Manager selects this pool, it load balances across all three virtual
servers. Over time, the load balancing statistics for this pool appear as
follows:
• Virtual server 1: selected 50 percent of the time
• Virtual server 2: selected 25 percent of the time
• Virtual server 3: selected 25 percent of the time

This pattern exists because the weight value, 50, is 50 percent of the total
weight for all virtual servers (100), while the weight value, 25, is 25 percent.
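The weighted percentages above amount to a weighted random draw. A minimal sketch using the same 50/25/25 weights (the member names are placeholders; Python's `random.choices` implements exactly this weight-proportional selection):

```python
# Hedged sketch of Ratio-mode selection: each member is chosen with
# probability weight / total weight. Member names are placeholders.
import random

weights = {"vs1": 50, "vs2": 25, "vs3": 25}

def pick(rng=random):
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names])[0]

# With a total weight of 100, vs1 is selected about 50 percent of the
# time, and vs2 and vs3 about 25 percent of the time each.
```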

Canonical pool names


When you create a pool, instead of adding virtual servers to the pool, you can provide a canonical name (CNAME) that the system returns in
responses to requests for that pool. In this case, you do not add members to
the pool, because the CNAME always takes precedence over pool members.
The health monitors that you assign to the pool use various methods to
determine if this pool is available for load balancing.
A canonical name is the official name for a domain. In DNS, a CNAME
record maps an alias to the canonical name for a domain, for example, a
CNAME record can map the alias downloads.siterequest.com to the
canonical name siterequest.com. When you define a pool using a canonical
name, the system delegates DNS queries by responding to queries with a
CNAME record, rather than a pool member.
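The precedence rule above can be sketched as follows. This is an illustrative sketch only: the record tuples are simplified stand-ins for DNS answers, not actual wire-format records.

```python
# Hedged sketch of CNAME precedence: a pool defined with a canonical
# name is answered with a CNAME record rather than a member address.
# Record shapes are simplified (type, value) tuples for illustration.

def answer_for_pool(pool):
    if pool.get("cname"):                 # CNAME always takes precedence
        return ("CNAME", pool["cname"])
    return ("A", pool["members"][0])      # otherwise answer with a member
```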

A content delivery network (CDN) is identified by a domain name (canonical name). A CDN is a network that includes devices designed and
configured to maximize the speed at which a content provider's content is
delivered. The purpose and goal of a CDN is to cache content closer, in
Internet terms, to the user than the origin site is. Using a CDN to deliver
content greatly reduces wide area network (WAN) latency so the content
gets to the user more quickly, and the origin site servers are not overloaded
and slowed by requests for content.

Wide IPs
A wide IP maps a fully-qualified domain name (FQDN) to a set of virtual
servers that host the domain’s content, such as a web site, an e-commerce
site, or a CDN. Wide IPs use pools to organize virtual servers, which creates
a tiered load balancing effect: Global Traffic Manager first load balances
requests to the appropriate pool of a wide IP, and then load balances within
the pool to the appropriate virtual server.

Wildcard characters in wide IP names


Global Traffic Manager supports wildcard characters in both wide IP names
and wide IP aliases. If you have a large quantity of wide IP names and
aliases, you can use wildcard characters to simplify your maintenance tasks.
The wildcard characters you can use are: the question mark ( ? ), and the
asterisk ( * ).
The guidelines for using the wildcard characters are as follows:
◆ The question mark ( ? )
• Use the question mark to replace a single character, with the
exception of dots ( . ).
• Use more than one question mark in a wide IP name or alias.
• Use both the question mark and the asterisk in the same wide IP name
or alias.

◆ The asterisk ( * )
• Use the asterisk to replace multiple consecutive characters, with the
exception of dots ( . ).
• Use more than one asterisk in a wide IP name or alias.
• Use both the question mark and the asterisk in the same wide IP name
or alias.

The following examples are all valid uses of the wildcard characters for the
wide IP name, www.mydomain.net.
• ???.mydomain.net
• www.??domain.net
• www.my*.net
• www.??*.net
• www.my*.*
• ???.my*.*
• *.*.net
• www.*.???
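The dot-exclusion rule for both wildcard characters can be expressed as a regular-expression translation. This is a sketch under the guidelines above, not the system's matcher: `?` becomes any single non-dot character, and `*` becomes a run of non-dot characters.

```python
# Hedged sketch of wide IP wildcard matching: '?' matches one character
# and '*' matches consecutive characters, but neither matches a dot.
import re

def wide_ip_pattern_matches(pattern, name):
    regex = "".join(
        "[^.]" if ch == "?" else
        "[^.]*" if ch == "*" else
        re.escape(ch)
        for ch in pattern
    )
    return re.fullmatch(regex, name) is not None
```

Under this translation, every example pattern above matches www.mydomain.net, while a pattern such as *.net does not, because the asterisk cannot span the dot in www.mydomain.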

Wide IPs and pools


A wide IP must contain at least one pool, which must contain at least one
pool member. This hierarchal configuration allows Global Traffic Manager
to load balance connection requests for a wide IP at two levels: first, the
connection is load balanced across the pools assigned to the wide IP;
second, the connection is load balanced across the pool members within the
given pool.

Tip
You can assign the same pool to multiple wide IPs.

Load balancing methods and pool order


Certain load balancing methods within Global Traffic Manager select pools
based on the order in which they are listed in the wide IP. For example, the
load balancing method, Global Availability, instructs Global Traffic
Manager to select the first pool in the wide IP until it becomes unavailable,
at which point it selects the next pool until the first pool becomes available
again.
If you use a load balancing method that selects pools based on the order in
which they are listed in a wide IP, you may want to change the order in
which the pools are listed in the Pools List. When you organize your pools
in conjunction with these load balancing methods, you can ensure that your
most robust pool always receives resolution requests, while the other pools
act as backups in case the primary pool becomes unavailable.

Load balancing methods and pool weight


One of the load balancing methods that Global Traffic Manager supports is
the Ratio mode. This mode instructs the system to load balance network
requests based on the weights assigned to a specific resource. If you use the
Ratio mode to load balance across pools in a wide IP, you must assign
weights to those pools. A weight is a value assigned to a resource, such as a
pool, that Global Traffic Manager uses to determine the frequency at which
the resource receives connection requests. Global Traffic Manager selects a
resource based on the weight of that resource as a percentage of the total of
all weights in that resource group.
To illustrate the use of weights in connection load balancing, consider the
fictional company SiteRequest. One of SiteRequest’s wide IPs,
www.siterequest.com, uses the Ratio load balancing mode and contains
three pools, with the following weight assignments:
• Pool 1: weight 50
• Pool 2: weight 25
• Pool 3: weight 25

Notice that the total of all the weights in this wide IP is 100. Each time
Global Traffic Manager selects this wide IP, it load balances across all three
pools. Over time, the load balancing statistics for this wide IP appear as
follows:
• Pool 1: selected 50 percent of the time
• Pool 2: selected 25 percent of the time
• Pool 3: selected 25 percent of the time

This pattern exists because the weight value, 50, is 50 percent of the total
weight for all pools, while the weight value, 25, is 25 percent of the total.

Incorporating iRules
An iRule is a set of one or more Tcl-based expressions that you can use with
wide IPs to customize how Global Traffic Manager handles network
connection requests.
You can add or remove an iRule to a wide IP at any time. When you add an
iRule to a wide IP, Global Traffic Manager uses the iRule to determine how
to load balance name resolution requests. Removing an iRule does not
delete it from Global Traffic Manager; you can still access the iRule by
clicking iRules under Global Traffic on the Main tab.
You can also customize a wide IP using more than one iRule. For example,
a wide IP might have an iRule that focuses on the geographical source of the
name resolution request, and another that focuses on redirecting specific
requests to a different wide IP. If you assign more than one iRule to a wide
IP, Global Traffic Manager applies iRules® in the order in which they are
listed in the iRules List for the wide IP.
You can change the order in which Global Traffic Manager applies iRules to
network connection requests at any time.

NoError response for IPv6 resolution


In networks that use IPv6 addresses, a system receiving a Domain Name
System (DNS) request for a zone is required to send a specific response,
called a NoError response, any time it receives an IPv6 request for a zone
that does not contain a corresponding AAAA record. After receiving this
response, the client making the request can re-send the request for an
equivalent IPv4 A record instead. Using the NoError response allows the
client to send the equivalent request sooner and receive the name resolution
faster.
By default, Global Traffic Manager does not send a NoError response
when it does not have a AAAA record for a given zone. However, you can
enable this response on a per-wide IP basis.

Distributed applications
A distributed application is a collection of wide IPs that serves as a single
application to a site visitor. Within Global Traffic Manager, distributed
applications provide you with several advantages:
◆ You can organize logical network components into groups that represent
the business environment for which these components were designed.
◆ You can configure a distributed application so that it is dependent on a
physical component of your network, such as a data center, server, or
link. If this physical component becomes unavailable, Global Traffic
Manager flags the distributed application as unavailable as well. These
dependencies ensure that a user cannot access a distributed application
that does not have all of its resources available.
◆ You can define persistence for a distributed application, ensuring that a
user accessing the distributed application uses the same network
resources until they end their session.

When you create a distributed application, you name it and add at least one
wide IP. You can also configure the distributed application so that its
availability depends on the availability of specific servers, virtual servers, or
data centers. Additionally, you can configure whether the system routes
requests coming from the same source during a specific time period to the
same pool, or to a different pool. This is known as persistence.

Dependencies for distributed applications


When you create a distributed application on Global Traffic Manager, the
system acquires information about the data centers, servers, and links that
make up the application, including the status of each of these components.
You have the option of setting the status of the distributed application to be
dependent upon the status of one of these types of components. For
example, when you configure the distributed application for server
dependency, and a specified server becomes unavailable, Global Traffic
Manager considers the distributed application to be unavailable as well.
The following examples illustrate how dependencies can affect the
availability of a given distributed application. These examples involve the
fictional company SiteRequest. This company has a distributed application
that consists of two wide IPs: www.siterequest.com and
downloads.siterequest.com. The company also has data centers in New
York, Paris, and Tokyo, each of which provides resources that the
distributed application can access. In each example, a lightning storm
caused the New York data center to lose power. Although the emergency
power starts immediately, one of the wide IPs, one of the virtual servers, and
one of the Internet links used by the application are offline, and thus
unavailable.

The Logical Network

◆ Example 1: Data Center Dependency


If the application uses data center dependency, Global Traffic Manager
considers the entire data center to be unavailable to the application, even
if other virtual servers for the application remain available at the data
center. Other connection requests, independent of the application, can
still be sent to the data center.
◆ Example 2: Server Dependency Level
If the application uses server dependency, Global Traffic Manager
considers the server hosting the virtual server to be unavailable to the
application, even if other virtual servers on that server are online. Other
connection requests, independent of the application, can still be sent to
the server.
◆ Example 3: Link Dependency Level
If the application uses link dependency, Global Traffic Manager
considers all resources for the application that use that link to be
unavailable to the application. Other connection requests, independent of
the application, can still be sent to these resources through other links.
◆ Example 4: Wide IP Dependency Level
If the application uses wide IP dependency, Global Traffic Manager
considers all wide IPs that host that application to be unavailable, even if
only one of the wide IPs is unavailable. Other connection requests,
independent of the application, can still be sent to the data center.

Note

You do not have to set a dependency for a distributed application. You can
accept the default value of None. If you do not set a dependency, then
Global Traffic Manager considers the application available as long as there
is at least one wide IP to which it can load balance a name resolution
request.
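The dependency logic in the examples and note above can be sketched as follows. Component names, statuses, and the status table are illustrative only:

```python
# Sketch of distributed application dependencies: the application's
# availability is tied to whichever component type you choose as the
# dependency level (or to nothing, if the dependency is None).

COMPONENT_STATUS = {
    "datacenter": {"new-york": "down", "paris": "up", "tokyo": "up"},
    "server":     {"ny-web-1": "down", "paris-web-1": "up"},
    "link":       {"ny-isp-a": "down", "paris-isp-a": "up"},
}

def app_available(dependency, components):
    """dependency: None, 'datacenter', 'server', or 'link'.
    components: names of that component type the application depends on."""
    if dependency is None:
        # Default: available as long as load balancing is possible at all.
        return True
    statuses = COMPONENT_STATUS[dependency]
    # One unavailable dependency makes the whole application unavailable.
    return all(statuses[name] == "up" for name in components)
```

With a data center dependency on New York, the lightning-storm scenario above marks the whole application unavailable; with no dependency, it remains available.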

Distributed application traffic


Distributed applications often consist of many data centers, servers, and
links. You might find that you need to remove a given physical component
without interrupting access to the application. For example, you might want
to take a server down to update it, yet do not want its absence to affect the
application. To accommodate this and similar situations, Global Traffic
Manager provides options so you can enable and disable distributed
application traffic for a specific physical component on the network.

Note

When you add a physical component to a distributed application, by default,
distributed application traffic is enabled for that component.


Persistent connections
Many distributed applications require that users access a single set of
resources until they complete their transaction. For example, customers
purchasing a product online might need to remain with the same data center
until they finish their order. In the context of Global Traffic Manager, this
requirement is called persistence. Persistence is the state in which a user of
the system remains with the same set of resources until the user closes the
connection.
When you enable persistence for a distributed application, and an LDNS
makes repetitive requests on behalf of a client, the system reconnects the
client to the same resource to which it was connected for previous requests.
For persistence to work correctly for a distributed application, you must also
specify a dependency level. This is because a connection to the distributed
application persists to the dependency object you specify (that is, the
specified wide IP, server, data center, or link), and not to a specific pool
member.
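The behavior described above can be sketched as a simple persistence table keyed by LDNS address. The class, time-to-live value, and resource names are illustrative, not GTM's actual implementation:

```python
# Sketch of persistence: repeated requests from the same LDNS within the
# persistence window return the previously chosen resource at the
# configured dependency level, instead of a fresh load balancing decision.

import time

class PersistenceTable:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}  # ldns_ip -> (resource, expiry)

    def lookup(self, ldns_ip, choose_resource, now=None):
        now = time.monotonic() if now is None else now
        entry = self.entries.get(ldns_ip)
        if entry and entry[1] > now:
            return entry[0]           # Same resource as previous requests.
        resource = choose_resource()  # Normal load balancing decision.
        self.entries[ldns_ip] = (resource, now + self.ttl)
        return resource

table = PersistenceTable(ttl_seconds=3600)
first = table.lookup("192.0.2.1", lambda: "datacenter-paris", now=0)
# Ten seconds later the same LDNS asks again; the cached answer wins even
# though a fresh decision would have picked a different data center.
second = table.lookup("192.0.2.1", lambda: "datacenter-tokyo", now=10)
```

This is also why a dependency level is required: the cached value is the dependency object (wide IP, server, data center, or link), not an individual pool member.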

7
Load Balancing

• About load balancing and Global Traffic Manager

• Static load balancing modes

• Dynamic load balancing modes

• Fallback load balancing method

• Additional load balancing options



About load balancing and Global Traffic Manager


Global Traffic Manager™ provides a tiered load balancing system in
which load balancing occurs at more than one point during the name
resolution request process. The tiers within Global Traffic Manager are as
follows:
◆ Wide IP-level load balancing
A wide IP contains two or more pools. Global Traffic Manager load
balances requests, first to a specific pool.
◆ Pool-level load balancing
A pool contains one or more virtual servers. After Global Traffic
Manager uses wide IP-level load balancing to select the best available
pool, it uses pool-level load balancing to select a virtual server within
that pool.
When Global Traffic Manager receives a name resolution request, the
system employs a load balancing mode to determine the best available
virtual server to which to send the request. If the first virtual server within a
pool is unavailable, Global Traffic Manager selects the next best virtual
server based on the load balancing method assigned to that pool. To help
you understand how load balancing works, we characterize the available
load balancing modes as either static or dynamic load balancing modes.
• Static load balancing mode
Global Traffic Manager selects a virtual server based on a pre-defined
pattern.
• Dynamic load balancing mode
Global Traffic Manager selects a virtual server based on current
performance metrics.
You assign a load balancing mode to a pool by making a selection from each
of the three load balancing method lists:
• Preferred
You can select either a static or a dynamic load balancing mode from this
list.
• Alternate
You can select only a static load balancing mode from this list, because
dynamic load balancing modes, by definition, rely on metrics collected
from different resources. If the preferred load balancing mode does not
return a valid resource, it is likely that Global Traffic Manager was
unable to acquire the proper metrics to perform the load balancing
operation. By limiting the alternate load balancing method to static load
balancing modes only, Global Traffic Manager can better ensure that,
should the preferred method prove unsuccessful, the alternate method
returns a valid result.
• Fallback
You can select either a static or a dynamic load balancing mode from this
list.


Global Traffic Manager attempts to load balance a name resolution request
using the preferred load balancing method first. If the preferred method fails
to provide a valid resource, the system uses the alternate method. Should the
alternate method also fail to provide a valid resource, the system uses the
fallback method. If all of the load balancing methods that are configured for
a pool fail, then the request fails, or the system falls back to DNS. After
Global Traffic Manager identifies a virtual server, it constructs a Domain
Name System (DNS) answer and sends that answer back to the requesting
client’s local DNS server (LDNS). The DNS answer, or resource record,
can be an A record or a AAAA record that contains the IP address of the
virtual server, or a CNAME record that contains the canonical name for a
DNS zone.
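The preferred → alternate → fallback sequence can be sketched as follows. Each "method" here is simply a callable that returns a virtual server or None; the names are illustrative:

```python
# Sketch of the tiered method selection described above: try each
# configured load balancing method in order until one returns a valid
# resource; if all fail, the request fails (or falls back to DNS).

def load_balance(preferred, alternate, fallback):
    for method in (preferred, alternate, fallback):
        if method is None:
            continue  # Method slot not configured.
        result = method()
        if result is not None:
            return result
    return None  # No method produced a resource.

# Preferred (dynamic) cannot produce a result, e.g. metrics are missing:
preferred = lambda: None
# Alternate (static) succeeds, so fallback is never consulted:
alternate = lambda: "vs-paris-1"
fallback = lambda: "vs-ny-1"
```

This ordering is also why the alternate slot is restricted to static modes: a static mode needs no collected metrics, so it can succeed precisely when the dynamic preferred mode cannot.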
Table 7.1 shows a list of the supported static load balancing modes. Table
7.2 shows a list of the supported dynamic load balancing modes. Both tables
indicate where you can use each mode in the Global Traffic Manager
configuration.

Load balancing mode (static)   Use for wide IP   Use for preferred   Use for alternate   Use for fallback
                               load balancing    method              method              method

Drop Packet                                      X                   X                   X
Fallback IP                                      X                   X                   X
Global Availability            X                 X                   X                   X
None                                                                 X                   X
Ratio                          X                 X                   X                   X
Return to DNS                                    X                   X                   X
Round Robin                    X                 X                   X                   X
Static Persist                                   X                   X                   X
Topology                       X                 X                   X                   X

Table 7.1 Static load balancing mode usage

Load balancing mode (dynamic)  Use for wide IP   Use for preferred   Use for alternate   Use for fallback
                               load balancing    method              method              method

Completion Rate                                  X                                       X
CPU                                              X                                       X
Hops                                             X                                       X
Kilobytes/Second                                 X                                       X
Least Connections                                X                                       X
Packet Rate                                      X                   X                   X
Quality of Service                               X                                       X
Round Trip Time                                  X                                       X
Virtual Server Score                             X                   X                   X
VS Capacity                                      X                   X                   X

Table 7.2 Dynamic load balancing mode usage

Static load balancing modes


Static load balancing modes distribute connections across the network
according to predefined patterns, and take server availability into account.
Global Traffic Manager supports the following static load balancing modes:
• Drop Packet
• Fallback IP
• Global Availability
• None
• Ratio
• Return to DNS
• Round Robin
• Static Persist
• Topology

The None and Return to DNS modes are special modes that you can use to
skip load balancing under certain conditions. The other static load balancing
modes perform true load balancing.

Drop Packet mode


When you choose the Drop Packet mode, Global Traffic Manager does
nothing with the packet, and simply drops the request. Note that if you do
not want Global Traffic Manager to return an address that is potentially
unavailable, F5 Networks recommends that you select Drop Packet from
the Alternate load balancing method list.


Fallback IP mode
When you choose the Fallback IP mode, Global Traffic Manager answers a
query by returning the IP address that you specify as the fallback IP. The IP
address that you specify is not monitored for availability before being
returned as an answer. When you use the Fallback IP mode, you can specify
that Global Traffic Manager return a disaster recovery site when no load
balancing mode returns an available virtual server. F5 Networks
recommends that you use the Fallback IP mode only for the fallback load
balancing method. Global Traffic Manager uses the fallback method only
when the preferred and alternate methods do not provide at least one virtual
server to return as an answer to a query.

Global Availability mode


The Global Availability mode uses the virtual servers included in the pool in
the order in which they are listed. For each connection request, this mode
starts at the top of the list and sends the connection to the first available
virtual server in the list. Only when the current virtual server is full or
otherwise unavailable does the Global Availability mode move to the next
virtual server in the list. Over time, the first virtual server in the list receives
the most connections and the last virtual server in the list receives the least
number of connections.
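The Global Availability selection can be sketched as a walk down the administrator-ordered list. The server names and status table are illustrative:

```python
# Sketch of Global Availability: return the first available virtual
# server in the administrator-ordered list. The first entry absorbs all
# traffic until it becomes unavailable; only then does the next entry
# receive connections.

def global_availability(ordered_servers, status):
    for vs in ordered_servers:
        if status.get(vs) == "up":
            return vs
    return None  # No server in the list is available.

status = {"vs-primary": "down", "vs-secondary": "up", "vs-tertiary": "up"}
picked = global_availability(["vs-primary", "vs-secondary", "vs-tertiary"],
                             status)
```

This makes Global Availability a natural fit for active/standby designs, where standby resources should receive traffic only when everything ahead of them in the list is down.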

None mode
The None mode is a special mode you can use if you want to skip the
current load balancing method, or skip to the next pool in a multiple pool
configuration. For example, if you set an alternate method to None in a pool,
Global Traffic Manager skips the alternate method and immediately tries the
mode specified as the fallback method. If the fallback method is set to None,
and you have multiple pools configured, Global Traffic Manager uses the
next available pool. If all pools become unavailable, Global Traffic Manager
returns an aggregate of the IP addresses of all pool members using BIND.

Tip
If you do not want Global Traffic Manager to return multiple addresses that
are potentially unavailable, F5 Networks recommends that you set the
alternate method to Drop Packet.

You can also use this mode to limit each pool to a single load balancing
mode. For example, you can set the preferred method in each pool to the
desired mode, and then you can set both the alternate and fallback methods
to None in each pool. If the preferred method fails, the None value for both
the alternate and fallback methods forces Global Traffic Manager to go to
the next pool for a load balancing answer.


Ratio mode
The Ratio mode distributes connections among a pool of virtual servers as a
weighted round robin. Weighted round robin refers to a load balancing
pattern in which Global Traffic Manager rotates connection requests among
several resources based on a priority level, or weight, assigned to each
resource. For example, you can configure the Ratio mode to send twice as
many connections to a fast, new server, and only half as many connections
to an older, slower server.
The Ratio mode requires that you define a ratio weight for each virtual
server in a pool, or for each pool if you are load balancing requests among
multiple pools. The default ratio weight for a server or a pool is set to 1.

Return to DNS mode


The Return to DNS mode immediately returns connection requests to the LDNS
for resolution. This mode is particularly useful if you want to temporarily
remove a pool from service, or if you want to limit a pool in a single pool
configuration to only one or two load balancing attempts.

Round Robin mode


The Round Robin mode distributes connections in a circular and sequential
pattern among the virtual servers in a pool. Over time, each virtual server
receives an equal number of connections.

Static Persist mode


The Static Persist mode uses the persist mask with the source IP address of
the LDNS in a deterministic algorithm to map to a specific pool member
(virtual server) in a pool. Like the Global Availability mode, the Static
Persist mode resolves to the first available pool member; however, the list of
pool members is ordered in a significantly different manner. With the
Global Availability mode, a system administrator manually configures the
order of the members in the list. With the Static Persist mode, Global Traffic
Manager uses a hash algorithm to determine the order of the members in the
list.
This hash algorithm orders the pool members in the list differently for each
LDNS that is passing traffic to the system, taking into account the specified
CIDR of the LDNS. Thus, while each LDNS (and thus each client)
generally resolves to the same virtual server, the Global Traffic Manager
system distributes traffic across all of the virtual servers.
When the selected virtual server becomes unavailable, the system resolves
requests to another virtual server. When the original virtual server becomes
available again, the system resolves requests to that virtual server.
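The hash-ordered list behavior can be sketched as follows. The masking uses the configured CIDR prefix of the LDNS; the particular hash function and addresses below are illustrative, not GTM's actual algorithm:

```python
# Sketch of Static Persist: mask the LDNS address with the configured
# CIDR prefix, then order the pool by a hash of (masked LDNS, member).
# The same LDNS (or any LDNS in the same prefix) always produces the
# same ordering, so it keeps resolving to the same first available
# member, while different prefixes spread across all members.

import hashlib
import ipaddress

def persist_order(ldns_ip, prefix_len, members):
    network = ipaddress.ip_network(f"{ldns_ip}/{prefix_len}", strict=False)
    masked = str(network.network_address)
    def score(member):
        return hashlib.sha256(f"{masked}|{member}".encode()).hexdigest()
    return sorted(members, key=score)

def static_persist(ldns_ip, prefix_len, members, status):
    # Like Global Availability, but over the hash-ordered list: if the
    # first member is down, the next in the ordering takes over, and the
    # original resumes when it comes back up.
    for vs in persist_order(ldns_ip, prefix_len, members):
        if status.get(vs) == "up":
            return vs
    return None

members = ["vs-a", "vs-b", "vs-c"]
status = {vs: "up" for vs in members}
```

Two LDNS addresses in the same /24 therefore resolve to the same member, which is the persistence property the mode is named for.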


Topology mode
The Topology mode allows you to direct or restrict traffic flow by adding
topology records to a topology statement in the configuration file. When you
use the Topology mode, you can develop proximity-based load balancing.
For example, a client request in a particular geographic region can be
directed to a data center or server within that same region. Global Traffic
Manager determines the proximity of servers by comparing location
information derived from the DNS message to the topology records.
This load balancing mode requires you to do some advanced configuration
planning, such as gathering the information you need to define the topology
records. Global Traffic Manager contains an IP classifier that accurately
maps the LDNS, so when you create topology records, you can refer to
continents and countries, instead of IP subnets.
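Topology-record matching can be sketched as a scored lookup from request source to destination. The records, regions, and scores below are invented for the example, and the region lookup stands in for GTM's built-in IP classifier:

```python
# Sketch of topology-based selection: match the LDNS's region against
# topology records that pair a request source with a destination and a
# score, and prefer the highest-scoring available destination.

# Hypothetical topology records: (ldns_region, datacenter, score)
TOPOLOGY_RECORDS = [
    ("Europe", "paris", 100),
    ("North America", "new-york", 100),
    ("Asia", "tokyo", 100),
    # Lower-score catch-all so unmatched regions still get an answer:
    ("Europe", "new-york", 10),
]

def pick_datacenter(ldns_region, available_dcs):
    best, best_score = None, -1
    for region, dc, score in TOPOLOGY_RECORDS:
        if region == ldns_region and dc in available_dcs and score > best_score:
            best, best_score = dc, score
    return best
```

The lower-scored catch-all record shows why scores matter: it only wins when the preferred regional data center is unavailable.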

Dynamic load balancing modes


Dynamic load balancing modes distribute connections to servers that show
the best current performance. The performance metrics taken into account
depend on the particular dynamic mode you are using.
All dynamic modes make load balancing decisions based on the metrics
collected by the big3d agents running in each data center. The big3d agents
collect the information at set intervals that you define when you set the
global timer variables. If you want to use the dynamic load balancing
modes, you must run one or more big3d agents in each of your data centers,
to collect the required metrics.

Completion Rate mode


The Completion Rate mode selects the virtual server that currently
maintains the least number of dropped or timed-out packets during a
transaction between a data center and the client’s LDNS.

CPU mode
The CPU mode selects the virtual server that currently has the most
CPU processing time available to handle name resolution requests.


Hops mode
The Hops mode is based on the traceroute utility, and tracks the number of
intermediate system transitions (router hops) between a client’s LDNS and
each data center. Hops mode selects a virtual server in the data center that
has the fewest router hops from the LDNS.

Kilobytes/Second mode
The Kilobytes/Second mode selects the virtual server that is currently
processing the fewest number of kilobytes per second. You can use this load
balancing mode only with servers for which Global Traffic Manager can
collect the kilobytes per second metric.

Least Connections mode


The Least Connections mode is used for load balancing to virtual servers
managed by a load balancing server, such as a Local Traffic Manager™
server. The Least Connections mode simply selects a virtual server on the
Local Traffic Manager system that currently hosts the fewest connections.

Packet Rate mode


The Packet Rate mode selects a virtual server that is currently processing the
fewest number of packets per second.

Quality of Service mode


The Quality of Service mode uses current performance information to
calculate an overall score for each virtual server, and then distributes
connections based on those scores. The performance factors that Global
Traffic Manager takes into account include:
• Round Trip Time (RTT)
• Completion Rate
• Packet Rate
• Hops
• Virtual Server Score
• Topology
• Link Capacity
• VS Capacity
• Kilobytes/Second


The Quality of Service mode is a customizable load balancing mode. For
simple configurations, you can easily use this mode with its default settings.
For more advanced configurations, you can specify different weights for
each performance factor in the equation.
You can also configure the Quality of Service mode to use the dynamic ratio
feature. When you activate the dynamic ratio feature, the Quality of Service
mode functions similarly to the Ratio mode; the connections are distributed
in proportion to ratio weights assigned to each virtual server. The ratio
weights are based on the QOS scores: the better the score, the higher
percentage of connections the virtual server receives.
When Global Traffic Manager selects a virtual server, it chooses the server
with the best overall score. In the event that one or more resources has an
identical score based on the Quality of Service criteria, Global Traffic
Manager load balances connections between those resources using the
Round Robin mode. If the system cannot determine a Quality of Service
score, it load balances connections across all pool members using the Round
Robin mode, as well.
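A Quality of Service style score can be sketched as a weighted sum of per-server metrics. The weights, metric names, and values below are illustrative only; they are not GTM's actual QOS coefficients or formula:

```python
# Sketch of a Quality of Service style score: combine several metrics
# into one number per virtual server, weighting each factor, and pick
# the server with the best overall score. Lower-is-better metrics
# (round trip time, hops, packet rate) are inverted so that a higher
# score is always better.

WEIGHTS = {"rtt": 50, "completion_rate": 5, "hops": 64, "packet_rate": 1}

def qos_score(metrics):
    score = 0.0
    # Lower round trip time, fewer hops, and a lower packet rate are
    # better, so take their reciprocals (guarding against zero).
    score += WEIGHTS["rtt"] / max(metrics["rtt"], 1)
    score += WEIGHTS["hops"] / max(metrics["hops"], 1)
    score += WEIGHTS["packet_rate"] / max(metrics["packet_rate"], 1)
    # A higher completion rate is better, so weight it directly.
    score += WEIGHTS["completion_rate"] * metrics["completion_rate"]
    return score

servers = {
    "vs-paris": {"rtt": 50,  "hops": 8,  "completion_rate": 0.99, "packet_rate": 10},
    "vs-tokyo": {"rtt": 200, "hops": 16, "completion_rate": 0.95, "packet_rate": 10},
}
best = max(servers, key=lambda vs: qos_score(servers[vs]))
```

Raising a factor's weight makes that factor dominate the decision, which is the "advanced configuration" the section describes; with the dynamic ratio feature, the scores would become ratio weights instead of a winner-takes-all pick.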

Round Trip Time mode


The Round Trip Time (RTT) mode selects the virtual server with the fastest
measured round trip time between a data center and a client’s LDNS.

Virtual Server Score mode


The Virtual Server Score mode instructs Global Traffic Manager to assign
connection requests to virtual servers based on a user-defined ranking
system. This load balancing mode is available only for managing
connections between virtual servers controlled by Local Traffic Manager
systems.
Unlike other settings that affect load balancing operations, you cannot
assign a virtual server score to a virtual server through Global Traffic
Manager. Instead, you assign this setting through the Local Traffic Manager
system that is responsible for the virtual server. For more information, see
the F5 DevCentral web site: http://devcentral.f5.com.

VS Capacity mode
The VS Capacity mode creates a list of the virtual servers, weighted by
capacity, then picks one of the virtual servers from the list. The virtual
servers with the greatest capacity are picked most often, but over time all
virtual servers are returned. If more than one virtual server has the same
capacity, then Global Traffic Manager load balances using the Round Robin
mode among those virtual servers.


Dynamic Ratio option


The dynamic load balancing modes also support the Dynamic Ratio option.
When you activate this option, Global Traffic Manager treats dynamic load
balancing values as ratios, and it uses each server in proportion to the ratio
determined by this option. When the Dynamic Ratio option is disabled,
Global Traffic Manager uses only the server with the best result based on
the dynamic load balancing mode you implemented (in which case it is a
winner-takes-all situation), until the metrics information is refreshed.
To illustrate how the Dynamic Ratio setting works, consider a pool,
primaryOne, that contains several pool members. This pool is configured
so that Global Traffic Manager load balances name resolution requests
based on the Round Trip Time mode. The primaryOne pool contains two
pool members: memberOne and memberTwo. For this example, Global
Traffic Manager determines that the round trip time for memberOne is 50
microseconds, while the round trip time for memberTwo is 100
microseconds.
If the primaryOne pool has the Dynamic Ratio setting disabled (the
default setting), Global Traffic Manager always load balances to the pool
with the best value. In this case, this results in requests going to
memberOne, because it has the lowest round trip time value.
If the primaryOne pool has the Dynamic Ratio setting enabled, however,
Global Traffic Manager treats the round trip time values as ratios and divides
requests among pool members based on these ratios. In this case, this results
in memberOne getting twice as many connections as memberTwo,
because the round trip time for memberOne is half the round trip time for
memberTwo. Note that, with the Dynamic Ratio option enabled,
both pool members are employed to handle connections, while if the option
is disabled, only one pool member receives connections.
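The example above can be worked through numerically. The inverse-proportional weighting is a sketch of the idea, not GTM's exact arithmetic:

```python
# Sketch of the Dynamic Ratio option using the primaryOne example:
# round trip times become inverse-proportional weights, so memberOne
# (50 microseconds) receives roughly twice the requests of memberTwo
# (100 microseconds) instead of receiving all of them.

def dynamic_ratio_weights(rtt_by_member):
    """Map each member to a weight proportional to 1 / round trip time."""
    inverse = {m: 1.0 / rtt for m, rtt in rtt_by_member.items()}
    total = sum(inverse.values())
    return {m: inv / total for m, inv in inverse.items()}

weights = dynamic_ratio_weights({"memberOne": 50, "memberTwo": 100})
# memberOne gets 2/3 of the requests, memberTwo gets 1/3: a 2-to-1 split.
```

With the option disabled, the same metrics would instead send every request to memberOne until the metrics refresh.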


Fallback load balancing method


The fallback method is unique among the three load balancing methods that
you can apply to a pool. Unlike the preferred and alternate methods, the
fallback method ignores the availability status of a resource. This ensures
that Global Traffic Manager returns a response to the DNS request.
However, you can opt to verify that a virtual server is available even when
the load balancing mode changes to the specified fallback method. To do
this, you enable the Respect Fallback Dependency option on the
System > Configuration > Global Traffic > Load Balancing screen.
Global Traffic Manager contains several options that help you control how
the system responds when using a fallback load balancing setting. These
options allow you to:
• Configure the fallback load balancing method
• Configure the fallback IP load balancing mode


Additional load balancing options


Global Traffic Manager supports additional options that affect how the
system load balances name resolution requests. These options are:
• Ignore path TTL
• Verify virtual server availability

Enabling the Ignore Path TTL option instructs Global Traffic Manager to
use path information gathered during metrics collection even if the
time-to-live value for that information has expired. This option is often used
when you want the system to continue using a dynamic load balancing mode
even if some metrics data is temporarily unavailable, and you want Global
Traffic Manager to use old metric data rather than employ an alternate load
balancing method. This option is disabled by default.
The Verify Virtual Server Availability option instructs Global Traffic
Manager to verify that a virtual server is available before returning it as a
response to a name resolution request. If this option is disabled, the system
responds to a name resolution request with the virtual server’s IP address
regardless of whether the server is up or down. This option is enabled by
default, and is rarely disabled outside of a test or staging environment.

8
Connections

• Connection management

• Resource health

• Resource availability

• Restoration of availability

• Persistent connections

• Last resort pool


Connections

Connection management
When you integrate a Global Traffic Manager™ system into your network,
one of its primary responsibilities is to load balance incoming connection
requests to the virtual server resource that best fits the configuration
parameters you defined. However, load balancing is only one part of
managing connections to your network resources. Additional issues that you
must consider include:
◆ Resource health
Resource health refers to the ability of a given resource to handle
incoming connection requests. For example, the Configuration utility
uses a green circle to identify a resource, such as a wide IP, that has
available pools and virtual servers, while a pool that is down appears as a
red diamond. These visual clues can help you identify connection issues
quickly and efficiently.
◆ Resource availability
Resource availability refers to the settings within the Configuration
utility that you use to control when a resource is available for connection
requests. For example, you can establish limit settings, which instruct
Global Traffic Manager to consider a resource as unavailable when a
statistical threshold (such as CPU usage) is reached.
◆ Restoring availability
When a resource goes offline, Global Traffic Manager immediately
sends incoming connection requests to the next applicable resource.
When you bring that resource online again, you can control how to
restore its availability to Global Traffic Manager, ensuring that
connections are sent to the resource only when it is fully ready to receive
them.
◆ Persisting connections
Certain interactions with your network require that a given user access
the same virtual server resource until their connection is completed. An
example of this situation is an online store, in which you want the user to
access the same virtual server for their shopping cart until they place
their order. With Global Traffic Manager, you can configure your load
balancing operations to take persistent connections into account.
◆ Selecting a last resort pool
Global Traffic Manager includes the ability to create a last resort pool. A
last resort pool is a collection of virtual servers that are not used during
normal load balancing operations. Instead, these virtual servers are held
in reserve unless all other pools for a given wide IP become unavailable.

In addition, it is important to understand what happens when Global Traffic
Manager cannot find an available resource with which to respond to a
connection request.


Resource health
In Global Traffic Manager, resource health refers to the ability of a given
resource to handle incoming connection requests. Global Traffic Manager
determines this health through the use of limit settings, monitors, and
dependencies on other network resources.
The health of a resource is indicated by a status code in the Configuration
utility. A status code is a visual representation of the availability of a given
resource. Global Traffic Manager displays these status codes in the main
screens for a given resource. The types of status codes available for a
resource are:
◆ Blue
A blue status code indicates that the resource has not been checked. This
status often appears when you first add a resource into the Configuration
utility.
◆ Green
A green status code indicates that the resource is available and
operational. Global Traffic Manager uses this resource to manage traffic
as appropriate.
◆ Red
A red status code indicates that the resource did not respond as expected
to a monitor. Global Traffic Manager uses this resource only when two
conditions are met:
• Global Traffic Manager is using the load balancing mode specified in
the Fallback load balancing setting.
• The Fallback load balancing setting for the pool is not None.
◆ Yellow
A yellow status code indicates that the resource is operational, but has
exceeded one of its established bandwidth thresholds. Global Traffic
Manager uses a resource that has a yellow status code only if no other
resource is available.
◆ Black
A black status code indicates that the resource has been manually
disabled and is no longer available for load balancing operations.

As the preceding list illustrates, the health of a resource does not necessarily
impact the availability of that resource. For example, Global Traffic
Manager can select a virtual server that has a red status code.


Resource availability
To load balance effectively, Global Traffic Manager must determine
whether the appropriate resources are available. In the context of the
Global Traffic Manager, availability means that the resource meets one or
more sets of pre-defined requirements. These requirements can be a set of
statistical thresholds, a dependency on another resource, or set of values
returned by a monitoring agent. If a resource fails to meet one or more of
these requirements, Global Traffic Manager considers it unavailable
and attempts to select the next resource based on the load balancing
methodology you defined.
Global Traffic Manager includes three methods of determining resource
availability:
• Limit settings
• Monitor availability requirements
• Virtual server dependencies

Limit settings
One of the methods for determining the availability of a resource is to
establish limit settings. A limit setting is a threshold for a particular statistic
associated with a system.
Global Traffic Manager supports the following limit settings:
• Kilobytes
• Packets
• Total Connections

For BIG-IP systems, Global Traffic Manager also supports a Connections
limit setting.
For hosts, Global Traffic Manager also supports CPU and Memory limit
settings.
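The threshold behavior described above can be sketched as a simple comparison of current statistics against each configured limit. The following Python sketch is illustrative only; the metric names and data shapes are assumptions, not the product's internal implementation.

```python
# Illustrative sketch of limit settings: a resource stays available only
# while none of its configured statistical thresholds is exceeded.
# Metric names (kilobytes, packets, connections) mirror the settings
# listed above but are assumed shapes, not GTM internals.
def is_within_limits(stats, limits):
    """Return True if no configured limit threshold is exceeded."""
    for metric, threshold in limits.items():
        # A single exceeded threshold makes the resource unavailable.
        if stats.get(metric, 0) > threshold:
            return False
    return True

# A BIG-IP virtual server might also carry a Connections limit;
# a host might carry CPU and Memory limits.
server_stats = {"kilobytes": 900, "packets": 450, "connections": 80}
server_limits = {"kilobytes": 1000, "connections": 100}
available = is_within_limits(server_stats, server_limits)
```

When a resource fails a check like this, Global Traffic Manager selects the next resource according to the configured load balancing method.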

Monitor availability requirements


Another method for determining the availability of a given resource is
through the use of monitors. A monitor is a software utility that specializes
in a specific metric of a Global Traffic Manager resource. You can
customize monitors to be as specific or as general as needed.
To illustrate the use of monitors to determine the availability of a resource,
consider the fictional company SiteRequest. One of the servers at
SiteRequest’s Paris data center, serverWeb1, contains the main web site
content for the wide IP, www.siterequest.com. To ensure that this server is
available, SiteRequest configures an HTTP monitor within Global Traffic
Manager and assigns it to serverWeb1. This monitor periodically accesses
the server to verify that the main index.html page is available. If the
monitor cannot access the page, it notifies Global Traffic Manager, which
then considers the server unavailable until the monitor is successful.
Monitors provide a robust, customizable means of determining the
availability of a given resource with Global Traffic Manager. You can
control the impact that a set of monitors has on the availability of a
resource.
You can also assign monitors to a specific server. In most cases, when you
assign a monitor to a server, that monitor checks all virtual servers
associated with that server.
An exception to this guideline is the SNMP monitor. If you assign an SNMP
monitor to a Cisco®, Alteon®, Extreme Networks®, Foundry®, or Radware
server, that monitor obtains information on the virtual servers associated
with that server. If you assign the SNMP monitor to any other server type,
that monitor obtains data on the server itself.
In cases where you assign a monitor to a virtual server both directly and to
its parent server, the availability information acquired from the monitor
directly assigned to the virtual server takes precedence over any other data.

Virtual server dependency


Within Global Traffic Manager, you can configure a virtual server to be
dependent on one or more virtual servers. In such a configuration, the virtual
server is available only if all of the resources in its Dependency List are
available as well.
For an example of virtual server dependencies, consider the fictional
company SiteRequest. One of the servers, serverMain, at the Tokyo data
center has two virtual servers: vsContact, which points to the contacts page
of SiteRequest’s web site, and vsMail, which points to their mail system.
The vsContact virtual server has vsMail added in its Dependency List. As
a result, Global Traffic Manager considers the vsContact virtual server
available only if the vsMail virtual server is also available.
You can set dependencies for a virtual server at any time. When you
configure the Dependency List option for a virtual server, Global Traffic
Manager checks each virtual server in the order in which you added it to the
Configuration utility. You can change this order at any time.
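The vsContact and vsMail example above amounts to a recursive "all dependencies available" check. This Python sketch is a minimal illustration under that assumption; the data structures are hypothetical, not GTM's actual objects.

```python
# Sketch of a virtual server Dependency List check: a virtual server is
# available only if it is up itself AND every virtual server in its
# Dependency List is also available. Structures here are illustrative.
availability = {"vsMail": True, "vsContact": True}
dependencies = {"vsContact": ["vsMail"]}  # vsContact depends on vsMail

def effective_availability(name):
    if not availability.get(name, False):
        return False
    # Dependencies are checked in the order they were added to the list.
    return all(effective_availability(dep) for dep in dependencies.get(name, []))

result_when_mail_up = effective_availability("vsContact")
availability["vsMail"] = False  # simulate vsMail going offline
result_when_mail_down = effective_availability("vsContact")
```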


Restoration of availability
When a network resource, such as a virtual server, goes offline, Global
Traffic Manager considers that resource to be unavailable and proceeds to
send name resolution requests to other resources based on the configured
load balancing mode. By default, Global Traffic Manager resumes sending
requests to an offline resource as soon as the resource becomes
available again, provided that the resource meets the appropriate load
balancing requirements.
Under certain circumstances, you might not want Global Traffic Manager to
resume connections to a resource immediately. For example, a server for the
fictional company, SiteRequest, goes offline. Global Traffic Manager
detects that the virtual servers associated with this server are unavailable,
and proceeds to send name resolution requests to other virtual servers as
appropriate. When the server is online again, it must still run several
synchronization processes before it is fully ready to handle name resolution
requests. However, Global Traffic Manager might detect that the server is
available before these processes are complete, and send requests to the
server before that server can handle them.
To avoid this possibility, you can configure pools to use the manual resume
feature. The manual resume feature ensures that Global Traffic Manager
does not load balance requests to a virtual server within a pool until you
manually re-enable it.


Persistent connections
Most load balancing modes divide name resolution requests among
available pools or virtual servers. Each time Global Traffic Manager
receives a request, it sends that request to the most appropriate resource
based on the configuration of your network. For example, when a user visits
a web site, it results in multiple name resolution requests as that user moves
from page to page. Depending on the load balancing mode selected, the
system sends each request to a completely different server, virtual server, or
data center.
In certain circumstances, you might want to ensure that a user remains with
a given set of resources throughout the session. For example, a user
attempting to conduct a transaction through an online bank needs to remain
with the same set of resources to ensure the transaction is completed
successfully.
To ensure that users stay with a specific set of resources, Global Traffic
Manager includes a persistence option. The persistence option instructs the
system to send a user to the same set of resources until a specified period of
time has elapsed.
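The persistence behavior described above can be pictured as a table that maps each requesting local DNS server to a resource until a timer expires. The following sketch is illustrative; the TTL value and function names are assumptions, not product defaults.

```python
import time

# Sketch of persistence: the same requester keeps receiving the same
# resource until a specified period of time has elapsed.
PERSISTENCE_TTL = 3600  # seconds; an assumed value, not a product default

_persistence = {}  # maps requesting LDNS address -> (resource, expiry)

def resolve(ldns_ip, pick_resource, now=None):
    now = time.time() if now is None else now
    entry = _persistence.get(ldns_ip)
    if entry and entry[1] > now:
        return entry[0]              # still within the persistence window
    resource = pick_resource()       # normal load balancing decision
    _persistence[ldns_ip] = (resource, now + PERSISTENCE_TTL)
    return resource

first = resolve("192.0.2.10", lambda: "poolA", now=0)
repeat = resolve("192.0.2.10", lambda: "poolB", now=100)    # sticks to poolA
expired = resolve("192.0.2.10", lambda: "poolB", now=4000)  # window elapsed
```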

Drain persistent requests option


If you elect to use persistent connections with a load balancing mode, you
must decide how to handle connection requests when you need to take a
specific pool of virtual servers offline. By default, Global Traffic Manager
immediately sends connection requests to other pools when you take that
pool offline, even if persistent connections are enabled. In some situations,
this behavior might not be desirable. For example, consider an online store.
You might need to take a pool of virtual servers for this store offline;
however, you do not want to interrupt shoppers currently purchasing any
products. In this situation, you want to drain persistent requests.
Draining requests refers to allowing existing sessions to continue accessing
a specific set of resources while disallowing new connections. In Global
Traffic Manager, you configure this capability through the Drain Persistent
Requests option. This option applies only when you manually disable the
pool. It does not apply when the pool goes offline for any other reason.


Last resort pool


When Global Traffic Manager load balances name resolution requests, it
considers any pool associated with a given wide IP as a potential resource.
You can, however, modify this behavior by creating a last resort pool. A last
resort pool is a pool of virtual servers to which the system sends connection
requests in the event that all other pools are unavailable.
It is important to remember that any pool you assign as the last resort pool is
not a part of the normal load balancing operations of Global Traffic
Manager. Instead, this pool is kept in reserve. The system uses the resources
included in this pool only if no other resources are available to handle the
name resolution request.
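The reserve behavior above reduces to a simple rule: load balance among ordinary pools while any is available, and fall back to the last resort pool only when none is. This sketch illustrates that rule with assumed data structures.

```python
# Sketch of last resort pool selection: the last resort pool is kept out
# of normal load balancing and used only when all other pools are
# unavailable. Structures are illustrative, not GTM internals.
def choose_pool(pools, last_resort):
    available = [p for p in pools if p["available"]]
    if available:
        return available[0]["name"]  # normal load balancing among these
    return last_resort["name"]       # reserve pool used only as a fallback

pools = [{"name": "poolA", "available": False},
         {"name": "poolB", "available": True}]
last_resort = {"name": "poolReserve"}
chosen = choose_pool(pools, last_resort)
```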

9
Topologies

• Introduction

• IP geolocation data updates

• Topology records

• Topology load balancing


Topologies

Introduction
As the name implies, Global Traffic Manager™ handles name resolution
requests at an international level. You can use topologies to load balance
these requests. A topology is a set of characteristics that identifies the origin
of a given name resolution request. In Global Traffic Manager, topologies
belong to one of several categories, including:
• Continent
• Country
• IP Subnet
• ISP
• Region
• State
A region is a customized collection of topologies. For example, you can
create a topology for Denmark, Iceland, Finland, Norway, and Sweden.
These topologies can compose a custom region called Scandinavia.
Through topologies, you can instruct Global Traffic Manager to select a data
center or resource based on its physical proximity to the client making the
name resolution request. This process helps ensure that name resolution
requests are answered and managed in the fastest possible time.
You can also instruct Global Traffic Manager to use topologies to load
balance name resolution requests across pools at the wide IP level, and
across virtual servers at the pool level.
To better understand topologies, consider the fictional company,
SiteRequest, which allows its customers to download applications from its
web site. SiteRequest has three data centers: New York, Paris, and Tokyo.
To ensure that customers can download their purchased application as
quickly as possible, the IT department has decided to create topologies with
which to load balance name resolution requests.
The New York data center is chosen as the designated data center for any
name resolution requests originating in the western hemisphere. To ensure
that these requests go only to the New York data center, the IT department
first creates a custom region, called Western Hemisphere, that contains the
continents North America and South America. With this custom region
created, the next step is to create a topology record for Global Traffic
Manager. A topology record is a statement that tells Global Traffic Manager
how to handle name resolution requests based on topologies. In this case,
the IT department creates the record as follows:
• Request Source: Region is Western Hemisphere
• Destination Source: Data Center is New York
• Weight: 10

The final step to implement this topology is to configure the pools in the
corresponding wide IP, www.siterequest.com, to use topology load
balancing.


IP geolocation data updates


Global Traffic Manager uses an IP geolocation database to determine the
origin of a name resolution request. The default database provides
geolocation data for IPv4 addresses at the continent, country, state, ISP, and
organization levels. The state-level data is worldwide, and thus includes
designations in other countries that correspond to the U.S. state-level in the
geolocation hierarchy, for example, provinces in Canada. Note that you can
access the ISP and organization-level geolocation data for IPv4 addresses
only using the iRules® whereis command.
The default database also provides geolocation data for IPv6 addresses at the
continent and country levels.

Tip
If you require geolocation data at the city-level, contact your F5 Networks
sales representative to purchase additional database files.

You can download a monthly update to the IP geolocation database from F5
Networks.


Topology records
A topology record has several elements: a request source statement, a
destination statement, an operator, and a weight.
A request source statement defines the origin of a name resolution request.
You can define the origin of a request as a:
• Continent
• Country (based on the ISO 3166 top-level domain codes)
• Internet Service Provider (ISP)
• IP subnet (Classless Inter-Domain Routing [CIDR] format)
• Region (custom)
• State
A destination statement defines the resource to which Global Traffic
Manager directs the name resolution request. The types of resources
available for a destination statement are as follows:
• Continent
• Country (based on the ISO 3166 top-level domain codes)
• Data center
• Internet Service Provider (ISP)
• IP subnet (CIDR format)
• Pool of virtual servers
• Region (custom)
• State
You can select one of two operators for both a request source and a
destination statement. The is operator indicates that the name resolution
request matches the statement. The is not operator indicates that the name
resolution request does not match the statement.
The last element of a topology record, called the topology score or weight,
specifies the weight of the topology record. The system finds the weight of
the first topology record that matches the server object (pool or pool
member) and the LDNS. The system then assigns that weight as the
topology score for that server object. The system load balances to the server
object with the highest topology score. If the system finds no topology
record that matches both the server object and the LDNS, then the system
assigns that server object a zero score.
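The scoring rule described above can be sketched directly: the weight of the first record matching both the LDNS and the server object becomes that object's topology score, the highest score wins, and no match scores zero. The matching predicates below are simplified stand-ins, not GTM's actual matching engine.

```python
# Sketch of topology-record scoring using the SiteRequest example.
records = [
    # (ldns_matches, destination_matches, weight)
    (lambda ldns: ldns["region"] == "Western Hemisphere",
     lambda obj: obj["data_center"] == "New York", 10),
    (lambda ldns: ldns["region"] == "Eastern Hemisphere",
     lambda obj: obj["data_center"] == "Tokyo", 10),
]

def topology_score(ldns, server_object):
    for ldns_match, dest_match, weight in records:
        if ldns_match(ldns) and dest_match(server_object):
            return weight   # first matching record supplies the score
    return 0                # no matching record: zero score

def pick(ldns, server_objects):
    # Load balance to the server object with the highest topology score.
    return max(server_objects, key=lambda obj: topology_score(ldns, obj))

ldns = {"region": "Western Hemisphere"}
objs = [{"name": "poolNY", "data_center": "New York"},
        {"name": "poolTokyo", "data_center": "Tokyo"}]
```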

Note

A group of topology records defined for Global Traffic Manager is referred
to as a topology statement.


Topology load balancing


You can use the Topology mode to load balance and distribute traffic among
the pools in a wide IP. To do this, you must have at least two pools
configured in the wide IP. With topology load balancing, Global Traffic
Manager resolves name resolution requests using the IP addresses of virtual
servers in a specific data center or other resource, based on the origin of the
request.
In addition to setting up the Topology mode to select a pool within a wide
IP, you can also modify the settings to select a virtual server within a pool.
However, you must configure the topology records before Global Traffic
Manager can use the Topology mode.
To further refine the topology load balancing capabilities of Global Traffic
Manager, you can create custom topology regions. Regions allow you to
extend the functionality of your topologies by allowing you to define
specific geographical regions that have meaning for your network.
You create a custom region by adding one or more region member types to
the region member list. The available region member types are:
• Continent
• Country (based on the ISO 3166 top-level domain codes)
• Data center
• Internet Service Provider (ISP)
• IP subnet (CIDR format)
• Pool of virtual servers
• Region (another custom region)
• State
After you select a region member type, you fill in the details about that
region member and add it to the region member list. The region member
options change based on the region member type that you select.

Longest Match load balancing option


Global Traffic Manager supports a Longest Match option that affects how
the system load balances name resolution requests.
The Longest Match option instructs Global Traffic Manager to use the
topology statement that most completely matches the source IP address of
the name resolution request. For example, two topology statements exist:
one that matches a source IP address of 10.0.0.0/8 and one that matches a
source IP address of 10.15.0.0/16. A name resolution request arrives with a
source IP address of 10.15.65.8. With the Longest Match setting enabled,
Global Traffic Manager uses the topology statement with 10.15.0.0/16,
because it has the longest, and therefore, most complete, match. If this
option is disabled, the order of the topology entries as they exist in
/config/gtm/topology.inc is preserved. Global Traffic Manager uses the
first topology entry found that matches both the LDNS and the server
objects. This option is enabled by default.
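The 10.0.0.0/8 versus 10.15.0.0/16 example above is standard longest-prefix matching, which can be sketched with Python's standard ipaddress module. This is an illustration of the selection rule only, not GTM's implementation.

```python
import ipaddress

# Sketch of the Longest Match behavior: among records whose IP subnet
# contains the request's source address, prefer the longest prefix.
subnets = ["10.0.0.0/8", "10.15.0.0/16"]

def longest_match(source_ip, candidates):
    addr = ipaddress.ip_address(source_ip)
    matches = [ipaddress.ip_network(c) for c in candidates
               if addr in ipaddress.ip_network(c)]
    if not matches:
        return None
    # The longest prefix is the most complete match.
    return str(max(matches, key=lambda net: net.prefixlen))

chosen = longest_match("10.15.65.8", subnets)  # the /16 wins over the /8
```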

Note

When you enable the Longest Match option, the system gives priority to the
topology records that contain IP subnet blocks that you defined using the
CIDR format. You can create a region and define an IP subnet using the
CIDR format as a member of that region; however, the system gives a
higher priority to the IP subnet defined in the topology record.

10
DNSSEC Keys and Zones

• About DNSSEC

• DNSSEC keys and zones


DNSSEC Keys and Zones

About DNSSEC
The Domain Name System Security Extensions (DNSSEC) is an
industry-standard protocol that functions as an extension to the Domain
Name System (DNS) protocol. The BIG-IP® Global Traffic Manager™ uses
DNSSEC to guarantee the authenticity of DNS responses to queries and to
return Denial of Existence responses.
You can use the DNSSEC feature of Global Traffic Manager to protect your
network infrastructure from DNS protocol and DNS server attacks such as
spoofing, ID hacking, cache poisoning, and denial of service.

DNSSEC keys and zones


Global Traffic Manager responds to DNS requests to a specific zone by
returning signed nameserver responses based on the currently available
generations of a key. Before you can configure Global Traffic Manager to
handle nameserver responses that are DNSSEC-compliant, you must create
DNSSEC keys and zones.
There are two kinds of DNSSEC keys: zone-signing keys and key-signing
keys. Global Traffic Manager uses a zone-signing key to sign all of the
records in a DNSSEC record set, and a key-signing key to sign only the
DNSKEY record of a DNSSEC record set.
F5 Networks recommends that for emergency rollover purposes, when you
create a key, you create a duplicate version of the key with a similar name,
but do not enable that version. For example, create a key-signing key called
ksk1a that is enabled. Then create a duplicate key, but name it ksk1b, and
change the state to disabled. When you associate both of these keys with the
same zone, you are prepared to easily perform a manual rollover of the key,
if necessary.
In order for Global Traffic Manager to use the keys that you create to sign
requests, you must assign the keys to a zone. DNSSEC zones are containers
that map a domain name to a set of DNSSEC keys that the system uses to
sign DNSSEC-compliant nameserver responses to DNS queries.
When you create a DNSSEC zone, you must assign at least one enabled
zone-signing and one enabled key-signing key to the zone before the Global
Traffic Manager can sign requests to that zone.

Automatic key rollover


To enhance key security, the BIG-IP® system has an automatic key rollover
feature that uses overlapping generations of a key to ensure that the system
can always respond to requests with a signature. The system dynamically
creates new generations of each key based on the values of the Rollover
Period and Expiration Period settings of the key. The first generation of a
key has an ID of 0 (zero). Each time the system dynamically creates a new
generation of the key, the ID increments by 1. When a generation of a key
expires, the system automatically removes that generation of the key from
the configuration.
Figure 10.1 illustrates this, and shows how over time each generation of a
key overlaps the previous generation of the key.

Figure 10.1 Overlapping generations of a key and TTL value

The value that you assign to the TTL (time-to-live) setting for a key
specifies how long a client resolver can cache the key. As shown in Figure
10.1, the value you assign to the TTL setting of the key must be less than
the difference between the values of the Rollover Period and Expiration
Period settings of the key; otherwise, a client can make a query and the
system can send a valid key that the client cannot recognize.
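The TTL constraint above is a single inequality: the TTL must be less than the difference between the Expiration Period and the Rollover Period, so a cached key cannot outlive the overlap window. The sketch below illustrates the check; the numeric values are examples, not product defaults.

```python
# Sketch of the key TTL constraint: TTL must be less than
# (expiration period - rollover period), so that a client never caches
# a key beyond the window in which it can recognize a valid signature.
def ttl_is_valid(ttl, rollover_period, expiration_period):
    # All values in seconds; illustrative only.
    return ttl < expiration_period - rollover_period

# Example: 30-day rollover, 90-day expiration -> 60-day overlap window;
# a 1-day TTL comfortably fits inside it.
ok = ttl_is_valid(ttl=86400,
                  rollover_period=2592000,
                  expiration_period=7776000)
```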

Important
To ensure that each Global Traffic Manager system is referencing the same
time when generating keys, you must synchronize the time setting on each
system with the Network Time Protocol (NTP) servers that Global Traffic
Manager references.


DNSSEC resource records


Your configuration of BIND is independent of the configuration of
DNSSEC on Global Traffic Manager. If you want to use BIND for
delegation or other tasks, you must add the DNSSEC resource records to
your BIND configuration; otherwise, BIND is not aware of these records. If
you do this, you can view the DNSSEC resource records in ZoneRunner™.

11
Health and Performance Monitors

• Introduction

• Special configuration considerations

• Monitors and resources


Health and Performance Monitors

Introduction
An important feature of Global Traffic Manager™ is a set of load balancing
tools called monitors. Monitors verify connections on pools and virtual
servers. A monitor can be either a health monitor or a performance monitor.
Monitors are designed to check the status of a pool or virtual server on an
ongoing basis, at a set interval. If a pool or virtual server being checked does
not respond within a specified timeout period, or the status of a pool or
virtual server indicates that performance is degraded, then Global Traffic
Manager can redirect the traffic to another resource.
Some monitors are included as part of Global Traffic Manager, while other
monitors are user-created. Monitors that Global Traffic Manager provides
are called pre-configured monitors. User-created monitors are called
custom monitors.
Before configuring and using monitors, it is helpful to understand some
basic concepts regarding monitor types, monitor settings, and monitor
implementation.
◆ Monitor types
Every monitor, whether pre-configured or custom, belongs to a certain
category, or monitor type. Each monitor type checks the status of a
particular protocol, service, or application. For example, an HTTP
monitor allows you to monitor the availability of the HTTP service on a
pool member (that is, a virtual server).
◆ Monitor settings
Every monitor consists of settings with values. The settings and their
values differ depending on the type of monitor. In some cases, Global
Traffic Manager assigns default values. For example, the following are
the default values for the HTTP monitor:
• Interval: 30 seconds
• Timeout: 120 seconds
• Probe Timeout: 5 seconds
• Reverse: No
• Transparent: No
These settings specify that an HTTP monitor is configured to check the
status of an IP address every 30 seconds, to time out after 120 seconds, to
timeout the probe request every 5 seconds, and specifies that the monitor
does not operate in either Reverse or Transparent mode.
◆ Monitor implementation
The task of implementing a monitor varies depending on whether you are
using a pre-configured monitor or creating a custom monitor. If you want
to implement a pre-configured monitor, you need only associate the
monitor with a pool or virtual server. If you want to implement a custom
monitor, you must first create the custom monitor, and then associate it
with a pool or virtual server.
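The interval and timeout defaults listed above interact simply: probes fire every interval, and the resource is marked down once the timeout passes without a successful probe. The following sketch illustrates that relationship; it is a simplification, not the monitor daemon's actual logic.

```python
# Sketch of interval/timeout semantics using the HTTP monitor defaults:
# a 30-second interval with a 120-second timeout means roughly four
# probes can fail before the resource is marked down.
def resource_status(last_success_time, now, timeout=120):
    # Illustrative: "up" while the last successful probe is recent enough.
    return "up" if now - last_success_time <= timeout else "down"

status_after_two_missed = resource_status(last_success_time=0, now=60)
status_after_timeout = resource_status(last_success_time=0, now=121)
```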


Monitor types
Global Traffic Manager includes many different types of monitors, each
designed to perform a specific type of monitoring. The monitors belong to
one of three categories: simple, extended content verification (ECV), and
extended application verification (EAV).
◆ Simple monitors check the health of a resource by sending a packet using
the specified protocol, and waiting for a response from the resource. If
the monitor receives a response, then the health check is successful and
the resource is considered up.
◆ ECV monitors check the health of a resource by sending a query for
content using the specified protocol, and waiting to receive the content
from the resource. If the monitor receives the correct content, then the
health check is successful and the resource is considered up.
◆ EAV monitors check the health of a resource by accessing the specified
application. If the monitor receives the correct response, then the health
check is successful and the resource is considered up.

Pre-configured and custom monitors


When you want to monitor the health or performance of pool members or
virtual servers, you can either use a pre-configured monitor, or create and
configure a custom monitor.

Pre-configured monitors
For a subset of monitor types, Global Traffic Manager includes a set of
pre-configured monitors. A pre-configured monitor is an existing monitor
with default settings already configured. You use a pre-configured monitor
when the default values of the settings meet your needs.
Global Traffic Manager includes these pre-configured monitors:
• bigip
• bigip_link
• gateway_icmp
• http
• https
• real_server
• snmp
• tcp
• tcp_half_open
• udp

An example of a pre-configured monitor is the http monitor. If the default
values of this monitor meet your needs, you simply assign the http
pre-configured monitor directly to a pool or virtual server. In this case, you
do not need to use the Monitors screens, unless you simply want to view the
default settings of the pre-configured monitor.
If you do not want to use the values configured in a pre-configured monitor,
you can create a custom monitor.

Custom monitors
A custom monitor is a monitor that you create based on one of the allowed
monitor types.
Like http, each of the custom monitors has a Type setting based on the type
of service it checks (for example, https, ftp, pop3), and takes that type as its
name. (Exceptions are port-specific monitors, like the external monitor,
which calls a user-supplied program.)
If a pre-configured monitor exists that corresponds to the type of custom
monitor you are creating, you can import the settings and values of that
pre-configured monitor into the custom monitor. For example, if you create
a custom monitor called my_http, the monitor can inherit the settings and
values of the pre-configured monitor http. This ability to import existing
setting values is useful when you want to retain some setting values for your
new monitor, but modify others.
The following list shows an example of a custom HTTP monitor called
my_http, which is based on the pre-configured monitor http. Note that the
value of the Interval setting has been changed from the default value of 30
to a new value of 60. The other settings retain the values defined in the
pre-configured monitor.
• Name: my_http
• Type: HTTP
• Interval: 60
• Timeout: 120
• Reverse: No
• Transparent: No

You can import settings from another custom monitor instead of from a
pre-configured monitor. This is useful when you want to use the setting
values defined in another custom monitor, or when no pre-configured
monitor exists for the type of monitor you are creating. For example, if you
create a custom monitor called my_oracle_server2, you can import settings
from an existing Oracle® monitor such as my_oracle_server1. In this case,
because Global Traffic Manager does not provide a pre-configured Oracle®
monitor, a custom monitor is the only kind of monitor from which you can
import setting values.

If no pre-configured or custom monitor exists that corresponds to the type of
monitor you are creating, Global Traffic Manager imports settings from a
monitor template. A monitor template is an abstraction that exists within
Global Traffic Manager for each monitor type and contains a group of
settings and default values. A monitor template serves as a tool for Global
Traffic Manager to use for importing settings to a custom monitor when no
monitor of that type already exists.


Special configuration considerations


Every pre-configured or custom monitor has settings with some default
values assigned. The following sections contain information that is useful
when changing these default values.

Monitor destinations
By default, the value for the Alias Address setting for most monitors is set
to the wildcard * Addresses, and the Alias Service Port setting is set to the
wildcard * Ports (exceptions to this rule are the WMI and Real Server
monitors). This value causes the monitor instance created for a pool or
virtual server to take that resource’s address or address and port as its
destination. You can, however, replace either or both wildcard symbols with
an explicit destination value, by creating a custom monitor. An explicit
value for the Alias Address and/or Alias Service Port setting is used to
force the instance destination to a specific address and/or port which may
not be that of the pool or virtual server.
The ECV monitors http, https, and tcp have the settings Send String and
Receive String for the send string and receive expression, respectively.
The most common Send String value is GET /, which retrieves a default
HTML page for a web site. To retrieve a specific page from a web site, you
can enter a Send String value that is a fully qualified path name:
"GET /www/support/customer_info_form.html"

The Receive String expression is the text string the monitor looks for in the
returned resource. The most common Receive String expressions contain a
text string that is included in a particular HTML page on your site. The text
string can be regular text, HTML tags, or image names.
The sample Receive expression below searches for a standard HTML tag:
"<HEAD>"

You can also use the default null Receive String value [""]. In this case,
any content retrieved is considered a match. If both the Send String and
Receive String are left empty, only a simple connection check is performed.
For HTTP monitors, you can use the special settings get or hurl in place of
Send String and Receive String statements, respectively.
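The Send String/Receive String matching described above can be sketched as a substring search over the returned content, with an empty receive string matching anything. A real ECV monitor sends the request over HTTP, HTTPS, or TCP; here the response text is supplied directly for illustration.

```python
# Sketch of an ECV-style content check: look for the Receive String in
# the content returned after sending the Send String.
def ecv_check(response_text, receive_string):
    if receive_string == "":
        return True  # null receive string: any retrieved content matches
    return receive_string in response_text

page = "<HTML><HEAD><TITLE>Support</TITLE></HEAD><BODY>ok</BODY></HTML>"
matches_head = ecv_check(page, "<HEAD>")   # the sample expression above
matches_anything = ecv_check(page, "")     # default null receive string
matches_missing = ecv_check(page, "Error")
```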

Transparent and reverse modes


The normal and default behavior for a monitor is to ping the destination pool
or virtual server by an unspecified route, and to mark the resource up if the
test is successful. However, with certain monitor types, you can specify a
route through which the monitor pings the destination server. You configure
this by specifying the Transparent or Reverse setting within a custom
monitor.


◆ Transparent setting
Sometimes it is necessary to ping the aliased destination through a
transparent pool or virtual server. When you create a custom monitor and
set the Transparent setting to Yes, Global Traffic Manager forces the
monitor to ping through the pool or virtual server with which it is
associated (usually a firewall) to the pool or virtual server. (In other
words, if there are two firewalls in a load balancing pool, the destination
pool or virtual server is always pinged through the pool or virtual server
specified and not through the pool or virtual server selected by the load
balancing method.) In this way, the transparent pool or virtual server is
tested: if there is no response, the transparent pool or virtual server is
marked as down.
Common examples are checking a router, or checking a mail or FTP
server through a firewall. For example, you might want to check the
router address 10.10.10.53:80 through a transparent firewall
10.10.10.101:80. To do this, you create a monitor called http_trans in
which you specify 10.10.10.53:80 as the monitor destination address,
and set the Transparent setting to Yes. Then you associate the monitor
http_trans with the transparent firewall (10.10.10.101:80).
This causes the monitor to check the address 10.10.10.53:80 through
10.10.10.101:80. (In other words, Global Traffic Manager routes the
check of 10.10.10.53:80 through 10.10.10.101:80.) If the correct
response is not received from 10.10.10.53:80, then 10.10.10.101:80 is
marked down.
◆ Reverse setting
In most monitor settings, Global Traffic Manager considers the resource
available when the monitor successfully probes it. However, in some
cases you may want the resource to be considered unavailable after a
successful monitor test. You accomplish this configuration with the
Reverse setting. With the Reverse setting set to Yes, the monitor marks
the pool or virtual server down when the test is successful. For example,
if the content on your web site home page is dynamic and changes
frequently, you may want to set up a reverse ECV service check that
looks for the string Error. A match for this string means that the web
server is down.
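The two examples above can be sketched as tmsh commands. This is a minimal sketch, not the documented procedure: the monitor names come from the examples, and the exact defaults-from and transparent/reverse option syntax may differ by version, so verify against the Traffic Management Shell (tmsh) Reference Guide.

```shell
# Transparent: probe 10.10.10.53:80 through whichever object
# (here, the transparent firewall) the monitor is associated with.
tmsh create gtm monitor http http_trans \
    defaults-from http \
    destination 10.10.10.53:80 \
    transparent enabled

# Reverse: mark the resource down when the response contains "Error".
tmsh create gtm monitor http http_reverse \
    defaults-from http \
    send "GET /\r\n" \
    recv "Error" \
    reverse enabled
```

After creating http_trans, you would associate it with the transparent firewall object (10.10.10.101:80) rather than with the router itself.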

Table 11.1 shows the monitors that contain the Transparent setting, the
Reverse setting, or both.

Monitor Type     Settings

Gateway ICMP     Transparent
TCP              Transparent, Reverse
HTTP             Transparent, Reverse
HTTPS            Transparent, Reverse
TCP Half Open    Transparent
UDP              Transparent

Table 11.1 Monitors that contain the Transparent or Reverse settings

Virtual server status


If all iQuery® connections between a Global Traffic Manager system and a
BIG-IP system are lost, by default Global Traffic Manager marks all of the
virtual servers on the BIG-IP system as down. However, you can configure
the Global Traffic Manager system so that even when all iQuery
connections from Global Traffic Manager to the BIG-IP system are lost,
Global Traffic Manager marks the virtual servers as down only when the
monitors associated with the virtual servers time out.
To do this, you change the value of the virtuals-depend-on-server-state
option to no. Note that even after you set this option to no, as long as the
iQuery connections between Global Traffic Manager and the BIG-IP system
are still connected, when Global Traffic Manager receives a down response
for a virtual server from the BIG-IP system, it immediately marks that
virtual server down.
The default value of the virtuals-depend-on-server-state option is yes. To
change the value to no, use the following tmsh command:
tmsh modify gtm settings general virtuals-depend-on-server-state no

For information about the command syntax you use to change this variable,
see the gtm settings component in the Traffic Management Shell (tmsh)
Reference Guide.
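Before changing the variable, you can inspect its current value. The following one-liner is an illustrative sketch; the component path follows the v11 tmsh gtm settings schema, so confirm it against the Reference Guide.

```shell
# Display the current value of the option (expected output: yes or no).
tmsh list gtm settings general virtuals-depend-on-server-state
```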

Monitors and resources


After you create a monitor and configure its settings, the final task is to
associate the monitor with the resources to be monitored. The resources that
can be monitored are nodes, servers, pools, pool members, and links.
When you associate a monitor with a resource, Global Traffic Manager
automatically creates an instance of that monitor for that resource.
Therefore, you can have multiple instances of the same monitor.
The Configuration utility allows you to disable an instance of a monitor that
is running on a server. This allows you to suspend health or performance
checking, without having to actually remove the monitor association. When
you are ready to begin monitoring that server again, you simply re-enable
that instance of the monitor.


Monitor associations
Some monitor types are designed for association only with nodes (IP
address), while other monitor types are intended for association only with
pools and virtual servers (IP address and service port). Therefore, when you
use the Configuration utility to associate a monitor with a pool or virtual
server, the utility displays only those pre-configured monitors that are
designed for association with that object type.
The types of monitor associations are:
◆ Monitor-to-pool association
Links a monitor with an entire load balancing pool. In this case, the
monitor checks all members of the pool. For example, you can create an
instance of the monitor http for the pool my_pool, thus ensuring that all
members of that pool are checked.
◆ Monitor-to-pool member association
Links a monitor with a pool member within a given pool. For example,
you can create an instance of the monitor FTP for specific pools within
the pool my_pool, ensuring that only specific pool members are verified
as available through the FTP monitor.
◆ Monitor-to-virtual server association
Links a monitor with a specific virtual server. In this case, the monitor
checks only the virtual server itself, and not any services running on that
virtual server. For example, you can create an instance of the monitor
http for virtual server 10.10.10.10.

12
Statistics

• Introduction

• Statistics access

• Status Summary screen

• Types of statistics

• Persistence records

Introduction
An important part of successfully managing a network is having access to
up-to-date information about network performance. This information can
verify that Global Traffic Manager™ is handling your name resolution
requests as efficiently as possible, as well as provide data about the overall
performance of a specific resource, such as a data center or distributed
application.
Global Traffic Manager gathers and displays statistical data about multiple
aspects of your network. The types of statistics you can view include:
• Status Summary (a summary of network components, as defined in
Global Traffic Manager)
• Distributed applications
• Wide IPs
• Pools
• Pool Members
• Data centers
• Links
• Servers
• Virtual servers
• iRules
• Paths
• Local DNS
• Persistence Records

A persistence record provides information about network load balancing
when the persistence option is enabled for a given pool or virtual server.
This option ensures that the system sends name resolution requests from the
same source within a given session to the same resource on your network.
Global Traffic Manager gathers statistics through a software component
called the big3d agent. This agent performs the probes for the various
monitors that you assign to your network components, and returns statistics
based on those monitors. The gtmd utility manages those monitors, determining when to
probe and when to time out the probe attempts.
Statistics are often paired with metrics collection; however, the two have
different roles. Statistics pertain to a broad set of data that focuses on how
often a given set of resources are used and how well those resources are
performing. Metrics collection, on the other hand, focuses specifically on
data that relates to overall communication between Global Traffic Manager
and an LDNS. Unlike statistics, metrics collection is designed to provide
performance data, as opposed to usage or historical data.


Statistics access
You can access Global Traffic Manager statistics in two ways:
• Through the Statistics option on the Main tab of the navigation pane
• Through the Statistics menu from various main screens for different
components

Both methods take you to the same screen within Global Traffic Manager.
When you access statistics through a menu on the main screen for a given
network component, the Statistics screen is pre-configured for the given
network element, although you can switch to a different set of statistics at
any time.
Additionally, you can use the search feature to locate a specific component
or group of components. The default search value is an asterisk (*), which
instructs the system to display all relevant components in a list. You can
type a string in the box, and when you click the Search button, the system
modifies the list to show only those components that match the string.

Tip
You can also access statistics from the command line using the tmsh
command show. For more information about viewing statistics using tmsh,
see the Traffic Management Shell (tmsh) Reference Guide.
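For example, the following tmsh show commands display statistics for several of the component types listed above. The component names follow the v11 tmsh gtm schema; verify them against the Reference Guide for your version.

```shell
# Illustrative tmsh statistics commands:
tmsh show gtm wideip    # wide IP request and load balancing statistics
tmsh show gtm pool      # pool load balancing statistics
tmsh show gtm server    # server picks, connections, and throughput
```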

Status Summary screen


As you track the performance of your data centers, virtual servers, and other
resources, you may find it helpful to have a single screen in which you can
get a snapshot of overall resource availability. In Global Traffic Manager,
you can view this data on the Status Summary screen.
The Status Summary screen consists of a Global Traffic Summary table that
contains the following information:
◆ Object Type
The Object Type column describes the specific resource type. These
types are: distributed application, wide IPs, pools, data centers, links, and
servers.
◆ Total
The Total column describes the total number of resources of the type
corresponding to the Object Type column, regardless of whether the
resource is available.
◆ Available
The Available column describes the total number of resources of the type
corresponding to the Object Type column that Global Traffic Manager
can verify as available.


◆ Unavailable
The Unavailable column describes the total number of resources of the
type corresponding to the Object Type column that Global Traffic
Manager can verify as unavailable.
◆ Offline
The Offline column describes the total number of resources of the type
corresponding to the Object Type column that Global Traffic Manager
can verify as offline.
◆ Unknown
The Unknown column describes the total number of resources of the type
corresponding to the Object Type column whose availability Global Traffic
Manager cannot verify.

Each value within the Total, Available, Unavailable, Offline, and Unknown
columns is a link. When you click the link, you access the main screen for
that resource, with the list of resources filtered to show only those resources
with the corresponding status. For example, if the Available column for data
centers has a value of 5, clicking the 5 brings up a filtered main screen for
data centers that shows only the five data centers that are available.

Types of statistics
You can view a variety of statistics through Global Traffic Manager as
described in the following sections.

Distributed application statistics


Global Traffic Manager captures several statistics related to the performance
of a distributed application. You can use these statistics to see how many
resolution requests have been sent for the application, and how the system
has load balanced these requests. You can access the wide IP statistics by
selecting Distributed Applications from the Statistics Type list in the
Statistics screen.
As an example of distributed application statistics, consider the fictional
company SiteRequest. The IT department at SiteRequest has a distributed
application, downloader, which contains multiple wide IPs associated with
the viewing and downloading of SiteRequest applications. The wide IPs in
the downloader application use the Global Availability load balancing
mode. This mode sends all name resolution requests for this wide IP to a
specific pool until that pool is unavailable. Because the distributed
application is critical to SiteRequest’s operations, the IT department wants
to track traffic to the application and ensure that it is being managed
effectively. The distributed applications statistics provide the IT department


the information they need to see how many requests are being sent for the
application, allowing them to plan additional resource allocations more
effectively.
The distributed application statistics screen consists of a Distributed
Application Statistics table. This table contains the following information:
◆ Status
The Status column indicates the current status of the wide IP. The
available status types are: Available, Unavailable, Offline, and
Unknown. Each status type is represented by a symbol; for example, the
available status type is represented by a green circle.
◆ Distributed Application
The Distributed Application column displays the name of an application
for which Global Traffic Manager is responsible. Each name appears as a
link. When you click the link, the properties screen for the distributed
application opens.
◆ Members
The Members column provides a link that opens a wide IP details screen
for the distributed application. This screen displays load balancing
statistics for each pool within the distributed application. You can return
to the main distributed application statistics screen by clicking the Back
button in the Display Options area of the screen.
◆ Requests
The Requests column displays the cumulative number of Domain Name
System (DNS) requests sent to the distributed application.
◆ Load Balancing
The Load Balancing column provides information about how Global
Traffic Manager load balanced connection requests to this resource. This
column consists of four subcolumns:
• The Preferred subcolumn displays the cumulative number of requests
that Global Traffic Manager load balanced with the preferred load
balancing method.
• The Alternate subcolumn displays the cumulative number of requests
that Global Traffic Manager load balanced with the alternate load
balancing method.
• The Fallback subcolumn displays the cumulative number of requests
that Global Traffic Manager load balanced with the Fallback load
balancing method.
• The Returned to DNS subcolumn displays the cumulative number of
requests that Global Traffic Manager did not resolve and returned to
the DNS.


Wide IP statistics
Global Traffic Manager captures several statistics related to the performance
of a wide IP. These statistics primarily focus on how many resolution
requests have been sent for the wide IP, and how Global Traffic Manager
has load balanced these requests. You can access the wide IP statistics by
selecting Wide IPs from the Statistics Type list in the Statistics screen.
As an example of wide IP statistics, consider the fictional company
SiteRequest. The IT department at SiteRequest has a wide IP,
www.siterequest.com, which uses the Global Availability load balancing
mode. This mode sends all name resolution requests for this wide IP to a
specific pool until that pool is unavailable. Because the wide IP,
www.siterequest.com, is critical to SiteRequest’s operations, the IT
department wants to track traffic to the wide IP and ensure that the primary
pool is not at risk of getting overloaded. The wide IP statistics provide the
IT department the information they need to see how many requests are being
sent for the wide IP, allowing them to plan additional resource allocations
more effectively.
The wide IP statistics screen consists of a Wide IP Statistics table. This table
contains the following information:
◆ Status
The Status column indicates the current status of the wide IP. The
available status types are: Available, Unavailable, Offline, and
Unknown. Each status type is represented by a symbol; for example, the
available status type is represented by a green circle.
◆ Wide IP
The Wide IP column displays the name of a wide IP for which Global
Traffic Manager is responsible. Each name appears as a link. When you
click the link, the properties screen for the wide IP opens.
◆ Pools
The Pools column provides a link that opens a pool details screen for the
wide IP. This screen displays load balancing statistics for each pool
within the wide IP. You can return to the main wide IP statistics screen
by clicking the Back button in the Display Options area of the screen.
◆ Requests
The Requests column displays the cumulative number of DNS requests
sent to the wide IP.
◆ Requests Persisted
The Requests Persisted column displays the cumulative number of
requests that persisted. Persisted requests use the same pool during a
connection session.
◆ Load Balancing
The Load Balancing column provides information about how Global
Traffic Manager load balanced connection requests to this resource. This
column consists of four subcolumns:


• The Preferred subcolumn displays the cumulative number of requests
that Global Traffic Manager load balanced with the preferred load
balancing method.
• The Alternate subcolumn displays the cumulative number of requests
that Global Traffic Manager load balanced with the alternate load
balancing method.
• The Fallback subcolumn displays the cumulative number of requests
that Global Traffic Manager load balanced with the Fallback load
balancing method.
• The Returned to DNS subcolumn displays the cumulative number of
requests that Global Traffic Manager did not resolve and returned to
the DNS.

Pool statistics
The pool statistics available through Global Traffic Manager focus on how
Global Traffic Manager has load balanced name resolution requests. You
can access the pool statistics by selecting Pools from the Statistics Type list
in the Statistics screen.
As an example of pool statistics, consider the fictional company
SiteRequest. The IT department at SiteRequest has a wide IP,
www.siterequest.com, which contains pools that use the dynamic load
balancing mode, Quality of Service. This mode acquires statistical data
about response times between Global Traffic Manager and an LDNS
sending a name resolution request. There has been some concern of late as
to how well this new load balancing mode is working and if Global Traffic
Manager is able to gather the statistical information it needs to load balance
with this mode, or if it has to resort to an alternate or fallback method. By
using the pool statistics screen, the IT department can track how many name
resolution requests are load balanced using the preferred Quality of Service
method, and how many are load balanced using another method.
The pool statistics screen consists of a Pool Statistics table. This table
contains the following information:
◆ Status
The Status column indicates the current status of the pool. The available
status types are: Available, Unavailable, Offline, and Unknown. Each
status type is represented by a symbol; for example, the available status
type is represented by a green circle.
◆ Pool
The Pool column displays the name of a wide IP for which Global
Traffic Manager is responsible. Each name appears as a link. When you
click the link, the properties screen for the pool opens.
◆ Members
The Members column provides a link that opens a virtual server details
screen for the pool. This screen displays connection statistics for each
virtual server within the pool, including the number of times the virtual


server was selected for a name resolution request and the amount of
traffic flowing from and to the virtual server. You can return to the main
pool statistics screen by clicking the Back button in the Display
Options area of the screen.
◆ Load Balancing
The Load Balancing column provides information about how Global
Traffic Manager load balanced connection requests to this resource. This
column consists of four subcolumns:
• The Preferred subcolumn displays the cumulative number of requests
that Global Traffic Manager load balanced with the preferred load
balancing method.
• The Alternate subcolumn displays the cumulative number of requests
that Global Traffic Manager load balanced with the alternate load
balancing method.
• The Fallback subcolumn displays the cumulative number of requests
that Global Traffic Manager load balanced with the Fallback load
balancing method.
• The Returned to DNS subcolumn displays the cumulative number of
requests that Global Traffic Manager did not resolve and returned to
the DNS.

Data center statistics


Data center statistics revolve around the amount of traffic flowing to and
from each data center. This information can tell you if your resources are
distributed appropriately for your network. You can access the data center
statistics by selecting Data Centers from the Statistics Type list in the
Statistics screen.
As an example of how the statistics for data centers can help you manage
your network resources, consider the fictional company SiteRequest.
SiteRequest has decided that its New York data center should handle all
name resolution requests originating in North America. However, a new
marketing campaign has recently started in the United States, and the IT
department is concerned it might overload the data center. By using the data center
statistics, the IT department can track the overall amount of traffic that the
New York data center is handling, allowing them to make adjustments to
their load balancing methods in a timely manner.
The data center statistics screen consists of a Data Center Statistics table.
This table contains the following information:
◆ Status
The Status column indicates the current status of the data center. The
available status types are: Available, Unavailable, Offline, and
Unknown. Each status type is represented by a symbol; for example, the
available status type is represented by a green circle.


◆ Data Center
The Data Center column displays the name of a data center. Each name
appears as a link. When you click the link, the properties screen for the
data center opens.
◆ Servers
The Servers column provides a link that opens a server details screen for
the data center. This screen displays connection statistics for each server
at a data center, including the number of times the server was selected for
a name resolution request and the amount of traffic flowing from and to
the server. You can return to the main data center statistics screen by
clicking the Back button in the Display Options area of the screen.
◆ Connections
The Connections column displays the cumulative number of requests that
Global Traffic Manager resolved using a resource from the
corresponding data center.
◆ Throughput (bits/sec)
The Throughput (bits/sec) column contains two subcolumns:
• The In column displays the cumulative number of bits per second sent
to the data center.
• The Out column displays the cumulative number of bits per second
sent from the data center.

◆ Throughput (packets/sec)
The Throughput (packets/sec) column contains two subcolumns:
• The In column displays the cumulative number of packets per second
sent to the data center.
• The Out column displays the cumulative number of packets per
second sent from the data center.

Link statistics
Link statistics focus on how much traffic is flowing in and out through a
specific link to the Internet. This information can help you prevent a link
from getting over-used, saving your organization from higher bandwidth
costs. You can access the link statistics by selecting Links from the
Statistics Type list in the Statistics screen.
As an example of how the statistics for links can help you manage
your network resources, consider the fictional company SiteRequest.
SiteRequest has two links with two different Internet Service Providers
(ISPs). The primary ISP is paid in advance for a specific amount of
bandwidth usage. This allows SiteRequest to save money, but if the
bandwidth exceeds the prepaid amount, the costs increase considerably. As
a result, the IT department uses a second ISP, which has a slower connection
but considerably lower costs. By using the links statistics, the IT department
can ensure that links to the Internet are used as efficiently as possible.


The link statistics screen consists of a Link Statistics table. This table
contains the following information:
◆ Status
The Status column indicates the current status of the link. The available
status types are: Available, Unavailable, Offline, and Unknown. Each
status type is represented by a symbol; for example, the available status
type is represented by a green circle.
◆ Link
The Link column displays the name of a link for which Global Traffic
Manager is responsible. Each name appears as a link. When you click the
link, the properties screen for the link opens.
◆ Throughput (bits/sec)
The Throughput (bits/sec) column contains four subcolumns:
• The In column displays the cumulative number of bits per second
received through the link.
• The Out column displays the cumulative number of bits per second
sent through the link.
• The Total column displays the cumulative number of both incoming
and outgoing bits per second for the link.
• The Over Prepaid column displays the amount of traffic, in bits per
second, that has exceeded the prepaid traffic allotment for the link.

In addition to viewing the link data as a table, you can also view it in a graph
format. To use this format, click the Graph button. A graph screen opens,
which shows the amount of traffic used over time. You can change the
amount of time shown in the graph by selecting a value from the Graph
Interval list, located in the Display Options area of the screen.
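As a hypothetical illustration of how the Over Prepaid figure relates to the other subcolumns (the allotment and traffic numbers below are invented for the example, not taken from any real link), the calculation is simply the total throughput minus the prepaid allotment, floored at zero:

```shell
# Illustrative only: compute an "over prepaid" figure from invented numbers.
prepaid_bps=50000000   # prepaid bandwidth allotment, bits/sec (assumed)
in_bps=40000000        # current inbound throughput, bits/sec
out_bps=22500000       # current outbound throughput, bits/sec

total_bps=$(( in_bps + out_bps ))
# Traffic beyond the prepaid allotment; zero when under the allotment.
over_bps=$(( total_bps > prepaid_bps ? total_bps - prepaid_bps : 0 ))
echo "total=${total_bps} over_prepaid=${over_bps}"
```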

Server statistics
With server statistics, you can analyze the amount of traffic flowing to and
from each server. This information can tell you if your resources are
distributed appropriately for your network. You can access the server
statistics by selecting Servers from the Statistics Type list in the Statistics
screen.
As an example of how the statistics for servers can help you manage your
network resources, consider the fictional company SiteRequest. The IT
department at SiteRequest is considering whether it needs a few more
servers to better manage name resolution requests; however, there is some
debate as to whether the servers should be consolidated at the New York
data center (which the New York team prefers) or spread out over all of the
data centers. It is also possible that an under-utilized server at one data
center might be moved to another data center. By using the server statistics,
the IT department can look at how much traffic is handled by each server,
giving them the information they need to decide where these new servers, if
any, should go.


The server statistics screen consists of a Server Statistics table. This table
contains the following information:
◆ Status
The Status column indicates the current status of the server. The
available status types are: Available, Unavailable, Offline, and
Unknown. Each status type is represented by a symbol; for example, the
available status type is represented by a green circle.
◆ Server
The Server column displays the name of a server for which Global
Traffic Manager is responsible. Each name appears as a link. When you
click the link, the properties screen for the server opens.
◆ Virtual Servers
The Virtual Servers column provides a link that opens a virtual server
details screen for the server. This screen displays connection statistics for
each virtual server at a data center, including the number of times the
virtual server was selected for a name resolution request and the amount
of traffic flowing from and to the server. You can return to the main
server statistics screen by clicking the Back button in the Display
Options area of the screen.
◆ Picks
The Picks column displays the cumulative number of times Global
Traffic Manager picked a server to handle a name resolution request.
◆ Connections
The Connections column displays the cumulative number of requests that
Global Traffic Manager resolved using a resource from the
corresponding server.
◆ Throughput (bits/sec)
The Throughput (bits/sec) column contains two subcolumns:
• The In column displays the cumulative number of bits per second sent
to the server.
• The Out column displays the cumulative number of bits per second
sent from the server.

◆ Throughput (packets/sec)
The Throughput (packets/sec) column contains two subcolumns:
• The In column displays the cumulative number of packets per second
sent to the server.
• The Out column displays the cumulative number of packets per
second sent from the server.


Virtual server statistics


Virtual server statistics provide information about the amount of traffic
flowing to and from each virtual server. This information can tell you if your
resources are distributed appropriately for your network. You can access the
virtual server statistics by selecting Virtual Servers from the Statistics
Type list in the Statistics screen.
As an example of how the statistics for virtual servers can help you manage your
network resources, consider the fictional company SiteRequest. SiteRequest
recently added a Local Traffic Manager™ system to their Tokyo data center.
The IT department wants to see how well the new system is handling the
traffic, and if it can perhaps be utilized to handle traffic for a new wide IP,
www.SiteRequestAsia.com. After installing Local Traffic Manager and
adding it to Global Traffic Manager as a server, the IT department can use
the virtual server statistics to monitor the performance of the virtual servers
that compose the new Local Traffic Manager, allowing them to determine if
more resources are required for the new wide IP.
The virtual server statistics screen consists of a Virtual Server Statistics table. This
table contains the following information:
◆ Status
The Status column indicates the current status of the virtual server. The
available status types are: Available, Unavailable, Offline, and
Unknown. Each status type is represented by a symbol; for example, the
available status type is represented by a green circle.
◆ Virtual Server
The Virtual Server column displays the name of a virtual server for
which Global Traffic Manager is responsible. Each name appears as a
link. When you click the link, the properties screen for the virtual server
opens.
◆ Server
The Server column provides a link that opens a server details screen for
the server that hosts the virtual server. This screen displays connection
statistics for that server, including the number of times the server was
selected for a name resolution request and the amount of traffic flowing
from and to the server. You can return to the main virtual server
statistics screen by clicking the Back button in the Display Options area
of the screen.
◆ Picks
The Picks column displays the cumulative number of times Global
Traffic Manager picked a server to handle a name resolution request.
◆ Connections
The Connections column displays the cumulative number of requests that
Global Traffic Manager resolved using a resource from the
corresponding virtual server.


◆ Throughput (bits/sec)
The Throughput (bits/sec) column contains two subcolumns:
• The In column displays the cumulative number of bits per second sent
to the server.
• The Out column displays the cumulative number of bits per second
sent from the server.

◆ Throughput (packets/sec)
The Throughput (packets/sec) column contains two subcolumns:
• The In column displays the cumulative number of packets per second
sent to the server.
• The Out column displays the cumulative number of packets per
second sent from the server.

Paths statistics
The paths statistics captured by Global Traffic Manager provide information
about how quickly traffic moves between an LDNS and a resource for
which Global Traffic Manager is responsible. Information presented in the
paths statistics screen includes details about round trip times (RTT), hops,
and completion rates. You can access the paths statistics by selecting Paths
from the Statistics Type list in the Statistics screen.
Paths statistics are primarily used when you employ a dynamic load
balancing mode for a given wide IP or pool. You can use the information in
the Paths statistics to get an overall sense of how responsive your wide IPs
are in relation to the local DNS servers that have been sending name
resolution requests to a wide IP.
The paths statistics screen consists of a paths statistics table. This table
contains the following information:
◆ Local DNS Address
The Local DNS Address column displays the IP address of each LDNS
that has sent a name resolution request for a wide IP for which Global
Traffic Manager is responsible.
◆ Link
The Link column displays the ISP link that Global Traffic Manager used
to send and receive data from the LDNS.
◆ Round Trip Time (RTT)
The Round Trip Time (RTT) column contains two subcolumns:
• The Current subcolumn displays the current round trip time between
the LDNS and Global Traffic Manager.
• The Average subcolumn displays the average round trip time between
the LDNS and Global Traffic Manager.


◆ Hops
The Hops column contains two subcolumns:
• The Current subcolumn displays the current number of hops between
the LDNS and Global Traffic Manager.
• The Average subcolumn displays the average number of hops
between the LDNS and Global Traffic Manager.

◆ Completion Rate
The Completion Rate column contains two subcolumns:
• The Current subcolumn displays the current completion rate of
transactions between the LDNS and Global Traffic Manager.
• The Average subcolumn displays the average completion rate of
transactions between the LDNS and Global Traffic Manager.

◆ Last Probe Time


The Last Probe Time column displays the last time Global Traffic
Manager probed the LDNS for metrics data.

Local DNS statistics


The Local DNS statistics screen provides location details related to the
different local DNS servers that communicate with Global Traffic Manager.
These statistics include the geographical location of each LDNS as well as a
timestamp for the last time that the LDNS accessed Global Traffic Manager.
You can access LDNS statistics by selecting Local DNS from the Statistics
Type list in the Statistics screen.
As an example of how the statistics for servers can help you manage your
network resources, consider the fictional company SiteRequest. SiteRequest
is currently considering whether it needs a new data center in North
America to ensure that its customers can access SiteRequest’s web site as
effectively as possible. To help make their decision, the IT department uses
the local DNS statistics to see where most of their traffic is
coming from.
high concentration of local DNS servers accessing SiteRequest is in the
southwest United States. This information proves helpful in determining that
a new data center in Las Vegas might be appropriate.
The Local DNS statistics screen displays a statistics table that contains the
following information:
◆ IP Address
The IP Address column displays the IP address of each LDNS that has
sent a name resolution request for a wide IP for which Global Traffic
Manager is responsible.
◆ Requests
The Requests column displays the number of times this LDNS has made
a name resolution request that Global Traffic Manager handled.


◆ Last Accessed
The Last Accessed column displays the last time the LDNS attempted a
connection to Global Traffic Manager.
◆ Location
The Location column contains four subcolumns:
• The Continent subcolumn displays the continent on which the LDNS
resides.
• The Country subcolumn displays the country in which the LDNS is
located.
• The State subcolumn displays the state in which the LDNS is located.
• The City subcolumn displays the city in which the LDNS is located.


Persistence records
One of the common methods of modifying name resolution requests with
Global Traffic Manager is to activate persistent connections. A persistent
connection is a connection in which Global Traffic Manager sends name
resolution requests from a specific LDNS to the same set of resources until a
time-to-live value has been reached. If you use persistent connections in
your configuration of Global Traffic Manager, you may want to see what
persistent connections are currently active on your network. You can access
the persistence records by selecting Persistence Records from the Statistics
Type list in the Statistics screen.
The persistence records screen consists of a persistence records table. This
table contains the following information:
◆ Local DNS Address
The Local DNS Address column displays the IP address of each LDNS that
has sent a name resolution request for a wide IP for which Global Traffic
Manager is responsible.
◆ Level
The Level column displays the level at which the persistent connection is
based. Available types are wide IPs and distributed applications.
◆ Destination
The Destination column displays the wide IP or distributed application to
which the name resolution request was directed.
◆ Target Type
The Target Type column displays the type of resource on which
persistence is based. Examples of target types include data centers,
servers, pools, and virtual servers.
◆ Target Name
The Target Name column displays the name of the resource on which
persistence is based.
◆ Expires
The Expires column displays the time at which the persistence for the
given LDNS request expires.
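The behavior these records track can be modeled in a short sketch. The class and method names below are invented for illustration; this is not F5's implementation, only the concept of answering a given LDNS from the same resource until a time-to-live expires:

```python
import time

class PersistenceRecords:
    """Toy model of persistence: requests from a given LDNS keep
    receiving the same answer until the record's TTL expires."""

    def __init__(self, ttl=3600):
        self.ttl = ttl
        self.records = {}  # LDNS address -> (target, expiration time)

    def resolve(self, ldns_ip, pick_target):
        now = time.monotonic()
        entry = self.records.get(ldns_ip)
        if entry is not None and entry[1] > now:
            return entry[0]  # record still valid: reuse the same target
        # No valid record: load balance normally, then persist the result
        target = pick_target()
        self.records[ldns_ip] = (target, now + self.ttl)
        return target
```

Here pick_target stands in for whatever load balancing mode the wide IP or distributed application uses; once a record exists, the record, not the mode, decides subsequent answers until it expires.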

13
Metric Collection

• Introduction

• About metrics

• Probes and local DNS servers

• TTL and timer values



Introduction
The Global Traffic Manager™ system uses specialized software components,
called monitors, to capture data regarding the availability of a resource, such
as a virtual server. Monitors represent one half of the statistical gathering
capabilities of Global Traffic Manager. The second half, metrics collection,
captures data about how well network traffic flows between Global Traffic
Manager and the external local DNS servers and internal resources with
which it communicates.
The resources you make available to your users over the Internet are often
critical to your organization; consequently, it is vital that these resources are
not only available, but highly responsive to your users. Typically, two main
criteria determine the responsiveness of a resource: hops and paths. A hop is
one point-to-point transmission between two devices in a network. A
network path that includes a stop at a network router has two
hops: the first from the client to the router, and the second from the router to
the host server. A path is a logical network route between a data center
server and an LDNS.
It is important to remember that hops and paths can differ from each other
widely on a per-connection basis. For example, an LDNS might take a long
path to reach a specific resource, but require only a few hops to get there. On
the other hand, that same LDNS might select a short path, yet have to move
between a larger number of routers, increasing the number of hops it takes to
reach the resource. It is up to you to determine what thresholds for hops and
paths are acceptable for your network, as the needs of each network, and
even each application within the same network, can vary widely.
Through the metrics collection capabilities of Global Traffic Manager, you
can accomplish several tasks related to improving the availability and
responsiveness of your network applications and resources. You can:
• Define the types of metrics that Global Traffic Manager collects, and
how long the system keeps those metrics before acquiring fresh data.
• Assign probes to local DNS servers that attempt to acquire the metrics
information.
• Configure Time-to-Live (TTL) values for your metrics data.
• Exclude specific local DNS servers from Global Traffic Manager probes.
• Implement the Quality of Service load balancing mode, which uses
metrics to determine the best resource for a particular name resolution
request.


About metrics
When you decide to use Global Traffic Manager to collect metrics on the
local DNS servers that attempt to access your network resources, you can
define the following characteristics:
• Types of metrics collected (either hops, paths, both, or disabled)
• Time-to-live (TTL) values for each metric
• Frequency at which the system updates the data
• Size of a packet sent (relevant for hop metrics only)
• Length of time that can pass before the system times out the collection
attempt
• Number of packets sent for each collection attempt

While each of these settings is important, the ones that perhaps require the
most planning beforehand are the TTL values. In general, the lower the TTL
value, the more often Global Traffic Manager probes an LDNS. This
improves the accuracy of the data, but increases bandwidth usage.
Conversely, increasing the TTL value for a metric lowers the bandwidth
your network uses, but increases the chance that Global Traffic Manager is
basing its load balancing operations on stale data.
An additional consideration is the number of local DNS servers that Global
Traffic Manager queries. The more local DNS servers that the system
queries, the more bandwidth is required to ensure those queries are
successful. Therefore, setting the TTL values for metrics collection can
require incremental fine-tuning. F5 Networks recommends that you
periodically check the TTL values, and verify that they are appropriate for
your network.


Probes and local DNS servers


To capture accurate metrics data from the local DNS servers that send name
resolution requests to Global Traffic Manager, you assign probes to each
LDNS. A probe is a query that employs a specific methodology to learn
more about an LDNS.
You can assign one or more of the following probes to query local DNS
servers:
◆ DNS_REV
The DNS_REV probe sends a DNS message to the probe target LDNS
querying for a resource record of class IN, type PTR. Most versions of
DNS answer with a record containing their fully-qualified domain name.
The system makes these requests only to measure network latency and
packet loss; it does not use the information contained in the responses.
◆ DNS_DOT
The DNS_DOT probe sends a DNS message to the probe target LDNS
querying for a dot (.). If the LDNS is not blocking queries from unknown
addresses, it answers with a list of root nameservers. The system makes
these requests only to measure network latency and packet loss; it does
not use the information contained in the responses.
◆ UDP
The UDP probe uses the user datagram protocol (UDP) to query the
responsiveness of an LDNS. The UDP protocol provides simple but
unreliable datagram services. The UDP protocol adds a checksum and
additional process-to-process addressing information. UDP is a
connectionless protocol which, like TCP, is layered on top of IP. UDP
neither guarantees delivery nor requires a connection. As a result, it is
lightweight and efficient, but the application program must take care of
all error processing and retransmission.
◆ TCP
The TCP probe uses the transmission control protocol (TCP) to query the
responsiveness of an LDNS. The TCP protocol is the most common
transport layer protocol used on Ethernet and Internet. The TCP protocol
adds reliable communication, flow-control, multiplexing, and
connection-oriented communication. It provides full-duplex,
process-to-process connections. TCP is connection-oriented and
stream-oriented.
◆ ICMP
The ICMP probe uses the Internet control message protocol (ICMP) to
query the responsiveness of an LDNS. The ICMP protocol is an
extension to the Internet Protocol (IP). The ICMP protocol generates
error messages, test packets, and informational messages related to IP.

With these probes, it does not matter whether Global Traffic Manager
receives a valid response, such as the name of the LDNS as queried by the
DNS_REV probe, or a request refused statement. The relevant information
is the metrics generated between the probe request and the response. For
example, Global Traffic Manager uses the DNS_REV probe to query two


local DNS servers. The first LDNS responds to the probe with its name, as
per the request. The second LDNS, however, responds with a request
refused statement, because it is configured to not allow such requests. In
both cases, the probe was successful, because Global Traffic Manager was
able to acquire data about how long it took for both local DNS servers to
respond to the probe.
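A rough sketch can make the idea concrete. The following Python example is not the big3d implementation (the addresses and query ID are illustrative): it hand-builds a DNS query of class IN, type PTR, in the spirit of the DNS_REV probe, and measures only the time until any response arrives, whether a valid answer or a refusal.

```python
import socket
import struct
import time

def build_ptr_query(ip, query_id=0x1234):
    """Build a DNS query packet (class IN, type PTR) for the reverse
    name of an IPv4 address, e.g. 10.10.10.10.in-addr.arpa."""
    # Header: ID, flags (recursion desired), 1 question, 0 other records
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    qname = b""
    for label in ip.split(".")[::-1] + ["in-addr", "arpa"]:
        encoded = label.encode()
        qname += bytes([len(encoded)]) + encoded
    qname += b"\x00"  # root label terminates the name
    return header + qname + struct.pack(">HH", 12, 1)  # QTYPE=PTR, QCLASS=IN

def probe_rtt(ldns_ip, timeout=2.0):
    """Send the query to the LDNS and return the round trip time in
    seconds. Any answer counts, because only the timing matters."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        start = time.monotonic()
        sock.sendto(build_ptr_query(ldns_ip), (ldns_ip, 53))
        sock.recvfrom(512)
        return time.monotonic() - start
    except socket.timeout:
        return None  # no response within the window
    finally:
        sock.close()
```

As with the real probes, the content of the response is discarded; only the elapsed time between request and reply contributes to the metrics.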
You can configure Global Traffic Manager to use a select number of probes,
or you can assign all five. The more probes that Global Traffic Manager
uses, the more bandwidth is required.
When Global Traffic Manager attempts to probe an LDNS, it is actively
attempting to acquire data from that LDNS. Certain Internet Service
Providers and other organizations might request that you do not probe their
local DNS servers, while other local DNS servers might be known to act as
proxies, which do not provide accurate metrics data. In these situations, you
can configure Global Traffic Manager to exclude local DNS servers from
probes. When you exclude an LDNS, Global Traffic Manager does not
probe that server; however, Global Traffic Manager is also unable to use the
Quality of Service load balancing mode to load balance name resolution
requests from that LDNS.
You can remove an LDNS from the address exclusion list at any time.
Situations in which you want to remove an LDNS include the LDNS
becoming inactive, or the IP address of the LDNS changing to a different
network subnet.


TTL and timer values


Each resource in Global Traffic Manager has an associated time-to-live
(TTL) value. A TTL is the amount of time (measured in seconds) for which
the system considers metrics valid.
Each resource also has a timer value. A timer value defines the frequency
(measured in seconds) at which Global Traffic Manager refreshes the
metrics information it collects. In most cases, the default values for the TTL
and timer parameters are adequate. However, if you make changes to any
TTL or timer values, keep in mind that an object’s TTL value must be
greater than its timer value.
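That constraint is easy to express directly. The following is a minimal sketch; the function name and the sample values are illustrative, not GTM defaults:

```python
def check_metric_settings(ttl, timer):
    """Enforce the rule that metrics must remain valid (TTL) longer
    than the interval between refreshes (timer); otherwise the system
    would hold expired data between collection attempts."""
    if ttl <= timer:
        raise ValueError(
            f"TTL ({ttl}s) must be greater than the timer value ({timer}s)")
    return ttl - timer  # slack before metrics expire after a refresh

# A 120-second TTL with a 60-second refresh timer leaves 60 seconds
# of slack before the data could go stale.
```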

14
Performance Data

• Introduction

• Performance data graphs



Introduction
Global Traffic Manager™ captures data about how network traffic flows
between Global Traffic Manager and the external local DNS servers and
internal resources with which it communicates.
You can view graphs that display information about how Global Traffic
Manager is performing. You can use this information to help you determine
how to modify the configuration to obtain the best possible performance
from the system.

Performance data graphs


Global Traffic Manager provides two types of performance data graphs on
the performance screen: the GTM Performance and GTM Request
Breakdown graphs. You can view detailed versions of each graph by
clicking the View Detailed Graph link.

Performance graph
The GTM Performance graph shows the throughput of Global Traffic
Manager. The graph includes the following data:
• GTM Requests
Represents the number of incoming DNS requests.
• GTM Resolutions
Represents the number of incoming DNS requests that were resolved by
any method.
• GTM Resolutions Persisted
Represents the number of incoming DNS requests that were resolved by
a persistence record.
• GTM Resolutions Returned to DNS
Represents the number of incoming DNS requests that were not resolved
by Global Traffic Manager, but were instead passed on to the DNS server
for resolution.

Request Breakdown graph


The GTM Request Breakdown graph includes the following data:
• GTM Type A - IPv4 Requests
Represents IPv4-formatted requests.
• GTM Type AAAA/A6 - IPv6 Requests
Represents IPv6-formatted requests.

15
iRules

• Introduction

• What is an iRule?

• Event-based traffic management



Introduction
As you work with Global Traffic Manager™, you might find that you want
to incorporate additional customizations beyond the available features
associated with load balancing, monitors, or other aspects of your traffic
management. For example, you might want to have the system respond to a
name resolution request with a specific CNAME record, but only when the
request is for a particular wide IP and originates from Europe. In Global
Traffic Manager, these customizations are defined through iRules®. iRules
are code snippets that are based on TCL 8.4. These snippets allow you a
great deal of flexibility in managing your global network traffic.
If you are familiar with Local Traffic Manager™, you might already be
aware of and use iRules to manage your network traffic on a local level. The
iRules in Global Traffic Manager share a similar syntax with their Local
Traffic Manager counterparts, but support a different set of events and
objects.
Due to the dynamic nature of iRules development, the following sections
focus on providing an overview of iRule operations and describe the events
and commands specific to Global Traffic Manager. For additional
information about how to write iRules, visit the F5 DevCentral web site:
http://devcentral.f5.com. At this site, you can learn more about iRules
development, as well as discuss iRules functionality with others.


What is an iRule?
An iRule is a script that you write if you want individual connections to
target a pool other than the default pool defined for a virtual server. iRules
allow you to more directly specify the pools to which you want traffic to be
directed. Using iRules, you can send traffic not only to pools, but also to
individual pool members or hosts.
The iRules you create can be simple or sophisticated, depending on your
content-switching needs. Figure 15.1 shows an example of a simple iRule.

when DNS_REQUEST {
   if { [IP::addr [IP::client_addr] equals 10.10.10.10] } {
      pool my_pool
   }
}

Figure 15.1 Example of an iRule

This iRule is triggered when a DNS request has been detected, causing
Global Traffic Manager to send the packet to the pool my_pool, if the IP
address of the local DNS making the request matches 10.10.10.10.
iRules can direct traffic not only to specific pools, but also to individual pool
members, including port numbers and URI paths, either to implement
persistence or to meet specific load balancing requirements.
The syntax that you use to write iRules is based on the Tool Command
Language (Tcl) programming standard. Thus, you can use many of the
standard Tcl commands, plus a set of extensions that Global Traffic
Manager provides to help you further increase load balancing efficiency.
For information about standard Tcl syntax, see the Tcl Reference Manual at
http://tmml.sourceforge.net/doc/tcl/index.html.
Within Global Traffic Manager, you assign iRules to the wide IPs in your
network configuration.


Event-based traffic management


In a basic system configuration where no iRule exists, Global Traffic
Manager directs incoming traffic to the default pool assigned to the wide IP
that receives that traffic based on the assigned load balancing modes.
However, you might want Global Traffic Manager to direct certain kinds of
connections to other destinations. The way to do this is to write an iRule that
directs traffic to those other destinations contingent on a certain type of event
occurring. Otherwise, traffic continues to go to the default pool assigned to
the wide IP.
iRules are evaluated whenever an event occurs that you have specified in the
iRule. For example, if an iRule includes the event declaration
DNS_REQUEST, then the iRule is triggered whenever Global Traffic
Manager receives a name resolution request. Global Traffic Manager then
follows the directions in the remainder of the iRule to determine the
destination of the packet.
When you assign multiple iRules as resources for a wide IP, it is important
to consider the order in which you list them on the wide IP. This is because
Global Traffic Manager processes duplicate iRule events in the order that
the applicable iRules are listed. An iRule event can therefore terminate the
triggering of events, thus preventing Global Traffic Manager from triggering
subsequent events.
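As a sketch of this event-driven flow, the following iRule (the subnet and pool name are hypothetical) overrides the default pool only for name resolution requests arriving from one LDNS address range; all other requests fall through to the wide IP's normal load balancing:

```tcl
when DNS_REQUEST {
   # Requests from this (hypothetical) LDNS subnet go to a
   # dedicated pool; everything else uses the wide IP default.
   if { [IP::addr [IP::client_addr] equals 192.0.2.0/24] } {
      pool my_regional_pool
   }
}
```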

Event declarations
The iRules feature includes several types of event declarations that you can
make in an iRule. Specifying an event declaration determines when Global
Traffic Manager evaluates the iRule. The following sections list and
describe these event types. Also described is the concept of iRule context
and the use of the when keyword.
You make an event declaration in an iRule by using the when keyword,
followed by the event name. For example:
when DNS_REQUEST {
   iRule details...
}

16
ZoneRunner

• ZoneRunner utility

• Zone files

• Resource records

• Views

• Named.conf

ZoneRunner utility
One of the modes in which you can operate the Global Traffic Manager™
system is node mode. In node mode, Global Traffic Manager is
responsible not only for load balancing name resolution requests and
monitoring the health of your physical and logical network; it is also
responsible for maintaining the DNS zone files that map name resolution
requests to the appropriate network resource.
In Global Traffic Manager, you create, manage, and maintain DNS files
using the ZoneRunner™ utility. The ZoneRunner utility is a zone file
management utility that can manage both DNS zone files and your BIND
configuration. With the ZoneRunner utility, you can:
• Manage the DNS zones and zone files for your network, including
importing and transferring zone files
• Manage the resource records for those zones
• Manage views
• Manage a local nameserver and its configuration file, named.conf

The ZoneRunner utility is an advanced feature of Global Traffic Manager.


F5 Networks highly recommends that you become familiar with the various
aspects of BIND and DNS before you use this feature. For in-depth
information, see the following resources:
• DNS and BIND, 4th edition, Paul Albitz and Cricket Liu
• The IETF DNS documents, RFC 1034 and RFC 1035
• The Internet Systems Consortium web site,
http://www.isc.org/products/BIND

ZoneRunner tasks
When you use the ZoneRunner utility to manage your DNS zones and
resource records, you can accomplish several tasks, including:
• Configure a zone
• Configure the resource records that make up the zone
• Configure a view, for access control
• Configure options in the named.conf file

Note

In the Configuration utility, you must configure a zone before you configure
any other objects in the ZoneRunner utility.


Zone files
With the ZoneRunner utility, you can create, modify, and delete zone files.
Additionally, you can transfer zone files to another nameserver, or import
zone files from another nameserver. A zone file contains resource records
and directives that describe the characteristics and hosts of a zone, otherwise
known as a domain or sub-domain.

Types of zone files


There are five types of zone files. Each type has its own content
requirements and role in the DNS.
The types of zones are:
◆ Primary (Master)
Zone files for a primary zone contain, at minimum, the start of authority
(SOA) and nameserver (NS) resource records for the zone. Primary
zones are authoritative, that is, they respond to DNS queries for the
domain or sub-domain. A zone can have only one SOA record, and must
have at least one NS record.
◆ Secondary (Slave)
Zone files for a secondary zone are copies of the primary zone files. At
an interval specified in the SOA record, secondary zones query the
primary zone to check for and obtain updated zone data. A secondary
zone responds authoritatively for the zone as long as the zone data is
valid.
◆ Stub
Stub zones are similar to secondary zones, except that stub zones contain
only the NS records for the zone. Note that stub zones are a specific
feature of the BIND implementation of DNS. F5 Networks recommends
that you use stub zones only if you have a specific requirement for this
functionality.
◆ Forward
The zone file for a forwarding zone contains only information to forward
DNS queries to another nameserver on a per-zone (or per-domain) basis.
◆ Hint
The zone file for a hint zone specifies an initial set of root nameservers
for the zone. Whenever the local nameserver starts, it queries a root
nameserver in the hint zone file to obtain the most recent list of root
nameservers.
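For reference, a minimal primary zone file, with the required single SOA record and at least one NS record, might look like this sketch (all names, addresses, and timer values are illustrative):

```
$TTL 86400
@     IN  SOA  ns1.siterequest.com. hostmaster.siterequest.com. (
                  2011081101  ; serial
                  10800       ; refresh
                  3600        ; retry
                  604800      ; expire
                  86400 )     ; negative-caching TTL
      IN  NS   ns1.siterequest.com.
ns1   IN  A    192.0.2.53
```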

Zone file import


Often, when you add Global Traffic Manager to your network, you already
have a DNS server that manages your zone files. Typically, Global Traffic
Manager can then become either a secondary server that provides backup


DNS information in case your primary DNS server goes offline, or the
primary DNS server. In either situation, you can use the ZoneRunner utility
to import existing zone files into Global Traffic Manager instead of
re-creating them manually. It is important to note that you can import only
primary zone files.
Through the ZoneRunner utility, you can import zone files using one of two
methods:
• Loading zones from a file
If you know where the zone files you want to import reside on your
server, you can load these files directly into Global Traffic Manager
through the ZoneRunner utility. After you load a zone file into Global
Traffic Manager, the ZoneRunner utility displays information about the
zone and any of its resource records within the Configuration utility.

Important
You can load only primary zone files.

• Transferring zones from a server


Instead of loading zones from a file, you have the option of transferring
them from an existing DNS server. This method is useful if the zone files
you need reside at a remote location. After you transfer a zone file into
Global Traffic Manager, the ZoneRunner utility displays information
about the zone and any of its resource records within the Configuration
utility.

Before you can transfer zone files from another server, you must ensure
that you have configured the source server to allow transfers to the
destination server. You typically accomplish this task using the
allow-transfer option. See your DNS and BIND documentation for more
information.

Important
You can transfer only primary zone files.
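On the source server, granting that permission typically looks something like the following named.conf fragment (the zone name and address are illustrative; check your BIND documentation for the syntax your version supports):

```
zone "siterequest.com" {
    type master;
    file "db.siterequest.com";
    // Permit zone transfers to the Global Traffic Manager address
    allow-transfer { 192.0.2.1; };
};
```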


Resource records
Resource records are the files that contain details about a zone. These
resource records, in a hierarchical structure, make up the domain name
system (DNS). After you have created a zone, you can use the ZoneRunner
utility to view, create, modify, and delete the resource records for that zone.

Note

Although case is preserved in names and data fields when loaded into the
nameserver, comparisons and lookups in the nameserver database are not
case-sensitive.

Types of resource records


The ZoneRunner utility supports a number of common resource records.
The types of resource records are:
◆ SOA (Start of authority)
The start of authority resource record, SOA, starts every zone file and
indicates that a nameserver is the best source of information for a
particular zone. The SOA record indicates that a nameserver is
authoritative for a zone. There must be exactly one SOA record per zone.
Unlike other resource records, you create an SOA record only when you
create a new master zone file.
◆ A (Address)
The Address record, or A record, lists the IP address for a given host
name. The name field is the host’s name, and the address is the network
interface address. There should be one A record for each IP address of
the machine.
◆ AAAA (IPv6 Address)
The IPv6 Address record, or AAAA record, lists the 128-bit IPv6 address
for a given host name.
◆ CNAME (Canonical Name)
The Canonical Name resource record, CNAME, specifies an alias or
nickname for the official, or canonical, host name. This record must be
the only one associated with the alias name. It is usually easier to supply
one A record for a given address and use CNAME records to define alias
host names for that address.
◆ DNAME (Delegation of Reverse Name)
The Delegation of Reverse Name resource record, DNAME, specifies the
reverse lookup of an IPv6 address. These records substitute the suffix of
one domain name with another. The DNAME record instructs Global
Traffic Manager (or any DNS server) to build an alias that substitutes a
portion of the requested IP address with the data stored in the DNAME
record.


◆ HINFO (Host Information)


The Host Information resource record, HINFO, contains information on
the hardware and operating system relevant to Global Traffic Manager
(or other DNS).
◆ MX (Mail Exchanger)
The Mail Exchange resource record, MX, defines the mail system(s) for
a given domain.
◆ NS (nameserver)
The nameserver resource record, NS, defines the nameservers for a given
domain, creating a delegation point and a subzone. The first name field
specifies the zone that is served by the nameserver that is specified in the
nameserver's name field. Every zone needs at least one nameserver.
◆ PTR (Pointer)
A name pointer resource record, PTR, associates a host name with a
given IP address. These records are used for reverse name lookups.
◆ SRV (Service)
The Service resource record, SRV, is a pointer that allows an alias for a
given service to be redirected to another domain. For example, if the
fictional company SiteRequest had an FTP archive hosted on
archive.siterequest.com, the IT department can create an SRV record
that allows an alias, ftp.siterequest.com to be redirected to
archive.siterequest.com.
◆ TXT (Text)
The Text resource record, TXT, allows you to supply any string of
information, such as the location of a server or any other relevant
information that you want available.
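In zone-file syntax, several of these record types for the fictional siterequest.com zone might look like the following (all names and addresses are illustrative):

```
www   IN  A      192.0.2.10                ; host address
www   IN  AAAA   2001:db8::10              ; IPv6 host address
ftp   IN  CNAME  archive.siterequest.com.  ; alias for the canonical host
@     IN  MX     10 mail.siterequest.com.  ; mail exchanger, preference 10
@     IN  TXT    "Server room B, rack 4"   ; free-form information
; and, in the 2.0.192.in-addr.arpa reverse zone:
10    IN  PTR    www.siterequest.com.      ; reverse name lookup
```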


Views
In BIND, a view allows you to modify the nameserver configuration based
on the community attempting to access it. For example, if your DNS handles
requests from both inside and outside your company, you can create two
views: internal and external. Through views, you can build nameserver
configurations on the same server, and have those configurations apply
dynamically when the request originates from a specified source.
In Global Traffic Manager, a single view is created automatically within the
ZoneRunner utility: external. If you do not want to create views, all zones
that Global Traffic Manager maintains are associated with this default view.


Named.conf
You define the primary operational characteristics of BIND using a single
file, named.conf. The functions defined in this file include views, access
control list definitions, and zones.
You can control most of the contents of the named.conf file through the
ZoneRunner utility, as this utility updates the named.conf file to implement
any modifications that you make. However, you can also use the
ZoneRunner utility to edit the named.conf file directly.

Important
Modifying the named.conf file carries a high level of risk, as a syntax error
can prevent the entire BIND system from performing as expected. For this
reason, F5 Networks recommends that you use the user interface of the
ZoneRunner utility whenever possible, and that you exercise caution when
editing the named.conf file.
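A named.conf fragment defining a view with one zone might look like this sketch (names and file paths are illustrative; the available options depend on your BIND version):

```
view "external" {
    match-clients { any; };

    zone "siterequest.com" {
        type master;
        file "db.siterequest.com";
    };
};
```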

A
big3d Agent

• Introduction

• Metrics

• Communications

Introduction
The big3d agent runs on all BIG-IP® systems, collects performance
information on behalf of the Global Traffic Manager™ system, and
continually monitors the availability of the servers that Global Traffic
Manager load balances. The utility also monitors the integrity of the
network paths between the servers that host the domain, and the various
local DNS servers that attempt to connect to the domain. Each big3d agent
broadcasts its collected data to all of the Global Traffic Manager systems
and Link Controller™ systems in your network, ensuring that these systems
work with the latest information.
You can turn off the big3d agent on any BIG-IP system at any time;
however, if you turn off the big3d agent on a server, Global Traffic
Manager can no longer check the availability of the server or its virtual
servers, and the statistics screens display the status of these servers as
unknown (blue ball).

Tip
F5 Networks recommends that you have at least one BIG-IP system running
the big3d agent in each data center in your network. This ensures that
Global Traffic Manager has timely access to the metrics associated with
network traffic.


Metrics
A big3d agent collects the following types of performance information that
the system uses for load balancing. The big3d agent broadcasts this
information to all Global Traffic Manager systems in your network.
◆ Network path round trip time
The big3d agent calculates the round trip time for the network path
between the utility’s data center and the client’s LDNS that is making the
resolution request. Global Traffic Manager uses round trip time to
determine the best virtual server to answer the request when a pool uses a
dynamic load balancing mode, such as Round Trip Time, or Quality of
Service.
◆ Network path packet loss
The big3d agent calculates the packet completion percentage for the
network path between the utility’s data center and the client’s LDNS that
is making the resolution request. Global Traffic Manager uses the packet
completion rate to determine the best virtual server to answer the request
when a wide IP or pool uses either the Completion Rate or the Quality of
Service load balancing modes.
◆ Router hops along the network path
The big3d agent calculates the number of intermediate system transitions
(router hops) between the utility’s data center and the client’s LDNS.
Global Traffic Manager uses hops to determine the best virtual server to
answer the request when a pool uses the Hops or the Quality of Service
load balancing modes.
◆ Server performance
The big3d agent returns server metrics, such as the packet rate, for
BIG-IP systems or SNMP-enabled hosts. Global Traffic Manager uses
packet rate to determine the best virtual server to answer the request
when a pool uses the Packet Rate, KBPS, Least Connections, or Quality
of Service load balancing modes.
◆ Virtual server availability and performance
The big3d agent queries virtual servers to verify whether they are up and
available to receive connections, and uses only those virtual servers that
are up for load balancing. The big3d agent also determines the number
of current connections to virtual servers that are defined on BIG-IP
systems or SNMP-enabled hosts. Global Traffic Manager uses the
number of current connections to determine the best virtual server when
a pool uses the Least Connections or VS Capacity load balancing mode.
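To make the role of these metrics concrete, the following sketch shows how a dynamic mode such as Quality of Service could weigh them against each other. The weights, normalization, and virtual server names are illustrative assumptions, not the actual GTM coefficients:

```python
def qos_score(rtt_ms, completion_rate, hops, packet_rate, weights=None):
    """Higher score = more attractive virtual server (illustrative only)."""
    w = weights or {"rtt": 50, "completion": 5, "hops": 64, "packet_rate": 1}
    score = 0.0
    if rtt_ms > 0:
        score += w["rtt"] / rtt_ms              # lower round trip time is better
    score += w["completion"] * completion_rate  # higher completion % is better
    if hops > 0:
        score += w["hops"] / hops               # fewer router hops is better
    score += w["packet_rate"] * packet_rate     # simplistic stand-in for load
    return score

# Compare two hypothetical virtual servers using sample metrics.
candidates = {
    "vs_la": qos_score(rtt_ms=40, completion_rate=99.0, hops=8, packet_rate=2),
    "vs_ny": qos_score(rtt_ms=120, completion_rate=97.0, hops=14, packet_rate=2),
}
best = max(candidates, key=candidates.get)
```

With these sample numbers, the lower round trip time, higher completion rate, and fewer hops make the Los Angeles virtual server score higher.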


Data collection with the big3d agent


Setting up the big3d agents involves the following tasks:
◆ Installing big3d agents on BIG-IP systems
Each new version of the Global Traffic Manager software includes the
latest version of the big3d agent. You need to distribute that copy of the
big3d agent to each BIG-IP system in the network. See the release notes
provided with the Global Traffic Manager software for information about
which versions of the BIG-IP software the current big3d agent supports.
◆ Setting up communications between big3d agents and other systems
Before the big3d agents can communicate with the Global Traffic
Manager systems in the network, you need to configure the appropriate
ports and tools to allow communication between the devices running the
big3d agent and Global Traffic Manager systems in the network.

big3d agent installation


You install the big3d agent by running the big3d_install script. When the
correct ports are open, Global Traffic Manager also automatically updates older
big3d agents on the network.
When you install the big3d agent, you must complete the following tasks:
• Install Global Traffic Manager.
• Add the BIG-IP systems as servers to the Global Traffic Manager
system.
• Exchange the appropriate web certificates between the Global Traffic
Manager system and other systems.
• Open ports 22 and 4353 between the Global Traffic Manager system and
the other BIG-IP systems.
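Before running big3d_install, it can be useful to confirm that ports 22 and 4353 are actually reachable. The following standalone Python sketch (not an F5 tool; the address in the usage comment is hypothetical) performs that check:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# SSH (22) for distributing big3d, and iQuery (4353).
REQUIRED_PORTS = (22, 4353)

def check_bigip(host):
    """Map each required port to whether it is reachable on the given host."""
    return {port: port_open(host, port) for port in REQUIRED_PORTS}

# Example (hypothetical management address):
# check_bigip("192.0.2.10")
```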

Data collection and broadcast sequence


The big3d agents collect and broadcast information on demand. A Global
Traffic Manager system in a synchronization group issues a data collection request
to all big3d agents running in the network. In turn, the big3d agents collect
the requested data, and then broadcast that data to all Global Traffic
Manager systems running in the network.

big3d agent configuration trade-offs


You must run a big3d agent on each BIG-IP system in your network if you
use dynamic load balancing modes (those that rely on path data). You must
have a big3d agent running on at least one system in each data center to
gather the necessary path metrics.


The load on the big3d agents depends on the timer settings that you assign
to the different types of data the big3d agents collect. The shorter the timers,
the more frequently the big3d agent needs to refresh the data. While short
timers guarantee that you always have valid data readily available for load
balancing, they also increase the frequency of data collection.
Another factor that can affect data collection is the number of client local
DNS servers that make name resolution requests. The more local DNS
servers that make resolution requests, the more path data that the big3d
agents have to collect. While round trip time for a given path may vary
constantly due to current network load, the number of hops along a network
path between a data center and a specific LDNS does not often change.
Consequently, you may want to set short timer settings for round trip time
data so that it refreshes more often, but set high timer settings for hops data
because it does not need to be refreshed often.
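The trade-off above is easy to quantify. In this hedged sketch, the timer values and LDNS count are examples only; it shows how a short round trip time timer dominates the probe load compared to a long hops timer:

```python
def probes_per_hour(num_ldns, timers_s):
    """Path-metric refreshes per hour for each metric, across all LDNS entries."""
    return {metric: num_ldns * 3600 // interval
            for metric, interval in timers_s.items()}

# Example timers: refresh round trip time every minute, hops once an hour.
timers = {"rtt": 60, "hops": 3600}
load = probes_per_hour(num_ldns=500, timers_s=timers)
# With 500 known LDNS entries, RTT dominates the big3d workload:
# 30000 RTT refreshes per hour versus 500 hops refreshes.
```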


Communications
In order to copy big3d agents from a Global Traffic Manager system to
BIG-IP systems, the Global Traffic Manager system must be able to
communicate with these other systems. Specifically, every BIG-IP system
that you define as a server on the Global Traffic Manager system must
have sufficient network privileges and configured routes to be able to probe
the virtual servers that it hosts, as well as the virtual servers hosted by other
servers defined on the Global Traffic Manager systems in a synchronization
group.
In the following configuration, every big3d agent that the Global Traffic
Manager synchronization group recognizes must be able to probe the virtual
server 10.1.0.1:80 via TCP.
server {                        // datacenter=DC1, #VS=1
   name "Generic Host Server 1"
   type generic
   box {
      address 10.1.0.1
      unit_id 1
   }
   monitor "http"
   vs {
      name "Generic_VS1"
      address 10.1.0.1:80       // http
   }
}

iQuery and the big3d agent


The iQuery® protocol uses one of two ports to communicate between the
big3d agents throughout the network and Global Traffic Manager systems.
The ports used by iQuery traffic change, depending on whether the traffic is
inbound from the big3d agent or outbound from Global Traffic Manager.
Table A.1 shows the protocols and ports for both inbound and outbound
iQuery communications between Global Traffic Manager systems and
big3d agents distributed in your network.

From          To            Protocol   From port   To port

GTM system    big3d agent   TCP        >1023       4353

big3d agent   GTM system    TCP        4353        >1023

Table A.1  Communication between big3d agents and Global Traffic Manager systems


Table A.2 shows the protocols and corresponding ports used for iQuery
communications between big3d agents and SNMP agents that run on host
servers.

From              To                Protocol   From port   To port   Purpose

big3d agent       host SNMP agent   UDP        >1023       161       Ephemeral ports used to make SNMP queries for host statistics

host SNMP agent   big3d agent       UDP        161         >1023     Ephemeral ports used to receive host statistics using SNMP

Table A.2  Communication between big3d agents and SNMP agents on hosts

Table A.3 shows the ports used for communications between big3d agents
and virtual servers that are not hosted by a BIG-IP system.

From          To               Protocol   From port   To port        Purpose

big3d agent   virtual server   UDP        >1024       Service port   Ephemeral ports used to monitor host virtual servers

big3d agent   virtual server   TCP        >1024       Service port   Ephemeral ports used to monitor host virtual servers

Table A.3  Communication between big3d agents and virtual servers not hosted by BIG-IP systems

iQuery and firewalls


The payload information of an iQuery packet contains information that
potentially requires network address translation when there is a firewall in
the path between the big3d agent and the Global Traffic Manager system.
The firewall translates only the packet headers, not the payloads.
The virtual server translation option resolves this issue. When you configure
address translation for virtual servers, the iQuery packet stores the original
IP address in the packet payload itself. When the packet passes through a
firewall, the firewall translates the IP address in the packet header normally,
but the IP address within the packet payload is preserved. Global Traffic
Manager reads the IP address out of the packet payload, rather than out of
the packet header.
For example, suppose a firewall separates the path between a BIG-IP system
running a big3d agent and the Global Traffic Manager system. The packet
addresses are translated at the firewall. However, addresses within the iQuery
payload are not translated, and they arrive at the receiving system in their
original state.
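The mechanism can be illustrated with a toy model (this is not the iQuery wire format; the addresses are documentation examples): a NAT firewall rewrites the header address, while the address embedded in the payload passes through untouched:

```python
def firewall_nat(packet, public_addr):
    """Rewrite only the header source address, as a NAT firewall does."""
    translated = dict(packet)
    translated["header_src"] = public_addr
    return translated

original = {
    "header_src": "10.0.0.5",               # private address of the big3d agent
    "payload": {"agent_addr": "10.0.0.5"},  # original address carried in the payload
}

seen_by_gtm = firewall_nat(original, "203.0.113.9")

assert seen_by_gtm["header_src"] == "203.0.113.9"          # header was translated
assert seen_by_gtm["payload"]["agent_addr"] == "10.0.0.5"  # payload preserved
```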


Communications between Global Traffic Managers, big3d agents, and local DNS servers
Table A.4 shows the protocols and ports that the big3d agent uses when
collecting path data for local DNS servers.

From    To      Protocol          From port   To port   Purpose

big3d   LDNS    ICMP              N/A         N/A       Probe using ICMP pings

big3d   LDNS    TCP               >1023       53        Probe using TCP (Cisco® routers: allow establish)

LDNS    big3d   TCP               53          >1023     Replies using TCP (Cisco® routers: allow establish)

big3d   LDNS    UDP               53          33434     Probe using UDP or the traceroute utility

LDNS    big3d   ICMP              N/A         N/A       Replies to ICMP, UDP pings, or traceroute probes

big3d   LDNS    dns_rev/dns_dot   >1023       53        Probe using DNS rev or DNS dot

LDNS    big3d   dns_rev/dns_dot   53          >1023     Replies to DNS rev or DNS dot probes

Table A.4  Communications between big3d agents and local DNS servers

B
Probes

• Introduction

• About iQuery

• Probe responsibility

• Probes and the big3d agent

• LDNS probes

• Probes and log entries



Introduction
When you install a Global Traffic Manager™ system in a network, that
system typically works within a larger group of BIG-IP® products. These
products include other Global Traffic Manager systems, Link Controller™
systems, and Local Traffic Manager™ systems. Global Traffic Manager
must be able to communicate with these other systems to maintain an
accurate assessment of the health and availability of different network
components. For example, Global Traffic Manager must be able to acquire
statistical data from resources that are managed by Local Traffic Manager in
a different data center. BIG-IP systems acquire this information through the
use of probes. A probe is an action a BIG-IP system takes to acquire data
from other network resources.
Probes are an essential means by which Global Traffic Manager tracks the
health and availability of network resources; however, it is equally
important that the responsibility for conducting probes be distributed across
as many BIG-IP products as possible. This distribution ensures that no one
system becomes overloaded with conducting probes, which can cause a
decrease in performance in the other tasks for which a BIG-IP system is
responsible.
To distribute probe requests effectively across multiple BIG-IP systems,
Global Traffic Manager systems employ several different technologies and
methodologies, including:
• iQuery®, which is the communication protocol used between Global
Traffic Manager systems and the big3d agents that reside on other
BIG-IP systems
• A selection methodology that determines which Global Traffic Manager
is responsible for managing the probe request
• A selection methodology that determines which big3d agent actually
conducts the probe

An important concept to remember is that the process by which Global
Traffic Manager acquires network data consists of several tasks:
• Global Traffic Manager is chosen to be responsible for the probe.
• Global Traffic Manager delegates the probe to a big3d agent.
• The big3d agent conducts the probe.
• The big3d agent broadcasts the results of the probe, allowing all Global
Traffic Manager systems to receive the information.


About iQuery
At the heart of probe management with Global Traffic Manager systems is
iQuery, the communications protocol that these systems use to send
information from one system to another. With iQuery, Global Traffic
Manager systems in the same synchronization group can share configuration
settings, assign probe requests to big3d agents, and receive data on the
status of network resources.
The iQuery protocol is an XML protocol that is sent between each system
using gzip compression and SSL. These communications can only be
allowed between systems that have a trusted relationship established, which
is why configuration tools such as big3d_install, bigip_add, and gtm_add
are critical when installing or updating Global Traffic Manager systems. If
two systems have not exchanged their SSL certificates, they cannot share
information with each other using iQuery.
In addition to requiring trusted relationships, systems send iQuery
communications only on the VLAN on which the system received the
incoming message. Also, iQuery communications occur only within the
same synchronization group. If your network consists of two
synchronization groups, with each group sharing a subset of network
resources, these groups probe the network resources and communicate with
iQuery separately.
Generally, iQuery communications require no user intervention; however,
on occasion it can be necessary to view the data transmitted between each
system. For example, you might be troubleshooting the reason that a Global
Traffic Manager system is exhibiting a particular behavior. In such a
situation, you can use the iqdump command.


Probe responsibility
When you assign a monitor to a network resource, Global Traffic Manager
is responsible for ensuring that a big3d agent probes the selected resource. It
is important to remember that this does not necessarily mean the selected
Global Traffic Manager actually conducts the probe; it means only that a
specific Global Traffic Manager is in charge of assigning a big3d agent to
probe the resource. The big3d agent can be installed on the same system as
Global Traffic Manager, a different Global Traffic Manager, or another
BIG-IP system.
A crucial factor in determining which system manages a probe request
is the set of data centers that you define in the Global Traffic Manager
configuration. For each probe, the Global Traffic Manager systems
determine the following:
• Is there a Global Traffic Manager system in the same data center as the
resource?
• Is there more than one Global Traffic Manager at that data center?

By default, Global Traffic Manager systems delegate probe management to
a system that belongs to the same data center as the resource, since the close
proximity of system and resource improves probe response time.
To illustrate how these considerations factor into probe management,
consider a fictional company, SiteRequest. This company has three data
centers: one in Los Angeles, one in New York, and one in London. The
following table lists a few characteristics of each data center.

Data center    Characteristics

Los Angeles    Two Global Traffic Manager systems, configured as a redundant system

New York       A single Global Traffic Manager

London         Resources only; no Global Traffic Manager systems

Table B.1  Characteristics of the data centers at SiteRequest

Now, consider that you want to acquire statistical data from a resource in the
New York data center. First, the Global Traffic Manager systems, based on
their iQuery communications with each other, identify whether there is a
Global Traffic Manager system that belongs to the New York data center. In
this case, the answer is yes; the New York data center contains a Global
Traffic Manager system. Next, the systems determine if more than one
Global Traffic Manager belongs to the New York data center. In this case,
the answer is no; the New York data center has only a stand-alone system.
Consequently, the Global Traffic Manager system in the New York data
center assumes responsibility for conducting the probe on this particular
resource.


In situations where more than one Global Traffic Manager belongs to a data
center, the systems use an algorithm to distribute the responsibility for
probes equally among Global Traffic Manager systems. This distribution
ensures that each Global Traffic Manager has an equal chance of being
responsible for managing a probe request.
To demonstrate how probe requests are delegated between two Global
Traffic Manager systems at the same data center, consider again the network
configuration at SiteRequest. This time, the company needs to acquire data
from a resource that resides at the Los Angeles data center. As with the
previous example, the first step identifies whether the Los Angeles data
center has any Global Traffic Manager systems; in this case, the answer is
yes. The next criterion is whether there is more than one Global Traffic
Manager at that data center; in this case, the answer is also yes: the Los
Angeles data center has a redundant system configuration that consists of
two Global Traffic Manager systems. Because there are two Global Traffic
Manager systems at this data center, each system compares the hash value of
the resource with its own information; whichever Global Traffic Manager
has the closest value to the resource becomes responsible for managing the
probe request.
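The closest-hash selection described above can be sketched as follows. The actual algorithm GTM uses is not specified here, so this stand-in simply hashes the resource and each system and picks the numerically closest system; the addresses are examples:

```python
import hashlib

def stable_hash(value):
    """Deterministic integer hash of a string."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

def responsible_gtm(resource, gtm_addrs):
    """Return the GTM whose hash is numerically closest to the resource's hash."""
    target = stable_hash(resource)
    return min(gtm_addrs, key=lambda gtm: abs(stable_hash(gtm) - target))

la_gtms = ["10.10.0.1", "10.10.0.2"]   # redundant pair in Los Angeles
owner = responsible_gtm("10.10.5.80:80", la_gtms)

# The mapping is stable: the same resource maps to the same system
# until the configuration changes.
assert owner == responsible_gtm("10.10.5.80:80", la_gtms)
```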
A final consideration is if a data center does not have any Global Traffic
Manager systems at all, such as the London data center in the configuration
for SiteRequest. In this situation, the responsibility for probing a resource at
that data center is divided among the other Global Traffic Manager systems,
much in the same way as the responsibility is divided among Global Traffic
Manager systems within the same data center.
When Global Traffic Manager becomes responsible for managing a probe, it
remains responsible for that probe until the network configuration changes
in one of the following ways:
• Global Traffic Manager goes offline.
• A new Global Traffic Manager system is added to the data center.
• The network configuration of the resource (such as its IP address)
changes.


Probes and the big3d agent


The first stage in conducting a probe of a network resource is to select the
Global Traffic Manager system. In turn, Global Traffic Manager delegates
the probe to a big3d agent, which is responsible for querying the given
network resource for data.
The process of delegating probes of network resources is similar to the
two-tiered load balancing method that Global Traffic Manager uses when
distributing traffic. With DNS traffic, Global Traffic Manager identifies the
wide IP to which the traffic belongs, and then load balances that traffic
among the pools associated with the wide IP. After it selects a pool, the
system load balances the request across the pool members within that pool.
Delegating probe requests occurs in a similar two-tiered fashion. First, the
Global Traffic Manager systems within a synchronization group determine
which system is responsible for managing the probe. This does not
necessarily mean that the selected Global Traffic Manager conducts the
probe itself; it means only that a specific Global Traffic Manager ensures
that the probe takes place. Next, Global Traffic Manager selects one of the
available big3d agents to actually conduct the probe. As each BIG-IP
system has a big3d agent, the number of agents available to conduct the
probe depends on the number of BIG-IP systems.
To illustrate how these considerations factor into probe management,
consider the fictional company, SiteRequest. This company has three data
centers: one in Los Angeles, one in New York, and one in London. The
following table lists a few characteristics of each data center:

Data center    Characteristics

Los Angeles    Two Global Traffic Manager systems, configured as a redundant system
               Two Local Traffic Manager systems

New York       A single Global Traffic Manager
               Two Local Traffic Manager systems, configured as a redundant system

London         Resources only; no Global Traffic Manager systems
               A single Local Traffic Manager

Table B.2  Characteristics of the data centers at SiteRequest

Consider that a Global Traffic Manager system in the Los Angeles data
center has assumed responsibility for managing a probe for a network
resource. At this data center, the system can assign the probe to one of four
big3d agents: one for each BIG-IP system at the data center. To select a
big3d agent, Global Traffic Manager looks to see which big3d agent has the
fewest number of probes for which it is responsible. The big3d agent with
the lowest number of probes is tasked with conducting the probe. Global
Traffic Manager checks this statistic each time it needs to delegate the
probe; as a result, the selected big3d agent can change from probe instance to
probe instance.
In situations where a big3d agent does not reside in the same data center as
the resource, the designated Global Traffic Manager selects a big3d agent from all
available big3d agents on the network. Again, the agent selected is the agent
with the fewest number of probe requests, and this check occurs each time
the probe is conducted.
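The fewest-probes selection can be sketched in a few lines; the agent names and counts are invented for illustration:

```python
def pick_agent(probe_counts):
    """Return the big3d agent currently responsible for the fewest probes."""
    return min(probe_counts, key=probe_counts.get)

# Current probe counts for the four agents in Los Angeles (invented numbers).
probe_counts = {"bigip-la-1": 12, "bigip-la-2": 7, "ltm-la-1": 9, "ltm-la-2": 11}
agent = pick_agent(probe_counts)   # "bigip-la-2" has the fewest probes
probe_counts[agent] += 1           # the chosen agent takes on the new probe

# The check is repeated per probe, so the selection can differ next time.
```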
For example, SiteRequest adds a new set of web servers in Tokyo. At this
location, the company has yet to install its BIG-IP systems; however, the
current set of Global Traffic Manager systems in Los Angeles and New
York are managing traffic to these web servers. When initiating a probe
request to determine the availability of one of these servers, a Global Traffic
Manager system is selected to manage the probe request. Then, that system
chooses a big3d agent to probe the web server, selecting any big3d agent
located in Los Angeles, New York, or London.


LDNS probes
Global Traffic Manager systems are responsible for probes of local DNS
servers (LDNS). Unlike probes conducted on internal systems, such as web
servers, probes of local DNS servers require that the Global Traffic Manager
system verifies data from a resource that exists outside the network.
Typically, this data is the path information Global Traffic Manager requires
when conducting Quality of Service, Round Trip Time, Completion Rate,
and Hops load balancing methods.

Note

If you do not use Quality of Service load balancing, Global Traffic Manager
does not conduct probes of local DNS servers.

When a given LDNS makes a DNS request for a wide IP, that request is sent
to a single Global Traffic Manager. Global Traffic Manager then creates an
LDNS entry, and assigns that entry one of the following states:
• New: Global Traffic Manager has not come across this particular LDNS
before
• Active: Global Traffic Manager already has an existing entry for this
LDNS
• Pending: Global Traffic Manager has been contacted by this LDNS
before; however, this server has yet to respond to a probe from a Global
Traffic Manager system on this network

In general, the New and Pending states are temporary states; an LDNS
remains in one of these states only until it responds to the first probe request
from Global Traffic Manager. After Global Traffic Manager receives a
response, the LDNS entry is moved to the Active state. Each Global Traffic
Manager within a given synchronization group shares the LDNS entries that
are assigned this state, resulting in the synchronization group having a
common list of known local DNS servers.
Unlike internal probes, LDNS probes are not load balanced across Global
Traffic Manager systems. Instead, the Global Traffic Manager system that
the LDNS first queries becomes responsible for the initial probe to that
LDNS. These probes are load balanced, however, across the multiple big3d
agents, with preference given to big3d agents that either belong to the same
data center as the responding Global Traffic Manager, or belong to the same
link through which Global Traffic Manager received the LDNS query. After
the initial probe, an algorithm is used to load balance subsequent probes
across the available Global Traffic Manager systems.


The process for identifying and managing LDNS probe requests is as follows:
1. An LDNS sends a DNS request to Global Traffic Manager.
2. The Global Traffic Manager system that responds to the request determines whether it
already has an entry for the LDNS. If it does not, it creates an entry
with a status of New.
3. Global Traffic Manager delegates the probe of the LDNS to a big3d
agent; preferably a big3d agent that resides in the same data center
as the Global Traffic Manager system.
4. When the LDNS responds to the probe, it sends its information to
Global Traffic Manager.
5. Global Traffic Manager updates its entry for the LDNS, assigning it
an Active status.
6. Global Traffic Manager synchronizes its list of active local DNS
servers with the other members of its synchronization group.
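The steps above imply a small state machine for each LDNS entry, which can be sketched as follows (the class is illustrative; only the New, Pending, and Active states come from this appendix):

```python
class LDNSEntry:
    """Illustrative lifecycle of an LDNS entry; states come from this appendix."""

    def __init__(self, address):
        self.address = address
        self.state = "New"          # first request from this LDNS

    def probe_delegated(self):
        if self.state == "New":
            self.state = "Pending"  # contacted, but no probe response yet

    def probe_answered(self):
        self.state = "Active"       # responded; shared across the sync group

entry = LDNSEntry("192.0.2.53")
entry.probe_delegated()
assert entry.state == "Pending"
entry.probe_answered()
assert entry.state == "Active"
```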


Probes and log entries


Probes are the means by which Global Traffic Manager tracks the health and
availability of network resources, and it is important that the responsibility
for conducting probes is distributed across as many BIG-IP products as
possible. You can use information in the Global Traffic Manager log file to
determine how to fine tune the probes that you have configured. However,
the probe logs feature is disabled by default. You must turn on the feature
for the probe information to appear in the log file.
If you want Global Traffic Manager to gather information about probes and
save it in the log file, you must set the database variable
GTM.DebugProbeTuningInterval to a non-zero value. The value of the
variable indicates, in seconds, how often you want the system to add probe
information to the log file. By default this variable is set to 0 (zero), which
disables the logging of information about probes.
To change the value of the database variable, use the following tmsh command:
modify /sys db gtm.debugprobetuninginterval value <interval in seconds>
For information about the command syntax you use to change this variable,
see the tmsh man pages.

Probe information in the log file


The probe information displays in the logs in the Configuration utility when
the GTM setting on the Logs screen is set to the default value of Notice.
When you set the GTM.DebugProbeTuningInterval database variable to a
non-zero value, the log file contains information about probes including the
number of local DNS servers, Global Traffic Manager systems, paths, and
persistence records in your network. The log file also includes the
information in the following list.
◆ For monitors:
• The time in microseconds that each monitor spends in the active
queue
• For each active monitor, the log file displays the following
information:
• Base name
• Monitor name
• Number of total instances

BIG-IP® Global Traffic ManagerTM Concepts Guide B-9


Appendix B

• Number of up instances and the average and maximum probe time for
each up instance
• Number of down instances, the average probe time for each
down instance, and a sorted list of reasons that the instance is
down. Each reason in the list is followed by the number of instances
that were marked down for this reason.
◆ For each Global Traffic Manager and Local Traffic Manager:
• Datacenter name
• Server name
• IP address
• Current tmm CPU usage
• Number of virtual servers in each state: up or down
• Active and pending queue sizes for monitors, SNMP monitors, and
paths
• Number of monitors that have received a down response from the
system
◆ For each host server:
• Datacenter name
• Server name
• IP address
• CPU usage
• Memory usage
Note: This value is -1, unless an SNMP monitor is assigned to the
server.
• Number of virtual servers in each state: up or down

Glossary

A record
The A record is the resource record that Global Traffic Manager™ returns to
a local DNS server in response to a name resolution request. The A record
contains a variety of information, including one or more IP addresses that
resolve to the requested domain name.

access control list (ACL)
An access control list is a list of local DNS server IP addresses that are
excluded from path probing or hops queries.

active unit
In a redundant system configuration, an active unit is a system that currently
load balances name resolution requests. If the active unit in the redundant
system fails, the standby unit assumes control and begins to load balance
requests.

alternate method
The alternate method specifies the load balancing mode that Global Traffic
Manager uses to pick a virtual server if the preferred method fails. See also
fallback method, preferred method.

big3d agent
The big3d agent is a monitoring agent that collects metrics information
about server performance and network paths between a data center and a
specific local DNS server. Global Traffic Manager uses the information
collected by the big3d agent for dynamic load balancing.

BIG-IP system
A BIG-IP system can be a Global Traffic Manager system (including the
current Global Traffic Manager system), a Local Traffic Manager™ system,
or a Link Controller™ system.

BIND (Berkeley Internet Name Domain)
BIND is the most common implementation of the Domain Name System
(DNS). BIND provides a system for matching domain names to IP
addresses. For more information, refer to
http://www.isc.org/products/BIND.

bridge mode
Bridge mode instructs Global Traffic Manager to forward the traffic it
receives to another part of the network.


CIDR (Classless Inter-Domain Routing)
Classless Inter-Domain Routing (CIDR) is an expansion of the IP address
system that allows a single IP address to be used to designate many unique
IP addresses. A CIDR IP address looks like a standard IP address except that
it ends with a slash followed by a number, which is the IP network prefix.
For example: 172.200.0.0/16

CNAME record
A canonical name (CNAME) record acts as an alias to another domain
name. A canonical name and its alias can belong to different zones, so the
CNAME record must always be entered as a fully qualified domain name.
CNAME records are useful for setting up logical names for network
services so that they can be easily relocated to different physical hosts.

completion rate
The completion rate is the percentage of packets that a server successfully
returns during a given session.

Completion Rate mode
The Completion Rate mode is a dynamic load balancing mode that
distributes connections based on which network path drops the fewest
packets, or allows the fewest number of packets to time out.

Configuration utility
The Configuration utility is the browser-based application that you use to
configure the BIG-IP system.

content delivery network (CDN)
A content delivery network (CDN) is an architecture of web-based network
components that helps dramatically reduce the wide-area network latency
between a client and the content they wish to access. A CDN includes some
or all of the following network components: wide-area traffic managers,
Internet service providers, content server clusters, caches, and origin content
providers.

custom monitor
A custom monitor is a user-created monitor. See also monitor, health
monitor, performance monitor, pre-configured monitor.

data center
A data center is a physical location that houses one or more Global Traffic
Manager systems, BIG-IP systems, or host machines.

data center server


A data center server is any server recognized in the Global Traffic Manager
configuration. A data center server can be any of the following: a Global
Traffic Manager system, a BIG-IP system, or a host.


destination statement
A destination statement defines the resource to which Global Traffic
Manager directs the name resolution request.

distributed application
A distributed application is a collection of wide IPs, data centers, and links.
It is the highest-level component that Global Traffic Manager supports.

DNSSEC (DNS Security Extensions)


DNSSEC is a set of extensions to DNS that protects a computer network
against most of the threats to the Domain Name System.

DNSSEC zones
DNSSEC zones are containers that map a domain name to a set of DNSSEC
keys.

domain name
A domain name is a unique name that is associated with one or more IP
addresses. Domain names are used in URLs to identify particular web pages.
For example, in the URL http://www.f5.com/index.html, the domain name
is f5.com.

draining requests
Draining requests refers to allowing existing sessions to continue accessing
a specific set of resources while disallowing new connections.

Drop Packet mode


The Drop Packet load balancing mode instructs Global Traffic Manager to
take no action on a request and simply drop the packet.

dynamic load balancing modes


Dynamic load balancing modes base the distribution of name resolution
requests to virtual servers on live data, such as current server
performance and current connection load.

Dynamic Ratio weighting


Dynamic Ratio weighting is a methodology in which the system
continuously checks the performance of each link and sends traffic through
the link with the best performance data.

EAV (Extended Application Verification)


EAV is a health check that verifies an application on a node by running that
application remotely. The EAV health check is one of the three types of
health checks available on a Link Controller™. See also health monitor,
external monitor.


EAV monitor
An EAV monitor checks the health of a resource by accessing the specified
application.

ECV (Extended Content Verification)


On Global Traffic Manager, ECV is a service monitor that checks the
availability of actual content (such as a file or an image) on a server, rather
than just checking the availability of a port or service, such as HTTP on port
80.

ECV monitor
An ECV monitor checks the health of a resource by sending a query for
content using the specified protocol, and waiting to receive the content from
the resource. See also monitor, health monitor, external monitor.
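The check can be sketched as follows; this is a simplified illustration with a stand-in responder, not the product's monitor implementation:

```python
# ECV-style check: send a request string, then verify that the expected
# content appears in the response (simplified; no real network I/O).
def ecv_check(send_fn, send_string, receive_string):
    response = send_fn(send_string)
    return receive_string in response

# A stand-in responder used in place of a live HTTP server:
def fake_server(request):
    return "HTTP/1.0 200 OK\r\n\r\n<html>index page</html>"

print(ecv_check(fake_server, "GET /index.html HTTP/1.0\r\n\r\n", "index"))  # True
```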

external monitor
An external monitor is a user-supplied health monitor. See also health
monitor.

external system
An external system is any server with which Global Traffic Manager must
exchange information to perform its functions.

failover
Failover is the process whereby a standby unit in a redundant system
configuration takes over when a software failure or hardware failure is
detected on the active unit.

failover cable
The failover cable is the cable that directly connects the two system units in
a hardware-based redundant system configuration.

fallback method
The fallback method is the third method in a load balancing hierarchy that
Global Traffic Manager uses to load balance a resolution request. Global
Traffic Manager uses the fallback method only when the load balancing
modes specified for the preferred and alternate methods fail. Unlike the
preferred method and the alternate method, the fallback method uses neither
server nor virtual server availability for load balancing calculations. See also
preferred method, alternate method.

Global Availability mode


Global Availability is a static load balancing mode that bases connection
distribution on a particular server order, always sending a connection to the
first available server in the list. This mode differs from Round Robin mode
in that it searches for an available server always starting with the first server
in the list, while Round Robin mode searches for an available server starting
with the next server in the list (with respect to the server selected for the
previous connection request).
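The contrast between the two search orders can be sketched in Python; this is an illustration, not product code:

```python
def global_availability(servers, available):
    """Always scan from the top of the configured list."""
    for server in servers:
        if available(server):
            return server
    return None

def round_robin(servers, available, last_index):
    """Scan starting just after the previously selected index."""
    count = len(servers)
    for offset in range(1, count + 1):
        index = (last_index + offset) % count
        if available(servers[index]):
            return servers[index]
    return None

servers = ["vs-a", "vs-b", "vs-c"]
up = lambda s: True

print(global_availability(servers, up))        # vs-a, every time it is up
print(round_robin(servers, up, last_index=0))  # vs-b, the next in order
```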

Global Traffic Manager


Global Traffic Manager provides wide-area traffic management and high
availability of IP applications/services running across multiple data centers.

gtmd
The gtmd utility processes communications between two Global Traffic
Manager systems.

health monitor
A health monitor checks a node to see if it is up and functioning for a given
service. If the node fails the check, it is marked down. Different monitors
exist for checking different services. See also monitor, custom monitor,
pre-configured monitor, performance monitor.

host
A host is a network server that manages one or more virtual servers that
Global Traffic Manager uses for load balancing.

ICMP (Internet Control Message Protocol)


ICMP is an Internet communications protocol used to determine information
about routes to destination addresses, such as nodes that are managed by
BIG-IP systems.

iQuery
The iQuery® protocol is used to exchange information between Global
Traffic Manager systems and BIG-IP systems. The iQuery protocol is
officially registered with IANA for port 4353, and works on UDP and TCP
connections.

iRule
An iRule is a user-written script that controls the behavior of a connection
passing through the Global Traffic Manager™ system. iRules® are an F5
Networks feature and are frequently used to direct certain connections to a
non-default load balancing pool. However, iRules can perform other tasks,
such as implementing secure network address translation and enabling
session persistence.

key-signing key
Global Traffic Manager uses key-signing keys to sign only the DNSKEY
record of a DNSSEC record set. See also DNSSEC (DNS Security
Extensions), DNSSEC zones, and zone-signing key.


Kilobytes/Second mode
The Kilobytes/Second mode is a dynamic load balancing mode that
distributes connections based on which available server currently processes
the fewest kilobytes per second.

LDNS
An LDNS is a server that makes name resolution requests on behalf of a
client. With respect to Global Traffic Manager, local DNS servers are the
source of name resolution requests.

Least Connections mode


The Least Connections mode is a dynamic load balancing mode that bases
connection distribution on which server currently manages the fewest open
connections.

link
A link is a logical representation of a physical device (router), which
connects your network to the rest of the Internet.

Link Controller
Link Controller™ is an IP application switch that manages traffic to and
from a site across multiple links, regardless of connection type or provider.

listener
A listener is an object that listens for DNS queries. A listener instructs
Global Traffic Manager to listen for network traffic destined for a specific
IP address.

load balancing methods


Load balancing methods are the settings that specify the hierarchical order
in which Global Traffic Manager uses three load balancing modes. The
preferred method specifies the first load balancing mode that Global Traffic
Manager tries, the alternate method specifies the next load balancing mode
to try if the preferred method fails, and the fallback method specifies the last
load balancing mode to use if both the preferred and the alternate methods
fail. See also alternate method, fallback method, and preferred method.
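The hierarchy can be sketched as a simple chain; the mode functions here are hypothetical stand-ins, not product code:

```python
def resolve(request, preferred, alternate, fallback):
    """Try each load balancing mode in order; a mode returns None on failure."""
    for mode in (preferred, alternate, fallback):
        result = mode(request)
        if result is not None:
            return result
    return None

failing_mode = lambda request: None        # cannot pick a virtual server
static_mode = lambda request: "10.0.0.5"   # always returns an answer

# The preferred and alternate modes fail, so the fallback answer is used.
print(resolve("www.example.com", failing_mode, failing_mode, static_mode))
```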

load balancing mode


A load balancing mode is the way in which Global Traffic Manager
determines how to distribute connections across an array.

logical network components


Logical network components are abstractions of network resources, such as
virtual servers. See also physical network components.


metrics information
Metrics information is the data that is typically collected about the paths
between BIG-IP systems and local DNS servers. Metrics information is also
collected about the performance and availability of virtual servers. Metrics
information is used for load balancing, and it can include statistics such as
round trip time, packet rate, and packet loss.

monitor
A monitor is a software utility that checks a specific metric of a Global
Traffic Manager resource, testing whether the resource responds as
expected. See also custom monitor, pre-configured monitor,
health monitor, performance monitor.

monitor template
A monitor template is an abstraction that exists within the Global Traffic
Manager system for each monitor type, and contains a group of settings and
default values.

named
The named daemon is the name server process of BIND; it answers DNS
queries for the zones it manages.

nameserver
A nameserver is a server that maintains a DNS database, and resolves
domain name requests to IP addresses using that database.

name resolution
Name resolution is the process by which a nameserver matches a domain
name request to an IP address, and sends the information to the client
requesting the resolution.

Network Time Protocol (NTP)


Network Time Protocol (NTP) functions over the Internet to synchronize
system clocks to Coordinated Universal Time (UTC). NTP provides a
mechanism to set and maintain clock synchronization within milliseconds.

node
A node is a logical object on the BIG-IP system that identifies the IP address
of a physical resource on the network, such as a web server.

Node mode
The Node mode instructs Global Traffic Manager to process traffic locally,
and send the appropriate DNS response back to the querying server.


NS record
A nameserver (NS) record is used to define a set of authoritative
nameservers for a DNS zone. A nameserver is considered authoritative for
some given zone when it has a complete set of data for the zone, allowing it
to answer queries about the zone on its own, without needing to consult
another nameserver.

packet rate
The packet rate is the number of data packets per second processed by a
server.

Packet Rate mode


The Packet Rate mode is a dynamic load balancing mode that distributes
connections based on which available server currently processes the fewest
packets per second.

path
A path is a logical network route between a data center server and a local
DNS server.

path probing
Path probing is the process of collecting metrics data, such as round trip
time and packet rate, for a given path between a requesting LDNS and a data
center server.

performance monitor
Performance monitors check the performance of a pool or virtual server, and
dynamically load balance traffic accordingly. See also monitor,
pre-configured monitor, custom monitor, health monitor.

persistence
On Global Traffic Manager, persistence applies to a series of related requests
received from the same local DNS server for the same wide IP name. When
persistence is activated, Global Traffic Manager sends all requests from a
particular local DNS server for a specific wide IP to the same virtual server,
instead of load balancing the requests.
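The behavior amounts to caching the first answer per (LDNS, wide IP) pair, as in this illustrative sketch:

```python
persistence = {}

def answer(ldns, wide_ip, load_balance):
    """Reuse the first answer for this LDNS and wide IP instead of
    load balancing again."""
    key = (ldns, wide_ip)
    if key not in persistence:
        persistence[key] = load_balance()
    return persistence[key]

virtual_servers = iter(["vs-a", "vs-b"])   # stand-in load balancer
pick = lambda: next(virtual_servers)

first = answer("192.0.2.53", "www.example.com", pick)
second = answer("192.0.2.53", "www.example.com", pick)
print(first, second)   # vs-a vs-a -- both requests get the same virtual server
```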

physical network components


Physical network components have a direct correlation with one or more
physical entities on the network. See also logical network components.

picks
Picks represent the number of times a particular virtual server is selected to
receive a load balanced connection.


pool
A pool is a group of virtual servers managed by a BIG-IP system, or a host.
Global Traffic Manager load balances among pools (using the Pool LB
Mode), as well as among individual virtual servers.

pool-level load balancing


With pool-level load balancing, after Global Traffic Manager uses wide
IP-level load balancing to select the best available pool, it uses pool-level
load balancing to select a virtual server within that pool. If the first virtual
server within the pool is unavailable, Global Traffic Manager selects the
next best virtual server based on the load balancing mode assigned to that
pool. See also tiered load balancing and wide IP-level load balancing.
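The two tiers can be sketched like this; pool names, addresses, and the availability rule are hypothetical:

```python
pools = {
    "pool-east": {"members": ["10.1.0.1", "10.1.0.2"], "up": True},
    "pool-west": {"members": ["10.2.0.1"], "up": False},
}

def pick_pool(pools):
    """Wide IP-level step: first available pool in configured order."""
    for name, pool in pools.items():
        if pool["up"]:
            return name
    return None

def pick_member(pool, available):
    """Pool-level step: first available virtual server in the pool."""
    for member in pool["members"]:
        if available(member):
            return member
    return None

chosen = pick_pool(pools)
member = pick_member(pools[chosen], available=lambda m: m != "10.1.0.1")
print(chosen, member)   # pool-east 10.1.0.2
```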

pool ratio
A pool ratio is a ratio weight applied to pools in a wide IP. If the Pool LB
mode is set to Ratio, Global Traffic Manager uses each pool for load
balancing in proportion to the weight defined for the pool.

preferred method
The preferred method specifies the first load balancing mode that Global
Traffic Manager uses to load balance a resolution request. See also alternate
method, fallback method, and load balancing methods.

pre-configured monitor
Pre-configured monitors are monitors that Global Traffic Manager provides.
See also monitor, custom monitor, and health monitor.

probe
A probe is a specific query, initiated by a big3d agent, that attempts to
gather specific data from a given network resource. Probes are most often
employed when a health monitor attempts to verify the availability of a
resource.

QOS equation mode


The QOS equation is the equation on which the Quality of Service load
balancing mode is based. The equation calculates a score for a given path
between a data center server and a local DNS server. The Quality of Service
mode distributes connections based on the best path score for an available
data center server. You can apply weights to the factors in the equation, such
as round trip time and completion rate.
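A minimal sketch of how such a weighted score might be computed and compared; the factor names mirror the ones listed above, but the formula and coefficients are assumptions for illustration, not the product's actual equation:

```python
# Higher completion rate raises the score; RTT and hops lower it.
def qos_score(metrics, weights):
    return (weights["completion_rate"] * metrics["completion_rate"]
            - weights["rtt"] * metrics["rtt_ms"]
            - weights["hops"] * metrics["hops"])

paths = {
    "path-a": {"completion_rate": 0.99, "rtt_ms": 40, "hops": 6},
    "path-b": {"completion_rate": 0.90, "rtt_ms": 15, "hops": 4},
}
weights = {"completion_rate": 100, "rtt": 1, "hops": 1}

best = max(paths, key=lambda name: qos_score(paths[name], weights))
print(best)   # path-b: its much lower RTT outweighs path-a's completion rate
```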

Quality of Service mode


The Quality of Service load balancing mode is a dynamic load balancing
mode that bases connection distribution on a configurable combination of
the packet rate, completion rate, round trip time, hops, virtual server
capacity, kilobytes per second, link capacity, and topology information.


ratio
A ratio is the parameter in a virtual server statement that assigns a weight to
the virtual server for load balancing purposes.

Ratio mode
The Ratio load balancing mode is a static load balancing mode that
distributes connections across a pool of virtual servers in proportion to the
ratio weight assigned to each individual virtual server.
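One simple way to realize such a proportion is to expand each server by its weight and cycle through the result; the names and weights here are hypothetical and the sketch is illustrative only:

```python
import itertools

def ratio_cycle(weights):
    """Yield virtual servers in proportion to their ratio weights."""
    expanded = [vs for vs, weight in weights.items() for _ in range(weight)]
    return itertools.cycle(expanded)

picker = ratio_cycle({"vs-a": 3, "vs-b": 1})
picks = [next(picker) for _ in range(8)]
print(picks.count("vs-a"), picks.count("vs-b"))   # 6 2
```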

Ratio weighting
Ratio weighting is a methodology in which the system uses a frequency that
you set to determine the link through which to send traffic.

redundant system configuration


A redundant system configuration is a pair of units that are configured for
failover. One system runs as the active unit and the other system runs as the
standby unit. If the active unit fails, the standby unit takes over and manages
resolution requests.

region
A region is a customized collection of topologies. See topology.

request source statement


A request source statement defines the origin of a name resolution request
for a connection.

resource record
A resource record is a record in a DNS database that stores data associated
with domain names. A resource record typically includes a domain name, a
TTL, a record type, and data specific to that record type. See also A record,
CNAME record, NS record.

root nameserver
A root nameserver is a master DNS server that maintains a complete DNS
database for the root zone. Thirteen root nameserver addresses, each backed
by many physical servers, serve the root zone of the DNS.

Round Robin mode


Round Robin mode is a static load balancing mode that bases connection
distribution on a set server order. Round Robin mode sends a connection
request to the next available server in the order.


round trip time (RTT)


Round trip time is the calculation of the time (in microseconds) that a local
DNS server takes to respond to a ping issued by the big3d agent running on
a data center server. Global Traffic Manager takes RTT values into account
when it uses dynamic load balancing modes.

Round Trip Time mode


Round Trip Time is a dynamic load balancing mode that bases connection
distribution on which virtual server has the fastest measured round trip time
between the data center server and the local DNS server.

router hops
Router hops are intermediate system transitions along a given network path.

Router mode
Router mode instructs Global Traffic Manager to forward the traffic it
receives to another DNS server.

self IP address
A self IP address is an IP address that you define on a VLAN of a BIG-IP
system. This term does not apply to the management IP address of a BIG-IP
system, or to IP addresses on other devices.

server
A server is a physical device on which you can configure one or more
virtual servers.

Setup utility
The Setup utility guides you through the initial system configuration
process. It runs automatically when you turn on a system for the first
time.

Simple monitor
A Simple monitor checks the health of a resource by sending a packet using
the specified protocol, and waiting for a response from the resource. See
also health monitor.

SNMP (Simple Network Management Protocol)


SNMP is the Internet standard protocol, defined in STD 15, RFC 1157, that
was developed to manage nodes on an IP network.

standby unit
A standby unit is the system in a redundant system configuration that is
always prepared to become the active unit if the active unit fails.


static load balancing modes


Static load balancing modes base the distribution of name resolution
requests to virtual servers on a pre-defined list of criteria and server and
virtual server availability; they do not take current server performance or
current connection load into account. See also dynamic load balancing
modes.

synchronization
Synchronization means that each Global Traffic Manager regularly
compares the timestamps of its configuration files with the timestamps of
the configuration files on the other Global Traffic Manager systems on the
network.

synchronization group
A synchronization group is a group of Global Traffic Manager systems that
synchronize system configurations and zone files (if applicable). All
synchronization group members receive broadcasts of metrics data from the
big3d agents throughout the network. All synchronization group members
also receive broadcasts of updated configuration settings from the Global
Traffic Manager system that has the latest configuration changes.

tiered load balancing


Tiered load balancing is load balancing that occurs at more than one point
during the resolution process. See also wide IP-level load balancing and
pool-level load balancing.

tmsh
The Traffic Management Shell (tmsh) is a command-line utility that you
can use to configure Global Traffic Manager.

topology
A topology is a set of characteristics that identify the origin of a given name
resolution request.

Topology mode
The Topology mode is a static load balancing mode that bases the
distribution of name resolution requests on the weighted scores for topology
records. Topology records are used by the Topology load balancing mode to
redirect DNS queries to the closest virtual server, geographically, based on
location information derived from the DNS query message.

topology record
A topology record specifies a score for a local DNS server location endpoint
and a virtual server location endpoint.


topology score
The topology score is the weight assigned to a topology record when Global
Traffic Manager is filtering the topology records to find the best virtual
server match for a DNS query.

topology statement
A topology statement is a collection of topology records.

TTL (Time to Live)


The TTL is the number of seconds for which a DNS record or metric is
valid, or for which a DNSSEC key is cached by a client resolver. When a
TTL expires, the server usually must refresh the information before using it
again. See also DNSSEC (DNS Security Extensions).
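The expiry behavior can be sketched with a minimal cache; this is an illustration of the TTL concept, not the system's resolver:

```python
import time

class TtlCache:
    """Minimal sketch of TTL handling in a resolver cache."""
    def __init__(self):
        self._store = {}

    def put(self, name, record, ttl, now=None):
        now = time.monotonic() if now is None else now
        self._store[name] = (record, now + ttl)

    def get(self, name, now=None):
        """Return the cached record, or None once its TTL has expired."""
        now = time.monotonic() if now is None else now
        entry = self._store.get(name)
        if entry is None or now >= entry[1]:
            return None          # expired: must be refreshed before reuse
        return entry[0]

cache = TtlCache()
cache.put("www.example.com", "203.0.113.10", ttl=30, now=0)
print(cache.get("www.example.com", now=10))   # 203.0.113.10 (still valid)
print(cache.get("www.example.com", now=31))   # None (TTL expired)
```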

unavailable
The unavailable status is used for data center servers and virtual servers.
When a data center server or virtual server is unavailable, Global Traffic
Manager does not use it for load balancing.

unknown
The unknown status is used for data center servers and virtual servers.
When a data center server or virtual server is new to Global Traffic Manager
and does not yet have metrics information, Global Traffic Manager marks
its status as unknown. Global Traffic Manager can use unknown servers for
load balancing, but if the load balancing mode is dynamic, Global Traffic
Manager uses default metrics information for the unknown server until it
receives live metrics data.

up
The up status is used for data center servers and virtual servers. When a data
center server or virtual server is up, the data center server or virtual server is
available to respond to name resolution requests.

user configuration set (UCS)


A user configuration set is a backup file that you create for the BIG-IP
system configuration data. When you create a UCS, the BIG-IP system
assigns a .ucs extension to the file name.

virtual server
A virtual server, in the context of Global Traffic Manager, is a combination
of an IP address and a port number that, together, provide access to an
application or data source on your network.


wide IP
A wide IP is a collection of one or more domain names that maps to one or
more groups of virtual servers managed either by BIG-IP systems, or by
host servers. Global Traffic Manager load balances name resolution requests
across the virtual servers that are defined in the wide IP that is associated
with the requested domain name.

wide IP-level load balancing


With wide IP-level load balancing, Global Traffic Manager load balances
requests, first to a specific pool, and then to a specific virtual server in the
selected pool. If the preferred, alternate, and fallback load balancing
methods that are configured for the pool or virtual server fail, then the
requests fail, or the system falls back to DNS. See also tiered load balancing
and pool-level load balancing.

wildcard listener
A wildcard listener monitors all traffic coming into your network, regardless
of the destination IP address of the given DNS request.

zone
In DNS terms, a zone is a subset of DNS records for one or more domains.

zone file
In DNS terms, a zone file is the database for a zone. It contains one or many
domain names, designated mail servers, a list of other nameservers that can
answer resolution requests, and a set of zone attributes, which are contained
in an SOA record.

zone-signing key
Global Traffic Manager uses a zone-signing key to sign all of the record sets
in a DNSSEC zone. See also DNSSEC (DNS Security Extensions), DNSSEC
zones, and key-signing key.

ZoneRunner
ZoneRunner™ is the utility that allows you to manage the resource records,
zone files, and named configuration associated with your implementation of
DNS and BIND.

Glossary - 14
Index
Index

A data graphs, performance 14-1


A record, defined 16-4 denial of service, preventing 10-1
AAAA record, defined 16-4 dependencies
address exclusion list 13-4 creating for virtual servers 8-4
alias addresses 11-5 organizing for virtual servers 8-4
alternate load balancing method 7-2 setting 6-8
applications distributed applications
See distributed applications. 6-8 about 6-9
availability, defined 8-3 and dependencies 6-8
and persistent connections 6-10
and statistics for 12-3
B and wide IPs 2-5
big3d agent defined 2-5, 6-8
and broadcasting sequence A-3 DNAME record,defined 16-4
and configuration trade-offs A-3 DNS zone files, about synchronization 3-7
and data collection A-3 DNSSEC key expiration 10-2
and dynamic load balancing 7-6 DNSSEC keys
and iQuery A-5 about 10-1
and metrics A-2 about generations of 10-1
installing A-3 about key-signing keys 10-1
introducing A-1 about zone-signing keys 10-1
selecting for probe requests B-3, B-5 and TTL 10-2
setting up A-3 DNSSEC resource records 10-3
using with system communications 3-4 DNSSEC zones, about 10-1
billing, and links 5-9 DNSSEC, and independence from BIND 10-3
BIND configuration and DNSSEC 10-3 domain names, and system validation 3-10
Bridge mode, about 4-1 domain validation, configuring 3-10
broadcast sequence and big3d agent A-3 Drain Persistent Requests option 8-6
Drop Packet load balancing mode 7-3
dynamic load balancing modes
C and big3d agents 7-6
cache poisoning, preventing 10-1 and fallback load balancing method 7-2
CNAME record, defined 16-4 defined 7-1
communications overview 7-6
and big3d A-5 See also Completion Rate load balancing mode.
and probes B-1 See also CPU load balancing mode.
system 3-4 See also Hops load balancing mode.
Completion Rate load balancing mode 7-6 See also Kilobyte/Second load balancing mode.
configuration tasks, about 3-1 See also Least Connections load balancing mode.
Configuration utility, about 1-3 See also Packet Rate load balancing mode.
connections, resuming 8-5 See also Quality of Service load balancing mode.
CPU load balancing mode 7-6 See also Round Trip Times load balancing mode.
custom monitors See also Virtual Server Score load balancing mode.
about 11-3 See also VS Capacity load balancing mode.
and monitor templates 11-4 dynamic ratio
and pre-configured monitor 11-3 and Quality of Service mode 7-8
defined 11-3 introducing 7-9
using with Quality of Service mode 7-9
D
data center statistics 12-7 E
data centers EAV monitors 11-2
about configuring 5-2 ECV monitors 11-2
and defining physical network components 5-1 event declarations 15-3
configuring 5-2 event execution, about terminating 15-3
defined 2-2 event-based traffic management 15-3
data collection, and big3d agent A-3

BIG-IP® Global Traffic ManagerTM Concepts Guide Index - 1


Index

F internet protocols 1-2


failover iqdump command, using B-2
for hardware-based 3-3 iQuery
for network-based 3-3 and firewalls A-6
Fallback IP load balancing mode 7-4 and probes B-1
fallback load balancing and VLANs B-2
and load balancing mode usage 7-2 defined A-5
introducing 7-10 using with system communications 3-4
selecting 7-2 iRule evaluation, controlling 15-3
features of Global Traffic Manager 1-1 iRules
firewalls and iQuery A-6 and wide IPs 6-7
forward zone files, defined 16-2 assigning 15-3
introducing 15-1
is not operator 9-3
G is operator 9-3
Global Availability load balancing mode 7-4
Global Traffic Manager
and components 2-1 K
and DNSSEC keys and zones 10-1 key expiration 10-2
and operation modes 4-1 key generations, understanding DNSSEC keys 10-1
defining current 3-2, 5-3 key-signing keys, about 10-1
selecting for probe requests B-3 Kilobytes/Second load balancing mode 7-7
graphs for performance data 14-1
GTM Performance graph 14-1 L
GTM Request Breakdown graph 14-1
last resort pool 8-7
gtmd 3-4
LDNS probes B-7
Least Connections load balancing mode 7-7
H limit setting
hardware-based failover 3-3 defined 8-3
health monitor settings 11-1 using Kilobytes 8-3
health monitors using Packets 8-3
about pre-configured 11-2 limit settings
and alias addresses 11-5 See limit thresholds.
and default settings 11-1 limit thresholds
and disabled resources 3-9 about 5-5
and health monitor types 11-2 and BIG-IP systems 8-3
and links 5-9 and pool members 5-7
and number of queries 3-9 and pools 5-6
and reverse mode 11-5 and servers 5-6
and servers to 5-5 and virtual servers 5-6
and transparent mode 11-5 using Total Connections 8-3
assigning heartbeat intervals 3-8 link statistics 12-8
associating resources to 11-7 links
defined 11-2 about managing 5-9
determining availability with 8-3 and defining physical network components 5-1
introducing 11-1 and monitors 5-9
heartbeat interval 3-8 billing 5-9
HINFO record, defined 16-5 defined 2-2
HINT zone files, defined 16-2 weighting 5-9
Hops load balancing mode 7-7 listeners, defined 2-4
host servers, defined 5-5 load balancing
and dynamic modes 7-6
and pools 7-1
I and wide IPs 7-1
ID hacking, preventing 10-1 enabling ignore path TTL option 7-11
ignore path TTL option 7-11 introducing 7-1

Index - 2
Index

using alternate methods 7-2 M


using dynamic load balancing modes 7-1 manual resume 8-5
using fallback method 7-10 master zone files
using pool-level 7-1 See primary zone files.
using static load balancing modes 7-1 metrics
using tiered 7-1 defined 13-2
using Topology mode 9-4 introducing 13-1
using wide IP-level 7-1 metrics collection
verifying virtual server availability 7-11 and big3d agent A-2
load balancing methods and probes 13-4
selecting 7-1 and TTL and timers 13-5
using fallback load balancing 7-2 excluding local DNS from probes 13-4
load balancing mode usage 7-2 removing local DNS from probes 13-4
load balancing modes sequence A-3
about Topology 7-6 setting TTL and timer values 13-5
and name resolution requests 7-1 monitors
defined 7-2 configuring global 3-8
using Completion Rate 7-6 defined 8-3
using CPU 7-6 See also health monitors. 5-5
using Drop Packet 7-3 summary of types 11-2
using Fallback IP 7-4 mx record, defined 16-5
using Global Availability 7-4
using Hops 7-7
using Kilobytes/Second 7-7 N
using Least Connections 7-7 named.conf file 16-7
using None 7-3, 7-4 network management tools 1-2
using Packet Rate 7-7 network traffic flows, graphs 14-1
using Quality of Service 7-8 network-based failover
using Ratio 7-5 and redundant system configurations 3-3
using Return to DNS 7-3, 7-5 Node mode
using Round Robin 7-5 and listeners 4-1
using Round Trip Times 7-8 defined 4-1
using static 7-3 NoError response, implementing 6-7
using Static Persist 7-5 None load balancing mode
using Virtual Server Score 7-8 using 7-4
using VS Capacity 7-8 using to skip load balancing 7-3
load balancing servers, defined 5-4 NS record, defined 16-5
local DNS
excluding from probes 13-4
removing from probes 13-4 O
local DNS statistics 12-13 operators, defined 9-3
Local Traffic Manager
and resources 1-2 P
defined 5-4
Packet Rate load balancing mode 7-7
logical network components
paths statistics 12-12
and distributed applications 2-5
performance data, viewing 14-1
and listeners 2-4
persistence records 12-15
and pools 2-4
persistent connections
and wide IPs 2-4
and distributed applications 6-10
defined 2-4, 5-1
and persistent records 12-15
introducing 6-1
draining 8-6
Longest Match option 9-4
introducing 8-6
physical network components
about data centers 5-2
and data centers 2-2
and links 2-2

BIG-IP® Global Traffic ManagerTM Concepts Guide Index - 3


Index

  and virtual servers 2-3
  introducing 5-1
  using servers 2-2
pool members, using with limit thresholds 5-7
pool statistics 12-6
pool-level load balancing 7-1
pools
  adding to wide IPs 6-6
  and limit thresholds 5-6
  and topology load balancing 9-4
  defined 2-4, 6-3
  organizing virtual servers 6-2
  organizing within wide IPs 6-6
  weighting virtual servers 6-3
  weighting within wide IPs 6-6
pre-configured health monitors, about 11-2
preferred load balancing method 7-2
primary zone files, defined 16-2
probes
  and information in log file B-9
  and LDNS B-7
  defined B-1
  determining responsibility for B-3
  enabling logging B-9
  selecting big3d agents B-5
  selecting Global Traffic Manager systems B-3
  using log entries to tune B-9
PTR record, defined 16-5

Q
Quality of Service load balancing mode
  and default settings 7-8
  customizing 7-8
  introducing 7-8
  using dynamic ratio 7-8, 7-9

R
Ratio load balancing mode 7-5
regions 9-4
request source statements 9-3
requests
  draining 8-6
  on performance graph 14-1
resolutions, on performance graph 14-1
resource availability
  and limit settings 8-3
  and monitor availability requirements 8-3
  and monitors 11-2
  and virtual server dependencies 8-3
  defined 8-3
resource health, determining 8-2
resource records
  about DNSSEC and BIND 10-3
  and NS records 16-5
  and PTR records 16-5
  and SOA records 16-4
  and types of records 16-4
  viewing DNSSEC 10-3
Return to DNS load balancing mode
  using 7-5
  using to skip load balancing 7-3
reverse mode 11-5
Round Robin load balancing mode 7-5
Round Trip Times load balancing mode 7-8
Router mode, about 4-1

S
secondary zone files, defined 16-2
security features 1-1
server statistics 12-9
servers
  about 2-2
  and BIG-IP systems, defined 5-3
  and defining physical network components 5-1
  and limit thresholds 5-6
  defining current Global Traffic Manager 5-3
  defining host servers 5-5
  defining load balancing servers 5-4
  defining Local Traffic Managers 5-4
  introducing 5-3
setup tasks, about 3-1
Setup Utility, about 3-1
simple monitors 11-2
slave zone files
  See secondary zone files.
SMTP 1-2
SNMP MIB 1-3
SNMP, using for system communications 3-5
SOA record, defined 16-4
spoofing, preventing 10-1
SRV record, defined 16-5
SSL 1-1
static load balancing modes
  about Topology 7-6
  and alternate load balancing methods 7-1
  and fallback load balancing method 7-2
  defined 7-1
  described 7-3
  using Drop Packet 7-3
  using Fallback IP 7-4
  using Global Availability 7-4
  using None 7-3, 7-4
  using Ratio 7-5
  using Return to DNS 7-3, 7-5
  using Round Robin 7-5
  using Static Persist 7-5
Static Persist load balancing mode 7-5
statistics
  accessing 12-2
  and data centers 12-7


  and distributed applications 12-3
  and links 12-8
  and local DNS servers 12-13
  and paths 12-12
  and pools 12-6
  and servers 12-9
  and status summary 12-2
  and virtual servers 12-11
  and wide IPs 12-5
  described 12-3
status code, defined 8-2
status summary 12-2
stub zone files, defined 16-2
synchronization
  and DNS zone files 3-7
  defined 3-6
synchronization groups 3-6
system communications 3-4
system resources
  and dependencies 8-4
  associating health monitors to 11-7
  determining availability 8-3
  resuming connections to 8-5
systems
  availability 8-3

T
tasks
  about configuration 3-1
  about setup 3-1
Tcl syntax 15-2
tiered load balancing 7-1
timer values
  and metrics collection 13-5
  introducing 13-5
tmsh, about 1-4
Tools Command Language syntax 15-2
topologies
  and longest match option 9-4
  and pools 9-4
  and records 9-3
  and regions 9-4
  and request source statements 9-3
  and Topology Threshold option 9-4
  and wide IPs 9-4
  introducing 9-1
  setting up 9-3
Topology load balancing mode, defined 7-6
topology records, defined 9-3
topology score, and topology records 9-3
topology statement 9-3
Topology Threshold option 9-4
traffic management shell, about 1-4
transparent mode 11-5
TTL values
  and metrics collection 13-5
  introducing 13-5
TTL, and DNSSEC keys 10-2
TXT record, defined 16-5

V
validation, domain 3-10
Verify Virtual Server Availability option 7-11
views, and BIND 9 16-6
Virtual Server Score load balancing mode 7-8
virtual server statistics 12-11
virtual servers
  about 2-3
  about managing 5-8
  and defining physical network components 5-1
  and iRules 15-3
  and limit thresholds 5-6
  creating dependencies 8-4
  organizing dependencies 8-4
  organizing within pools 6-2
  weighting within pools 6-3
VS Capacity load balancing mode 7-8

W
weight
  See topology score, and topology records.
weighting, using with links 5-9
when keyword, using with iRules 15-3
wide IP load balancing, and load balancing modes 7-2
wide IP statistics 12-5
wide IP-level load balancing 7-1
wide IPs
  adding pools to 6-6
  and iRules 6-7
  and persistent connections 8-6
  and topology load balancing 9-4
  defined 2-4
  maintaining 6-5
  organizing pools 6-6
  setting up 3-1
  weighting pools 6-6
wildcard characters
  and wide IPs 6-5
  examples 6-5
wildcard listener, defined 4-2

Z
zone files, about synchronization 3-7
zones, about DNSSEC 10-1
zone-signing keys, about 10-1

