
networkingmaterials.blogspot.com

CISCO ACE Load Balancer:


1. What is a load balancer or server load balancing?
- Server load balancing (SLB) is the process of deciding to which server a load-balancing device
should send a client request for service.
- The job of the load balancer is to select the server that can successfully fulfill the client request
and do so in the shortest amount of time.

2. What is a server farm?

- It is a group or collection of real servers.
- Servers within a server farm contain identical content so that if one server becomes
inoperative, another server can take its place immediately.

3. What is an rserver (real server)?
- A real server is an actual physical server.
- Real servers provide client services such as HTTP or XML content, website hosting, FTP file
uploads or downloads, and redirection for web pages.
- These servers share the load within a server farm.

4. What is sticky?
- Stickiness is an ACE feature that allows the same client to maintain multiple simultaneous or subsequent TCP or IP
connections with the same real server for the duration of a session. A session is defined as a series of transactions
between a client and a server over some finite period of time (from several minutes to several hours).
- This feature is particularly useful for e-commerce applications where a client needs to maintain multiple
connections with the same server while shopping online, especially while building a shopping cart and during the
checkout process.
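The stickiness idea above can be sketched as a sticky table keyed on client IP: once a client is sent to a server, later connections within the session timeout go back to the same server. This is a minimal illustrative sketch, not ACE behavior verbatim; server names and the timeout are made up.

```python
import time

class StickyTable:
    """Minimal sketch of source-IP stickiness. Server names and the
    timeout are illustrative, not ACE defaults."""

    def __init__(self, servers, timeout=1800):
        self.servers = servers          # real servers in the server farm
        self.timeout = timeout          # sticky entry lifetime (seconds)
        self.table = {}                 # client IP -> (server, last_seen)
        self._rr = 0                    # round-robin index for new clients

    def pick_server(self, client_ip, now=None):
        now = time.time() if now is None else now
        entry = self.table.get(client_ip)
        if entry and now - entry[1] < self.timeout:
            server = entry[0]           # sticky hit: reuse the same server
        else:
            server = self.servers[self._rr % len(self.servers)]
            self._rr += 1               # sticky miss: load balance normally
        self.table[client_ip] = (server, now)
        return server

lb = StickyTable(["web1", "web2"], timeout=600)
first = lb.pick_server("203.0.113.7", now=0)
# A later connection inside the timeout sticks to the same server:
assert lb.pick_server("203.0.113.7", now=100) == first
```

This is exactly what makes a shopping cart survive across connections: every request from the same client lands on the server that holds the session.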

5. What is health monitor or probe?


- We can instruct the ACE to check the health of servers and server farms by configuring health
probes (sometimes referred to as keepalives).
- After creating a probe, we assign it to a real server or a server farm.
-A probe can be one of many types, including TCP, ICMP, Telnet, HTTP, and so on.
-The ACE sends out probes periodically to determine the status of a server, verifies the server
response, and checks for other network problems that may prevent a client from reaching a
server.
-Based on the server response, the ACE can place the server in or out of service, and, based on
the status of the servers in the server farm, can make reliable load-balancing decisions.
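A TCP probe of the kind described above can be sketched in a few lines: the server passes the probe if it completes a TCP handshake within the timeout. This is an illustrative sketch only; the function names and the state strings mirror the ACE terminology but are not an ACE API.

```python
import socket

def tcp_probe(host, port, timeout=2.0):
    """Minimal sketch of a TCP health probe: pass if the server
    completes a TCP handshake within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True                 # handshake succeeded: server healthy
    except OSError:
        return False                    # refused/unreachable/timeout: failed

def update_serverfarm(servers, probe=tcp_probe):
    """Mark each real server in or out of service based on its probe."""
    return {f"{h}:{p}": ("OPERATIONAL" if probe(h, p) else "PROBE-FAILED")
            for h, p in servers}
```

A real health monitor runs this on a fixed interval and only load-balances new connections to servers currently marked OPERATIONAL.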

6. On which platforms does Cisco ACE work?


The Cisco ACE family includes highly scalable application switching modules for Cisco
Catalyst ® 6500 Series Switches and Cisco 7600 Series Routers. The Cisco ACE modules consolidate
a broad range of services in one device.

7. How does the Cisco ACE load balancer work?

ACE performs a series of checks and calculations to determine which server can best service each
client request. The ACE bases server selection on several factors including the source or
destination address, cookies, URLs, HTTP headers, or the server with the fewest connections with
respect to load.

8. What is meant by a load-balancing method?

Load balancing methods are algorithms or mechanisms used to efficiently distribute an incoming
server request or traffic among servers from the server pool. Efficient load balancing is necessary
to ensure the high availability of Web services and the delivery of such services in a fast and
reliable manner. To meet a high traffic demand, servers are replicated. An incoming load or
request to a server is shared across such replicated servers, and this process is known as load
balancing. To effectively schedule the routing of requests from a client to the respective servers
in an optimized way, several load balancing methods are used such as round robin, least
connections, adaptive balancing, etc.

9. What are the different load balancing methods?


Round robin -- In this method, an incoming request is routed to each available server in a
sequential manner.

Weighted round robin -- Here, a static weight is preassigned to each server and is used with the
round robin method to route an incoming request.

Least connection -- This method reduces the overload of a server by assigning an incoming
request to the server with the lowest number of connections currently maintained.

Weighted least connection -- In this method, a weight is added to a server depending on its
capacity. This weight is used with the least connection method to determine the load allocated
to each server.

Least connection slow start time -- Here, a ramp-up time is specified for a server using least
connection scheduling to ensure that the server is not overloaded on startup.

Agent-based adaptive balancing -- This is an adaptive method that regularly checks a server
irrespective of its weight to schedule the traffic in real time.

Fixed weighted -- In this method, the weight of each server is preassigned and most of the
requests are routed to the server with the highest priority. If the server with the highest priority
fails, the server that has the second highest priority takes over the services.

Weighted response -- Here, the response time from each server is used to calculate its weight.

Source IP hash -- In this method, an IP hash is used to find the server that must attend to a
request.
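Three of the methods above can be sketched as plain Python functions (server names are illustrative; a weighted variant simply repeats each server in proportion to its weight):

```python
import hashlib
from itertools import cycle

servers = ["web1", "web2", "web3"]          # illustrative server farm

# Round robin: each request goes to the next server in sequence.
rr = cycle(servers)
def round_robin(request):
    return next(rr)

# Least connections: pick the server with the fewest active connections.
active = {s: 0 for s in servers}
def least_connections(request):
    server = min(active, key=active.get)
    active[server] += 1                     # count the new connection
    return server

# Source IP hash: the same client IP always maps to the same server.
def source_ip_hash(client_ip):
    digest = hashlib.md5(client_ip.encode()).digest()
    return servers[digest[0] % len(servers)]
```

Note how source IP hash gives a crude form of persistence for free: a client keeps landing on the same server as long as the server list is unchanged.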

10. How is the NAT function used in the Cisco ACE load balancer?
- In one-arm mode, the ACE uses client-source NAT to rewrite the client source address with an
address from a NAT pool, so that server responses return through the ACE (see the next question).

11. What are one-arm and two-arm modes?

One arm mode: the clients, the ACE, and the servers are all in the same VLAN. This configuration
is referred to as one-armed mode.
For one-arm mode, you must configure the ACE with client-source network address translation (NAT) or policy-based
routing (PBR) to send requests through the same VLAN to the server.
The ACE is not inline with the traffic and receives and sends requests through the Multilayer Switch Feature Card
(MSFC) that acts as a default gateway to the servers. The MSFC routes requests to a VIP that is configured on the
ACE. When the ACE selects the server for the request based on the configured policy, it rewrites the source IP address
with an address in the NAT pool. Then the ACE forwards the request to the server on the same VLAN through the
default gateway on the MSFC.
The server sends a response to the default server gateway on the MSFC. The server response contains its source IP
address and the NAT address of the ACE as the destination IP address. The MSFC forwards the response to the ACE.
The ACE receives the response, changes the source IP address to the VIP, and sends it to the MSFC. Then the MSFC
forwards the response to the client.

Two-arm mode:

Here we configure the ACE with a client-side VLAN and a server-side VLAN that are different; the ACE sits inline
between clients and servers. This is known as two-arm mode.

12. Explain Fail on all?


-By default, real servers with multiple probes configured on them have an OR logic associated
with them. This means that if one of the real server probes fails, the real server fails and enters
the PROBE-FAILED state. To configure a real server to remain in the OPERATIONAL state unless
all probes associated with it fail (AND logic), use the fail-on-all command in real server host
configuration mode.

The syntax of this command is as follows:


fail-on-all

For example, to configure the SERVER1 real server to remain in the OPERATIONAL state unless all associated probes
fail, enter:
host1/Admin(config)# rserver SERVER1
host1/Admin(config-rserver-host)# ip address 192.168.12.15
host1/Admin(config-rserver-host)# probe HTTP_PROBE
host1/Admin(config-rserver-host)# probe ICMP_PROBE
host1/Admin(config-rserver-host)# fail-on-all
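The OR-versus-AND probe aggregation that fail-on-all toggles boils down to any/all over the probe results. A minimal sketch (the state strings mirror ACE terminology; the function itself is illustrative):

```python
def rserver_state(probe_results, fail_on_all=False):
    """Sketch of ACE probe aggregation for one real server.
    probe_results maps probe name -> bool (True = probe passed).
    Default (OR logic): one failed probe fails the server.
    fail-on-all (AND logic): the server fails only if every probe fails."""
    results = probe_results.values()
    if fail_on_all:
        healthy = any(results)      # OPERATIONAL unless all probes fail
    else:
        healthy = all(results)      # any single failure takes it out
    return "OPERATIONAL" if healthy else "PROBE-FAILED"
```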

13. What is the maximum connection limit on a real server?


- The default is 4000000.

What is a real server weight (the meaning of weight)?

- The ACE uses the weight value that you specify for a server in the weighted round-robin and least-connections
load-balancing predictors.
- Servers with a higher configured weight value have a higher priority with respect to connections than servers with
a lower weight. For example, a server with a weight of 5 would receive five connections for every one connection
for a server with a weight of 1.

-The syntax of this command is as follows:

weight number

The number argument is the weight value assigned to a real server in a server farm. Enter an integer from 1 to 100.
The default is 8.

How do you place a real server in service?

- By using the inservice command.

14. What is failaction purge and failaction reassign?


- The failaction purge command instructs the ACE to purge Layer 3 and Layer 4 connections if a real
server fails. The default behavior of the ACE is to do nothing with existing connections if a real
server fails. We can also enter the failaction reassign command, which instructs the ACE to send
existing connections to a backup server (if configured) in case the primary server fails.
• purge—Instructs the ACE to remove all connections to a real server if that real server in the
server farm fails after you enter this command. The module sends a reset (RST) to both the client
and the server that failed.
• reassign—Instructs the ACE to reassign the existing server connections to the backup real server
(if configured) on the same VLAN interface if the real server fails after you enter this command.
If a backup real server has not been configured for the failing server, this keyword has no effect
and leaves the existing connections untouched on the failing real server.
F5 load balancer:
https://support.f5.com

1. What is node, Pool member, Pool, Virtual server?


Node: A node is a logical object that identifies the IP address of a physical resource on the
network.

Pool: A pool is a logical set of devices, such as web servers, that you group together to receive
and process traffic.

Pool member: A pool member consists of a server's IP address and service port number. An
example of a pool member is 10.10.10.1:80.

Virtual server:
A virtual server is a traffic-management object on the BIG-IP system that is represented by an IP
address and a service. Clients on an external network can send application traffic to a virtual
server, which then directs the traffic according to your configuration instructions. The main
purpose of a virtual server is to balance traffic load across a pool of servers on an internal
network. Virtual servers increase the availability of resources for processing client requests.

2. What is server load balancing?


-Server load balancing (SLB) is the process of deciding to which server a load-balancing device
should send a client request for service.
- The job of the load balancer is to select the server that can successfully fulfill the client request
and do so in the shortest amount of time.

3. What is a health check?

The health check feature of the load balancer allows you to set parameters to perform
diagnostic observations on the performance of web servers and web server farms associated with
each appliance. Health checking allows you to determine if a particular server or service is
running or has failed. When a service fails health checks, the SLB algorithm will stop sending
clients to that server until the service passes health checks again.
What is the difference between Cisco ACE and the F5 load balancer?
4. What are the different pool member states on F5?

Available (green circle) -- The pool member is set to Enabled, the parent node is up, and a monitor has marked the
pool member as up. (All traffic allowed)

Unavailable (yellow triangle) -- The pool member is unavailable, but could become available later with no user
interaction required. This status occurs when the number of concurrent connections has exceeded the limit
defined in the pool member's Connection Limit setting.

Offline (red diamond) -- The pool member is unavailable because either the parent node is down, a monitor has
marked the pool member as down, or a user has disabled the pool member. (No traffic allowed)

Forced Offline (black diamond) -- The pool member is set to Disabled and is offline because either the parent node
is down or a monitor has marked the pool member as down. To resume normal operation, you must manually
enable the pool member. (Only active connections allowed)

Unknown (blue square) -- The pool member or node has no monitor associated with it, or no monitor results are
available yet. (All traffic allowed)

*F5 color codes :


* If a server does not respond to the probe within the 16-second timeout, it is declared dead. The
default probe interval is 5 seconds.

5. What is SNAT in F5?


https://support.f5.com/kb/en-us/products/big-
ip_ltm/manuals/product/ltm_configuration_guide_10_0_0/ltm_snat.html

A secure network address translation (SNAT) translates the source IP address within a
connection to a BIG-IP system IP address that you define. The destination node then uses that
new source address as its destination address when responding to the request.
For inbound connections, that is, connections initiated by a client node, SNATs ensure that
server nodes always send responses back through the BIG-IP system, when the server's default
route would not normally do so. Because a SNAT causes the server to send the response back
through the BIG-IP system, the client sees that the response came from the address to which
the client sent the request, and consequently accepts the response.
For outbound connections, that is, connections initiated by a server node, SNATs ensure that
the internal IP address of the server node remains hidden to an external host when the server
initiates a connection to that host.
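The translation-and-reply flow above can be sketched as a small mapping table: the client's source address is rewritten to the SNAT address on the way to the server, and the server's reply is matched back to the client. A deliberately simplified sketch (real SNATs also remap ports to avoid collisions; all addresses are illustrative):

```python
class Snat:
    """Minimal sketch of a SNAT translation table (illustrative only)."""

    def __init__(self, translation_addr):
        self.translation_addr = translation_addr
        self.table = {}             # (translated src, port) -> original client

    def outbound(self, client_addr, client_port):
        # Packet toward the server leaves with the SNAT address as source,
        # so the server replies through the BIG-IP instead of routing around it.
        self.table[(self.translation_addr, client_port)] = client_addr
        return (self.translation_addr, client_port)

    def inbound(self, dst_addr, dst_port):
        # Reply from the server: translate the destination back to the client.
        return self.table[(dst_addr, dst_port)]
```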

6. What are the deployment methods of F5?


7. What is an iRule?
8. What are the different modes to access F5? (CLI, GUI)
9. What are the load balancing algorithms in F5?
10. What is a monitor?
11. What is persistence?
12. What is Global Server Load Balancing (GSLB)?
-GSLB operates very similarly to SLB, but on a global scale. It allows you to load balance VIPs from
different geographical locations as a single entity. This provides geographical site fault tolerance
and scalability.

13. Explain SSL offloading?


SSL offloading: the client does the SSL handshake with the load balancer instead of the server. This
reduces the work of the web server, which otherwise performs the encryption and decryption itself.

14. What is Reverse Proxy Cache?


- Reverse Proxy Cache is a cache that is in front of the origin servers, hence the use of the term
reverse in the name. If a client requests a cached object, the proxy will service the request from
the cache instead of the origin server.

15. What is the difference between a Persistent Cookie policy and a QoS Cookie policy in array
network load balancer?
-A Persistent Cookie policy selects a group based on the cookie name. A QoS Cookie policy selects
a server group based on the cookie name and value assigned to that group.

16. When load balancing to a real server, which server will be accessed first?

- This depends on the load balancing method that you select. Here are a few examples:

Least connections (lc) method -- The real server with the lowest number of concurrent
connections will receive the first connection.

Round robin (rr) method -- The real server with the lowest entry index will get the first connection.

Shortest response (sr) method -- The load balancer or appliance will establish connections with each
server and calculate the round-trip time. The client connection will go to the real server with the
lowest response time.

17. What are floating and non-floating IPs in F5, and why are they used in the F5 LTM load balancer?
- A floating IP acts as a default gateway in the F5 configuration, like an HSRP/VRRP virtual IP.
- A non-floating IP is a self IP, which we configure on each separate box in an HA environment.
What is Virtual Server Precedence?
--> Virtual Server Precedence in BIG-IP F5 is used to select a virtual server when an inbound
connection matches more than one virtual server.

--> In order to select a particular virtual server, BIG-IP F5 applies an algorithm in the following
order:

i) Checks the destination address (virtual server address) first

ii) Checks the source address if the destination address of all the virtual servers is the same

iii) Checks the service port number in case both the destination address and the source address
are the same

--> If the destination address is the same on both virtual servers but the subnet mask is different,
the virtual server with the longest (most specific) subnet mask is selected.
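The destination-address step of the precedence algorithm can be sketched with longest-prefix matching. This sketch covers only the destination-address comparison (the full BIG-IP algorithm also breaks ties on source address and service port); virtual server names and networks are illustrative.

```python
import ipaddress

def pick_virtual_server(dst_ip, virtual_servers):
    """Among virtual servers whose destination network contains dst_ip,
    the one with the longest (most specific) prefix wins.
    virtual_servers is a list of (name, network) pairs."""
    addr = ipaddress.ip_address(dst_ip)
    matches = [(name, net) for name, net in virtual_servers
               if addr in ipaddress.ip_network(net)]
    if not matches:
        return None
    return max(matches, key=lambda m: ipaddress.ip_network(m[1]).prefixlen)[0]
```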
Introduction to SSL Offloading in F5
--> HTTPS requests are more processor intensive than HTTP requests, often at least 10
times slower than normal HTTP requests.

--> The main purpose of web servers is to serve pages quickly; if they start processing SSL
traffic, they tend to lose their efficiency.

--> SSL offloading removes the web server's processing burden of encrypting
and/or decrypting traffic sent via SSL.

--> In simple terms SSL Offloading converts external HTTPS traffic into normal
HTTP traffic so that your web servers don't need to process SSL traffic.

--> A device placed in the middle of the client-server connection processes every SSL
request. In this case, the F5 load balancer is an example of a device that plays this role.

--> The F5 Load balancer decrypts the client's request before sending it to the web server,
then encrypts the server's connection before sending it back to the client.
--> This method is called 'Offload' because SSL processing is done entirely by F5 Load
Balancer, which results in a web server resource saving.

--> In this way, the web server can use its resources to process information that is really
relevant to your applications.

--> The only prerequisite is that we need an SSL certificate for the domain on which you are
trying to implement SSL offloading.

--> Implementing the SSL Offload concept in F5 Load Balancer gives another advantage. In
addition to removing the SSL processing load from servers, F5 Load Balancer becomes a
certificate manager/centralizer.

--> All the certificates can be loaded in the balancer, creating a single point for
administering them.

--> F5 SSL offload and acceleration removes all the bottlenecks—including concurrent
users, bulk throughput, and new transactions per second along with supporting certificates
up to 4096-bits—for secure, wire-speed processing.

--> A fully loaded F5 VIPRION® chassis is the most powerful SSL-offloading engine on the
market today. Along with the F5 BIG-IP LTM Virtual Edition (VE), these platforms provide a
powerful solution to the SSL challenge.

HTTP Pipelining

--> HTTP pipelining is a method in which multiple HTTP requests are sent using a single
TCP connection without waiting for the HTTP responses.

--> The web server still needs to send its responses in the same order that the requests
were received — so the entire connection remains first-in-first-out.

--> HTTP pipelining improves the loading times of HTML pages, especially over high
latency connections such as satellite Internet connections.

--> HTTP pipelining requires both the client and the server to support it.

--> HTTP Pipelining was introduced in HTTP/1.1 and was not present in HTTP/1.0.

--> Most browsers, such as Internet Explorer, Google Chrome, and Mozilla Firefox,
disable HTTP pipelining to avoid issues with the web server.
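On the wire, pipelining just means writing several complete requests back-to-back on one TCP connection before reading any response. A minimal sketch that builds such a buffer (host and paths are illustrative; a client would send this with a single sendall and then read the responses in order):

```python
def build_pipelined_requests(host, paths):
    """Build a byte buffer of back-to-back HTTP/1.1 GET requests.
    The server must answer them in the same order (first-in-first-out)."""
    requests = b""
    for path in paths:
        requests += (
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: keep-alive\r\n\r\n"
        ).encode()
    return requests                 # send in one sock.sendall(requests)
```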

HTTP Pipelining Example without Proxy in Mozilla Firefox

HTTP Pipelining Example with Proxy in Mozilla Firefox


What is X-Forwarded-For

--> A Secure Network Address Translation (SNAT) is an object that maps an original
client IP address to a translation address defined on the BIG-IP device.

--> When the BIG-IP system receives a request from a client IP address, and if the
client IP address in the request is defined in a SNAT, the BIG-IP system translates the
source IP address of the incoming packet to the SNAT address.

--> When the BIG-IP system translates the source IP address of the incoming packet to
the SNAT address, the web server sees the request as originating from the SNAT
address, not the original client IP address.

--> If the web servers are required to log the original client IP address for requests, the
SNAT address translation behavior can become problematic.

--> It may be necessary for the BIG-IP system to insert the original client IP address in an
HTTP header and configure the web server that is receiving the request to log the client
IP address instead of the SNAT address.

--> X-Forwarded-For is an HTTP header field used to identify the originating IP address
of a client connecting to any web server via Proxy Server or Load Balancer.
--> It is not possible to know the originating IP address of the user via proxy if X-
Forwarded-For is not used.
--> This header is used for debugging, statistics, and generating location-dependent
content and by design, it exposes privacy-sensitive information, such as the IP address
of the client. Therefore the user's privacy must be kept in mind when deploying this
header.

--> Many servers and applications expect only a single X-Forwarded-For header, per
request. However, the BIG-IP system appends a new X-Forwarded-For header to the
existing set of HTTP headers, even if there is an existing X-Forwarded-For header in the
request.

--> For applications expecting a single X-Forwarded-For header, it is possible to use an
iRule instead of the HTTP profile option to append the client IP value to the end of any
existing X-Forwarded-For header.

--> The syntax of the X-Forwarded-For header field in an HTTP request is:
X-Forwarded-For: <client>, <proxy1>, <proxy2>

--> <client> is the client IP address; if a request goes through multiple proxies, the IP
address of each successive proxy is appended (<proxy1>, <proxy2>, and so on). This means the
right-most IP address is the IP address of the most recent proxy and the left-most IP
address is the IP address of the originating client.
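The left-most/right-most rule above can be captured in a small parser (a sketch for the comma-separated form only; it ignores obfuscated identifiers and the newer Forwarded header):

```python
def parse_x_forwarded_for(header_value):
    """Parse 'client, proxy1, proxy2'. Returns (originating client,
    most recent proxy): left-most hop is the client, right-most hop
    is the most recent proxy."""
    hops = [hop.strip() for hop in header_value.split(",") if hop.strip()]
    if not hops:
        return None, None
    return hops[0], hops[-1]
```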

F5 Device Service Clustering

--> Device Service Clustering is a method of combining two or more physical F5 devices
into one logical device.

--> Device Service Clustering is basically used to provide high availability to virtual
servers.
--> In order to understand high availability in F5, we need to understand the following terms:

i) Device Group
--> Collections of F5 devices that can share configuration and objects with each other in
the cluster. A device group can be either sync-only or sync-failover.
--> Every device in the device group has a device certificate installed on it.

--> Devices in the device group share certificates to establish the trust relationship
between them to exchange configuration and policies.
--> sync-only Device Group contains the devices that synchronize only configuration data
and does not support failover between the devices.
--> For example, you can use a Sync-Only device group to synchronize a folder containing
policy data that you want to share across all BIG-IP devices in a local trust domain. A
Sync-Only device group supports a maximum of 32 devices.
--> sync-failover Device Group contains the devices that synchronize configuration data
as well as support failover between the devices.
--> We will use sync-failover Device Group in the case of High Availability.
--> A Sync-Failover device group supports a maximum of eight devices.

--> Before creating the device group, you should configure the configuration
synchronization (ConfigSync) and Failover IP addresses for each BIG-IP system in the
device group.
--> The ConfigSync address is the IP address that the system uses when synchronizing
configuration with peer devices, and the Failover address is the IP address that the
system uses for network failover.
--> F5 considers it best practice to select both the management address and a Traffic
Management Microkernel (TMM) network address to use for network failover.
ii) Traffic Group
--> Traffic Group determines how the traffic need to be forwarded between the devices in
the device group.
--> In general, a traffic group ensures that when a device becomes unavailable, all of the
failover objects in the traffic group fail over to any one of the devices in the device group,
based on the number of active traffic groups on each device.

--> A specific traffic group can be active on only one device in a device group, while
all the other devices in the device group are in standby state for that traffic group.
--> When a traffic group fails over to another device in the device group, the device that
the system selects is normally the device with the least number of active traffic groups.
--> When you initially create the traffic group on a device, however, you specify the device
in the group that you prefer that traffic group to run on in the event that the available
devices have an equal number of active traffic groups (that is, no device has fewer active
traffic groups than another).

--> Note that, in general, the system considers the most available device in a device group
to be the device that contains the fewest active traffic groups at any given time.

--> A Sync-Failover device group can support a maximum of 15 traffic groups.
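The failover-target rule described above (the traffic group moves to the device with the fewest active traffic groups) can be sketched in one function; device names and counts are illustrative.

```python
def failover_target(devices, failed_device):
    """Pick the device that should take over a failed device's traffic
    group: the remaining device currently running the fewest active
    traffic groups. devices maps device name -> active traffic groups."""
    candidates = {d: n for d, n in devices.items() if d != failed_device}
    return min(candidates, key=candidates.get)
```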

F5 iApp

--> An iApp is a template used to implement a complex application in F5 LTM by
answering a set of questions.

--> iApps are used to create applications in F5 LTM quickly compared to the traditional way
of creating applications.
--> iApps automate the creation of nodes, pools, pool members, profiles, virtual servers,
iRules, and monitors by simply answering a set of questions.
--> For example, we can implement or deploy Microsoft Exchange or Lync by using iApp
template.
--> Most of the vendors/application developers provide the template to reduce the
complexity of creating applications in F5 LTM.
--> iApps remove manual errors made by admins on applications, because an iApp does not
allow its parameters to be changed outside the template later on.
--> iApp reduces deployment errors that can be made when implementing a complex
application through LTM.

--> The "Strict Updates" feature locks you out of changing objects created by an iApp;
unchecking this feature lets you modify those objects outside the iApp.
--> An iApp provides improved application monitoring to monitor your created applications
via iApps.
--> It is possible to create customized iApps using TCL/HTML/APL.

F5 Route Domains

--> Route domain concept is similar to VRF in Cisco, which is used to divide one routing
table into multiple routing tables.
--> With the help of Route domains we can provide hosting service for multiple
customers by separating each type of application traffic within a defined address space
on the network.
--> With route domains, you can also use duplicate IP addresses on the network,
because each of the duplicate addresses resides in a separate route domain and is
separated from the network through a separate VLAN.
--> For example, if you are processing traffic for two different customers, you can create
two separate route domains. The same node address (such as 10.0.10.1) can reside in
each route domain, in the same pool or in different pools, and you can assign a different
monitor to each of the two corresponding pool members.
--> Route domains are identified by using a numeric domain ID in F5, whereas in Cisco a
VRF is identified by its name.
--> A route domain ID is a unique numerical identifier for a route domain. You can
assign objects that have IP addresses (such as self IP addresses, virtual addresses, pool
members, and gateway addresses) to a route domain by appending %ID to the IP
address.

--> The format required for creating a route domain ID in an object’s IP address is
A.B.C.D%ID, where ID is the number of the relevant route domain.

--> For example, both the local traffic node object 10.10.10.30%2 and the pool member
10.10.10.30%2:80 pertain to route domain 2.
--> By default, route domain ID 0 is used; if you do not manually create any route
domains, all routes in the system are assigned to route domain 0.

--> Any BIG-IP addresses that do not include the route domain ID notation are
automatically associated with the default route domain.
--> A route domain ID must be unique on the BIG-IP system; that is, no two route
domains on the F5 LTM can have the same ID.

--> Each route domain can have a parent route domain, identified using a parent ID.
The parent ID identifies another route domain on the F5 LTM that the system can
search to find a route if it is not available in the (child) route domain.
--> Forwarding of traffic between route domains is by default enabled between route
domains in a parent-child relationship only. We can disable this behavior by enabling
strict isolation feature.

--> It is recommended to configure each route domain in a separate partition in F5
LTM.
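The A.B.C.D%ID notation described above is easy to parse mechanically. A sketch for IPv4 addresses only (IPv6 would need different port handling because of its colons):

```python
def split_route_domain(addr):
    """Parse BIG-IP route-domain notation A.B.C.D%ID, optionally with a
    :port suffix (e.g. 10.10.10.30%2:80). Addresses without %ID belong
    to the default route domain 0. Returns (ip, route_domain, port)."""
    port = None
    if ":" in addr:
        addr, port = addr.rsplit(":", 1)
        port = int(port)
    if "%" in addr:
        ip, rd = addr.split("%")
        return ip, int(rd), port
    return addr, 0, port
```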

What are partitions in F5?

--> A partition is a logical container or storage area where you create a defined set of
BIG-IP system objects such as pools, nodes, profiles, and iRules.
--> Partition is similar to local disks (c: D: E: F:) in Windows operating systems.
--> Partition gives a fine granularity of administrative control by allowing users to
manage the objects in particular partition rather than all partitions in F5.
--> By default, F5 LTM automatically creates a partition called Common; all the objects
that users create automatically reside in the Common partition.

--> Only users with the Administrator role can create, update, and delete objects in
partition Common. No user can delete partition Common itself.
--> Once we create partitions in F5 LTM then users can be mapped to the required
partition which requires access to it.
--> The objects created in one partition cannot be accessed or used by the other users
who don't have access to the partition.

--> Object names must be unique within the partition.

--> Objects can be partitioned based upon application requirements.

--> A BIG-IP system user account cannot reside in more than one partition
simultaneously. When you first install the BIG-IP system, every existing user account
(root and admin) resides in partition Common.

What are the different types of proxies?

--> A proxy is a hardware or software solution that acts as an intermediary between a
client and the server.

--> There are different types of proxy solutions available:


i) Forward Proxy

--> A forward proxy takes connections from clients on the inside network
(intranet) and connects them to servers outside on the Internet.

--> Forward proxies are generally web proxies that provide a number of services, but
mainly focus on web services and content filtering.

--> Forward proxies also authenticate and authorize clients before allowing access
to the Internet.

ii) Reverse Proxy

--> A reverse proxy takes connections from clients on the outside network
(Internet) and connects them to servers in the private network.

--> Reverse proxies are basically used to protect the web servers.

--> Load Balancers, Application Delivery Controllers and Web Application Firewalls are
good examples for Reverse Proxies.

--> Reverse proxies are good for traditional load balancing, optimization, SSL offloading,
server-side caching, and security functionality.

--> Reverse proxies are mainly focused on HTTP-based applications, but nowadays
they also support other applications such as RTSP, FTP, and any general application
that is forwarded via UDP or TCP.
iii) Half Proxy

--> A half proxy can be either a forward proxy or a reverse proxy.

--> In the case of Half Proxy, requests are processed by the proxy but responses do not
return through the proxy, but rather are sent directly to the client.

--> This is useful particularly in case of streaming protocols.

--> This configuration is known as half proxy because only half the connection is proxies
while the other half, the response is not.

iv) Full Proxy


--> A full proxy maintains two separate connections - one between the proxy and the
client and one between the proxy and the destination server.

--> Full proxies can inspect incoming requests and outbound responses and can
manipulate both if the solution allows it.

--> Many reverse and forward proxies use a full proxy model today.

--> A full proxy completely understands the protocols and itself acts as an endpoint and
an originator for them.

--> Full proxies are so named because they completely proxy connections - incoming and
outgoing.

What is OneConnect in F5?

--> The OneConnect feature of F5 LTM can increase network throughput by efficiently
managing connections created between the BIG-IP system and servers.

--> OneConnect increases server-side connection efficiency by keeping server-side
connections open and reusing them for new client connections.

--> OneConnect saves system resources not only on the F5 LTM but also on the
servers.

--> Let’s assume, client IP 1.1.1.1 is initiating a connection to the virtual server 8.8.8.8
which then gets load balanced to the server 10.10.10.10. Within the TCP connection,
the client will utilize multiple HTTP Requests to obtain the right content from the server
(HTTP 1.1 Keepalive).

--> After the transaction has been completed, the client closes the client side connection
(Client – F5). However, the F5 retains the server side connection (F5-Server).

--> If a new client (1.1.1.2) initiates a connection within a certain timeout interval, the F5
will re-use the server side connection that was retained for the 1.1.1.1 connection.
--> As you can see, the server side connection that was created when 1.1.1.1 made the
initial request was reused when the new client 1.1.1.2 made the request.

--> From the server’s perspective, HTTP requests initiated by 1.1.1.2 are still assumed to
be over the connection initiated by 1.1.1.1, i.e., the client IP at the server level no longer
identifies the true client.

--> To overcome this, “X-Forwarded-For” header insertion is used to insert the
true client IP at the server level.

--> It is essential that the server logs and the application look for the client IP in the “X-
Forwarded-For” header and not in the source IP field.
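
As a sketch of what this means on the server side, the snippet below (plain Python, not F5 code; the translation address 10.0.0.5 is made up for illustration) pulls the true client IP from the X-Forwarded-For header, falling back to the socket peer address when no proxy is involved:

```python
def true_client_ip(headers, peer_ip):
    """Return the original client IP, preferring X-Forwarded-For.

    Behind OneConnect, peer_ip is the LTM's translation address, so
    the true client must be read from the XFF header. The leftmost
    entry is the original client; later entries are intermediate
    proxies the request passed through.
    """
    xff = headers.get("X-Forwarded-For")
    if xff:
        return xff.split(",")[0].strip()
    return peer_ip  # no proxy in the path: the socket peer is the client

# Reusing the client addresses from the example above:
print(true_client_ip({"X-Forwarded-For": "1.1.1.2"}, "10.0.0.5"))  # 1.1.1.2
print(true_client_ip({}, "1.1.1.1"))                               # 1.1.1.1
```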

What is Priority Group Activation in F5?


--> Priority Group Activation in F5 allows configuring the standby servers for the active
servers in the pool.

--> Standby servers won't receive traffic from the F5 as long as enough primary servers
are running.

--> Standby Servers are automatically activated once the defined number of primary
servers goes down.

--> The Priority Group Activation feature is often used to serve clients a maintenance
page when the required number of primary servers is unavailable due to load.

--> If priority group activation is set, F5 LTM will use available members with the
highest priority number first.
--> In this case, we configured three servers with a priority value of 10 and configured
the other servers with a priority value of 5.

--> Standby or backup servers (priority 5) won't receive traffic from F5 LTM
until two of the primary servers (priority 10) go down.

--> If all the members in the pool are unavailable, F5 LTM can use the Fallback
Host feature, which redirects user traffic.
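
A rough model of the member selection described above (illustrative Python, not how BIG-IP implements it; the member names and the simplified "less than N available members" threshold semantics are assumptions):

```python
def eligible_members(members, less_than):
    """Members the LTM would use under Priority Group Activation.

    members: list of (name, priority, up) tuples.
    less_than: the activation threshold ("less than N available members").
    Toy model: start from the highest-priority group and activate lower
    groups only while fewer than `less_than` members are available.
    """
    up = [m for m in members if m[2]]
    up.sort(key=lambda m: m[1], reverse=True)  # highest priority first
    chosen = []
    for member in up:
        # Stop once the threshold is met and the next group is lower priority.
        if len(chosen) >= less_than and member[1] < chosen[-1][1]:
            break
        chosen.append(member)
    return [m[0] for m in chosen]

# Two of three priority-10 servers down, threshold 2: backups activate.
members = [("web1", 10, True), ("web2", 10, False), ("web3", 10, False),
           ("maint1", 5, True)]
print(eligible_members(members, 2))  # ['web1', 'maint1']
```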

Traffic Processing in LTM?

i) A client starts by initiating traffic to the virtual server configured on F5 LTM.

ii) F5 LTM completes TCP three-way handshake with the client.

iii) F5 LTM then translates the destination IP address to the selected pool member,
based on the load balancing method.

iv) If Secure NAT is also configured then F5 LTM translates the source IP address also.

v) F5 LTM performs TCP three-way handshake with the selected server on behalf of the
client.

vi) F5 LTM receives the traffic back from the server and it changes the source IP
address back to Virtual IP address.

vii) F5 LTM also changes the destination IP address back to the original client IP
address.
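
The address rewrites in steps iii-iv and vi-vii can be sketched as a toy model (plain Python dicts standing in for packets; the SNAT address 10.0.0.5 and all other addresses are illustrative):

```python
def ltm_forward(packet, pool_member, snat_ip=None):
    """Rewrite a client packet toward the chosen server (steps iii-iv)."""
    fwd = dict(packet)
    fwd["dst"] = pool_member   # step iii: destination NAT to the pool member
    if snat_ip:                # step iv: optional Secure NAT of the source
        fwd["src"] = snat_ip
    return fwd

def ltm_return(packet, vip, client_ip):
    """Rewrite the server reply back toward the client (steps vi-vii)."""
    ret = dict(packet)
    ret["src"] = vip           # step vi: source back to the virtual IP
    ret["dst"] = client_ip     # step vii: destination back to the client
    return ret

request = {"src": "1.1.1.1", "dst": "8.8.8.8"}          # client -> VIP
print(ltm_forward(request, "10.10.10.10", "10.0.0.5"))  # LTM -> server
reply = {"src": "10.10.10.10", "dst": "10.0.0.5"}       # server -> LTM
print(ltm_return(reply, "8.8.8.8", "1.1.1.1"))          # LTM -> client
```
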
What is the Self IP address in F5?

--> A Self IP address is an IP address in F5 that is assigned to a VLAN.

--> In Cisco terms, it is similar to SVI where we assign IP Address to VLAN.

--> In F5 LTM, we can assign an IP address to a VLAN or a group of VLANs.

--> A static self IP address is an IP address that the BIG-IP system does not share with
another BIG-IP system.

--> For each self IP address that you create for a VLAN, the BIG-IP system
automatically assigns a media access control (MAC) address.
--> Self IP Address in F5 LTM is used mainly for three purposes,

i) F5 compares the destination server IP address with VLAN self IP addresses to identify
which VLAN it belongs to.

ii) Self IP Address can also be used as the Default gateway for the servers if we
configure F5 in Inline Mode.

iii) Self IP Address is also used to send monitor probes to the group of servers in that
VLAN.

--> There are two types of Self IP Addresses used by F5,

i) Static Self IP Address: an IP address assigned to only one F5 LTM device.

ii) Floating Self IP Address: an IP address shared by multiple F5 LTM
devices.

--> In Cisco terms, Floating IP Address is similar to Virtual IP Address which is used for
Gateway Redundancy.

--> The Floating Self IP Address is used for configuration synchronization between F5 LTM devices.

--> For every VLAN in F5 LTM, you need to create both a Self IP Address and a Floating
Self IP Address.

--> Each self IP address has a feature known as port lockdown. Port lockdown is a
security feature that allows you to specify the particular UDP and TCP protocols and
services from which the self IP address can accept traffic.
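
The port lockdown decision can be modeled as a simple membership test (illustrative only; the real setting also offers modes such as Allow Default, and the example services are assumptions):

```python
def lockdown_allows(allowed, protocol, port):
    """Toy model of a self IP's port-lockdown decision.

    allowed is "allow-all", "allow-none", or a set of
    (protocol, port) pairs standing in for an "Allow Custom" list.
    """
    if allowed == "allow-all":
        return True
    if allowed == "allow-none":
        return False
    return (protocol, port) in allowed

# e.g. a self IP that only accepts HTTPS and network-failover traffic:
custom = {("tcp", 443), ("udp", 1026)}
print(lockdown_allows(custom, "tcp", 443))  # True
print(lockdown_allows(custom, "tcp", 22))   # False
```
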
Understanding F5 Product Line

i) Local Traffic Manager:

--> Used to perform load balancing between the servers (Local Load
Balancing).

--> Monitoring Server Status (Checking Server is UP/Down).

--> Distributes client requests within a single data center.

--> SSL traffic termination.

--> RAM Cache and Compression.

ii) Global Traffic Manager

--> Used to provide Intelligent DNS Services based upon location and other
parameters (If the Server is not available then it won't send the request).

--> Distributes client requests across multiple data centers.


--> Provides DNSSec and DDOS protection.

--> Global Load Balancing and High Availability.

iii) Application Security Manager

--> Application Security Manager is a Web Application Firewall that protects
web applications against unknown vulnerabilities.

--> Application Security Manager can stop attacks such as SQL injection and
DDOS attacks.

--> Application Security Manager stops hackers from any location and
ensures that only legitimate users can access the web applications.

--> Application Security Manager can run standalone or it can be integrated
into Local Traffic Manager.
iv) Access Policy Manager

--> In simple terms, Access Policy Manager provides SSL VPN functionality.

--> Access Policy Manager is used to provide secure access to internal
applications using a single authentication and provide control using a single
management interface.

--> Access Policy Manager can run standalone or it can be integrated into
Local Traffic Manager.

v) Web Accelerator

--> WebAccelerator is an advanced web application delivery solution that
overcomes performance issues involving browsers, web application platforms,
and WAN latency.

--> By decreasing page download times, WebAccelerator offloads servers,
decreases bandwidth usage, increases revenue, and ensures the productivity
of application end users.

--> WebAccelerator solves WAN content delivery issues by locating content
closer to users.

--> WebAccelerator tricks the browser into opening more TCP connections to
provide faster access to web applications.

--> It significantly increases the speed and reduces the cost of using
enterprise web applications in remote office and mobile deployments.

What is SAML?

--> SAML stands for Security Assertion Markup Language.

--> SAML is an XML-based framework used for exchanging user authentication and
attribute information.

--> The main purpose of SAML is to enable single sign-on for the web applications
across various domains.

--> The main reason behind the development of SAML is the limitation of browser
cookies.

--> Most single sign-on products use browser cookies to maintain authentication state
so that reauthentication is not required.

--> Browser cookies are not transferred between different DNS domains.

--> So, if there is a cookie for facebook.com then that cookie will not be sent in any
HTTP messages to Gmail.com.

--> To solve the authentication between cross domain single sign-on (CDSSO) we use
SAML.

--> SAML provides a framework to exchange authentication and authorization
parameters between different DNS domains.

What is Secure NAT in F5?

--> A Secure Network Address Translation (SNAT) is a method that changes the source
client IP address in a request to a translation address defined on the LTM.

--> Secure NAT changes only the source IP address; it does not change the source port
number.

--> Secure NAT is basically used in one arm deployment method to prevent asymmetric
routing.

--> By default, the LTM attempts to preserve the source port, but if the port is already in
use on the selected translation address, the LTM also translates the source port.

--> Each SNAT address, like any IP address, has only 65535 ports available.

--> If more than 65535 connections may require translation for one particular virtual
server, you should configure more than one SNAT address (a SNAT pool).

--> A SNAT pool is a group of translation addresses for a particular virtual server.

--> SNAT auto map automatically uses the egress interface IP address (floating IP
address) for the translation.
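
The port arithmetic behind SNAT pools can be sketched as follows (illustrative Python; real LTM behavior also depends on idle timeouts and per-destination connection tracking):

```python
# Each SNAT translation address can hold at most 65535 concurrent ports,
# so a higher concurrent-connection count needs more addresses (a pool).
PORTS_PER_ADDRESS = 65535

def snat_addresses_needed(concurrent_connections):
    """Minimum translation addresses for a given concurrent-connection count."""
    return -(-concurrent_connections // PORTS_PER_ADDRESS)  # ceiling division

print(snat_addresses_needed(65535))   # 1
print(snat_addresses_needed(100000))  # 2 -> configure a SNAT pool
```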

Understanding iRules in F5LTM

--> iRules provide more advanced capabilities to F5 LTM to meet specific requirements
in the network.

--> The iRules feature allows F5 LTM to modify server-side and client-side traffic all the
way up to the application layer.

---> iRules are based on a programming language called Tool Command Language
(TCL).

---> iRules are composed of event declarations and TCL script code.

---> With the help of iRules, we can do things such as:
i) Load Balancing based upon Application Data

ii) Redirecting traffic based upon HTTP status codes

iii) Load Balancing based upon Country

iv) Load Balancing based upon type of browser or type of device


--> The ideal time to use an iRule is when you’re looking to add some form of
functionality to your application or app deployment, at the network layer, and that
functionality is not already readily available via the built in configuration options in your
BIG-IP.

--> The ideal time not to use an iRule is any time you can do something from within the
standard config options, profiles, GUI or CLI – do it there first. If you’re looking to
perform a task that can’t be accomplished via the “built-in” means of configuration, then
it is a perfect time to turn to iRules to expand the possibilities.

--> A simple example of an iRule is redirecting traffic from HTTP to HTTPS.

when HTTP_REQUEST {
HTTP::redirect "https://[HTTP::host][HTTP::uri]"
}

Understanding Persistence in F5 Load Balancer

--> The basic concept behind Persistence is that requests from the same client should go
to the same server.

--> This feature is most commonly used when the application is stateful.

--> The persistence feature is used to change the load balancing behavior of a virtual
server.

--> New clients will be load balanced based on the load balancing method configured in
the pool.

--> All subsequent connection requests from the same client are directed back to the same
pool member if they occur before the persistence record times out.

--> Once the initial connection is established, F5 LTM tracks and stores session
data in a persistence record.

--> The persistence record contains information like client characteristics and the pool
member that accepted the client request.

--> This information is used to identify a returning client and get it back to the same pool
member that initially accepted the client request.

Types of Persistence

1) Simple Persistence --- Persistence record is created based upon the client IP address.

2) Cookie Persistence --- Persistence record is created by inserting the cookie when
sending the packet to the client.

3) SSL Session ID Persistence --- Persistence record is created based upon Session
ID.

4) Universal Persistence --- Persistence record is created based upon any piece of
data (network, application protocol, or payload) to persist a session.

5) Hash Persistence --- Hash persistence is the use of multiple values within a request
to enable persistence.

6) SIP, WTS, Username Persistence ---Session Initiation Protocol (SIP) and Windows
Terminal Server (WTS) persistence are application-specific persistence techniques that
use data unique to a session to persist connections. Username persistence is a similar
technique designed to address the needs of VDI - specifically VMware View solutions -
in which sessions are persisted (as one might expect) based on username.
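
As a toy model of simple (source-address) persistence only, not BIG-IP's implementation, the sketch below stores a persistence record per client IP and falls back to round robin for new or expired clients (pool names and the 180-second timeout are made up):

```python
import time

class SimplePersistence:
    """Toy source-address persistence table with round-robin fallback."""

    def __init__(self, pool, timeout=180):
        self.pool = pool
        self.timeout = timeout
        self.records = {}   # client_ip -> (member, expiry)
        self._rr = 0

    def pick(self, client_ip, now=None):
        now = time.time() if now is None else now
        record = self.records.get(client_ip)
        if record and record[1] > now:
            member = record[0]                              # returning client
        else:
            member = self.pool[self._rr % len(self.pool)]   # new/expired: round robin
            self._rr += 1
        self.records[client_ip] = (member, now + self.timeout)  # refresh record
        return member

p = SimplePersistence(["s1", "s2"], timeout=180)
print(p.pick("1.1.1.1", now=0))   # s1 (new client, load balanced)
print(p.pick("1.1.1.1", now=10))  # s1 (persistence record hit)
print(p.pick("2.2.2.2", now=10))  # s2 (new client)
```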

F5 Load Balancing Methods Part 1

--> F5 Load Balancing Methods are mainly divided into two types

i) Static Load Balancing Methods

ii) Dynamic Load Balancing Methods


i) Static Load Balancing Methods

--> With static methods, the F5 load balancer does not check any server parameters
before distributing client requests.

--> There are two static load balancing methods available.

1) Round Robin

--> This is the default load balancing method on F5 LTM.

--> Client requests are distributed in a round-robin fashion.

---> F5 LTM gets a new request and sends it to the first server in the pool.

---> The next request is sent to the next available server in the pool, and this
action repeats.

---> This method is useful if all the servers have the same amount of memory, CPU, and
other resources.

2) Ratio

--> Client requests are distributed based on the ratio value.

--> We can do Ratio by Node as well as Ratio by Member.


--> For example, if we set the ratio for three servers as 1:2:3, F5 LTM sends the first
request to the first server in the pool.

--> The second request goes to the second server, and the third request goes to the third
server.

--> The fourth request goes to the second server, the fifth and sixth requests go to the
third server, and the cycle repeats.

--> This method is useful if the servers do not all have the same amount of memory, CPU,
and other resources.
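
The 1:2:3 walkthrough above can be reproduced with a small sketch (illustrative Python with made-up server names, not F5's actual scheduler):

```python
def ratio_schedule(members):
    """One full cycle of a ratio schedule matching the 1:2:3 walkthrough.

    Each pass hands one request to every member that still has ratio
    credit left, so a 1:2:3 pool yields S1,S2,S3,S2,S3,S3 and repeats.
    """
    max_ratio = max(ratio for _, ratio in members)
    schedule = []
    for round_no in range(max_ratio):
        for name, ratio in members:
            if ratio > round_no:
                schedule.append(name)
    return schedule

print(ratio_schedule([("S1", 1), ("S2", 2), ("S3", 3)]))
# ['S1', 'S2', 'S3', 'S2', 'S3', 'S3']
```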

F5 Load Balancing Methods Part 2

Dynamic Load Balancing Methods

--> In the case of dynamic load balancing methods, the F5 load balancer first checks
server performance before choosing a pool member.

--> The following dynamic load balancing methods are available:

i) Least Connection

--> F5 LTM creates and maintains a connection table to know how many connections are
active on each server.

--> In this method, before forwarding the client request to a server, F5 LTM checks the
connection table and forwards the request to the server with the fewest connections.
--> If multiple servers have the same connection count, F5 LTM performs a round robin
among those servers.
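
A sketch of this selection logic, including the round-robin tie-break (illustrative Python with made-up member names, not the BIG-IP implementation):

```python
class LeastConnections:
    """Toy least-connections pick with round-robin tie-break."""

    def __init__(self):
        self._rr = 0  # rotates among tied members

    def pick(self, conn_table):
        """conn_table maps member name -> active connection count."""
        fewest = min(conn_table.values())
        tied = sorted(m for m, c in conn_table.items() if c == fewest)
        choice = tied[self._rr % len(tied)]
        self._rr += 1
        return choice

lc = LeastConnections()
print(lc.pick({"web1": 2, "web2": 1, "web3": 1}))  # web2 (tie, first pick)
print(lc.pick({"web1": 2, "web2": 1, "web3": 1}))  # web3 (tie, rotated)
```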

ii) Fastest

--> Distributes new connections to the member or node that currently has the fewest
outstanding Layer 7 requests.

--> When a connection request is received, the member with the fewest outstanding
requests is chosen.

---> If the associated virtual server does not have both a TCP and a Layer 7 profile, F5
LTM cannot track requests; fastest load balancing fails and falls back to the least
connections method.

iii) Least Sessions

--> Least Sessions distributes new connections to members based on the current number
of persistence records associated with each member.

--> This method results in round robin performance.

F5 Deployment Methods
--> F5 Load Balancer can be deployed using the following methods:

i) One Arm Method

ii) Two Arm Method

iii) n path/DSR (Direct Server Return) Method

i) One Arm Method

--> Only one interface of F5 Load balancer is used in this method of deployment.

--> Simple to implement.

--> The Virtual IP address should be in the same subnet as the physical servers (Nodes).

--> For the physical servers, the default gateway is not the F5 Load Balancer IP address;
it can be a router or firewall IP address.

--> Asymmetric routing occurs if Source NAT is not configured on the F5 Load Balancer.

--> Client IP address is not retained (Changed).


ii) Two Arm Deployment

--> More than one interface of the F5 Load Balancer is used in this method of
deployment.

--> The Virtual IP address need not be in the same subnet as the physical servers (Nodes).

--> For Physical Servers default gateway is F5 Load Balancer IP Address.

--> Asymmetric routing does not occur in this method.

--> Client IP address is retained.

iii) n path/DSR (Direct Server Return) Method

--> It is similar to One Arm Method of deployment, only one interface of F5 Load
Balancer is used.

--> The Virtual IP address should be in the same subnet as the physical servers (Nodes).
--> For the physical servers, the default gateway is not the F5 Load Balancer IP address;
it can be a router or firewall IP address.

--> In this method we don’t implement Source NAT on the F5 Load Balancer, so return
traffic goes directly to the client from the router or firewall, bypassing the F5.

--> Asymmetric routing occurs in this method.

--> Client IP address is retained.

