
The cluster components

Several components -- the Cluster Manager, Cluster Database Directory, Cluster Database Directory Manager, Cluster
Administrator, and Cluster Replicator -- work together to make clustering function correctly. In addition, the Internet
Cluster Manager clusters Domino servers that run Internet protocols.

The Cluster Manager


A Cluster Manager runs on each server in a cluster and tracks the state of all the other servers in the cluster. It keeps a list
of which servers in the cluster are currently available and maintains information about the workload on each server.
When you add a server to a cluster, IBM Lotus Domino automatically starts the Cluster Manager on that server. As
long as the server is part of a cluster, the Cluster Manager starts each time you start the server.
Each Cluster Manager monitors the cluster by exchanging messages, called probes, with the other servers in the cluster.
Through these probes, the Cluster Manager determines the workload and availability of the other cluster servers. When it
is necessary to redirect a user request to a different replica, the Cluster Manager looks in the Cluster Database Directory to
determine which cluster servers contain a replica of the requested database. The Cluster Manager then informs the client
which servers contain a replica and the availability of those servers. This lets the client redirect the request to the most
available server that contains a replica.

The Cluster Manager's tasks include:

- Determining which servers belong to the cluster. It does this by periodically monitoring the Domino Directory for changes to the ClusterName field in the Server document and the cluster membership list.
- Monitoring server availability and workload in the cluster.
- Informing other Cluster Managers of changes in server availability.
- Informing clients about available replicas and the availability of cluster servers so the clients can redirect database requests based on the availability of cluster servers (failover).
- Balancing server workloads in the cluster based on the availability of cluster servers.
- Logging failover and workload-balancing events in the server log file.

When it starts, the Cluster Manager checks the Domino Directory to determine which servers belong to the cluster. It
maintains this information in memory in the server's Cluster Name Cache. The Cluster Manager uses this information to
exchange probes with other Cluster Managers. The Cluster Manager also uses the Cluster Name Cache to store the
availability information it receives from these probes. This information helps the Cluster Manager perform the functions
listed above, such as failover and workload balancing.
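The failover decision described above can be sketched in a few lines. This is a minimal illustration only, not Domino's actual data structures: the server names, availability numbers, and dictionary layout are all invented.

```python
# Cluster Name Cache: availability information per cluster member (0-100).
# A value of 0 means the server is currently unavailable.
cluster_name_cache = {"Server-A": 85, "Server-B": 40, "Server-C": 0}

# Cluster Database Directory: which servers hold a replica of each database.
cluster_db_directory = {
    "sales.nsf": ["Server-A", "Server-B"],
    "hr.nsf": ["Server-B", "Server-C"],
}

def pick_failover_target(database, failed_server):
    """Return the most available replica holder, or None if none is up."""
    candidates = [
        s
        for s in cluster_db_directory.get(database, [])
        if s != failed_server and cluster_name_cache.get(s, 0) > 0
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda s: cluster_name_cache[s])

print(pick_failover_target("sales.nsf", "Server-B"))  # Server-A
```

When no other server holds an available replica (as for hr.nsf if Server-B fails, since Server-C is down), the sketch returns None, which corresponds to the request simply failing rather than failing over.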

The Cluster Database Directory


A replica of the Cluster Database Directory (CLDBDIR.NSF) resides on every server in a cluster. The Cluster Database
Directory contains a document about each database and replica in the cluster. This document contains such information as
the database name, server name, path, replica ID, and other replication and access information. The cluster components
use this information to perform their functions, such as determining failover paths, controlling access to databases, and
determining which events to replicate and where to replicate them to.

The Cluster Database Directory Manager


The Cluster Database Directory Manager on each server creates the Cluster Database Directory and keeps it up-to-date
with the most current database information. When you first add a server to a cluster, the Cluster Database Directory
Manager creates the Cluster Database Directory on that server. When you add a database to a clustered server, the Cluster
Database Directory Manager creates a document in the Cluster Database Directory that contains information about the
new database. When you delete a database from a clustered server, the Cluster Database Directory Manager deletes this document from the Cluster Database Directory. The Cluster Database Directory Manager also tracks the status of each
database, such as databases marked "Out of service" (unavailable for user access) or "Pending delete" from the cluster.
When there is a change to the Cluster Database Directory, the Cluster Replicator immediately replicates that change to the
Cluster Database Directory on each server in the cluster. This ensures that each cluster member has up-to-date information
about the databases in the cluster.

The Cluster Administrator


The Cluster Administrator performs many of the housekeeping tasks associated with a cluster. For example, when you add
a server to a cluster, the Cluster Administrator starts the Cluster Database Directory Manager and the Cluster Replicator.
The Cluster Administrator also starts the Administration Process, if it is not already running. When you remove a server
from a cluster, the Cluster Administrator stops the Cluster Database Directory Manager and the Cluster Replicator. It also
deletes the Cluster Database Directory on that server and cleans up records of the server in the other servers' Cluster
Database Directories.

The Cluster Replicator


The Cluster Replicator constantly synchronizes data among replicas in a cluster. Whenever a change occurs to a database
in the cluster, the Cluster Replicator quickly pushes the change to the other replicas in the cluster. The Cluster Replicator
also replicates changes to private folders that are stored in a database. Each server in a cluster runs one Cluster Replicator
by default, although you can run more Cluster Replicators if there is a lot of activity in the cluster.
The Cluster Replicator obtains information about which databases have replicas in the cluster from the Cluster Database Directory, stores this information in memory, and uses it to replicate changes to other servers. Periodically (every 15 seconds by default), the Cluster Replicator checks for changes in the Cluster Database Directory. When the Cluster Replicator detects a change in the Cluster Database Directory -- for example, an added or deleted database or a database that now has cluster replication disabled -- it updates the information it has stored in memory.
The Cluster Replicator pushes changes to servers in the cluster only. The standard replicator task (REPLICA) replicates
changes to and from servers outside the cluster.
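The event-driven push described above can be sketched as follows. The function and server names are illustrative only; Domino's actual replication machinery is not exposed this way.

```python
# Default interval at which the Cluster Replicator re-reads the
# Cluster Database Directory (changes themselves are pushed immediately).
DIRECTORY_CHECK_INTERVAL_SECONDS = 15

def push_change(change, replica_servers, local_server):
    """Fan one change out to every other server holding a replica."""
    return [(server, change) for server in replica_servers if server != local_server]

pushed = push_change("doc-42 updated", ["Hub-A", "Hub-B", "Hub-C"], local_server="Hub-A")
print(pushed)  # [('Hub-B', 'doc-42 updated'), ('Hub-C', 'doc-42 updated')]
```

The key point the sketch captures is that cluster replication is push-based and immediate per change, while the directory scan is the only periodic activity.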

Managing failover in a cluster


Causing manual failover

1. To cause failover to occur, you can use the Server_Restricted setting, which stops the server from accepting new database open requests so that client requests fail over to other cluster members.

2. At the server console, enter:

set config server_restricted=n

where n is one of the restriction levels described below under "Making a database unavailable for user access".

Other ways to manage failover


You can also manage failover with the SERVER_AVAILABILITY_THRESHOLD setting in the NOTES.INI file: a server whose availability index falls below this threshold is marked BUSY and is avoided as a failover target.

Using the availability threshold when you restart a server in a cluster


When you restart a server in a cluster, it is a good idea to make the server BUSY until all replication to the server is complete. This ensures that users access up-to-date information in the databases on the server. You can make a server BUSY by setting SERVER_AVAILABILITY_THRESHOLD to 100. When replication is complete, lower the threshold again to make the server available to users.

Using the server availability threshold to control failover to specific servers


In some cases, you may want to limit failover to a server. For example, if you set up a cluster over a WAN and one of the
cluster servers is more distant than the other servers, you may want to limit failover to the distant server. You can limit
failover to this server by setting its availability threshold very high.
For example, if you have three servers -- one in Boston, one in New York, and one in Hong Kong -- the Boston server
would fail over to the Hong Kong server if it is more available than the New York server. However, if you set the
availability threshold on the Hong Kong server to 100, the other cluster servers will not fail over to the Hong Kong server
unless no other available cluster server contains a replica of the requested database.
When you control failover in this manner, be sure that the other cluster servers (the servers in Boston and New York in the example) have enough resources to handle most of the failover in the cluster.
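The Boston/New York/Hong Kong scenario can be sketched as follows, with invented availability numbers. The core idea: a BUSY server (availability index below its threshold) is chosen only when no other replica holder is open.

```python
# Hypothetical cluster state; "index" is the availability index (0-100)
# and "threshold" is that server's SERVER_AVAILABILITY_THRESHOLD.
servers = {
    "Boston":   {"index": 70, "threshold": 20},
    "NewYork":  {"index": 55, "threshold": 20},
    "HongKong": {"index": 95, "threshold": 100},  # threshold 100: effectively always BUSY
}

def is_busy(name):
    return servers[name]["index"] < servers[name]["threshold"]

def failover_target(replica_holders, failed_server):
    """Prefer open servers; fall back to BUSY ones only if nothing else is up."""
    others = [s for s in replica_holders if s != failed_server]
    open_servers = [s for s in others if not is_busy(s)]
    pool = open_servers or others
    return max(pool, key=lambda s: servers[s]["index"]) if pool else None

# Boston fails over to New York, not to the "more available" Hong Kong.
print(failover_target(["Boston", "NewYork", "HongKong"], "Boston"))  # NewYork
```

Note that Hong Kong still serves as a last resort: if it were the only remaining replica holder, the fallback branch would select it despite its BUSY state, matching the behavior described above.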

Making a database unavailable for user access


To make a database unavailable, you can mark it out of service in the Cluster Database Directory, or you can restrict the entire server with the Server_Restricted NOTES.INI setting:

Server_Restricted=1
- No new opens are allowed.
- Existing opens still work.
- [New to Release 6.0] Allows the Administrator to connect using remote console.
- The restricted server will be able to initiate replication with other servers.
- Other servers will not be able to initiate replication with the restricted server.
- The server will be able to route existing mail in its mail.box(es) for transfer or delivery.
- Other servers (without manager access) will not be able to route mail to the restricted server.
- Is set back to 0 (unrestricted) when the server is recycled.
Server_Restricted=2
- Has all attributes of Server_Restricted=1; however, recycling the server does not change the server state.
Server_Restricted=3
- Has all attributes of Server_Restricted=1, plus starting with 8.5.2 Fix Pack 3 and 8.5.3 this blocks all
replication that is not coming from an ID for an administrator. This blocks the local replication of mail databases
from Notes clients.
Server_Restricted=4
- Has all attributes of Server_Restricted=3; however, recycling the server does not change the server state.

Notes:
This setting is not restricted to clustered servers.
"No new opens are allowed":
This applies to User sessions and Server sessions. However if the User or Server has manager access
to the database, they are attempting to access, a new session will be established.
BusyTime and Clubusy.nsf
The busytime system is responsible for providing an accurate picture of the availability of a person, room, or
resource so that C&S can function efficiently.

The busytime system consists of three functional components: the Notes client/Domino server, the Schedule
Manager (SchedMgr), and the Calendar Connector (CalConn). The Notes client makes a single busytime request to
the user's home Domino server, and the home server is responsible for fulfilling the request. This means that the
information is either found locally and returned immediately, or the request is passed along to the Domino server
that can fulfill it. When the requested busytime data is not found locally, CalConn is used to communicate between
the home server and the appropriate Domino servers.
SchedMgr is the background part of the busytime system. It is responsible for monitoring local calendars for updates
and updating the busytime database to keep it in sync with the calendar contents. It operates independently of any
busytime lookups because its job is to maintain the busytime data, not serve it up.
A busytime request consists of a list of users to get busytime data for, the interval of time the client is interested in, and a few other parameters. The Domino server attempts to fulfill the request while respecting the access controls each user has set governing who can see his or her busytime. Any information about calendar entries outside the specified interval is not
returned. Besides actual calendar entries, other information such as the user's profiled work hours and time zone are
also returned to the requesting client. The client is responsible for taking all the data that is returned and using it to
provide the user with an accurate view of each person's availability.
For servers that are in a cluster, the request is automatically sent to a cluster mate if the desired server is unavailable.
This means that even when a user's home server is down, the busytime lookup can still proceed by getting the data
from a cluster mate.
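The interval filtering a busytime request implies can be sketched as follows. The entry format and names are invented for illustration; this is not the actual busytime data layout.

```python
from datetime import datetime

# Hypothetical calendar entries: (name, start, end).
entries = [
    ("meeting", datetime(2024, 5, 1, 9),  datetime(2024, 5, 1, 10)),
    ("review",  datetime(2024, 5, 2, 14), datetime(2024, 5, 2, 15)),
    ("offsite", datetime(2024, 5, 9, 8),  datetime(2024, 5, 9, 17)),
]

def busytime_lookup(entries, start, end):
    """Return entries that overlap [start, end); anything outside is never disclosed."""
    return [e for e in entries if e[1] < end and e[2] > start]

busy = busytime_lookup(entries, datetime(2024, 5, 1), datetime(2024, 5, 3))
print([name for name, _, _ in busy])  # ['meeting', 'review']
```

The "offsite" entry falls outside the requested window, so it is simply absent from the response, mirroring how the server withholds calendar data outside the specified interval.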

DAOS
DAOS is a feature of the IBM Lotus Domino Server that allows attachments to be stored outside the NSF file in
Notes Large Object (NLO) files. This provides two major benefits: (1) It allows multiple copies of the same
attachment to be stored as a single copy to save storage, and (2) it segregates the relatively static attachment data
from the NSF data to allow flexibility in data storage and backup processing.
Introduced in Domino 8.5.0, DAOS works with any NSF-based application and is transparent to clients and other
servers. It can be enabled on an individual-NSF basis, and not all NSF files on the server are required to participate.

Prerequisites
1. Disable SCOS Shared Mail. Single Copy Object Store (SCOS) is an older approach to attachment consolidation. This feature is not compatible with DAOS and must be disabled before you enable DAOS.

2. Disable NSFDB2. NSFDB2 is a feature that allowed storing NSF data in DB2. This feature is also not compatible with DAOS and must be disabled on any NSF application that will participate in DAOS. For information on migrating from the NSFDB2 feature, refer to the Domino Administrator documentation.

3. Upgrade. Although DAOS was introduced in Domino 8.5.0, many important stability and performance
improvements have been made in subsequent releases. Hence, it is strongly recommended that all new DAOS
deployments be done on the 8.5.2 (or later) Domino release.
If you are restricted to Domino version 8.5.1, ensure that you are at the FP3 or later level. If you have Notes 8.5.1
clients, make sure they are at least at the FP3 level also. There are no other client requirements, and pre-8.5.1 clients as
well as Web clients can also be used with DAOS.
4. Enable transaction logging. DAOS depends on transaction logging for proper operation. Since DAOS must update
several locations simultaneously, it is important that all those updates succeed or fail (and are subsequently rolled back)
as a unit.
Transactions provide this ability, and therefore transaction logging is required for all NSF files that participate in
DAOS.
5. Establish backup/restore processes. It is important to have reliable backup and restore procedures in a production
environment, to avoid the possibility of data loss. DAOS adds some complexity to the backup/restore process, so it is
important that a well established backup/restore foundation exists for DAOS to build on. Transaction logging
introduces some additional features that provide even better recovery options.
6. Upgrade Names.nsf design. The design of the Names.nsf file has been changed to accommodate DAOS, and the
Server document has a new tab that covers the DAOS settings. Names.nsf must use the new Names.ntf template on all
Domino servers that will be enabled for DAOS.
If Names.nsf is replicated in your environment, ensure that the update is done with respect to the replication flow. If a
master hub copy of Names.nsf contains the old design, it may overwrite replicas on other servers that have been
updated with the new design.
Additional recommendations
1. Enable LZ1 compression. If no attachment compression is enabled on the NSF files, or if Huffman compression is
being used, then enabling LZ1 compression can save a significant amount of disk space. This is done by use of the
compact command, and the -Zu flag. For more information, refer to Technote #1256241, Upgrading existing
attachments from Huffman to LZ1 compression. While this is an independent operation from DAOS, if you leave
them as different encoding types, you get two copies of the attachment in DAOS. If you re-encode them after the NSF
files are enabled for DAOS, you will end up with all references pointing to one NLO, and the other NLO having 0
references that doesn't go away until the deferred deletion interval. This will work, but it is not optimal. For that reason
it is recommended that you standardize on LZ1 before you move to DAOS.
2. Enable design and data document compression. Another Domino space-saving feature is design and data
document compression. Enabling these compressions can also save a significant amount of disk space. The savings
from these features are independent from DAOS and are worth investigating. For more details on these options, refer to
the Domino 8.5 Administrator Information Center topics, Using database design compression and Using document
body compression.
3. Use Domino Domain Monitoring (DDM). DAOS diagnostic information is included in DDM events. The events are logged to the ddm.nsf file, which provides a convenient environment for monitoring the operation of DAOS. For information on managing and configuring DDM, refer to the Domino Administrator documentation.

Running the DAOS estimator


The DAOS estimator tool analyzes your existing NSF content and estimates the results of enabling DAOS. It is important to run this estimation because it helps you choose a minimum participation size (discussed in detail below), and provides an estimate of the resulting size of the NSF and NLO files. To run the estimator, enter the following at the server console:

load daosest -i foo.ind -c

where the -i option indicates that a list of NSF files to include in the scan is stored in the file foo.ind, and the -c
parameter tells daosest to save the raw data after it is collected. To rerun the analysis phase on saved raw data, enter:

daosest -a DAOSEST_08_12_2010_10_57_55_AM.csv

where the argument to the -a option specifies the name of the .csv raw data file created by the previous run. The .csv
file can be moved to another machine to run the analysis phase. Additionally, multiple .csv files can be specified in a
.ind file for the -a parameter.
You can alter the analysis bucket sizes by adding/modifying the following Notes.ini variable:
DAOSEST_BUCKETS=64,128,256,512,1024,2048,3072,4096,8192,16384

There must be exactly 10 values specified, and the values are interpreted in kilobytes. This example would estimate
results for an assortment of participation sizes between 64K and 16M. These values can be altered to focus the
report on a specific range of sizes.
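One way to read the DAOSEST_BUCKETS values: each is a candidate minimum participation size (in KB), and the estimator reports how much attachment data each threshold would move into DAOS. The sketch below shows that binning idea with invented sample attachment sizes.

```python
# Parse the example setting; exactly 10 values (in KB) are required.
buckets_kb = [int(v) for v in "64,128,256,512,1024,2048,3072,4096,8192,16384".split(",")]
assert len(buckets_kb) == 10

def daos_bytes_at_threshold(attachment_sizes, min_size_bytes):
    """Total attachment bytes that would live in NLO files at this minimum size."""
    return sum(s for s in attachment_sizes if s >= min_size_bytes)

# Invented sample data: only attachments at or above the threshold qualify.
sample_sizes = [10_000, 70_000, 300_000, 2_000_000]
print(daos_bytes_at_threshold(sample_sizes, buckets_kb[0] * 1024))  # 2370000
```

Comparing the totals across the ten thresholds is what lets you pick a minimum size that captures most of the attachment volume without creating huge numbers of tiny NLO files.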

By default, the minimum size setting for an attachment to make use of DAOS is 4096 bytes. We recommend 64000 bytes as the lowest value you should use here (1048576 on iSeries), but there are a number of things to consider when determining the best DAOS minimum size setting for your system.
1. Do not set the minimum size lower than the default setting.
Due to attachment file overhead, setting the minimum size to anything lower than the default size would
actually be less efficient than storing the attachment in the NSF file.
2. Set a minimum size that is a multiple of your file system's disk block size.
By choosing a minimum size that is a multiple of the disk block size, you optimize disk usage. To ascertain the disk block size for your file system on Windows NTFS, use fsutil fsinfo ntfsinfo and take note of the Bytes Per Cluster; this is the disk block size. On Solaris, you can use df -g and take note of the block size. On AIX, you must be superuser to determine the block size; use lsfs -q and look for the block size. On Linux, you also need superuser access: use df -k to determine the device name of your filesystem, and then use dumpe2fs <device> | grep 'Block size' to determine the block size.
3. Take note of possible limitations on number of files.
The smaller you make the setting, the more attachments will qualify for DAOS consolidation. The larger you
make the setting, the fewer will qualify. In Domino 8.5, the DAOS repository allows for one container with up
to 1,000 subcontainers, each with a maximum of 40,000 NLO files. Thus the storage capacity of DAOS is
limited to 40 million distinct objects. This is a significant number of files, so if you expect to come anywhere
close to approaching it, you should check the limits on your backup and restore solution, as some
applications and file systems have limitations on maximum number of files. Refer to your operating system
and/or backup application guidelines.
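Two quick checks implied by the guidance above, as a sketch: rounding a candidate minimum size up to a disk-block multiple, and the Domino 8.5 repository capacity quoted in the text. The 4096-byte block size is an assumption for illustration; use the value your file system actually reports.

```python
BLOCK_SIZE = 4096                # assumed; check with the OS tools listed above
MAX_NLO_FILES = 1_000 * 40_000   # 1,000 subcontainers x 40,000 NLO files each

def round_up_to_block(min_size, block=BLOCK_SIZE):
    """Round a candidate DAOS minimum size up to the next block multiple."""
    return -(-min_size // block) * block  # ceiling division, then scale back up

print(round_up_to_block(64_000))  # 65536
print(MAX_NLO_FILES)              # 40000000
```

For example, the recommended 64000-byte floor is not itself a 4 KB multiple; rounding it up to 65536 gives a threshold aligned with the block size.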

Tell DAOSMgr Status: Displays status of various DAOS Manager operations.

Tell DAOSMgr Status database_name: Displays the DAOS status of the specified database, for example, mymail.nsf.

Tell DAOSMgr Dbsummary: Displays status of all DAOS-enabled databases.

Tell DAOSMgr Status Catalog: Displays status of the DAOS catalog.

Tell DAOSMgr Databases: Displays status of all DAOS-enabled databases with additional details, for example, a database's last resynchronization point.

MX (Mail Exchange) record: a DNS record that specifies the mail server responsible for accepting email messages on behalf of a recipient domain.

A record: maps a name to an IP address, for example:

www 192.168.0.1

SRV: service record; maps a service to the host and port that provide it.
CNAME: canonical name record; maps a name to another name (an alias).
RBL / DNSBL: real-time blackhole list / DNS-based blackhole list, used to identify hosts known to send spam.
SPF: Sender Policy Framework; identifies which servers are authorized to send mail for a domain.

DNS (Domain Name System) is a network service that translates names into IP addresses. Names can be domain names and host names; a host has a host name and an IP address.
Replication Types (Domino 6.5, 8, and 8.5)
