User Guide
No part of this document may be reproduced or transmitted in any form or by any means,
electronic or mechanical, for any purpose, without the express written permission of TEMENOS Holdings NV.
Table of Contents
Application Overview
Application Server
Port Settings
TSA Definitions
Database Server
Remote Oracle/DB2
Configuring remote access to DB2 on the DB2 Server
Configuring remote access to DB2 on the DB2 client
Configuring remote access to Oracle
Configuring remote access to Oracle on the Oracle server
Configuring remote access to Oracle on the Oracle client
jRFS Configuration
jRFS on the Database Server
jRFS configuration on Client
jRFS Additional Configuration
Browser Connectivity
Installing the TCServer
Using IBM WMQ
Setup of the Browser for high availability
Using Web Server Load Balancing
Introduction
Application Overview
To ensure resilience and to allow horizontal scalability, T24 can be implemented in a multi-server
environment. In this scenario multiple application servers are configured to run the T24 application,
connected to a single database server (or multiple database servers if the DBMS supports it). In
addition, multiple web servers and multiple MQ servers can also be configured.
To enable the multiple application server environment you must first have installed the module MS.
Application Server
The application server actually runs the T24 application. Hence you need the T24 environment
(the xxx.run directory) with the T24 bins and libs on all application servers. Ideally this should be set up
on one server and then copied to all servers participating in the multi-server environment. An NFS
mount, although more convenient, would introduce a single point of failure (see T24 Resilience).
Port Settings
You MUST define the port number allocation. It is essential that each application server has a different
port number range, as T24 relies on the port number as a unique reference. You do this:
1. In 4.1 and above, by setting the environment variable JBCPORTNO in the .profile. You should
set it to 1 for server A, 1000 for server B, 2000 for server C and so on. Do not set it to 0. jBASE
will then allocate port numbers starting from the value of JBCPORTNO. You shouldn't require
more than 1000 T24 sessions per server (remember this is not the same as the number of users
unless you're still using Desktop).
2. In 4.0, you need to set the jPML config file (Config_jPML) as follows:
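For step 1 (release 4.1 and above), a hypothetical .profile fragment for the second application server (server B) might look like this; the value 1000 follows the numbering scheme described above, and the server letter is an assumption for illustration:

```shell
# Hypothetical .profile fragment for server B.  jBASE allocates T24
# session port numbers starting from JBCPORTNO, so giving each server
# a different starting value keeps the ranges from overlapping.
JBCPORTNO=1000      # server A would use 1, server C 2000, and so on
export JBCPORTNO
echo "T24 session port numbers on this server start at $JBCPORTNO"
```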
TSA Definitions
To enable the tSAs to operate on more than one application server, you must define this in the
TSA.SERVICE record for the COB (or whatever service you are running). You simply enter the name
of the server (as returned by hostname) and the workload profile (the number of agents) that goes with
it. Note: you must start a tSM on each application server (START.TSM); this will abort with an error
message if you try to run multiple tSMs without installing MS.
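As an illustration only (the field names, server names and agent counts below are assumptions; check the actual TSA.SERVICE record layout for your release), a COB service split across two application servers might be defined along these lines:

```
TSA.SERVICE, record COB (illustrative values only):
    SERVER.NAME        appserver1    appserver2
    WORKLOAD.PROFILE   4             4
```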
Database Server
The database server (Server D in our scenario) will host the Oracle/DB2 database or the j4 file system of
jBASE. This is essentially where the T24 data is stored. There are two mechanisms to support a
database server: Remote Oracle/DB2 and jBASE jRFS.
Remote Oracle/DB2
When using Oracle or DB2, the Unix files representing the Customer or Account table, for example, are
simply stubs which point to the actual database table. There is a stub file for every database table and
the stub files are stored in the xxxx.data directory. The stub file directory (xxxx.data) MUST reside on
the database server and be NFS mounted to all application servers. This is to support row locking
across all application servers. The jRLA (locking) daemon MUST be turned off on all servers; this
forces the row level locks to be promoted to all servers. jRLA should only be used for single server
installations. In release 4.1 of jBASE a new locking arbiter is available (jDLA) which runs on the
database server and manages row level locks across all application servers.
In addition, the VOC, &COMO& and &HOLD& must also reside on the database server; ideally this
should be where the stub files are located. Hence the environment variable JEDIFILENAME_MD must
point to this location (i.e. ../demo.data/VOC). Change the VOC entries for &COMO& and &HOLD&
accordingly (or set up links).
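As a sketch, an application-server .profile might point JEDIFILENAME_MD at the NFS-mounted stub directory like this; the mount point /nfs/demo.data is an assumption, so substitute your own mount of the database server's xxxx.data directory:

```shell
# Assumed NFS mount point of the database server's stub directory;
# replace /nfs/demo.data with your site's actual mount.
JEDIFILENAME_MD=/nfs/demo.data/VOC
export JEDIFILENAME_MD
echo "VOC will be resolved from $JEDIFILENAME_MD"
```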
2. Then set the protocol information in the database manager configuration. For TCPIP,
either
specify the socket port number directly: db2 update dbm cfg using svcename 5000
or
specify a service name (db2icdb2): db2 update dbm cfg using svcename db2icdb2
and add the following service name and port number to the /etc/services file: "db2icdb2 5000/tcp"
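The /etc/services entry for the second option can be composed as follows; this sketch only prints the line, since appending to /etc/services must be done as root:

```shell
# Compose the service entry named in the text above.
SVCENAME=db2icdb2
PORT=5000
printf '%s %s/tcp\n' "$SVCENAME" "$PORT"
# As root, you would append this printed line to /etc/services.
```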
6. If a DCS database (Database Connection Service) is required (e.g. for OS/390, AS/400, etc.):
db2 CATALOG DCS DATABASE <database> AS dsn_<database>
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ServerHostName)(PORT = 1521))
)
)
)
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_NAME = REMOTEDB)
(ORACLE_HOME = /home/oracle/9.2)
(SID_NAME = REMOTEDB)
)
)
Ensure that the listener is running [lsnrctl status / lsnrctl start] and that it is listening:
'tnsping <IP Address>'
REMOTEDB =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ServerHostName)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = REMOTEDB)
)
)
To verify that the client can contact the listener on the remote system, use:
"tnsping ServerHostName"
Complete the Oracle configuration for client installation using the config-file utility and specify the TNS
name configured above.
jRFS Configuration
Add the jRFS service entry into the /etc/services file on both the clients and the server
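A hypothetical entry is shown below; the port number 5005 is an assumption, and any unused port will do, but it must be identical on the clients and the server:

```
jRFS    5005/tcp    # jBASE Remote File Service
```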
REMOTEACC
001 D
Add the SYSTEM file reference to the jRFS.init.d script and copy the script into the /etc/rc areas to
enable automatic restart of the jRFS service on system reboot.
jRFS -ib
Note: if the jRFS process is required to execute as differing users and groups, then the process needs
to be started as the root/administrator user.
These settings will write a logfile on the server machine, which can be compared with the client log
when debugging.
jnet_config
trace=on
log=on
logfile=/tmp/jnetlog
accesschk=on
This last setting causes a security check to be performed to see if the connecting machine/user id is
allowed access. If accesschk is set to off then no security checks are carried out.
On Unix
There are two methods available: ruserok, provided by the OS, and the jnetok method, provided by
jRFS. The first method is normally sufficient for users' needs. It involves adding the client hostname to
the /etc/hosts.equiv file on the server, along with a reference to which users are allowed access, as
follows:
/etc/hosts.equiv
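A hypothetical entry (clienthost and clientuser are placeholder names) allowing the user clientuser to connect from the host clienthost:

```
clienthost clientuser
```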
Note that the ruserok method can also be used with .rhosts entries located in the home directories of
the specified server user.
On Windows
Only the jnetok method is supported on Windows, which requires the jnet_access file to be
configured. This file contains the account and password to map to; note that the account must have a
profile set up with a home directory. In this case account=jrfsuser and password=jbase.
Additional environment variables can be explicitly configured in the jnet_env file for the server process.
The jnet_map file is not used on the server.
export JEDIFILENAME_SYSTEM=$HOME/SYSTEM
MBDEMOACC
001 R
003
FBNK.ACCOUNT
001 Q
Alternatively, use remote STUB files in place of the normal hash files, similar to the Oracle/DB2 stub
files, e.g.
../mbdemo.data/ac/FBNK.ACCOUNT
where RemoteFileName can be FBNK.ACCOUNT for resolution via the VOC or an absolute
pathname, and RemoteSystemName is either the HostName or IP address of the remote system.
jnet_config
trace=on
log=on
logfile=/tmp/jnetlog
The above configuration will cause detailed log information to be written to the logfile.
The jnet_map file allows the client connection to map the local username to a different username for
the remote database. Any access to the remote files will then be carried out using the permissions of
that user.
jnet_map
The jRFS client process expects an entry to contain two client user names, a local user name and a
remote user name, for authentication; in this case both are clientuser. This entry is then mapped
during the client connection process to the user dbuser on the host dbserver.
The jrfs_config file allows you to turn on a screen trace which outputs messages to stdout, which can
be useful for noting progress. This is done by putting in settings as follows:
jrfs_config
trace=on
display=on
Browser Connectivity
The final configuration is for the Browser, using the TCS, to access the multiple application servers.
To benefit from load-balancing facilities, T24 uses IBM WMQ as the communication channel
between the Client (Servlet) and the Server.
The approach is to put the queues in clusters and let WMQ load-balance the messages. Having two
(or more) clusters guarantees that if a machine becomes unavailable, the message will always find a
route to the host.
Having two Q.OUT queues at the client (TCC) level is due to the fact that the Browser Servlet has a
retry mechanism: if the response is not available within a given amount of time, it retries on another
queue. This facility is available in the Browser setup (see the specific chapter later in this document).
The above schema shows how we can use the MQ clustering functionality with the TCServer and
TCClient API.
There are four MQ managers: two server MQMs and two client MQMs. On each of the server ones we
define a local queue (Q.IN); note that both queues have the same name. On the client MQMs, we
define a queue called Q.OUT.1 on one and a queue called Q.OUT.2 on the other. It is important that
they have two different names.
At the client level, the channel definition will look like this (file: /BrowserWeb/WEB-INF/conf/channels.xml):
Client 1 :
<CHANNEL>
<NAME> mq.1 </NAME>
<TIMEOUT> 15 </TIMEOUT>
<ADAPTER type="mqseries">
<MQHOST>localhost(3232)</MQHOST>
<MQMANAGER>qm.client.1</MQMANAGER>
<MQQUEUE>q.in</MQQUEUE>
<MQCHANNEL>SYSTEM.DEF.SVRCONN</MQCHANNEL>
<CONSUMER>
Client 2:
<CHANNEL>
<NAME> mq.2 </NAME>
<TIMEOUT> 15 </TIMEOUT>
<ADAPTER type="mqseries">
<MQHOST>localhost(3232)</MQHOST>
<MQMANAGER>qm.client.2</MQMANAGER>
<MQQUEUE>q.in</MQQUEUE>
<MQCHANNEL>SYSTEM.DEF.SVRCONN</MQCHANNEL>
<CONSUMER>
Note the REPLYQUEUEPARAMETERS tag. Setting it to QUEUE means that the queue name
defined in MQQUEUE (for the Consumer) will be set in the ReplyToQueue property of the MQMD.
When the servlet asks the TCC to use the channel called mq.1, it in fact asks the TCC to post a
message to the queue called q.in. This queue is in the queue manager called qm.client.1. Because
q.in is a remote queue on both Server 1 and Server 2, and they are in a cluster, WMQ will physically
put the message on either server 1 or server 2, applying a round-robin. If one of the two servers is not
available, the message will not be round-robined and will always be placed physically on the other
server. The TCServer will know where to publish the response, as the replyToQueue MQMD setting is
set in the message.
Now, if the queue manager qm.client.1 is not available, the client will fail to publish the message and
will retry on the next one (qm.client.2).
Server 1 :
Server 2 :
Note that there is no CONSUMER. This means that the host and the channel used to publish the
response will be those of the listener. If a ReplyToQueueManager is defined, it will be used for the
response; otherwise, it will be the same as the one defined in the listener. The same logic applies to
the ReplyToQueueName.
<REPLYQUEUEPARAMETERS>QUEUE</REPLYQUEUEPARAMETERS>
The reply queue name will be passed with the message. The server will use its own queue manager
and the queue name defined as replyToQueue in the message descriptor.
In our situation, q.out.1 and q.out.2, the queues for publishing the response, are remote queues.
Thus, the TCServer can publish a message to one of these queues using its own queue manager.
The message will be made available on the queue manager of the correct client. This explains why it
was previously mentioned that it is important to have two different queue names for the reply. If the
queue names were the same, we would have the same round-robin functionality as when we post the
message to the host (the response to a request made by client.1 could end up on client.2, and vice
versa).
You first have to edit the file browserConnection.xml (in the /BrowserWeb/WEB-INF/conf directory)
to look like this :
<instances>
<instance name="production">
<encrypt>false</encrypt>
<compress>false</compress>
<retryperchannel>2</retryperchannel>
<channels>
<channel>mq.1</channel>
<channel>mq.2</channel>
Note the <channels> tag. It contains a list of channels. You can list as many channels as you wish,
and they don't all need to be of the same type. In our sample, we just list the two channels described
earlier in this document. The <compress> tag indicates to the TCClient API that the messages will be
compressed (zipped) before being posted. Note the instance name; in our sample: production.
. . .
<parameter>
<parameterName>Server Connection Method</parameterName>
<parameterValue>INSTANCE</parameterValue>
<!-- Options: GLOBUSCONNECTOR / INSTANCE / SOCKET / EJB -->
</parameter>
<parameter>
<parameterName>Instance</parameterName>
<parameterValue>production</parameterValue>
The parameter Server Connection Method indicates what method to use to connect. You have to
specify INSTANCE to tell the Browser to use the browserParameter.xml file. The parameter
GC_CHANNEL is ignored in that case. If you were using the GLOBUSCONNECTOR method, you
would have to specify this GC_CHANNEL parameter; however, you would not have the benefit of the
retry on multiple channels. Then, in the parameter Instance, specify the instance name defined
earlier in the file browserParameter.xml.
By doing so, the Browser servlet will use the channel named mq.1 to access the server, and will do
so while everything is good. If, for any reason, the response times out (no answer), the servlet will
automatically retry on the channel mq.2. If successful, it will continue on this channel.
(S) : Success
(F) : Failure
a) mq.1(S) -> Success
b) mq.1(F) - mq.2(S) -> Success (next request will start on mq.2)
c) mq.1(F) - mq.2(F) - mq.1(S) -> Success
d) mq.1(F) - mq.2(F) - mq.1(F) - mq.2(F) -> Error in communication (retryperchannel = 2)
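The retry sequence above can be sketched as a small loop (illustrative only; the real logic lives in the Browser servlet). With the hypothetical attempt_send stubbed to always fail, the sketch reproduces case d:

```shell
# Try each channel in turn, making retryperchannel passes over the
# list; stop at the first success.
channels="mq.1 mq.2"
retryperchannel=2
attempt_send() { false; }   # stub: always fails, showing the error path
ok=""
for pass in $(seq 1 "$retryperchannel"); do
  for ch in $channels; do
    if attempt_send "$ch"; then ok="$ch"; break 2; fi
  done
done
if [ -n "$ok" ]; then echo "Success on $ok"; else echo "Error in communication"; fi
```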
Please refer to the T24 Temenos Connector User Guide for further instructions on how to configure the
above.