The following is the third installment on the topic of High Availability within an OBIEE environment. As I mentioned in previous posts, much of what I'll be discussing was covered in an Oracle eSeminar which I recently viewed on the subject. To quickly summarize, our basic goal within an HA implementation is to provide multiple instances of all components from the BI Server all the way to the end user, so if anything fails, we have another instance of the same component ready to go. The first two posts on the subject can be found in the blog archives if you'd like to rewind. This installment will feature the HA connections between the Presentation Servers and BI Scheduler Servers, as well as between the Presentation Servers and BI Servers. We'll also take a closer look at the BI Scheduler Cluster configuration.
First we'll look at the connection between the BI Servers and the Presentation Servers. This again is handled by the Cluster Controller. In a Windows environment you'll need to go into the Administrative Tools on each Presentation Server and set up the clustered ODBC data source. To do so, simply ensure that "Is this a clustered DSN?" is checked, then specify the primary and secondary cluster controllers and ports. On a Unix/Linux box, you'll need to make the following changes to the odbc.ini file:
IsClusteredDSN=Yes
PrimaryCCS=<PrimaryCCS>
PrimaryCCSPort=9706
SecondaryCCS=<SecondaryCCS>
SecondaryCCSPort=9706
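For context, here's a minimal sketch of how the full DSN entry might look in odbc.ini. The DSN name (AnalyticsWeb), driver path, and cluster controller hostnames are placeholders based on a typical 10g layout, so adjust them to your own install:
[AnalyticsWeb]
Driver=/u01/OracleBI/server/Bin/libnqsodbc.so
Description=Oracle BI Server (clustered)
IsClusteredDSN=Yes
PrimaryCCS=ccshost1
PrimaryCCSPort=9706
SecondaryCCS=ccshost2
SecondaryCCSPort=9706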
One important note to keep in mind: if you have any clients from which you want to access your repository in online mode (the Administration Tool, for example), you'll need the clustered DSN set up on these as well. They'll need the same connection to the BI Servers as the Presentation Servers will.
The BI Scheduler Cluster Controller assigns the active Scheduler server. In an HA environment, you would have two cluster controllers, a primary and a secondary, in an active/passive relationship. The secondary controller will not be used unless the primary is unavailable. The client Controller ports are specified in the respective NQSClusterConfig.INI files. The Scheduler configuration will be handled by the Cluster Controllers, so all we need to do in the instanceconfig.xml file is point to the Cluster Controllers, as shown below:
<Alerts>
<ScheduleServer ccsPrimary="<Primary Cluster Controller>"
ccsPrimaryPort="<Client Controller Port>"
ccsSecondary="<Secondary Cluster Controller>"
ccsSecondaryPort="<Client Controller Port>"/>
</Alerts>
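As a concrete (hypothetical) example, with cluster controllers named ccshost1 and ccshost2 and the same 9706 client port used in the odbc.ini example above, the block might read:
<Alerts>
<ScheduleServer ccsPrimary="ccshost1" ccsPrimaryPort="9706"
ccsSecondary="ccshost2" ccsSecondaryPort="9706"/>
</Alerts>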

The other task you would need to complete is to add the BI Scheduler Administrator credentials to the credential store of each Presentation Server. The quickest and easiest way would probably be to copy the credential store file (credentialstore.xml, found under OracleBIData_Home\web\config by default) from one instance to all other Presentation Servers.

A few notes about what will occur when the active Scheduler fails. The transition from one server to another is seamless from the user's perspective. The users won't receive any errors; the Cluster Controller will simply detect the failure on the active server and point to the secondary. Any jobs which didn't complete will be picked up where they left off. One important note to remember is that once the primary server is back up, it will not automatically resume the primary role. Only after the services have been restarted will the primary Scheduler resume its proper role. If any Java, command line, or script jobs are being run during an interruption, they will be restarted when another server is activated and given a new job ID. Take a look at the diagram of the basic HA architecture for the Scheduler Servers.
To configure the Cluster Controller to talk with multiple Schedulers, you must make the following entry in the NQSClusterConfig.INI file on each Cluster Controller:
SCHEDULERS = "scheduler1:9705:9708", "scheduler2:9705:9708";
The first port number for each scheduler is the RPC port, where the Scheduler listens for incoming job requests. The second is the monitor port, which the Cluster Controller uses to check for a heartbeat and confirm the Scheduler is still available. This is how the Cluster Controller determines that the primary server has gone down and it needs to look to the secondary server. The default RPC and monitor ports are 9705 and 9708, respectively.
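For orientation, here's a hedged sketch of how the SCHEDULERS entry might sit alongside the other clustering parameters in NQSClusterConfig.INI. The parameter names reflect my reading of the 10g file, and all hostnames are placeholders:
ENABLE_CONTROLLER = YES;
PRIMARY_CONTROLLER = ccshost1;
SECONDARY_CONTROLLER = ccshost2;
SERVERS = bihost1, bihost2;
MASTER_SERVER = bihost1;
SCHEDULERS = "scheduler1:9705:9708", "scheduler2:9705:9708";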
Configuration of the BI Scheduler Servers themselves can be completed through the Job Manager or via the command line using schconfig. All configuration settings are saved in the instanceconfig.xml file in the Scheduler folder, not to be confused with the file of the same name in the web\config folder. The Scheduler can be configured to talk with multiple Presentation Servers and Java Hosts, which will be necessary if you are setting up a true HA environment. If you are using the Scheduler to run script files, you must place them in a shared network folder so that multiple Scheduler Servers can access them. All Schedulers should have read/write access to these files.

That's going to wrap it up for this post. I'll finish things up next time with a closer look at the BI Server cluster and how it's integrated into the HA environment.
Labels: architecture, cluster, High Availability, Performance, Presentation Server, Scheduler
WEDNESDAY, JANUARY 21, 2009
Oracle BI High Availability: Part 2
Posted by Brian Ferin at Wednesday, January 21, 2009
This is the second installment in a series of posts in which I've been discussing the implementation of High Availability within an OBIEE environment. Much of what we'll be discussing was included in an Oracle eSeminar which I recently viewed on the topic. In my original post, I gave the broad strokes in regards to HA and provided the basic overall architecture of a High Availability deployment. This time, we'll start to dive into some of the specifics regarding the configuration which will be necessary to implement a true shared-nothing HA environment.
Each Presentation Server can be configured to talk with multiple web servers, Java hosts, BI Servers, and BI Schedulers. In this installment we'll cover the Presentation Catalog, web server, and Java host connections to the Presentation Servers, as well as how the user is affected when a Presentation Server fails. The diagram below is a subset of the one shown in my original post on the subject. This figure shows only the components of the HA architecture which we'll be looking at today.

First, let's discuss the web client behavior in a High Availability environment. When a user begins a session, the web client is bound to a specific Presentation Service, and subsequent requests will be sent to that same service. When a Presentation Service failure occurs, the error is relayed back to the browser and any unsaved data will be lost. Upon logging in again, the user will be bound to another available Presentation Service. Two exceptions to this rule would be if the user is using SSO or if the Presentation Services plug-in is configured to automatically reconnect to another server. In these cases, there may still be a loss of session state. There will also be a time lag while the failed server is recognized; this lag will depend on the plug-in ping settings, which we'll get to eventually. Any iBots which fail to complete as a result of a Presentation Service failure will result in an error being passed to the BI Scheduler Server and included in the log file. When the next Presentation Service becomes available, the job is rerun without impact and will start again at the step at which it originally failed.
Next, we'll look at how we would like our Presentation Services to share the Presentation Catalog. There are two basic options which can be deployed. The first option is to use a shared file system, where all Presentation Servers have access to the same shared files. This is the simplest approach and, as I mentioned in my last post, is recommended by Oracle. Alternatively, a more complex method of catalog replication can be deployed through the use of replication agents on each instance, which monitor a single instance for changes and sync the other copies as necessary. Two-way replication, which involves making changes to multiple copies of the Presentation Catalog and attempting to keep them all in sync, is highly discouraged and should be avoided. As you can imagine, this method would make maintaining data integrity much more difficult and complicated.

If you'll be using the shared file approach, the first step will be to point each Presentation Server to the shared file path by editing the <Catalog> element of the instanceconfig.xml file. In addition, we should also make changes to the Presentation Service cache settings. Keep in mind that each instance will have its own cache, which we'll want to configure to ensure it won't go stale. Oracle recommends adding the following settings to each configuration file:
<Catalog>
<ReportIndexRefreshSecs>120</ReportIndexRefreshSecs>
<AccountCacheTimeoutSecs>180</AccountCacheTimeoutSecs>
<PrivilegeCacheTimeoutSecs>180</PrivilegeCacheTimeoutSecs>
<CacheTimeoutSecs>120</CacheTimeoutSecs>
<CacheCleanupSecs>600</CacheCleanupSecs>
</Catalog>
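To illustrate the shared-path piece as well, here's a minimal sketch assuming a UNC path to a shared catalog. The path is a placeholder, and the exact placement of the element within instanceconfig.xml is my assumption, so check the documentation for your version:
<ServerInstance>
<CatalogPath>\\fileserver\OracleBIData\web\catalog\SharedCatalog</CatalogPath>
<Catalog>
<CacheTimeoutSecs>120</CacheTimeoutSecs>
</Catalog>
</ServerInstance>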
Another piece of the puzzle will be to configure the Presentation Servers to work with multiple Java hosts. Once again, we must edit instanceconfig.xml to complete this task. This will involve listing the Java host instances as shown below. The default Java Host port is 9810, but you can verify this by checking the OracleBI_Home\web\javahost\config\config.xml file. Simple load balancing will be performed in a round-robin fashion between all instances listed in the config file.
<JavaHostProxy>
<Hosts>
<Host address="<Javahost Machine1>" port="9810"/>
<Host address="<Javahost Machine2>" port="9810"/>
</Hosts>
</JavaHostProxy>
You may also add an optional LoadBalance/Ping element. This element specifies the criteria for determining whether a Java Host is reachable. The Ping element is not necessary if you wish to keep the defaults, which are 5 pings at 20-second intervals; a sketch of where this element might live follows below.
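Purely as an illustration of the element's placement, assuming the defaults just mentioned. The attribute names here are my assumption, not confirmed syntax:
<JavaHostProxy>
<LoadBalance>
<Ping intervalSeconds="20" maxAttempts="5"/>
</LoadBalance>
</JavaHostProxy>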
The final component we'll look at today is the BI Presentation Services plug-in, which sits on the web servers. Here I'll outline the changes necessary on each web server instance, both for IIS and Java-based servers. IIS web servers will use the ISAPI plug-in, and the config file for this plug-in can be found in the OracleBIData_Home\web\config directory. The only element you must configure is the Hosts element, in which you will list the host and port of all Presentation Service instances. You may also optionally configure the LoadBalancer element, which controls the autoroute feature. The default setting is false, which means that the user will receive an error if the current Presentation Server goes down. Setting this option to true would cause the server to attempt to connect to the next available Presentation Server without impact to the user. You also have the option of adding the Ping element, which is the same element we just discussed when examining the Java host configuration.

The Java Servlet changes necessary for Java-based web server configuration are very similar to the ISAPI configuration mentioned above. You'll need to edit the config file found in the OracleBI_Home\web\app\WEB-INF directory to include all Presentation Server host and port pairs. The <oracle.bi.presentation.sawconnect.loadbalance.AlwaysKeepSessionAffiliation> element is equivalent to the <LoadBalancer> element in the ISAPI plug-in and should be set to "Y" or "N", as in the sketch below.
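A minimal sketch of how this parameter might be set in the servlet config (web.xml). The context-param wrapper is my assumption about where the setting lives; only the parameter name itself comes from the discussion above:
<context-param>
<param-name>oracle.bi.presentation.sawconnect.loadbalance.AlwaysKeepSessionAffiliation</param-name>
<param-value>N</param-value>
</context-param>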
Next time we'll continue to discuss the BI Presentation Server and how it will be configured to talk with the BI Server and Scheduler.

Labels: architecture, High Availability, iBots, Performance, Presentation Server
WEDNESDAY, JANUARY 7, 2009
Oracle BI High Availability
Posted by Brian Ferin at Wednesday, January 07, 2009
I recently viewed an eSeminar on Oracle BI High Availability given by Oracle and thought I'd discuss the fundamentals of High Availability for anyone who's unfamiliar with the concept. High Availability, in the broadest possible terms, is a protocol for system design which ensures a system is running acceptably for a certain percentage of a given time period.
Within OBIEE, availability can be defined basically as the ability to log into the system and perform normal operations at an acceptable and consistent level. This can be accomplished by employing system fault tolerance, which means that SPOFs (single points of failure) must be eliminated. The goal with HA is to create a shared-nothing environment in which any single box can temporarily fail without a major impact to users. During the eSeminar, it was explained that a general goal for a High Availability implementation might be 99.9% availability for a 24/7 system, although I'm sure service level agreements vary greatly from case to case. Using the "three nines" availability percentage, this would calculate to only 8.76 hours of downtime for an entire year, or about 43 minutes a month.
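The arithmetic, for anyone checking: (1 − 0.999) × 8,760 hours in a year = 8.76 hours of allowable downtime, and 8.76 ÷ 12 ≈ 0.73 hours, or roughly 43.8 minutes per month.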

The above diagram should look familiar to most; this is a very simple representation of the OBIEE architecture showing the major components. A high availability deployment would have multiple instances of each of these objects in case an instance fails. An HA implementation will also use a clustered configuration for the BI Server, including both a primary and secondary cluster controller. See the figure below, which is another simplified look at the architecture with the redundant nodes added.

Notice that each of the objects or nodes is connected to multiple instances of every object it must talk with. I've left the catalog, repository, and scheduler database out of this diagram for simplicity's sake, but each of these will be shared by its respective servers. You also may have noticed that the secondary cluster controller isn't depicted here either, but it should be included in any clustered HA setup. It is also possible for each Presentation Server to have its own copy of the Presentation Catalog, but due to the complicated setup and the difficulty of keeping the files in sync, the easier (and Oracle-recommended) approach is to use a shared file system. Although the redundant web servers and their load balancer fall outside of the OBIEE scope, they are necessary to complete a true shared-nothing environment all the way back to the user.
In future posts, I plan to drill into some of the details surrounding the configuration of the separate components of an HA deployment. We'll be looking at some of the configuration file changes which will be necessary, as well as exactly what types of impacts will be seen when specific failures do occur in an HA environment. Stay tuned for the next installment, which will highlight the Presentation Services component.
Labels: architecture, cluster, High Availability, Performance