Table of Contents

Introduction
Hardware Recommendations for Implementing Oracle BI Applications
    Storage Considerations for Oracle Business Analytics Warehouse
        Shared Storage Impact Benchmarks
        Conclusion
Application Server Sizing and Capacity Planning
    Introduction
    WebLogic Admin Server: Memory Settings
    Managed BI Server: Memory Settings
    Managed ODI Server: Memory Settings
    Managed BI Server: Other Recommended Settings
Source Environments Recommendations for Better Performance
    Introduction
    Change Data Capture Considerations for Source Databases
        Source Dependent DataStore with Oracle Golden Gate
        Materialized View Logs
        Database Triggers on Source Tables
    Extract Workload Impact on Data Sources
        Allocate Sufficient TEMP Space for OLTP Data Sources
        Replicate Source Tables to SDS Schema on Target Tier
    Custom Indexes in Oracle EBS for Incremental Loads Performance
        Introduction
        Custom OBIEE indexes in EBS 11i and R12 systems
        Custom EBS indexes in EBS 11i source systems
        Oracle EBS tables with high transactional load
        Additional Custom EBS indexes in EBS 11i source systems
Oracle Warehouse Recommendations for Better Performance
    Database configuration parameters
    REDO Log Files Sizing Considerations
    Oracle RDBMS System Statistics
    Parallel Query configuration
    Oracle Business Analytics Warehouse Tablespaces
Oracle BI Applications Best Practices for Oracle Exadata
    Database Requirements for Analytics Warehouse on Exadata
    Handling BI Applications Indexes in Exadata Warehouse Environment
    Gather Table Statistics for BI Applications Tables
    Oracle Business Analytics Warehouse Storage Settings in Exadata
    Parallel Query Use in BI Applications on Exadata
    Compression Implementation for Oracle Business Analytics Warehouse in Exadata
    OBIEE Queries Performance Considerations on Exadata
    Exadata Smart Flash Cache
Oracle BI Applications High Availability
    Introduction
    High Availability with Oracle Data Guard and Physical Standby Database
Conclusion
Introduction
Oracle Business Intelligence (BI) Applications Version 11g delivers a number of adapters to various business applications on the Oracle database platform. Each Oracle BI Applications implementation requires very careful planning to ensure the best performance during ETL, end-user queries and dashboard executions.
This article discusses performance topics for Oracle BI Applications 11g (11.1.1.7.1 and higher), using Oracle Data Integrator (ODI) 11g 11.1.1.7.1 and Oracle Business Intelligence Enterprise Edition (OBIEE) 11.1.1.x. Most of the recommendations are generic for BI Applications 11g content and its BI tech stack. Release-specific topics reference exact version numbers.
Note: The document is intended for experienced Oracle BI Administrators, DBAs and Applications implementers. It covers advanced performance tuning techniques in ODI, OBIEE and Oracle RDBMS, so all recommendations must be carefully verified in a test environment before being applied to a production instance. Customers are encouraged to engage Oracle Expert Services to review their configurations prior to implementing the recommendations in their BI Applications environments.
Data warehouse volume:       SMALL            MEDIUM            LARGE
                             Up to 200 Gb     200 Gb to 1 Tb    1 Tb and higher

Database (target) tier:
# CPU cores                  16               32                64*
Physical RAM                 32-64 Gb         64-128 Gb         256+ Gb*
Storage Space                Up to 400 Gb     400 Gb to 2 Tb    2 Tb and higher
Storage System

Application server tier:
# CPU cores                  16               32
Physical RAM                 24 Gb            32 Gb             64 Gb
Storage Space                100 Gb local     200 Gb local      400 Gb local
* Consider implementing Oracle RAC with multiple nodes to accommodate large numbers of concurrent users accessing web reports and dashboards.
Important!
- The configurations above cover ODI Agent, Load Plan Generator (LPG) and BIA Configuration Manager (BIACM), all collocated on the same hardware as OBIEE. The recommended specifications primarily accommodate the OBIEE workload; neither ODI nor LPG and BIACM would generate noticeable overhead. If you plan to deploy OBIEE on a separate server (farm), then you can use a less powerful configuration for ODI, LPG and BIACM. Refer to Oracle WebLogic documentation for detailed hardware requirements.
- The internal benchmarks did not show any noticeable workload from ODI agent processes. Oracle-to-Oracle configurations can effectively use a database link knowledge module, further minimizing the impact from ODI processes.
- ODI deployments with agent processes running on separate servers, or agents load-balancing on multiple servers, are not covered in this document. Refer to BI Applications and ODI documentation for more information on such configurations.
- Depending on the number of planned concurrent users running OBIEE reports, you may have to plan for more memory on the target tier to accommodate the query workload.
- To ensure query scalability on the OBIEE tier, consider implementing an OBIEE Cluster or Oracle Exalytics. Refer to OBIEE and Exalytics documentation for more details.
- It is recommended to set up all Oracle BI Applications tiers in the same local area network. Deploying any of its tiers over a Wide Area Network (WAN) may cause additional delays during ETL Extract mappings execution and impact Load Plan windows.
Setting excessive parallel query processes (refer to Parallel Query Configuration section for more details)
The shared storage impact benchmark used the following configuration:
- 32 GB RAM
- Shared NetApp filer volumes, volume1 and volume2, mounted as EXT3 file systems
- Record block size for I/O operations set to 8K or 16K, matching the recommended block size in the target database
- Parallel load executed using eight child processes to imitate average workload during an ETL run

The benchmark measured the following operations:
- Random Read: read a file with accesses made to random locations in the file.
- Random Write: write a file with accesses made to random locations in the file.
- Mixed workload: read and write a file with accesses made to random locations in the file.
- Strided Read: read a file with a strided access behavior, for example: read at offset zero for a length of 4 Kbytes, seek 200 Kbytes, read for a length of 4 Kbytes, then seek 200 Kbytes, and so on.
Operation         Test #1               Test #2
Initial Write     46087.10 KB/sec       30039.90 KB/sec
Rewrite           70104.05 KB/sec       30106.25 KB/sec
Read              3134220.53 KB/sec     2078320.83 KB/sec
Re-read           3223637.78 KB/sec     3038416.45 KB/sec
Reverse Read      1754192.17 KB/sec     1765427.92 KB/sec
Stride Read       1783300.46 KB/sec     1795288.49 KB/sec
Random Read       1724525.63 KB/sec     1755344.27 KB/sec
Mixed Workload    2704878.70 KB/sec     2456869.82 KB/sec
Random Write      68053.60 KB/sec       25367.06 KB/sec
Pwrite            45778.21 KB/sec       23794.34 KB/sec
Pread             2837808.30 KB/sec     2578445.19 KB/sec
Total Time        110 min               216 min
Initial Write, Rewrite, Initial Read, Random Write and Pwrite (buffered write operations) were impacted the most by the concurrent load, while Reverse Read, Stride Read, Random Read, Mixed Workload and Pread (buffered read operations) were impacted the least.
Read operations do not require specific RAID sync-up operations, so read requests are less dependent on the number of concurrent threads.
Conclusion
You should carefully plan storage deployment, configuration and usage for the Oracle BI Applications environment. Avoid sharing the same RAID controller(s) across multiple databases. Set up periodic monitoring of your I/O system during both ETL and end-user query loads to catch any potential bottlenecks.
ODI Console
ODI Agent
Important! It is not recommended to collocate the Application Server tier with the Data Warehouse tier for the same reasons.
The next sections cover the above components sizing parameters for better performance.
-Dweblogic.Name=AdminServer
OBIEE and LPG are the most critical components deployed under bi_server1, which may require the largest amounts of
memory. They are covered in the section below.
OBIEE Memory Recommendations
Oracle BI Enterprise Edition end-user reports and dashboards, executed concurrently, may result in high memory consumption. OBIEE caching can mitigate the workload impact and reduce the memory footprint. Business-wide BI implementations with complex end-user query patterns, ad-hoc reports from BI Answers, regular iBot queries, and stored reports and dashboards running against multiple functional areas typically get a 25-40% OBIEE cache hit ratio.
Important! Oracle has published a separate paper (Oracle BI EE 11g Architectural Deployment: Capacity Planning Doc ID
1323646.1) with BI Server sizing calculations, so OBIEE scalability benchmarks are outside the scope of this technote.
The internal benchmarks for 150 VUsers with think time = 5 sec, running non-cached reports against a medium-sized data warehouse, showed a peak of ~5.5 Gb of memory used by OBIEE. So, the recommended memory allocation for OBIEE with min = 2 Gb and max = 6 Gb should be sufficient for most initial rollouts. Make sure you monitor your Application Server tier for workload and memory usage during BI report querying, and increase the memory settings as needed.
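As a sketch, the min/max heap recommendation above can be applied to the managed BI server JVM via the standard WebLogic USER_MEM_ARGS variable; the domain path and managed server name depend on your installation:

```shell
# Sketch: heap settings for the managed BI server JVM, per the
# min = 2 Gb / max = 6 Gb recommendation above. Add to
# <DOMAIN_HOME>/bin/setDomainEnv.sh and adjust as the workload grows.
USER_MEM_ARGS="-Xms2g -Xmx6g"
export USER_MEM_ARGS
```

Restart the managed server after changing the memory arguments.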
Load Plan Generator Memory Recommendations
LPG generates an execution plan for a chosen adapter and functional areas. It is usually run only once or twice, primarily during the BI Applications implementation phase. LPG requires up to a 3 Gb max heap size to complete generating a Load Plan; otherwise the LPG process may hang after running out of memory.
Note, Load Plan generation is expected to be performed during implementation only, so LPG should not compete with OBIEE
for available memory.
Note: you have to update your ODI repository to use the replicated persistent staging tables or materialized views instead of
the original source tables in ODI scenarios.
Source Dependent DataStore with Oracle Golden Gate
Oracle BI Applications 11g provides support for a Source Dependent DataStore (SDS) with the Oracle Golden Gate (GG) option out-of-the-box. This is the recommended configuration, especially if the source OLTP applications do not have native CDC support. This option provides the best flexibility and performance for CDC, and the least impact on source databases. Golden Gate parses each captured record and marks it as an insert, update or delete. Refer to the Golden Gate / OLTP source documentation for more details on integrating and configuring Golden Gate for your source database.
Materialized View Logs
Introduction
Oracle Materialized View (MV) Logs capture the changing data in base source tables and supply the critical CDC volumes to the
extract mappings.
Important! MV Logs present additional challenges, when used in OLTP environments. You should carefully test MV Log based
CDC before implementing it in your production environment.
Review the following constraints for using MV Logs:
1. MV Logs can cause additional overhead on business transactions performance, if created on heavy volume
transactional tables in busy OLTP sources.
2. Ensure regular MV refresh to purge MV Logs. Otherwise they will grow in size and generate even more overhead for
OLTP applications.
3. Avoid sharing an MV Log between two or more fast refreshable MVs. The MV Log will not be purged until all
depending MVs are refreshed.
Refer to Oracle documentation for more details on MV and MV Logs implementation.
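For constraint 2 above, MV Log growth can be checked and, if needed, purged explicitly. A sketch follows; the MLOG$_ segment name follows Oracle's naming for the PS_PROJ_RESOURCE example used below, and the dbms_mview.purge_log arguments assume all dependent MVs can tolerate the purge:

```sql
-- Check the MV Log segment size for the base table (sketch)
select segment_name, bytes/1024/1024 as mb
  from dba_segments
 where segment_name = 'MLOG$_PS_PROJ_RESOURCE';

-- Purge the MV Log explicitly when dependent MVs are not refreshing it
exec dbms_mview.purge_log('PS_PROJ_RESOURCE', 9999, 'DELETE');
```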
The next sections will use an example for using an MV Log on PS_PROJ_RESOURCE in PeopleSoft to speed up incremental
extract for SDE_PSFT_ProjectBudgetFact mapping.
MV Log CDC Implementation
The PeopleSoft ESA Application does not maintain the DTTM_STAMP column in PS_PROJ_RESOURCE, which is used in the SDE_PSFT_ProjectBudgetFact extract logic. As a result, the optimizer uses an expensive full table scan during incremental extract SQL execution.
The following steps describe the CDC implementation using MV Log approach:
1. Create An MV log on PS_PROJ_RESOURCE source table:
CREATE MATERIALIZED VIEW LOG ON PS_PROJ_RESOURCE NOCACHE LOGGING NOPARALLEL WITH SEQUENCE;
3. Create a Materialized View using PS_PROJ_RESOURCE definition and an additional LAST_UPDATE_DT column. The
latter will be populated using SYSDATE values:
5. Create a database view on the MV, which will be used in the SDE Fact Source Qualifier query:
CREATE VIEW OBIEE_PS_PROJ_RESOURCE_VW AS SELECT * FROM OBIEE_PS_PROJ_RESOURCE_MV;
6. Run the complete refresh for the MV. The subsequent daily ETLs will perform fast refresh using the MV Log.
exec dbms_mview.refresh('OBIEE_PS_PROJ_RESOURCE_MV', 'C');
7. Update the SDE fact extract logic and replace the original table with the MV, and add an additional filter:
LAST_UPDATE_DT > to_date('$$LAST_EXTRACT_DATE', 'MM/DD/YYYY HH24:MI:SS')
8. Save the changes and re-generate the updated scenario in ODI Studio.
Adding a unique index on the auxiliary CDC table's primary key column will speed up updates.
Carefully measure the impact on your source OLTP workload before you choose the trigger CDC approach, as it can easily generate significant overhead and impact transactional business users.
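A minimal sketch of the trigger-based CDC approach follows, assuming a hypothetical source table SRC_TABLE with primary key column PK_COL; all object names here are illustrative:

```sql
-- Hypothetical auxiliary CDC table recording the key and change time of each modified row
CREATE TABLE OBIEE_SRC_TABLE_CDC (
  row_pk         NUMBER NOT NULL,
  last_update_dt DATE   NOT NULL
);

-- Unique index on the primary key column speeds up the trigger's updates
CREATE UNIQUE INDEX OBIEE_SRC_TABLE_CDC_U1 ON OBIEE_SRC_TABLE_CDC (row_pk);

CREATE OR REPLACE TRIGGER OBIEE_SRC_TABLE_CDC_TRG
AFTER INSERT OR UPDATE ON SRC_TABLE
FOR EACH ROW
BEGIN
  -- Keep one row per key with the latest change timestamp
  MERGE INTO OBIEE_SRC_TABLE_CDC c
  USING (SELECT :new.PK_COL AS row_pk FROM dual) s
     ON (c.row_pk = s.row_pk)
   WHEN MATCHED THEN UPDATE SET c.last_update_dt = SYSDATE
   WHEN NOT MATCHED THEN INSERT (row_pk, last_update_dt)
        VALUES (s.row_pk, SYSDATE);
END;
/
```

The extract mapping can then join to the auxiliary table on the key and filter on last_update_dt.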
- Tables that do not have indexes on LAST_UPDATE_DATE in the latest EBS releases, though no performance implications have been reported for indexes on the LAST_UPDATE_DATE column.
- Tables that have indexes on LAST_UPDATE_DATE columns, introduced in Oracle EBS Release 12.
- Tables that cannot have indexes on LAST_UPDATE_DATE because of serious performance degradation in the source EBS environments.
- EBS R12
- EBS 11i release 11.5.9 or lower that has been migrated to OATM*
AP.AP_EXPENSE_REPORT_HEADERS_ALL(LAST_UPDATE_DATE)
AP.AP_INVOICE_PAYMENTS_ALL(LAST_UPDATE_DATE)
There is one more custom index, recommended for Supply Chain Analytics on AP_NOTES.SOURCE_OBJECT_ID column:
CREATE index AP.OBIEE_AP_NOTES ON AP.AP_NOTES (SOURCE_OBJECT_ID) tablespace <IDX_TABLESPACE> ;
Important! You must use FND_STATS to compute statistics on the newly created indexes and update statistics on
newly indexed table columns in the EBS database.
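A sketch for the AP_NOTES index created above; the FND_STATS procedures named here are the standard EBS ones, but verify their signatures in your EBS release:

```sql
-- Compute statistics for the new custom index and refresh table/column statistics
exec FND_STATS.GATHER_INDEX_STATS('AP', 'OBIEE_AP_NOTES');
exec FND_STATS.GATHER_TABLE_STATS('AP', 'AP_NOTES');
```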
Important! All indexes introduced in this section have the prefix OBIEE_ and do not follow the standard Oracle EBS index naming conventions. If a future Oracle EBS patch creates an index on the LAST_UPDATE_DATE column of one of the tables listed below, Oracle EBS Autopatch may fail. In such cases the conflicting OBIEE_ indexes must be dropped, and Autopatch can be restarted.
Custom EBS indexes in EBS 11i source systems
The second category covers tables which have indexes on LAST_UPDATE_DATE that were officially introduced in Oracle EBS Release 12. All Oracle EBS 11i and R12 customers should create the custom indexes using the DDL script provided below. Do not change the index names, to avoid any future patch or upgrade failures on the source EBS side.
If your source system is one of the following:
- EBS R12
- EBS 11i release 11.5.9 or lower that has been migrated to OATM*
Important! You should use FND_STATS to compute statistics on the newly created indexes and update statistics on
newly indexed table columns in the EBS database.
Since all custom indexes above follow Oracle EBS index standard naming conventions, any future upgrades would not be
affected.
*) Oracle Applications Tablespace Model (OATM):
Oracle EBS release 11.5.9 and lower uses two tablespaces for each Oracle Applications product, one for the tables and
one for the indexes. The old tablespace model standard naming convention for tablespaces is a product's Oracle
schema name with the suffixes D for Data tablespaces and X for Index tablespaces. For example, the default
tablespaces for Oracle Payables tables and indexes are APD and APX, respectively.
Oracle EBS 11.5.10 and R12 use the new Oracle Applications Tablespace Model. OATM uses 12 locally managed
tablespaces across all products. Indexes on transaction tables are held in a separate tablespace APPS_TS_TX_IDX,
designated for transaction table indexes.
Customers running pre-11.5.10 releases can migrate to OATM using OATM Migration utility. Refer to Oracle Support
Note 248857.1 for more details.
Oracle EBS tables with high transactional load
The following Oracle EBS tables are used for high volume transactional data processing, so introducing indexes on LAST_UPDATE_DATE may cause additional overhead for some OLTP operations. For the majority of customer implementations the changes will not have any significant impact on OLTP application performance. Oracle BI Applications customers may consider creating custom indexes on LAST_UPDATE_DATE for these tables only after benchmarking incremental ETL performance and analyzing the OLTP application impact.
To analyze the impact on EBS source database, you can generate an Automatic Workload Repository (AWR) report during the
execution of OLTP batch programs, producing heavy inserts / updates into the tables below, and review Segment Statistics
section for resource contentions caused by custom LAST_UPDATE_DATE indexes. Refer to Oracle RDBMS documentation for
more details on AWR usage.
Make sure you use the following pattern for creating custom indexes on the tables listed below:
CREATE index <Prod>.OBIEE_<Table_Name> ON <Prod>.<Table_Name> (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
Prod    Table Name
AP      AP_EXPENSE_REPORT_LINES_ALL
AP      AP_INVOICE_DISTRIBUTIONS_ALL
AP      AP_AE_LINES_ALL
AP      AP_PAYMENT_HIST_DISTS
AR      AR_PAYMENT_SCHEDULES_ALL
AR      AR_RECEIVABLE_APPLICATIONS_ALL
AR      RA_CUST_TRX_LINE_GL_DIST_ALL
AR      RA_CUSTOMER_TRX_LINES_ALL
BOM     BOM_COMPONENTS_B
BOM     BOM_STRUCTURES_B
CST     CST_ITEM_COSTS
GL      GL_BALANCES
GL      GL_DAILY_RATES
GL      GL_JE_LINES
INV     MTL_MATERIAL_TRANSACTIONS
INV     MTL_SYSTEM_ITEMS_B
ONT     OE_ORDER_LINES_ALL
PER     PAY_PAYROLL_ACTIONS
PO      RCV_SHIPMENT_LINES
WSH     WSH_DELIVERY_ASSIGNMENTS
WSH     WSH_DELIVERY_DETAILS
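For example, applying the pattern above to GL_BALANCES; the OATM index tablespace APPS_TS_TX_IDX is used here for illustration:

```sql
CREATE index GL.OBIEE_GL_BALANCES
    ON GL.GL_BALANCES (LAST_UPDATE_DATE)
    tablespace APPS_TS_TX_IDX;
```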
Both 11g and 12c customers can use the following template:
db_name                      = <database name>
control_files                = /<dbf file loc>/ctrl01.dbf, /<dbf file loc>/ctrl02.dbf
db_block_size                = 8192
processes                    = 500
db_files                     = 1024
cursor_sharing               = EXACT
cursor_space_for_time        = FALSE
session_cached_cursors       = 500
open_cursors                 = 500
nls_sort                     = BINARY
trace_enabled                = FALSE
audit_trail                  = NONE
_trace_files_public          = TRUE
timed_statistics             = TRUE
statistics_level             = TYPICAL
sga_target                   = 8G
pga_aggregate_target         = 4G
workarea_size_policy         = AUTO
db_block_checking            = FALSE
db_block_checksum            = TYPICAL
db_writer_processes          = 2
log_checkpoint_timeout       = 1800
log_checkpoints_to_alert     = TRUE
undo_management              = AUTO
undo_tablespace              = UNDOTS1
undo_retention               = 90000
job_queue_processes          = 10
parallel_adaptive_multi_user = FALSE
parallel_max_servers         = 16
parallel_min_servers         = 0
star_transformation_enabled  = TRUE
query_rewrite_enabled        = TRUE
query_rewrite_integrity      = TRUSTED
_b_tree_bitmap_plans         = FALSE
plsql_code_type              = NATIVE
disk_asynch_io               = FALSE
fast_start_mttr_target       = 3600
Review the template file above and adjust your target database parameters specific to your data warehouse tier hardware.
Note: init.ora template for Exadata / 11gR2 is provided in Exadata section of this document.
Most ODI scenario SQLs perform conventional inserts into i$ interface tables during ETL runs. With sub-optimally sized REDO logs you may get a lot of log file switch (checkpoint incomplete) wait events in your AWR reports during ETL runs.
To minimize the impact from log file switch (checkpoint incomplete) wait events and improve performance for conventional
inserts, increase the size for your REDO files. You can query your database dictionary to find the optimal size (in Mb):
select OPTIMAL_LOGFILE_SIZE from V$INSTANCE_RECOVERY;
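Redo log groups cannot be resized in place; a sketch of the usual add-and-drop sequence follows (the group numbers, file locations and the 1 Gb size are examples):

```sql
-- Add new, larger redo log groups
alter database add logfile group 4 ('/<redo file loc>/redo04.log') size 1G;
alter database add logfile group 5 ('/<redo file loc>/redo05.log') size 1G;
alter database add logfile group 6 ('/<redo file loc>/redo06.log') size 1G;

-- Switch out of the old groups, then drop them once they become INACTIVE
alter system switch logfile;
alter database drop logfile group 1;
```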
If your data warehouse hardware does not support asynchronous I/O, then you can improve conventional insert performance by setting DBWR_IO_SLAVES in init.ora to a non-zero value.
The internal benchmarks for running large inserts into an i$ table without asynchronous I/O support showed the best performance for conventional inserts with 2-3 DB Writer processes, 3 x 1Gb Redo Logs, and DBWR_IO_SLAVES = 1:
The benchmark matrix covered direct path (insert append) and conventional (insert new rows) loads, each executed with DBWR_IO_SLAVES set to 0, 1 and 2:
- insert append: 2 dbwr with 6x100M Redo logs; 2 dbwr with 3x1G Redo logs; 3 dbwr with 3x1G Redo logs
- insert new rows (conventional): 2 dbwr with 6x100M Redo logs; 4 dbwr with 6x100M Redo logs; 3 dbwr with 6x100M Redo logs; 3 dbwr with 3x1G Redo logs; 3 dbwr with 4x500M Redo logs; 3 dbwr with 6x500M Redo logs; 2 dbwr with 3x1G Redo logs
Run the dbms_stats.gather_system_stats('start') procedure at the beginning of the workload window, then the dbms_stats.gather_system_stats('stop') procedure at the end of it.
Important! Execute dbms_stats.gather_system_stats when the database is not idle: Oracle computes the desired system statistics when the database is under significant workload. Usually half an hour is sufficient to generate valid statistics values.
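The collection steps above can be sketched as follows; the aux_stats$ query simply shows the gathered values:

```sql
exec dbms_stats.gather_system_stats('start');
-- ... run a representative ETL / query workload for about 30 minutes ...
exec dbms_stats.gather_system_stats('stop');

-- Review the collected system statistics
select pname, pval1 from sys.aux_stats$;
```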
OBJECTS         PURPOSE
Temporary       Database temporary segments. Initial ETL scenarios process very complex SQLs with multiple join operations, which actively use temporary segments stored in the TEMP tablespace. TEMP can fill up very fast when processing multiple concurrent SQLs with heavy joins.
SDS             If you implemented the SDS option, use a separate tablespace for replicated source objects and their indexes. The SDS tablespace should be sized based on the source tables' footprint in OLTP.
Interface       ODI interface tables are dropped and re-created for each ETL run. By separating them into a dedicated tablespace you can resize your interface tablespace after initial ETL, or create the tablespace as compressed.
Stage           Staging tables are always truncated in each ETL run, so they can be deployed in a separate tablespace. Note that the Stage tablespace can grow very large during initial ETL.
Target Data     Target Data segments. The Target Data tablespace stores the data warehouse table segments.
Target Index    Target Index segments. The Target Index tablespace stores all indexes on data warehouse tables.
1. BI Applications ODI scenarios use ODI interface tables (c$, i$, e$, etc.) for data processing, transformation and error logging operations. The typical data movement in 11g ETL can be presented as: source -> c$ -> stage table -> i$ -> target table. When sizing your data warehouse, you need to plan for additional space for ODI interface tables:
BI Applications Load Plan Initial executions bypass i$ tables and load data directly into the target tables, so i$
segments do not consume any space in initial ETLs.
Important! You can conserve additional space and improve the performance of your extract scenarios by switching from the default JDBC Load Knowledge Module (LKM) to the Database Link KM. The DBLink KM creates views on the source and c$ synonyms pointing to the source views over a database link. The use of the DBLink KM further reduces ETL data movement, saves space and improves extract (SDE) scenario performance. Refer to the ODI KM documentation for more details.
BI Apps Load Plans drop and re-create the interface tables within every single scenario for each ETL run.
If you use JDBC LKM, estimate the extracted volumes for your largest facts, executed in parallel, and then sum
up the volumes to find out the maximum space, consumed by c$ tables in your initial ETLs. Ignore this step, if
you use DBLink KM.
Estimate your facts incremental volumes, processed concurrently, and then sum them up to find out the
maximum space, consumed by i$ tables.
2. While stage segments consume space, almost equivalent to target segments footprint during initial ETL, they get
truncated in subsequent incremental runs, so their space allocation will be driven by the incremental volumes. If all
scenarios support incremental logic, then stage objects space may consume from 5 to 20% of its initially allocated
tablespace. So, you can resize your stage tablespace after completing initial ETL.
3. Depending on your hardware configuration, you may consider isolating the staging tablespace and the target Data tablespace on different controllers. Such a configuration helps speed up Target Load (SIL) mappings for fact tables by balancing I/O load across multiple RAID controllers.
4. Temporary tablespace needs to be sized to accommodate for initial ETL. Since BI Applications scenarios do all
transformations in database, they may produce heavy joins, and often use temporary segments for storing interim
result sets, while going through execution plan operations. Make sure you allocate enough space in your Temporary
tablespace(s) to accommodate for parallel processing during initial ETL. Typically Fact tables processing in parallel
consumes the most TEMP space in initial loads.
5. SDS tablespace sizing is not covered in this document, since its footprint depends on implemented functional areas.
You can estimate its size by checking space of source tables and indexes, which will be replicated to SDS.
6. During incremental loads, by default, Load Plan drops and rebuilds indexes, so you should separate all indexes in a
dedicated tablespace and, if you have multiple RAID / IO Controllers, move the INDEX tablespace to a separate
controller.
7. Note that the Target INDEX Tablespace may increase, if you enable more query indexes in your data warehouse.
The following table summarizes uncompressed space allocation estimates in a data warehouse by its target data volume range:

                          SMALL       MEDIUM              LARGE
Target Data Volume                    50 Gb and higher    1 Tb and higher
Target Data Tablespace    30+ Gb      150+ Gb             400+ Gb
Temporary Tablespace      50+ Gb      100+ Gb             300+ Gb
Stage Tablespace          50+ Gb      200+ Gb             1+ Tb
Interface Tablespace      20 Gb       50 Gb               100+ Gb
Important! You should use Locally Managed tablespaces with the AUTOALLOCATE clause. DO NOT use UNIFORM extent sizes, as they may cause excessive space consumption and result in slower query performance.
Use the standard (primary) block size for your warehouse tablespaces. DO NOT build your warehouse on non-standard block size tablespaces.
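A sketch of a warehouse tablespace following these rules; the tablespace name, file location and sizes are placeholders:

```sql
CREATE TABLESPACE BIAPPS_DATA
  DATAFILE '/<dbf file loc>/biapps_data01.dbf' SIZE 10G AUTOEXTEND ON
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;
```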
init.ora template for Exadata:

db_name                      = <database name>
control_files                = /<dbf file loc>/ctrl01.dbf, /<dbf file loc>/ctrl02.dbf
db_block_size                = 8192
db_block_checking            = FALSE
db_block_checksum            = TYPICAL
deferred_segment_creation    = TRUE
user_dump_dest               = /<DUMP_HOME>/admin/<dbname>/udump
background_dump_dest         = /<DUMP_HOME>/admin/<dbname>/bdump
core_dump_dest               = /<DUMP_HOME>/admin/<dbname>/cdump
max_dump_file_size           = 20480
processes                    = 500
sessions                     = 4
db_files                     = 1024
session_max_open_files       = 100
dml_locks                    = 1000
cursor_sharing               = EXACT
cursor_space_for_time        = FALSE
session_cached_cursors       = 500
open_cursors                 = 1000
db_writer_processes          = 2
aq_tm_processes              = 1
job_queue_processes          = 2
timed_statistics             = true
statistics_level             = typical
sga_max_size                 = 45G
sga_target                   = 40G
shared_pool_size             = 2G
shared_pool_reserved_size    = 100M
workarea_size_policy         = AUTO
pre_page_sga                 = FALSE
pga_aggregate_target         = 16G
log_checkpoint_timeout       = 3600
log_checkpoints_to_alert     = TRUE
log_buffer                   = 10485760
undo_management              = AUTO
undo_tablespace              = UNDOTS1
undo_retention               = 90000
parallel_adaptive_multi_user = FALSE
parallel_max_servers         = 128
parallel_min_servers         = 32
ETL indexes for optimizing ETL performance and ensuring data integrity
Exadata Storage Indexes functionality cannot be considered an unconditional replacement for BI Applications indexes. You should employ storage indexes only in those cases where BI Applications query indexes deliver inferior performance, and only after you have run comprehensive tests to ensure no regressions for all other queries without the query indexes.
Do not drop any ETL indexes, as you may not only impact your ETL performance but also compromise data integrity in your
warehouse.
The best practices for handling BI Applications indexes in Exadata Warehouse:
Turn on Index usage monitoring to identify any unused indexes and drop / disable them in your env. Refer to the
corresponding section in the document for more details.
Consider building custom aggregates to pre-aggregate more data and simplify queries performance.
Drop selected query indexes and disable them in ODI LPs to use Exadata Storage Indexes / Full Table Scans only after
running comprehensive benchmarks and ensuring no impact on any other queries performance.
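Turning on index usage monitoring, as recommended above, can be sketched as follows; the index name W_EXAMPLE_F_IDX1 is a hypothetical placeholder:

```sql
-- Turn on usage monitoring for a candidate index (hypothetical name)
ALTER INDEX W_EXAMPLE_F_IDX1 MONITORING USAGE;

-- ... run a representative query workload, then check whether the index was used
SELECT index_name, table_name, used, monitoring
  FROM v$object_usage
 WHERE index_name = 'W_EXAMPLE_F_IDX1';

-- Stop monitoring once the drop / keep decision is made
ALTER INDEX W_EXAMPLE_F_IDX1 NOMONITORING USAGE;
```

Only indexes that show USED = 'NO' after a full representative workload are candidates for dropping or disabling.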
- The recommended database block size (db_block_size parameter) is 8K. You may consider using a 16K block size as well,
primarily to achieve a better compression rate, as Oracle applies compression at the block level. Refer to the init.ora
template in the section below.
- Make sure you use locally managed tablespaces with the AUTOALLOCATE option. DO NOT use UNIFORM extent size for
your warehouse tablespaces.
- Use your primary database block size of 8K (or 16K) for your warehouse tablespaces. It is NOT recommended to use nonstandard block size tablespaces for deploying a production warehouse.
- Use an 8Mb large extent size for partitioned fact tables and non-partitioned large segments, such as dimensions,
hierarchies, etc. You will have to manually specify INITIAL and NEXT extent sizes of 8Mb for non-partitioned segments.
- Set deferred_segment_creation = TRUE to defer segment creation until the first record is inserted. Refer to the init.ora
section below.
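The tablespace and extent recommendations above can be sketched as follows; the tablespace name BIA_DATA, the datafile path and sizes, and the table W_EXAMPLE_D are illustrative assumptions:

```sql
-- Locally managed tablespace with AUTOALLOCATE (names and sizes are hypothetical)
CREATE TABLESPACE BIA_DATA
  DATAFILE '/<dbf file loc>/bia_data01.dbf' SIZE 10G AUTOEXTEND ON
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;

-- Non-partitioned large segment with manually specified 8Mb INITIAL / NEXT extents
CREATE TABLE W_EXAMPLE_D (
  row_wid NUMBER(10)
) TABLESPACE BIA_DATA
  STORAGE (INITIAL 8M NEXT 8M);
```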
You should benchmark query performance prior to implementing the changes in your Production environment.
- Consider implementing compression after running an Initial ETL. The initial ETL plan contains several mappings with heavy
updates, which could impact your ETL performance.
- Implement large fact table partitioning and compress inactive historic partitions only. Make sure that the active ones
remain uncompressed.
- Choose Basic, Advanced or HCC compression types for your compression candidates.
- Periodically review the allocated space for a compressed segment, and check such stats as num_rows, blocks and
avg_row_len in the user_tables view. For example, the following compressed segment needs to be re-compressed, as it
consumes too many blocks:
Num_rows      Avg_row_len   Blocks      Compression
541823382     181           13837818    ENABLED
The simple calculation (num_rows * avg_row_len / 8K block size) + 16% (block overhead) gives ~13.8M blocks for an
uncompressed segment. Since the compressed segment occupies roughly the same number of blocks, it should be re-compressed to reduce its footprint and improve its query performance.
Refer to the Table Compression Implementation Guidelines section in this document for additional information on compression
for BI Applications Warehouse.
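The footprint check described above can be sketched in SQL; the 16% overhead factor and the 90% threshold are illustrative assumptions:

```sql
-- Flag compressed tables whose actual size is close to the estimated
-- uncompressed footprint (num_rows * avg_row_len / 8192-byte block, +16% overhead),
-- i.e. tables where compression is no longer effective
SELECT table_name,
       num_rows,
       avg_row_len,
       blocks,
       ROUND(num_rows * avg_row_len / 8192 * 1.16) AS est_uncompressed_blocks
  FROM user_tables
 WHERE compression = 'ENABLED'
   AND blocks > 0.9 * (num_rows * avg_row_len / 8192 * 1.16);
```

Any table returned by this query is a candidate for re-compression.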
4. Verify that your generated explain (and execution) plans use hash join operators rather than nested loops.
Important! You should conduct comprehensive testing with all recommended techniques in place before dropping your query
indexes.
The Exadata Storage Server will cache data for the W_PARTY_D table more aggressively and will try to keep the data from this
table cached longer than data from other tables.
Important! Use manual Flash Cache pinning only for the most common critical tables.
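Pinning a table in Exadata Smart Flash Cache, as described above for W_PARTY_D, can be sketched as:

```sql
-- Ask Exadata Smart Flash Cache to keep W_PARTY_D blocks more aggressively
ALTER TABLE W_PARTY_D STORAGE (CELL_FLASH_CACHE KEEP);

-- Revert to the default caching policy when the table no longer needs pinning
ALTER TABLE W_PARTY_D STORAGE (CELL_FLASH_CACHE DEFAULT);
```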
- End user data could be inconsistent during ETL runs, causing invalid or incomplete results on dashboards
- ETL runs may result in significant hardware resource consumption, slowing down end user queries
The time to execute periodic incremental loads depends on a number of factors, such as the number of source databases, each
source database's incremental volume, hardware specifications, environment configuration, etc. As a result, incremental
loads may not always complete within a predefined blackout window and can cause extended downtime.
Global businesses operating around the clock cannot always afford a few hours of downtime. Such customers can
consider implementing a high availability solution using Oracle Data Guard with a physical Standby database.
High Availability with Oracle Data Guard and Physical Standby Database
An Oracle Data Guard configuration contains a primary database and supports up to nine standby databases. A standby database
is a copy of a production database, created from its backup. There are two types of standby databases: physical and logical.
A physical standby database must be physically identical to its primary database on a block-for-block basis. Data Guard
synchronizes a physical standby database with its primary one by applying the primary database redo logs. The standby
database must be kept in recovery mode for Redo Apply. The standby database can be opened in read-only mode in-between
redo synchronizations.
The advantage of a physical standby database is that Data Guard applies the changes very fast, using low-level mechanisms and
bypassing SQL layers.
A logical standby database is created as a copy of a primary database, but it can later be altered to a different structure. Data
Guard synchronizes a logical standby database by transforming the data from the primary database redo logs into SQL statements and
executing them in the standby database.
A logical standby database has to be open at all times to allow Data Guard to perform SQL updates.
Important! A primary database must run in ARCHIVELOG mode at all times.
Data Guard with the Physical Standby Database option provides both efficient and comprehensive disaster recovery and a
reliable high availability solution to Oracle BI Applications customers. Redo Apply for the Physical Standby option synchronizes a
Standby Database much faster compared to SQL Apply for a Logical Standby. OBIEE does not require write access to the BI
Applications Data Warehouse either for executing end user logical SQL queries or for developing additional content in the RPD or
Web Catalog.
Internal benchmarks on low-range, outdated hardware showed four times faster Redo Apply on a physical standby
database compared to ETL execution on a primary database:
Step Name                                 Row Count    Redo Size   Primary DB    Standby Redo
                                                                   Run Time      Apply Time
SDE_ORA_SalesProductDimension_Full        2621803      621 Mb      01:59:31      00:10:20
SDE_ORA_CustomerLocationDimension_Full    4221350      911 Mb      04:11:07      00:16:35
SDE_ORA_SalesOrderLinesFact_Full          22611530     12791 Mb    09:17:19      03:16:04
n/a                                       n/a          610 Mb      00:24:31      00:08:23
Total                                     29454683     14933 Mb    15:52:28      03:51:22
The target hardware was intentionally configured on a low-range Sun server, with both Primary and Standby databases
deployed on the same server, to imitate a heavy incremental load. Modern production systems, with primary and standby
databases deployed on separate servers, are expected to deliver up to 8-10 times better Redo Apply time on a physical
standby database, compared to the ETL execution time on the primary database.
The diagram below describes the Data Guard configuration with a Physical Standby database:
- The primary instance runs in FORCE LOGGING mode and serves as a target database for routine incremental ETL or
any maintenance activities such as patching or upgrade.
- The Physical Standby instance runs in read-only mode during ETL execution on the Primary database.
- When the incremental ETL load into the Primary database is over, the DBA schedules the downtime or blackout window
on the Standby database for applying redo logs.
- The DBA shuts down the OBIEE tier and switches the Physical Standby database into RECOVERY mode.
- The DBA starts Redo Apply in Data Guard to apply the generated redo logs to the Physical Standby Database.
- The DBA opens the Physical Standby Database in read-only mode and starts the OBIEE tier:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN;
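Switching the standby back into recovery mode and starting Redo Apply can be sketched as follows; the DISCONNECT FROM SESSION clause, which runs the apply in the background, is one common option:

```sql
-- Run on the standby instance: restart into MOUNT state and start Redo Apply
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```

Once Redo Apply catches up, cancel the recovery and reopen the database in read-only mode, as shown in the two statements earlier in this section.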
Easy-to-manage switchover and failover capabilities in Oracle Data Guard allow quick role reversals between primary and
standby, so customers can consider switching OBIEE from the Standby to the Primary, and then start applying redo logs to the
Standby instance. In such a configuration the downtime can be minimized to two short switchovers:
- Switch OBIEE from the Standby to the Primary after ETL completion into the Primary database, and before starting Redo Apply
into the Standby database.
- Switch OBIEE back from the Primary to the Standby after Redo Apply into the Standby database completes.
Additional considerations for deploying Oracle Data Guard with a Physical Standby for Oracle BI Applications:
1. FORCE LOGGING mode would increase the incremental load time into the Primary database, since Oracle would log
index rebuild DDL operations.
2. The Primary database has to run in ARCHIVELOG mode to capture all REDO changes.
3. Such a deployment results in a more complex configuration; it also requires additional hardware to host two large
volume databases and store daily archived logs.
However, it offers these benefits:
1. A High Availability solution for the Oracle BI Applications Data Warehouse
2. Disaster recovery and complete data protection
3. A reliable backup solution
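Enabling ARCHIVELOG mode and FORCE LOGGING on the Primary database, as required by the considerations above, can be sketched as:

```sql
-- Enable ARCHIVELOG mode (requires a restart into MOUNT state)
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;

-- Force logging of all changes, including NOLOGGING operations
-- such as index rebuilds during ETL
SQL> ALTER DATABASE FORCE LOGGING;
```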
Conclusion
This document consolidates the best practices and recommendations for improving performance for Oracle Business
Intelligence Applications Version 11g.This list of areas for performance improvements is not complete. If you observe any
performance issues with your Oracle BI Applications implementation, you should trace various components, and carefully
benchmark any recommendations or solutions discussed in this article or other sources, before implementing the changes in
the production environment.
Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.
Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
oracle.com