
REPORTING FROM MULTIPLE DATABASES

Author JP Vijaykumar Oracle DBA


Written September 14th 2010
In our project, I need to capture some values from around 20 production databases every 30 minutes
and save them in an Oracle table for analysis by the management team.
These are my options:
01 Create database links from our reporting database to each production database.
Write PL/SQL or SQL scripts to capture the required data from the production databases
into our reporting database; schedule the script to run every 30 minutes.
02 For each unix server where a production database is running,
schedule a cron job to capture the required data from the database.
Ftp the spool file to a central server, then load the data from the flat file
into an Oracle table using SQL*Loader.
03 Schedule a cron job on a central unix server that loops through a set of database names,
connects to each database using SQL*Net connectivity, and captures the required data
into a spool file. For this I used the system/manager userid and password across the databases.
(In reality, I use a less powerful userid identified by a complex password.)
I created an external table in the reporting database on our central unix server.
Every 30 minutes, the new spool file is appended to the external table's data file.
I used a SQL query to capture the session_cached_cursors setting, the soft parse value,
and the hard parse value for my report generation. Depending on the data captured in the
shell script, the external table creation syntax changes.
--------------------------------------------------------------------------------------------------
I will narrate the steps for option 03:
I executed the following SQL commands in our reporting database:
create user veeksha identified by veeksha account unlock;
alter user veeksha default tablespace users;
grant create session to veeksha;
grant resource to veeksha;
create or replace directory data_pump_dir
as '/app/oracle/data_pump_dir';
grant read,write on directory data_pump_dir to veeksha;
drop table veeksha.xternal_cursor_cache_hits;
create table veeksha.xternal_cursor_cache_hits (
name varchar2(20),
run_time date,
setting number,
cursor_cache_hits varchar2(20),
soft_parses varchar2(20),
hard_parses varchar2(20))
organization external
( default directory data_pump_dir
access parameters
( records delimited by newline
badfile data_pump_dir:'xternal_cursor_cache_hits.bad'
logfile data_pump_dir:'xternal_cursor_cache_hits.log'
discardfile data_pump_dir:'xternal_cursor_cache_hits.dsc'
fields terminated by '|'
( "NAME",
"RUN_TIME" DATE "MM-DD-YYYY HH24:MI",
"SETTING",
"CURSOR_CACHE_HITS",
"SOFT_PARSES",
"HARD_PARSES"
)
)
location ('xternal_cursor_cache_hits.dat')
) reject limit unlimited;

select * from veeksha.xternal_cursor_cache_hits;
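To sanity-check the external table definition, one can hand-craft a record in the same pipe-delimited layout the access parameters expect and drop it into the location file. The printf below is a sketch with made-up values; only the field order and the MM-DD-YYYY HH24:MI date mask come from the definition above:

```shell
# Write one sample record (name|run_time|setting|hits|soft|hard) in the
# layout declared by the external table's access parameters.
printf '%s|%s|%s|%s|%s|%s\n' \
    "PROD01" "09-14-2010 08:30" "50" "85.12" "10.38" "4.50"
```

Appending such a line to xternal_cursor_cache_hits.dat and selecting from the table should return one row; a record landing in the .bad file instead points to a field or date-mask mismatch.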


--------------------------------------------------------------------------------------------------
From time to time, I use the following unix command to empty the *.bad files from our data_pump directory:
ls -1 *.bad|awk '{print "cp /dev/null "$1}'|ksh
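The same cleanup can be done without generating commands into a ksh pipe; this loop truncates each *.bad file in place (a sketch of an equivalent, not the author's script):

```shell
# Truncate every *.bad file in the current directory to zero bytes.
for f in *.bad; do
    [ -e "$f" ] && : > "$f"
done
```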
--------------------------------------------------------------------------------------------------
This is the cron setting to run the script every 30 minutes, Monday - Friday, between 08:00 AM and 06:30 PM:
00,30 8-18 * * 1-5 /app/oracle/scripts/xternal_cursor_cache_hits.sh >/app/oracle/scripts/xternal_cursor_cache_hits.log 2>&1
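The five cron time fields can be read as follows (a commented breakdown, not part of the original crontab):

```shell
# Field breakdown of the crontab entry above:
# 00,30  -> at minutes 00 and 30 of each matching hour
# 8-18   -> hours 08 through 18 (so the last run of the day is at 18:30)
# *  *   -> every day of the month, every month
# 1-5    -> Monday (1) through Friday (5)
```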
--------------------------------------------------------------------------------------------------
cat xternal_cursor_cache_hits.sh
#!/usr/bin/ksh
cd /app/oracle/scripts
cp /dev/null xternal_cursor_cache_hits.log
export MAIL_LIST=jp_vijaykumar@mycorp.com
for LINE in PROD01 .....PROD99
do
export ORACLE_SID=`echo $LINE|tr 'a-z' 'A-Z'`
export ORACLE_HOME=/app/oracle/product/10.2.0/db_1
$ORACLE_HOME/bin/sqlplus -s veeksha/veeksha@$ORACLE_SID >> xternal_cursor_cache_hits.log <<EOF
set linesize 200 pagesize 200 feedback off head off echo off
select name||'|'||run_time||'|'||setting||'|'||
to_char(100 * sess / calls, '999999999990.00')||'|'||
to_char(100 * (calls - sess - hard) / calls, '999990.00') ||'|'||
to_char(100 * hard / calls, '999990.00')
from
(select name,to_char(sysdate,'mm-dd-yyyy hh24:mi') run_time from v\$database),
(select value setting from v\$parameter where name='session_cached_cursors'),
( select value calls from v\$sysstat where name = 'parse count (total)' ),
( select value hard from v\$sysstat where name = 'parse count (hard)' ),
( select value sess from v\$sysstat where name = 'session cursor cache hits' )
/
EOF
done
#mailx -s "Cursor Cache hit ratio of dbs " $MAIL_LIST < xternal_cursor_cache_hits.log
grep -v "^$" xternal_cursor_cache_hits.log >> /app/oracle/data_pump_dir/xternal_cursor_cache_hits.dat
exit
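Once the .dat file has accumulated a few collection cycles, a quick awk pass can pull the most recent row per database (field 1 is the database name). The path and the pipe-delimited layout match the script above; the summary itself is my own sketch, not part of the original report:

```shell
# Print the last captured record for each database in the external
# table's data file (later rows overwrite earlier ones per db name).
DAT=/app/oracle/data_pump_dir/xternal_cursor_cache_hits.dat
awk -F'|' '{ latest[$1] = $0 } END { for (db in latest) print latest[db] }' "$DAT"
```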

