
SQL*Loader is a tool used by DBAs and developers to populate Oracle tables with data from flat files.

SQL*Loader gives a lot of flexibility: it can selectively load certain columns but not others, or exclude certain records entirely, and it has some advantages over programming languages that allow embedded SQL statements as well. Using SQL*Loader starts with understanding its elements. The first is the data to be loaded, which is stored in the datafile. This is basically a flat/text file and should not be confused with the Oracle Server datafiles that make up the database. The second is the control file, a text file acting as a directive to the loader; it should not be confused with the Oracle Server control file, which holds database-related information. SQL*Loader also accepts special parameters that affect how the load occurs, called command-line parameters. These include the userid (username/password, commonly the schema) to use while loading data, the name of the datafile, and the name of the control file.

SQL*Loader in action involves several additional items. If, in the course of performing a data load, SQL*Loader encounters a record it cannot load, the record is rejected and written to a special file called the bad file. Additionally, SQL*Loader gives the user options to reject data based on specific criteria. These criteria are defined in the control file as part of a WHEN clause. If SQL*Loader encounters a record that fails a specified WHEN clause, the record is placed in a special file called the discard file.

Modes of operation

SQL*Loader operates in two modes:

Conventional loads

In a conventional load, SQL*Loader reads multiple data records from the input (flat/text) file into a bind array. When the array fills, SQL*Loader passes the data to the Oracle SQL processing mechanism, which generates equivalent INSERT statements to insert the data into the database tables. All records pass through the database buffer cache, and only DBWR writes them to the physical datafiles. Since all data passes through the SGA, recovery is possible in case of instance failure. A conventional path load is nondisruptive and works on the same principles as a normal database INSERT (DML), only much faster. Some of its characteristics are outlined below:
- When loading data across a network (client/server), it is better to use a conventional load.
- When loading data into clustered tables, only a conventional load can be used.
- Associated indexes are updated and any defined database integrity rules (primary key, foreign key, check constraints) are enforced as the records are loaded.
- No exclusive locks are acquired while a conventional path load is performed.

Direct loads

During a direct load, SQL*Loader reads records from the datafile, converts them directly into Oracle data blocks, and writes those blocks directly to disk. Since a direct path load bypasses the SGA (database buffer cache), recovery is not possible in case of instance failure. Direct loads are typically used to load a large amount of data in a short time, and work much faster than conventional loads. Some characteristics of direct loads are outlined below:
- At the beginning of a direct load, SQL*Loader calls Oracle to put a lock on the tables being loaded, and at the end it makes another call to release the lock. It may call Oracle in between to get extent information, so a direct path load makes very few calls to Oracle.
- The direct path loader checks integrity constraints only at the end, after the entire data set is loaded. The constraints are disabled by the direct load process before loading starts, and re-enabled once it is done.

- If re-enabling a constraint fails due to data errors, the constraint is left disabled, which is undesirable. So it is always better to check the state of your constraints after a direct path load.
- A direct load does not update indexes as the data is loaded; it rebuilds the indexes associated with the table after all the data is loaded. If the load fails, for reasons such as running out of space to load data (a datafile being full), an instance failure, or duplicate values in primary key columns, the indexes are left in "direct load state" and are unusable. A DBA or developer should note this, remove the offending records, re-enable the constraints, and rebuild the indexes.
- Insert triggers are disabled at the beginning of a direct load. For example, if you have an insert trigger that fires after each row is inserted to update the time and userid fields of a table, that trigger will not fire while you are loading the records. You may have to write an update trigger or a stored procedure to handle these records after the direct load.
- Any referential integrity constraint defined on a table is not enforced during direct loads.

Command line parameters

The following parameters are accepted by SQL*Loader on the command line, i.e. at the "$" or DOS prompt:

USERID - Oracle userid and password (i.e. schema)
CONTROL - Control filename
LOG - Log filename
BAD - Bad filename
DATA - Data filename
DISCARD - Discard filename
DISCARDS - Number of discards to terminate the load
SKIP - Number of logical records to skip (default: 0)
LOAD - Number of logical records to load
ROWS - Number of rows in the conventional path bind array, or between direct path data saves (conventional path: 64; direct path: all)
BINDSIZE - Size of the conventional path bind array in bytes
SILENT - Suppress messages during the run
DIRECT - Use direct path load (default: FALSE)
PARFILE - Parameter filename
PARALLEL - Perform parallel load (default: FALSE)
FILE - Datafile in which to allocate extents

Control files

The control file provides the information SQL*Loader needs to load the data from flat files: the datafile/flat file names and format, the character set used, the data types of the fields, how each field is delimited, and which tables and columns to load. The following parameters can be included in the control file as directives:

-- - Comments
option - Command-line parameters placed in the control file as options
unrecoverable - Do not create redo log entries for the loaded data; can be used on direct loads only
recoverable - Create redo log entries for the loaded data
load - load or continue_load must be specified
continue_load - load or continue_load must be specified
data - Provided for readability

characterset - Specifies the character set of the datafile
preserve blanks - Retains leading white space from the datafile in cases where enclosure delimiters are not present
begindata - Keyword denoting the beginning of data records to be loaded
infile [name] - Specifies the name(s) of the input file(s); an asterisk (*) following this keyword indicates that the data records are in the control file itself
badfile [name] - Specifies the name of the bad file
discardfile [name] - Specifies the name of the discard file
discards [x] - Allows X discards before opening the next datafile
discardmax [x] - Allows X discards before terminating the load
insert - Puts rows into an empty table
append - Appends to an existing table
truncate - Deletes current rows in the table, and loads the new data
replace - Deletes current rows in the table, and loads the new data
sorted indexes - For direct path loads, indicates that the data has already been sorted on the specified indexes
singlerow - For use when appending rows, or when loading a small amount of data into a large table (row-count ratio of 1:20 or less)

Data files

The datafile (text file) used to load data into the database can be in one of two formats:
- Fixed-length fields, e.g. Field 1: columns 1-7, Field 2: columns 8-15
- Variable-length fields, where records are terminated or enclosed by a special character, e.g. |Deepak|Chebbi|

Return codes

The return code values of sqlldr under UNIX have changed. In 7.x the sqlldr utility returned 0 if successful and 1 if not; if some records were rejected, sqlldr would still return a successful code of 0. In Oracle 8 and above there are four return code values:

0 - successful
1 - failed
2 - warn
3 - fatal

Here is how the return codes map to conditions:

All rows loaded successfully - 0
All/some rows discarded - 2
All/some rows rejected - 2
Discontinued load - 2
Command line/syntax errors - 1
Errors fatal to SQL*Loader - 1
OS related errors - 3
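In scripts, these return codes can be acted on directly. A minimal shell sketch (the function name and messages are illustrative, not part of SQL*Loader itself):

```shell
# Map the documented sqlldr exit codes (0/1/2/3) to messages.
# In a real script you would run sqlldr first and pass in "$?".
describe_sqlldr_rc() {
  case "$1" in
    0) echo "success: all rows loaded" ;;
    1) echo "fail: command-line, syntax, or fatal SQL*Loader error" ;;
    2) echo "warn: rows rejected/discarded or load discontinued" ;;
    3) echo "fatal: operating-system related error" ;;
    *) echo "unknown return code: $1" ;;
  esac
}

# Real usage would be: sqlldr scott/tiger control=case1.ctl; describe_sqlldr_rc "$?"
describe_sqlldr_rc 2   # prints: warn: rows rejected/discarded or load discontinued
```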

Examples

Combined data and control file

The example below is a combined data and control file (i.e. the data is present in the control file itself). The control file holds keywords and data as a directive to SQL*Loader. The table under consideration is "empmast", with fields "emp_no number(6), emp_lname varchar2(24)". The sequence of steps is listed below:

Create a control file to hold the directives. You can use your favorite editor to create the file; I advise you to follow a naming standard to identify the file. In this example I have named the file "case1.ctl". The contents of the control file are:

--This control file holds the data to be loaded into the empmast table
-- * is used only if the data is contained in the control file
LOAD DATA
INFILE *
APPEND
INTO TABLE empmast
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
(emp_no, emp_lname)
BEGINDATA
100,"Chebbi"
200,Grant

Invoke SQL*Loader:

sqlldr / control=case1.ctl

Fixed width data, one data file - one table

The example below is a control file for fixed-width data; the datafile and control file are separate (i.e. the data is not present in the control file). The table under consideration is "empmast", with fields "emp_no number(6), emp_lname varchar2(24)". The datafile is named "xyz.dat" and the control file "case2.ctl". The sequence of steps is listed below:

The datafile contents are as shown:

100000Chebbi
200000Grant
300000Zinky

Create a control file to hold the directives. You can use your favorite editor to create the file.

--case2.ctl
LOAD DATA
INFILE 'xyz.dat'
BADFILE 'xyz.bad'
INSERT INTO TABLE empmast
(emp_no POSITION(1:6) INTEGER EXTERNAL,
 emp_lname POSITION(7:31) CHAR)

Invoke SQL*Loader:

sqlldr / control=case2.ctl
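The OPTION directive mentioned earlier lets you embed command-line parameters in the control file itself, so the command line stays short. A sketch reusing the case2 layout (the parameter values here are illustrative; values given on the command line still override OPTIONS):

```sql
--case2b.ctl: command-line defaults embedded via OPTIONS
OPTIONS (SKIP=0, ERRORS=50, ROWS=64)
LOAD DATA
INFILE 'xyz.dat'
BADFILE 'xyz.bad'
INSERT INTO TABLE empmast
(emp_no POSITION(1:6) INTEGER EXTERNAL,
 emp_lname POSITION(7:31) CHAR)
```

With this control file, "sqlldr / control=case2b.ctl" needs no further arguments.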

Fixed width data, one data file - two tables

The example below is a control file for fixed-width data; the datafile and control file are separate (i.e. the data is not present in the control file). The tables under consideration are "empmast", with fields "emp_no number(6), emp_lname varchar2(24)", and "empsal", with fields "emp_no number(6), salary number(6)". A single datafile is loaded into two tables. The datafile is named "xyz.dat" and the control file "case3.ctl". The sequence of steps is listed below:

The datafile contents are listed below (the B characters represent blank padding in the source listing):

100000Chebbi
200000GrantBBBBBBBBBBBBBBBBBBB3000
300000Zinky 4000

Create a control file to hold the directives. You can use your favorite editor to create the file.

--case3.ctl
LOAD DATA
INFILE 'xyz.dat'
BADFILE 'xyz.bad'
INSERT INTO TABLE empmast
(emp_no POSITION(1:6) INTEGER EXTERNAL,
 emp_lname POSITION(7:31) CHAR)
INTO TABLE empsal
(emp_no POSITION(1:6) INTEGER EXTERNAL,
 salary POSITION(32:37) INTEGER EXTERNAL)

Invoke SQL*Loader:

sqlldr / control=case3.ctl

Selective load

This example illustrates the use of the WHEN clause: if the first field is blank, the record is not loaded. The table under consideration is "empmast", with fields "emp_no number(6), emp_lname varchar2(24)". The datafile is named "xyz.dat" and the control file "case4.ctl". The sequence of steps is listed below:

The datafile contents are listed below:

Chebbi 2000
Grant 3000
CarpinoBBBBBBBBBBBBBBBBB

Create a control file to hold the directives. You can use your favorite editor to create the file.

--case4.ctl
LOAD DATA
INFILE 'xyz.dat'
BADFILE 'xyz.bad'
DISCARDFILE 'xyz.dsc'
INSERT INTO TABLE empmast
WHEN emp_no != ' '
(emp_no POSITION(1:6) INTEGER EXTERNAL,
 emp_lname POSITION(7:31) CHAR)

Invoke SQL*Loader:

sqlldr / control=case4.ctl

Use of functions

This example illustrates the use of the INITCAP function: whatever the case of the second field in the datafile, it is converted to initial capitals before being stored in the database. The table under consideration is "empmast", with fields "emp_no number(6), emp_lname varchar2(24)". The datafile is named "xyz.dat" and the control file "case5.ctl". The sequence of steps is listed below:

The datafile contents are listed below:

100000chebbi
200000grantBBBBBBBBBBBBBBBBBBB
300000zinky

Create a control file to hold the directives. You can use your favorite editor to create the file.

--case5.ctl
LOAD DATA
INFILE 'xyz.dat'
BADFILE 'xyz.bad'
INSERT INTO TABLE empmast
(emp_no POSITION(1:6) INTEGER EXTERNAL,
 emp_lname POSITION(7:31) CHAR "initcap(:emp_lname)")

Invoke SQL*Loader:

sqlldr / control=case5.ctl

Assigning constants

This example illustrates plugging a constant value into a field. The table under consideration is "empmast", with fields "emp_no number(6), emp_lname varchar2(24), alive char(1)". The datafile is named "xyz.dat" and the control file "case6.ctl". The sequence of steps is listed below:

The datafile contents are listed below:

100000chebbi
200000grantBBBBBBBBBBBBBBBBBBB
300000zinky

Create a control file to hold the directives. You can use your favorite editor to create the file.

--case6.ctl
LOAD DATA
INFILE 'xyz.dat'
BADFILE 'xyz.bad'
INSERT INTO TABLE empmast
(emp_no POSITION(1:6) INTEGER EXTERNAL,
 emp_lname POSITION(7:31) CHAR,
 alive CONSTANT "Y")

Invoke SQL*Loader:

sqlldr / control=case6.ctl

Use of SEQUENCE

This example illustrates plugging a sequence number into a field. The table under consideration is "empmast", with fields "emp_no number(6), emp_lname varchar2(24), seq_no NUMBER". The datafile is named "xyz.dat" and the control file "case7.ctl". The sequence of steps is listed below:

The datafile contents are listed below:

100000chebbi
200000grantBBBBBBBBBBBBBBBBBBB
300000zinky

Create a control file to hold the directives. You can use your favorite editor to create the file.

--case7.ctl
LOAD DATA
INFILE 'xyz.dat'
BADFILE 'xyz.bad'
INSERT INTO TABLE empmast
(emp_no POSITION(1:6) INTEGER EXTERNAL,
 emp_lname POSITION(7:31) CHAR,
 seq_no SEQUENCE(MAX,1))

Invoke SQL*Loader:

sqlldr / control=case7.ctl

Integer feeds treated as dates

This example illustrates the use of dates. The table under consideration is "empmast", with fields "emp_no number(6), emp_lname varchar2(24), hire_date DATE". The datafile is named "xyz.dat" and the control file "case8.ctl". I have often found that when a flat file comes from a mainframe as a feed to be loaded into an Oracle database, the date is in character or integer format (e.g. 010100). To load this data, here are the steps:

The datafile contents are listed below:

100000chebbiAAAAAAAAAAAAAAAAAA040699
200000grantBBBBBBBBBBBBBBBBBBB040599

Create a control file to hold the directives. You can use your favorite editor to create the file.

--case8.ctl
LOAD DATA
INFILE 'xyz.dat'
BADFILE 'xyz.bad'
INSERT INTO TABLE empmast
(emp_no POSITION(1:6) INTEGER EXTERNAL,
 emp_lname POSITION(7:31) CHAR,
 hire_date POSITION(32:37) DATE 'RRMMDD' NULLIF (hire_date = '000000'))

Invoke SQL*Loader:

sqlldr / control=case8.ctl

Conditional load

In some situations you may want to load records depending on the value of the first field; for example, you may want to load only records whose first field has the value "100000". The table under consideration is "invoice_detail", with fields "inv_no number(6), inv_quantity number(6), line_no number(6)". The datafile is named "xyz.dat" and the control file "case9.ctl".

The datafile contents are listed below:

1000002001
2000003001

Create a control file to hold the directives. You can use your favorite editor to create the file. In the following control file, only records with invoice number "100000" are loaded.

--case9.ctl
LOAD DATA
INFILE 'xyz.dat'
BADFILE 'xyz.bad'
DISCARDFILE 'xyz.dsc'
INSERT INTO TABLE invoice_detail
WHEN inv_no = 100000
(inv_no POSITION(1:6) INTEGER EXTERNAL,
 inv_quantity POSITION(7:12) INTEGER EXTERNAL,
 line_no POSITION(13:18) INTEGER EXTERNAL)

Invoke SQL*Loader:

sqlldr / control=case9.ctl

SQL*Loader FAQ

What is SQL*Loader and what is it used for?

SQL*Loader is a bulk loader utility used for moving data from external files into the Oracle database. Its syntax is similar to that of the DB2 load utility, but it comes with more options. SQL*Loader supports various load formats, selective loading, and multi-table loads. SQL*Loader (sqlldr) is the utility to use for high-performance data loads; the data can be loaded from any text file and inserted into the database.

How does one use the SQL*Loader utility?

One can load data into an Oracle database by using the sqlldr (sqlload on some platforms) utility. Invoke the utility without arguments to get a list of available parameters. Look at the following example:

sqlldr username/password@server control=loader.ctl

This sample control file (loader.ctl) will load an external datafile containing delimited data:

load data
infile 'c:\data\mydata.csv'
into table emp
fields terminated by "," optionally enclosed by '"'
( empno, empname, sal, deptno )

The mydata.csv file may look like this:

10001,"Scott Tiger", 1000, 40
10002,"Frank Naude", 500, 20

Optionally, you can work with tab-delimited files by using one of the following syntaxes:

fields terminated by "\t"
fields terminated by X'09'

Additionally, if your file is in Unicode, you can make the following addition:

load data
CHARACTERSET UTF16
infile 'c:\data\mydata.csv'
into table emp
fields terminated by "," optionally enclosed by '"'
( empno, empname, sal, deptno )

Here is another sample control file, with in-line data formatted as fixed-length records. The trick is to specify "*" as the name of the datafile, and use BEGINDATA to start the data section in the control file:

load data
infile *
replace
into table departments
( dept     position (02:05) char(4),
  deptname position (08:27) char(20) )
begindata
COSC COMPUTER SCIENCE
ENGL ENGLISH LITERATURE
MATH MATHEMATICS
POLY POLITICAL SCIENCE

How does one load MS-Excel data into Oracle?

Open the MS-Excel spreadsheet and save it as a CSV (Comma Separated Values) file. This file can then be copied to the Oracle machine and loaded using the SQL*Loader utility. One possible problem: the spreadsheet may contain cells with newline characters (ALT+ENTER), while SQL*Loader expects the entire record to be on a single line. Run the following macro to remove the newline characters (Tools -> Macro -> Visual Basic Editor):

' Removing tabs and carriage returns from worksheet cells

Sub CleanUp()
  Dim TheCell As Range
  On Error Resume Next
  For Each TheCell In ActiveSheet.UsedRange
    With TheCell
      If .HasFormula = False Then
        .Value = Application.WorksheetFunction.Clean(.Value)
      End If
    End With
  Next TheCell
End Sub

Tools: if you need a utility to load Excel data into Oracle, download quickload from SourceForge at http://sourceforge.net/projects/quickload

Is there a SQL*Unloader to download data to a flat file?

Oracle does not supply any data unload utilities. Here are some workarounds:

Using SQL*Plus

You can use SQL*Plus to select and format your data and then spool it to a file. This example spools out a CSV (comma separated values) file that can be imported into MS-Excel:

set echo off newpage 0 space 0 pagesize 0 feed off head off trimspool on
spool oradata.txt
select col1 || ',' || col2 || ',' || col3
from tab1
where col2 = 'XYZ';
spool off

You can also use the "set colsep" command if you don't want to put the commas in by hand. This saves a lot of typing. Example:

set colsep ','
set echo off newpage 0 space 0 pagesize 0 feed off head off trimspool on
spool oradata.txt
select col1, col2, col3
from tab1
where col2 = 'XYZ';
spool off

Using PL/SQL

PL/SQL's UTL_FILE package can also be used to unload data. Example:

declare
  fp utl_file.file_type;
begin
  fp := utl_file.fopen('c:\oradata','tab1.txt','w');
  utl_file.putf(fp, '%s, %s\n', 'TextField', 55);
  utl_file.fclose(fp);
end;
/

Using Oracle SQL Developer

The freely downloadable Oracle SQL Developer application can export data from Oracle tables in numerous formats: Excel, SQL insert statements, SQL*Loader format, HTML, XML, PDF, TEXT, fixed text, etc. It can also import data from Excel (.xls), CSV (.csv), Text (.tsv) and DSV (.dsv) files directly into a database.

Third-party programs

You might also want to investigate third-party tools to help you unload data from Oracle. Here are some examples:

- WisdomForce FastReader - http://www.wisdomforce.com
- IxUnload from Ixion Software - http://www.ixionsoftware.com/products/
- FAst extraCT (FACT) for Oracle from CoSort - http://www.cosort.com/products/FACT
- Unicenter (also ManageIT or Platinum) Fast Unload for Oracle from CA
- Keeptool's Hora unload/load facility (part of the v5 to v6 upgrade), which can export to formats such as Microsoft Excel, DBF, XML, and text
- TOAD from Quest
- SQLWays from Ispirer Systems
- PL/SQL Developer from Allround Automations

Can one load variable and fixed length data records?

Loading delimited (variable length) data

The first example shows how delimited (variable length) data can be loaded into Oracle:

LOAD DATA
INFILE *
INTO TABLE load_delimited_data
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
( data1, data2 )
BEGINDATA
11111,AAAAAAAAAA
22222,"A,B,C,D,"

NOTE: The default data type in SQL*Loader is CHAR(255). To load character fields longer than 255 characters, code the type and length in your control file. By doing this, Oracle will allocate a buffer big enough to hold the entire column, eliminating potential "Field in data file exceeds maximum length" errors. Example:

...
resume char(4000),
...

Loading positional (fixed length) data

If you need to load positional (fixed length) data, look at the following control file example:

LOAD DATA
INFILE *
INTO TABLE load_positional_data
( data1 POSITION(1:5),
  data2 POSITION(6:15) )
BEGINDATA
11111AAAAAAAAAA
22222BBBBBBBBBB

For example, position(01:05) will give the 1st to the 5th character (11111 and 22222).

Can one skip header records while loading?

One can skip unwanted header records, or continue an interrupted load (for example if you ran out of space), by specifying the "SKIP=n" keyword, where "n" is the number of logical rows to skip. Look at these examples:

OPTIONS (SKIP=5)
LOAD DATA
INFILE *
INTO TABLE load_positional_data
( data1 POSITION(1:5),
  data2 POSITION(6:15) )
BEGINDATA
11111AAAAAAAAAA
22222BBBBBBBBBB

...
sqlldr userid=ora_id/ora_passwd control=control_file_name.ctl skip=4

If you are continuing a multiple-table direct path load, you may need to use the CONTINUE_LOAD clause instead of the SKIP parameter. CONTINUE_LOAD allows you to specify a different number of rows to skip for each of the tables you are loading.

Can one modify data as the database gets loaded?

Data can be modified as it loads into the Oracle database. One can also populate columns with static or derived values. However, this only applies to the conventional load path, not to direct path loads. Here are some examples:

LOAD DATA
INFILE *
INTO TABLE modified_data
( rec_no      "my_db_sequence.nextval",
  region      CONSTANT '31',
  time_loaded "to_char(SYSDATE, 'HH24:MI')",
  data1       POSITION(1:5)   ":data1/100",
  data2       POSITION(6:15)  "upper(:data2)",
  data3       POSITION(16:22) "to_date(:data3, 'YYMMDD')"
)
BEGINDATA
11111AAAAAAAAAA991201
22222BBBBBBBBBB990112

LOAD DATA
INFILE 'mail_orders.txt'
BADFILE 'bad_orders.txt'

APPEND INTO TABLE mailing_list
FIELDS TERMINATED BY ","
( addr,
  city,
  state,
  zipcode,
  mailing_addr  "decode(:mailing_addr, null, :addr, :mailing_addr)",
  mailing_city  "decode(:mailing_city, null, :city, :mailing_city)",
  mailing_state,
  move_date     "substr(:move_date, 3, 2) || substr(:move_date, 7, 2)"
)

Can one load data from multiple files / into multiple tables at once?

Loading from multiple input files

One can load from multiple input files, provided they use the same record format, by repeating the INFILE clause. Here is an example:

LOAD DATA
INFILE file1.dat
INFILE file2.dat
INFILE file3.dat
APPEND INTO TABLE emp
( empno  POSITION(1:4)   INTEGER EXTERNAL,
  ename  POSITION(6:15)  CHAR,
  deptno POSITION(17:18) CHAR,
  mgr    POSITION(20:23) INTEGER EXTERNAL
)

Loading into multiple tables

One can also specify multiple "INTO TABLE" clauses in the SQL*Loader control file to load into multiple tables. Look at the following example:

LOAD DATA
INFILE *
INTO TABLE tab1 WHEN tab = 'tab1'
( tab  FILLER CHAR(4),
  col1 INTEGER
)
INTO TABLE tab2 WHEN tab = 'tab2'
( tab  FILLER POSITION(1:4),
  col1 INTEGER
)
BEGINDATA
tab1|1
tab1|2
tab2|2
tab3|3

The "tab" field is marked as a FILLER as we don't want to load it.

Note the use of "POSITION" on the second routing value (tab = 'tab2'). By default, field scanning does not start over from the beginning of the record for new INTO TABLE clauses; instead, scanning continues where it left off. POSITION is needed to reset the pointer to the beginning of the record. In delimited formats, use "POSITION(1)" after the first column to reset the pointer. Another example:

LOAD DATA
INFILE 'mydata.dat'
REPLACE
INTO TABLE emp WHEN empno != ' '
( empno  POSITION(1:4)   INTEGER EXTERNAL,
  ename  POSITION(6:15)  CHAR,
  deptno POSITION(17:18) CHAR,
  mgr    POSITION(20:23) INTEGER EXTERNAL
)
INTO TABLE proj WHEN projno != ' '
( projno POSITION(25:27) INTEGER EXTERNAL,
  empno  POSITION(1:4)   INTEGER EXTERNAL
)

Can one selectively load only the records that one needs?

Look at this example: (01) is the first character, (30:37) are characters 30 to 37:

LOAD DATA
INFILE 'mydata.dat'
BADFILE 'mydata.bad'
DISCARDFILE 'mydata.dis'
APPEND INTO TABLE my_selective_table
WHEN (01) <> 'H' and (01) <> 'T' and (30:37) = '20031217'
( region      CONSTANT '31',
  service_key POSITION(01:11) INTEGER EXTERNAL,
  call_b_no   POSITION(12:29) CHAR
)

NOTE: SQL*Loader does not allow the use of OR in the WHEN clause; you can only use AND, as in the example above. To work around this limitation, code multiple "INTO TABLE ... WHEN" clauses. Here is an example:

LOAD DATA
INFILE 'mydata.dat'
BADFILE 'mydata.bad'
DISCARDFILE 'mydata.dis'
APPEND INTO TABLE my_selective_table
WHEN (01) <> 'H' and (01) <> 'T'
( region      CONSTANT '31',
  service_key POSITION(01:11) INTEGER EXTERNAL,
  call_b_no   POSITION(12:29) CHAR
)
INTO TABLE my_selective_table
WHEN (30:37) = '20031217'
(

  region      CONSTANT '31',
  service_key POSITION(01:11) INTEGER EXTERNAL,
  call_b_no   POSITION(12:29) CHAR
)

Can one skip certain columns while loading data?

One cannot use POSITION(x:y) with delimited data. Luckily, from Oracle 8i one can specify FILLER columns. FILLER columns are used to skip columns/fields in the load file, ignoring fields that one does not want. Look at this example:

LOAD DATA
TRUNCATE INTO TABLE T1
FIELDS TERMINATED BY ','
( field1,
  field2 FILLER,
  field3
)

BOUNDFILLER (available with Oracle 9i and above) can be used if the skipped column's value will be required again later. Here is an example:

LOAD DATA
INFILE *
TRUNCATE INTO TABLE sometable
FIELDS TERMINATED BY "," TRAILING NULLCOLS
( c1,
  field2 BOUNDFILLER,
  field3 BOUNDFILLER,
  field4 BOUNDFILLER,
  field5 BOUNDFILLER,
  c2 ":field2 || :field3",
  c3 ":field4 + :field5"
)

How does one load multi-line records?

One can create one logical record from multiple physical records using one of the following two clauses:

CONCATENATE - use when SQL*Loader should combine a fixed number of physical records together to form one logical record.
CONTINUEIF - use when a condition indicates that multiple records should be treated as one, e.g. a '#' character in column 1.

How does one load records with multi-line fields?

Using stream record format, you can define a record delimiter, so that the default delimiter ('\n') is allowed within a field's content. Set the delimiter after the INFILE clause:

load data
infile "test.dat" "str '|\n'"
into table test_table
fields terminated by ';'
TRAILING NULLCOLS
( desc,
  txt
)

test.dat:

one line;hello dear world;|
two lines;Dear world,
hello!;|

Note that this does not seem to work with in-line data (INFILE * and BEGINDATA).

How can one get SQL*Loader to COMMIT only at the end of the load file?

One cannot, but committing can be made less frequent by setting the ROWS= parameter to a large value. Make sure you have big rollback segments ready when you use a high value for ROWS=.

Can one improve the performance of SQL*Loader?

- A simple but easily overlooked hint: do not have any indexes and/or constraints (primary key) on your load tables during the load process. Maintaining them significantly slows down load times, even with ROWS= set to a high value.
- Add the following option to the command line: DIRECT=TRUE. This effectively bypasses most of the RDBMS processing. However, there are cases when you can't use direct load; for details, refer to the FAQ entry below about the differences between the conventional and direct path loader.
- Turn off database logging by specifying the UNRECOVERABLE option. This option can only be used with direct data loads.
- Run multiple load jobs concurrently.

What is the difference between the conventional and direct path loader?

The conventional path loader essentially loads the data using standard INSERT statements. The direct path loader (DIRECT=TRUE) bypasses much of that logic and loads directly into the Oracle data files. More information about the restrictions of direct path loading can be obtained from the Oracle Server Utilities Guide. Some of the restrictions with direct path loads are:

- Loaded data will not be replicated.
- You cannot always use SQL strings for column processing in the control file (something like this will probably fail: col1 date "ddmonyyyy" "substr(:period,1,9)"). Details are in Metalink Note:230120.1.

How does one use SQL*Loader to load images, sound clips and documents?
SQL*Loader can load data from a "primary datafile", an SDF (secondary datafile, for loading nested tables and VARRAYs), or a LOBFILE. The LOBFILE method provides an easy way to load documents, photos, images and audio clips into BLOB and CLOB columns. Look at this example.

Given the following table:

CREATE TABLE image_table (
  image_id   NUMBER(5),
  file_name  VARCHAR2(30),
  image_data BLOB);

Control file:

LOAD DATA
INFILE *
INTO TABLE image_table
REPLACE
FIELDS TERMINATED BY ','
( image_id   INTEGER(5),
  file_name  CHAR(30),
  image_data LOBFILE (file_name) TERMINATED BY EOF
)
BEGINDATA
001,image1.gif
002,image2.jpg
003,image3.jpg

How does one load EBCDIC data?

Specify the character set WE8EBCDIC500 for the EBCDIC data. The following example shows a SQL*Loader control file for loading a fixed-length EBCDIC record into the Oracle database:

LOAD DATA
CHARACTERSET WE8EBCDIC500
INFILE data.ebc "fix 86 buffers 1024"
BADFILE 'data.bad'
DISCARDFILE 'data.dsc'
REPLACE
INTO TABLE temp_data
( field1 POSITION (1:4)   INTEGER EXTERNAL,
  field2 POSITION (5:6)   INTEGER EXTERNAL,
  field3 POSITION (7:12)  INTEGER EXTERNAL,
  field4 POSITION (13:42) CHAR,
  field5 POSITION (43:72) CHAR,
  field6 POSITION (73:73) INTEGER EXTERNAL,
  field7 POSITION (74:74) INTEGER EXTERNAL,
  field8 POSITION (75:75) INTEGER EXTERNAL,
  field9 POSITION (76:86) INTEGER EXTERNAL
)
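After a LOBFILE load like the image_table example above, a quick dictionary-function check can confirm the LOBs actually arrived. A sketch against that example's table (DBMS_LOB.GETLENGTH returns the LOB size in bytes):

```sql
SELECT image_id,
       file_name,
       DBMS_LOB.GETLENGTH(image_data) AS image_bytes
FROM   image_table
ORDER  BY image_id;
```

A row with a NULL or zero length would indicate the named file was missing or empty at load time.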

Oracle database FAQ

How does one create a new database?

One can create and modify Oracle databases using the Oracle DBCA (Database Configuration Assistant) utility, located in the $ORACLE_HOME/bin directory. The Oracle Universal Installer (OUI) normally starts it after installing the database server software, to create the starter database. One can also create databases manually using scripts. This option, however, is falling out of fashion, as it is quite involved and error prone. Look at this example for creating an Oracle 9i or higher database:

CONNECT SYS AS SYSDBA
ALTER SYSTEM SET DB_CREATE_FILE_DEST='/u01/oradata/';
ALTER SYSTEM SET DB_CREATE_ONLINE_LOG_DEST_1='/u02/oradata/';
ALTER SYSTEM SET DB_CREATE_ONLINE_LOG_DEST_2='/u03/oradata/';
CREATE DATABASE;

Also see Creating a New Database.

What database block size should I use?

Oracle recommends that your database block size match, or be a multiple of, your operating system block size. One can use smaller block sizes, but the performance cost is significant. Your choice should depend on the type of application you are running: if you have many small transactions, as with OLTP, use a smaller block size; with fewer but larger transactions, as with a DSS application, use a larger block size. If you are using a volume manager, consider your "operating system block size" to be 8K, because volume manager products use 8K blocks (and this is not configurable).

What database aspects should be monitored?

One should implement a monitoring system to constantly monitor the following aspects of a database. This can be achieved by writing custom scripts, implementing Oracle's Enterprise Manager, or buying a third-party monitoring product. If an alarm is triggered, the system should automatically notify the DBA (via e-mail, text, etc.) so that appropriate action can be taken.
Infrastructure availability:
- Is the database up and responding to requests?
- Are the listeners up and responding to requests?
- Are the Oracle Names and LDAP servers up and responding to requests?
- Are the application servers up and responding to requests?
- Etc.

Things that can cause service outages:
- Is the archive log destination filling up?

- Objects getting close to their max extents
- Tablespaces running low on free space / objects that would not be able to extend
- User and process limits reached
- Etc.

How does one rename a database?

Follow these steps to rename a database:
- Start by making a full database backup of your database (in case you need to restore if this procedure fails).
- Execute this command from sqlplus while connected to 'SYS AS SYSDBA':
  ALTER DATABASE BACKUP CONTROLFILE TO TRACE RESETLOGS;
- Locate the latest dump file in your USER_DUMP_DEST directory (show parameter USER_DUMP_DEST) and rename it to something like dbrename.sql.
- Edit dbrename.sql: remove all headers and comments, and change the database's name. Also change "CREATE CONTROLFILE REUSE ..." to "CREATE CONTROLFILE SET ...".
- Shut down the database (use SHUTDOWN NORMAL or IMMEDIATE, don't ABORT!) and run dbrename.sql.
- Rename the database's global name:
  ALTER DATABASE RENAME GLOBAL_NAME TO new_db_name;

Can one rename a database user (schema)?

No, this is listed as Enhancement Request 158508.

Workaround (up to and including 9i):
- Do a user-level export of user A
- Create new user B
- Import the user while renaming it:
  imp system/manager fromuser=A touser=B
- Drop user A

Workaround (starting 10g):
- Do a Data Pump schema export of user A:
  expdp system/manager schemas=A [directory=... dumpfile=... logfile=...]
- Import the user while renaming it:
  impdp system/manager schemas=A remap_schema=A:B [directory=... dumpfile=... logfile=...]
- Drop user A

Can one rename a tablespace? From Oracle 10g Release 1, users can rename tablespaces. Example:

ALTER TABLESPACE ts1 RENAME TO ts2;

However, you must adhere to the following restrictions:
- COMPATIBLE must be set to at least 10.0.1
- Cannot rename SYSTEM or SYSAUX
- Cannot rename an offline tablespace
- Cannot rename a tablespace that contains offline datafiles

For older releases, use the following workaround:
- Export all of the objects from the tablespace
- Drop the tablespace, including contents
- Recreate the tablespace
- Import the objects

How does one see the uptime for a database?

Look at the following SQL query:

SELECT to_char(startup_time,'DD-MON-YYYY HH24:MI:SS') "DB Startup Time"
  FROM sys.v_$instance;

Can one resize tablespaces and data files?

Add more files to tablespaces

To add more space to a tablespace, one can simply add another file to it. Example:

ALTER TABLESPACE USERS ADD DATAFILE '/oradata/orcl/users1.dbf' SIZE 100M;

Resize datafiles

One can manually increase or decrease the size of a datafile (from Oracle 7.2) using the following command:

ALTER DATABASE DATAFILE 'filename2' RESIZE 100M;

Because you can change the sizes of datafiles, you can add more space to your database without adding more datafiles. This is beneficial if you are concerned about reaching the maximum number of datafiles allowed in your database. Manually reducing the size of a datafile allows you to reclaim unused space in the database. This is useful for correcting errors in estimates of space requirements.

Extend datafiles

Datafiles can also be allowed to extend automatically when more space is required. Look at the following commands:

CREATE TABLESPACE pcs_data_ts
  DATAFILE 'c:\ora_apps\pcs\pcsdata1.dbf' SIZE 3M
  AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED
  DEFAULT STORAGE (INITIAL 10240 NEXT 10240
                   MINEXTENTS 1 MAXEXTENTS UNLIMITED PCTINCREASE 0)
  ONLINE

PERMANENT;

ALTER DATABASE DATAFILE 1 AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED;

How do I find the overall database size?

The biggest portion of a database's size comes from the datafiles. To find out how many megabytes are allocated to ALL datafiles:

select sum(bytes)/1024/1024 "Meg" from dba_data_files;

To get the size of all TEMP files:

select nvl(sum(bytes),0)/1024/1024 "Meg" from dba_temp_files;

To get the size of the on-line redo logs:

select sum(bytes)/1024/1024 "Meg" from sys.v_$log;

Putting it all together into a single query:

select a.data_size + b.temp_size + c.redo_size "total_size"
  from (select sum(bytes) data_size from dba_data_files) a,
       (select nvl(sum(bytes),0) temp_size from dba_temp_files) b,
       (select sum(bytes) redo_size from sys.v_$log) c;

Another query ("Free space" reports data file free space):

col "Database Size" format a20
col "Free space" format a20
select round(sum(used.bytes) / 1024 / 1024) || ' MB' "Database Size",
       round(free.p / 1024 / 1024) || ' MB' "Free space"
  from (select bytes from v$datafile
        union all
        select bytes from v$tempfile
        union all
        select bytes from v$log) used,
       (select sum(bytes) as p from dba_free_space) free
 group by free.p
/

How do I find the used space within the database size?

Select from the DBA_SEGMENTS or DBA_EXTENTS views to find the used space of a database. Example:

SELECT SUM(bytes)/1024/1024 "Meg" FROM dba_segments;

Where can one find the high water mark for a table?

There is no single system table which contains the high water mark (HWM) for a table. A table's HWM can be calculated using the results from the following SQL statements:

SELECT BLOCKS
  FROM DBA_SEGMENTS
 WHERE OWNER = UPPER(owner) AND SEGMENT_NAME = UPPER(table);

ANALYZE TABLE owner.table ESTIMATE STATISTICS;

SELECT EMPTY_BLOCKS
  FROM DBA_TABLES
 WHERE OWNER = UPPER(owner) AND TABLE_NAME = UPPER(table);

Thus, the table's HWM = (query result 1) - (query result 2) - 1

NOTE: You can also use the DBMS_SPACE package and calculate the HWM = TOTAL_BLOCKS - UNUSED_BLOCKS - 1.

How do I find used/free space in a TEMPORARY tablespace?

Unlike normal tablespaces, true temporary tablespace information is not listed in DBA_FREE_SPACE. Instead use the V$TEMP_SPACE_HEADER view:

SELECT tablespace_name, SUM(bytes_used), SUM(bytes_free)
  FROM v$temp_space_header
 GROUP BY tablespace_name;

To report true free space within the used portion of the TEMPFILE:

SELECT a.tablespace_name tablespace,
       d.mb_total,
       SUM(a.used_blocks * d.block_size) / 1024 / 1024 mb_used,
       d.mb_total - SUM(a.used_blocks * d.block_size) / 1024 / 1024 mb_free
  FROM v$sort_segment a,
       (SELECT b.name, c.block_size, SUM(c.bytes) / 1024 / 1024 mb_total
          FROM v$tablespace b, v$tempfile c
         WHERE b.ts# = c.ts#
         GROUP BY b.name, c.block_size) d
 WHERE a.tablespace_name = d.name
 GROUP BY a.tablespace_name, d.mb_total;

How can one see who is using a temporary segment?

For every user using temporary space, there is an entry in SYS.V_$LOCK with type 'TS'. All temporary segments are named 'ffff.bbbb' where 'ffff' is the file it is in and 'bbbb' is the first block of the segment. If your temporary tablespace is set to TEMPORARY, all sorts are done in one large temporary segment. For usage stats, see SYS.V_$SORT_SEGMENT. From Oracle 8, one can just query SYS.V_$SORT_USAGE. Look at these examples:

select s.username, u."USER", u.tablespace, u.contents, u.extents, u.blocks
  from sys.v_$session s, sys.v_$sort_usage u
 where s.saddr = u.session_addr
/

select s.osuser, s.process, s.username, s.serial#,
       sum(u.blocks)*vp.value/1024 sort_size
  from sys.v_$session s, sys.v_$sort_usage u, sys.v_$parameter vp
 where s.saddr = u.session_addr
   and vp.name = 'db_block_size'
   and s.osuser like '&1'
 group by s.osuser, s.process, s.username, s.serial#, vp.value
/

Who is using which UNDO or TEMP segment?

Execute the following query to determine who is using a particular UNDO or rollback segment:

SQL> SELECT TO_CHAR(s.sid)||','||TO_CHAR(s.serial#) sid_serial,
  2         NVL(s.username, 'None') orauser,
  3         s.program,
  4         r.name undoseg,
  5         t.used_ublk * TO_NUMBER(x.value)/1024||'K' "Undo"
  6    FROM sys.v_$rollname r,
  7         sys.v_$session s,
  8         sys.v_$transaction t,
  9         sys.v_$parameter x
 10   WHERE s.taddr = t.addr
 11     AND r.usn = t.xidusn(+)
 12     AND x.name = 'db_block_size'

SID_SERIAL ORAUSER  PROGRAM                        UNDOSEG    Undo
---------- -------- ------------------------------ ---------- ----
260,7      SCOTT    sqlplus@localhost.localdomain  _SYSSMU4$  8K
                    (TNS V1-V3)
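The "Undo" column in the query above is just used_ublk * db_block_size / 1024, reported in kilobytes. A quick sketch of that arithmetic (the 8K block size matches the sample output; the block counts are made-up values for illustration):

```python
def undo_kb(used_ublk: int, db_block_size: int) -> str:
    """Reproduce the query expression t.used_ublk * x.value / 1024 || 'K'."""
    return f"{used_ublk * db_block_size // 1024}K"

# One used undo block with an 8K block size gives the 8K seen in the sample output:
print(undo_kb(1, 8192))   # 8K
print(undo_kb(12, 8192))  # 96K
```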

Execute the following query to determine who is using a TEMP segment:

SQL> SELECT b.tablespace,
  2         ROUND(((b.blocks*p.value)/1024/1024),2)||'M' "SIZE",
  3         a.sid||','||a.serial# SID_SERIAL,
  4         a.username,
  5         a.program
  6    FROM sys.v_$session a,
  7         sys.v_$sort_usage b,
  8         sys.v_$parameter p
  9   WHERE p.name = 'db_block_size'
 10     AND a.saddr = b.session_addr
 11   ORDER BY b.tablespace, b.blocks;

TABLESPACE SIZE  SID_SERIAL USERNAME PROGRAM
---------- ----- ---------- -------- -----------------------------
TEMP       24M   260,7      SCOTT    sqlplus@localhost.localdomain
                                     (TNS V1-V3)

How does one get the view definition of fixed views/tables?

Query V$FIXED_VIEW_DEFINITION. Example:

SELECT * FROM v$fixed_view_definition WHERE view_name='V$SESSION';

How full is the current redo log file?

Here is a query that can tell you how full the current redo log file is. Handy for when you need to predict when the next log file will be archived out.

SQL> SELECT le.leseq "Current log sequence No",
  2         100*cp.cpodr_bno/le.lesiz "Percent Full",
  3         cp.cpodr_bno "Current Block No",
  4         le.lesiz "Size of Log in Blocks"
  5    FROM x$kcccp cp, x$kccle le
  6   WHERE le.leseq = cp.cpodr_seq
  7     AND bitand(le.leflg,24) = 8
  8  /

Current log sequence No Percent Full Current Block No Size of Log in Blocks
----------------------- ------------ ---------------- ---------------------
                    416   48.6669922            49835                102400

Tired of typing sqlplus '/as sysdba' every time you want to do something?

If you are tired of typing sqlplus "/as sysdba" every time you want to perform some DBA task, implement the following shortcut.

On Unix/Linux systems, add the following alias to your .profile or .bash_profile file:

alias sss='sqlplus "/as sysdba"'

On Windows systems, create a batch file, sss.bat, add the command to it, and place it somewhere in your PATH.

Whenever you now want to start sqlplus as sysdba, just type "sss". Much less typing for you lazy DBAs.

Note: From Oracle 10g you don't need to put the "/AS SYSDBA" in quotes anymore.

What patches are installed within an Oracle Home?

DBAs often do not document the patches they install. This may lead to situations where a feature works on machine X, but not on machine Y. This FAQ shows how you can list and compare the patches installed within your Oracle Homes. All patches that are installed with Oracle's OPatch utility (Oracle's Interim Patch Installer) can be listed by invoking the opatch command with the lsinventory option. Here is an example:

$ cd $ORACLE_HOME/OPatch
$ opatch lsinventory
Invoking OPatch 10.2.0.1.0
Oracle interim Patch Installer version 10.2.0.1.0
Copyright (c) 2005, Oracle Corporation. All rights reserved.
...
Installed Top-level Products (1):

Oracle Database 10g 10.2.0.1.0
There are 1 products installed in this Oracle Home.
There are no Interim patches installed in this Oracle Home.
OPatch succeeded.

NOTE: If OPatch is not installed into your Oracle Home ($ORACLE_HOME/OPatch), you may need to download it from Metalink and install it yourself.

How does one give developers access to trace files (required as input to tkprof)?

The alter session set sql_trace=true command generates trace files in USER_DUMP_DEST that can be used by developers as input to tkprof. On Unix the default file mask for these files is "rwx r-- ---". There is an undocumented INIT.ORA parameter that will allow everyone to read (rwx r-- r--) these trace files:

_trace_files_public = true

Include this in your INIT.ORA file and bounce your database for it to take effect.

Oracle database Backup and Recovery FAQ

General Backup and Recovery questions

Why and when should I backup my database?

Backup and recovery is one of the most important aspects of a DBA's job. If you lose your company's data, you could very well lose your job. Hardware and software can always be replaced, but your data may be irreplaceable! Normally one would schedule a hierarchy of daily, weekly and monthly backups; however, consult with your users before deciding on a backup schedule. Backup frequency normally depends on the following factors:
- Rate of data change / transaction rate
- Database availability / can you shut down for cold backups?
- Criticality of the data / value of the data to the company
- A read-only tablespace needs backing up just once, right after you make it read-only
- If you are running in archivelog mode you can backup parts of a database over an extended cycle of days
- If archive logging is enabled one needs to backup archived log files timeously to prevent database freezes
- Etc.

Carefully plan backup retention periods. Ensure enough backup media (tapes) are available and that old backups are expired in time to make media available for new backups.
Off-site vaulting is also highly recommended. Frequently test your ability to recover and document all possible scenarios. Remember, it's the little things that will get you. Most failed recoveries are a result of organizational errors and miscommunication. What strategies are available for backing-up an Oracle database? The following methods are valid for backing-up an Oracle database:

Export/Import - Exports are "logical" database backups in that they extract logical definitions and data from the database to a file. See the Import/Export FAQ for more details.

Cold or Off-line Backups - Shut the database down and back up ALL data, log, and control files.

Hot or On-line Backups - If the database is available and in ARCHIVELOG mode, set the tablespaces into backup mode and back up their files. Also remember to backup the control files and archived redo log files.

RMAN Backups - While the database is off-line or on-line, use the "rman" utility to backup the database.

It is advisable to use more than one of these methods to backup your database. For example, if you choose to do on-line database backups, also cover yourself by doing database exports. Also test ALL backup and recovery scenarios carefully. It is better to be safe than sorry. Regardless of your strategy, also remember to backup all required software libraries, parameter files, password files, etc. If your database is in ARCHIVELOG mode, you also need to backup archived log files.

What is the difference between online and offline backups?

A hot (or on-line) backup is a backup performed while the database is open and available for use (read and write activity). Except for Oracle exports, one can only do on-line backups when the database is in ARCHIVELOG mode.

A cold (or off-line) backup is a backup performed while the database is off-line and unavailable to its users. Cold backups can be taken regardless of whether the database is in ARCHIVELOG or NOARCHIVELOG mode. It is easier to restore from off-line backups as no recovery (from archived logs) would be required to make the database consistent. Nevertheless, on-line backups are less disruptive and don't require database downtime.

Point-in-time recovery (regardless of whether you do on-line or off-line backups) is only available when the database is in ARCHIVELOG mode.

What is the difference between restoring and recovering?

Restoring involves copying backup files from secondary storage (backup media) to disk. This can be done to replace damaged files or to copy/move a database to a new location.

Recovery is the process of applying redo logs to the database to roll it forward. One can roll forward until a specific point in time (before the disaster occurred), or roll forward until the last transaction recorded in the log files. Examples:
SQL> connect SYS as SYSDBA
SQL> RECOVER DATABASE UNTIL TIME '2001-03-06:16:00:00' USING BACKUP CONTROLFILE;

RMAN> run {
        set until time to_date('04-Aug-2004 00:00:00', 'DD-MON-YYYY HH24:MI:SS');
        restore database;
        recover database;
      }

My database is down and I cannot restore. What now?

This is probably not the appropriate time to be sarcastic, but: recovery without backups is not supported. You know that you should have tested your recovery strategy, and that you should always backup a corrupted database before attempting to restore/recover it.

Nevertheless, Oracle Consulting can sometimes extract data from an offline database using a utility called DUL (Disk UnLoad - life is DUL without it!). This utility reads data in the data files and unloads it into SQL*Loader or export dump files. Hopefully you'll then be able to load the data into a working database.

Note that DUL does not care about rollback segments, corrupted blocks, etc, and can thus not guarantee that the data is not logically corrupt. It is intended as an absolute last resort and will most likely cost your company a lot of money!

DUDE (Database Unloading by Data Extraction) is another non-Oracle utility that can be used to extract data from a dead database. More info about DUDE is available at http://www.ora600.nl/.

How does one backup a database using the export utility?

Oracle exports are "logical" database backups (not physical) as they extract data and logical definitions from the database into a file. Other backup strategies normally back up the physical data files. One of the advantages of exports is that one can selectively re-import tables; however, one cannot roll forward from a restored export. To completely restore a database from an export file one practically needs to recreate the entire database.

Always do full system-level exports (FULL=YES). Full exports include more information about the database in the export file than user-level exports. For more information about the Oracle export and import utilities, see the Import/Export FAQ.

How does one put a database into ARCHIVELOG mode?

The main reason for running in archivelog mode is that one can provide 24-hour availability and guarantee complete data recoverability. It is also necessary to enable ARCHIVELOG mode before one can start to use on-line database backups.
Issue the following commands to put a database into ARCHIVELOG mode:

SQL> CONNECT sys AS SYSDBA
SQL> STARTUP MOUNT EXCLUSIVE;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ARCHIVE LOG START;
SQL> ALTER DATABASE OPEN;

Alternatively, add the above commands into your database's startup command script, and bounce the database. The following parameters need to be set for databases in ARCHIVELOG mode:

log_archive_start = TRUE
log_archive_dest_1 = 'LOCATION=/arch_dir_name'
log_archive_dest_state_1 = ENABLE
log_archive_format = %d_%t_%s.arc

NOTE 1: Remember to take a baseline database backup right after enabling archivelog mode. Without it one would not be able to recover. Also, implement an archivelog backup to prevent the archive log directory from filling up.

NOTE 2: ARCHIVELOG mode was introduced with Oracle 6, and is essential for database point-in-time recovery. Archiving can be used in combination with on-line and off-line database backups.

NOTE 3: You may want to set the following INIT.ORA parameters when enabling ARCHIVELOG mode: log_archive_start=TRUE, log_archive_dest=..., and log_archive_format=...

NOTE 4: You can change the archive log destination of a database on-line with the ARCHIVE LOG START TO 'directory'; statement. This statement is often used to switch archiving between a set of directories.

NOTE 5: When running Oracle Real Application Clusters (RAC), you need to shut down all nodes before changing the database to ARCHIVELOG mode. See the RAC FAQ for more details.

I've lost an archived/online REDO LOG file, can I get my DB back?

The following INIT.ORA/SPFILE parameter can be used if your current redo logs are corrupted or blown away. It may also be handy if you do database recovery and one of the archived log files is missing and cannot be restored.

NOTE: Caution is advised when enabling this parameter as you might end up losing your entire database. Please contact Oracle Support before using it.

_allow_resetlogs_corruption = true

This should allow you to open the database. However, after using this parameter your database will be inconsistent (some committed transactions may be lost or partially applied). Steps:
- Do a "SHUTDOWN NORMAL" of the database
- Set the above parameter
- Do a "STARTUP MOUNT" and "ALTER DATABASE OPEN RESETLOGS;"
- If the database asks for recovery, use an UNTIL CANCEL type recovery and apply all available archive and on-line redo logs, then issue CANCEL and reissue the "ALTER DATABASE OPEN RESETLOGS;" command
- Wait a couple of minutes for Oracle to sort itself out
- Do a "SHUTDOWN NORMAL"
- Remove the above parameter!
- Do a database "STARTUP" and check your ALERT.LOG file for errors
- Extract the data and rebuild the entire database

User managed backup and recovery

This section deals with user managed, or non-RMAN, backups.

How does one do off-line database backups?

Shut down the database from sqlplus or server manager. Back up all files to secondary storage (e.g. tapes). Ensure that you backup all data files, all control files and all log files. When completed, restart your database. Do the following queries to get a list of all files that need to be backed up:

select name from sys.v_$datafile;
select member from sys.v_$logfile;
select name from sys.v_$controlfile;

Sometimes Oracle takes forever to shut down with the "immediate" option.
As a workaround to this problem, shut down using these commands:

alter system checkpoint;
shutdown abort
startup restrict
shutdown immediate

Note that if your database is in ARCHIVELOG mode, one can still use archived log files to roll forward from an off-line backup. If you cannot take your database down for a cold (off-line) backup at a convenient time, switch your database into ARCHIVELOG mode and perform hot (on-line) backups.

How does one do on-line database backups?

Each tablespace that needs to be backed up must be switched into backup mode before copying the files out to secondary storage (tapes). Look at this simple example:

ALTER TABLESPACE xyz BEGIN BACKUP;
! cp xyzFile1 /backupDir/
ALTER TABLESPACE xyz END BACKUP;

It is better to back up one tablespace at a time than to put all tablespaces in backup mode at once. Backing them up separately incurs less overhead. When done, remember to backup your control files. Look at this example:

ALTER SYSTEM SWITCH LOGFILE; -- Force log switch to update control file headers
ALTER DATABASE BACKUP CONTROLFILE TO '/backupDir/control.dbf';

NOTE: Do not run on-line backups during peak processing periods. Oracle will write complete database blocks instead of the normal deltas to redo log files while in backup mode. This will lead to excessive database archiving and even database freezes.

My database was terminated while in BACKUP MODE, do I need to recover?

If a database was terminated while one of its tablespaces was in BACKUP MODE (ALTER TABLESPACE xyz BEGIN BACKUP;), it will tell you that media recovery is required when you try to restart the database. The DBA is then required to recover the database and apply all archived logs to the database. However, from Oracle 7.2, one can simply take the individual datafiles out of backup mode and restart the database:

ALTER DATABASE DATAFILE '/path/filename' END BACKUP;

One can select from V$BACKUP to see which datafiles are in backup mode. This normally saves a significant amount of database down time. See script end_backup2.sql in the Scripts section of this site.
From Oracle9i onwards, the following command can be used to take all of the datafiles out of hot backup mode:

ALTER DATABASE END BACKUP;

This command must be issued when the database is mounted, but not yet opened.

Does Oracle write to data files in begin/hot backup mode?

When a tablespace is in backup mode, Oracle will stop updating its file headers, but will continue to write to the data files. When in backup mode, Oracle will write complete changed blocks to the redo log files. Normally only deltas (change vectors) are logged to the redo logs. This is done to enable reconstruction of a block if only half of it was backed up (split blocks). Because of this, one should notice increased log activity and archiving during on-line backups. To solve this problem, simply switch to RMAN backups.

RMAN backup and recovery

This section deals with RMAN backups:

What is RMAN and how does one use it?

Recovery Manager (or RMAN) is an Oracle-provided utility for backing up, restoring and recovering Oracle databases. RMAN ships with the database server and doesn't require a separate installation. The RMAN executable is located in your ORACLE_HOME/bin directory.

In fact, RMAN is just a Pro*C application that translates commands to a PL/SQL interface. The PL/SQL calls are statically linked into the Oracle kernel, and do not require the database to be opened (mapped from the ?/rdbms/admin/recover.bsq file).

RMAN can do off-line and on-line database backups. It cannot, however, write directly to tape, but various third-party tools (like Veritas, Omniback, etc.) can integrate with RMAN to handle tape library management.

RMAN can be operated from Oracle Enterprise Manager, or from the command line. Here are the command line arguments:

Argument  Value          Description
-----------------------------------------------------------------------------
target    quoted-string  connect-string for target database
catalog   quoted-string  connect-string for recovery catalog
nocatalog none           if specified, then no recovery catalog
cmdfile   quoted-string  name of input command file
log       quoted-string  name of output message log file
trace     quoted-string  name of output debugging message log file
append    none           if specified, log is opened in append mode
debug     optional-args  activate debugging
msgno     none           show RMAN-nnnn prefix for all messages
send      quoted-string  send a command to the media manager
pipe      string         building block for pipe names
timeout   integer        number of seconds to wait for pipe input
-----------------------------------------------------------------------------

Here is an example:

[oracle@localhost oracle]$ rman
Recovery Manager: Release 10.1.0.2.0 - Production
Copyright (c) 1995, 2004, Oracle. All rights reserved.
RMAN> connect target;
connected to target database: ORCL (DBID=1058957020)
RMAN> backup database;
...

How does one backup and restore a database using RMAN?
The biggest advantage of RMAN is that it backs up only used space in the database. RMAN doesn't put tablespaces in backup mode, saving on redo generation overhead. RMAN will re-read database blocks until it gets a consistent image. Look at this simple backup example:

rman target sys/*** nocatalog
run {
  allocate channel t1 type disk;
  backup format '/app/oracle/backup/%d_t%t_s%s_p%p'

    (database);
  release channel t1;
}

Example RMAN restore:

rman target sys/*** nocatalog
run {
  allocate channel t1 type disk;
  # set until time 'Aug 07 2000 :51';
  restore tablespace users;
  recover tablespace users;
  release channel t1;
}

The examples above are extremely simplistic and only useful for illustrating basic concepts. By default Oracle uses the database control files to store information about backups. Normally one would rather set up an RMAN catalog database to store RMAN metadata in. Read the Oracle Backup and Recovery Guide before implementing any RMAN backups.

Note: RMAN cannot write image copies directly to tape. One needs to use a third-party media manager that integrates with RMAN to backup directly to tape. Alternatively one can backup to disk and then manually copy the backups to tape.

How does one backup and restore archived log files?

One can backup archived log files using RMAN or any operating system backup utility. Remember to delete files after backing them up to prevent the archive log directory from filling up. If the archive log directory becomes full, your database will hang! Look at this simple RMAN backup script:

RMAN> run {
2> allocate channel dev1 type disk;
3> backup
4> format '/app/oracle/archback/log_%t_%sp%p'
5> (archivelog all delete input);
6> release channel dev1;
7> }

The "delete input" clause will delete the archived logs as they are backed up. List all archivelog backups for the past 24 hours:

RMAN> LIST BACKUP OF ARCHIVELOG FROM TIME 'sysdate-1';

Here is a restore example:

RMAN> run {
2> allocate channel dev1 type disk;
3> restore (archivelog low logseq 78311 high logseq 78340 thread 1 all);
4> release channel dev1;
5> }
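The "delete input" behaviour, back up each archived log and then remove the original so the archive destination cannot fill up, can be simulated outside RMAN. A hypothetical Python sketch (plain file copies, not real RMAN backup pieces; the .arc suffix and directory names are assumptions):

```python
import shutil
from pathlib import Path

def backup_archivelogs(arch_dir: Path, backup_dir: Path) -> int:
    """Copy every archived log to backup_dir, then delete the original --
    the same effect as RMAN's 'archivelog all delete input' clause."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    count = 0
    for log in sorted(arch_dir.glob("*.arc")):
        shutil.copy2(log, backup_dir / log.name)  # back up first...
        log.unlink()                              # ...then delete the input
        count += 1
    return count
```

Deleting only after a successful copy mirrors the safe ordering: if the backup step fails, the log stays in the archive destination.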

How does one create an RMAN recovery catalog?

Start by creating a database schema (usually called rman). Assign an appropriate tablespace to it and grant it the recovery_catalog_owner role. Look at this example:

sqlplus sys
SQL> create user rman identified by rman;
SQL> alter user rman default tablespace tools temporary tablespace temp;
SQL> alter user rman quota unlimited on tools;
SQL> grant connect, resource, recovery_catalog_owner to rman;
SQL> exit;

Next, log in to rman and create the catalog schema. Prior to Oracle 8i this was done by running the catrman.sql script.

rman catalog rman/rman
RMAN> create catalog tablespace tools;
RMAN> exit;

You can now continue by registering your databases in the catalog. Look at this example:

rman catalog rman/rman target backdba/backdba
RMAN> register database;

One can also use the "upgrade catalog;" command to upgrade to a new RMAN release, or the "drop catalog;" command to remove an RMAN catalog. These commands need to be entered twice to confirm the operation.

How does one integrate RMAN with third-party Media Managers?

The following media management software vendors have integrated their media management software with RMAN (Oracle Recovery Manager):

- Veritas NetBackup - http://www.veritas.com/
- EMC Data Manager (EDM) - http://www.emc.com/
- HP OMNIBack/DataProtector - http://www.hp.com/
- IBM's Tivoli Storage Manager (formerly ADSM) - http://www.tivoli.com/storage/
- EMC Networker - http://www.emc.com/
- BrightStor ARCserve Backup - http://www.ca.com/us/data-loss-prevention.aspx
- Sterling Software's SAMS:Alexandria (formerly from Spectralogic) - http://www.sterling.com/sams/
- SUN's Solstice Backup - http://www.sun.com/software/whitepapers/backup-n-storage/
- CommVault Galaxy - http://www.commvault.com/
- etc...

The above media management vendors will provide first-line technical support (and installation guides) for their respective products. A complete list of supported media management vendors can be found at: http://www.oracle.com/technology/deploy/availability/htdocs/bsp.htm

When allocating channels one can specify media-management-specific parameters. Here are some examples:

Netbackup on Solaris:

allocate channel t1 type 'SBT_TAPE'
  PARMS='SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so.1';

Netbackup on Windows:

allocate channel t1 type 'SBT_TAPE'
  send "NB_ORA_CLIENT=client_machine_name";

Omniback/DataProtector on HP-UX:

allocate channel t1 type 'SBT_TAPE'
  PARMS='SBT_LIBRARY=/opt/omni/lib/libob2oracle8_64bit.sl';

or:

allocate channel 'dev_1' type 'sbt_tape'
  parms 'ENV=OB2BARTYPE=Oracle8,OB2APPNAME=orcl,OB2BARLIST=machinename_orcl_archlogs)';

How does one clone/duplicate a database with RMAN?

The first step to clone or duplicate a database with RMAN is to create a new INIT.ORA and password file (use the orapwd utility) on the machine you need to clone the database to. Review all parameters and make the required changes. For example, set the DB_NAME parameter to the new database's name.

Secondly, you need to change your environment variables, and do a STARTUP NOMOUNT from sqlplus. This database is referred to as the AUXILIARY in the script below.

Lastly, write an RMAN script like this to do the cloning, and call it with "rman cmdfile dupdb.rcv":

connect target sys/secure@origdb
connect catalog rman/rman@catdb
connect auxiliary /
run {
  set newname for datafile 1 to '/ORADATA/u01/system01.dbf';
  set newname for datafile 2 to '/ORADATA/u02/undotbs01.dbf';
  set newname for datafile 3 to '/ORADATA/u03/users01.dbf';
  set newname for datafile 4 to '/ORADATA/u03/indx01.dbf';
  set newname for datafile 5 to '/ORADATA/u02/example01.dbf';
  allocate auxiliary channel dupdb1 type disk;
  set until sequence 2 thread 1;
  duplicate target database to dupdb
    logfile
      GROUP 1 ('/ORADATA/u02/redo01.log') SIZE 200k REUSE,
      GROUP 2 ('/ORADATA/u03/redo02.log') SIZE 200k REUSE;
}

The above script will connect to the "target" (the database that will be cloned), the recovery catalog (to get backup information), and the auxiliary database (the new duplicate DB). Previous backups will be restored and the database recovered to the "set until time" specified in the script.

Notes: the "set newname" commands are only required if your datafile names will be different from the target database.
The newly cloned DB will have its own unique DBID.

Can one restore RMAN backups without a CONTROLFILE and RECOVERY CATALOG?

Details of RMAN backups are stored in the database control files and, optionally, a Recovery Catalog. If both of these are gone, RMAN cannot restore the database. In such a situation one must extract a control file (or other files) from the backup pieces written out when the last backup was taken. Let's look at an example:

Let's take a backup (a partial one in our case, for illustrative purposes):

$ rman target / nocatalog

Recovery Manager: Release 10.1.0.2.0 - 64bit Production
Copyright (c) 1995, 2004, Oracle. All rights reserved.
connected to target database: ORCL (DBID=1046662649)
using target database controlfile instead of recovery catalog

RMAN> backup datafile 1;

Starting backup at 20-AUG-04
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=146 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=/oradata/orcl/system01.dbf
channel ORA_DISK_1: starting piece 1 at 20-AUG-04
channel ORA_DISK_1: finished piece 1 at 20-AUG-04
piece handle=/flash_recovery_area/ORCL/backupset/2004_08_20/o1_mf_nnndf_TAG20040820T153256_0lczd9tf_.bkp comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including current controlfile in backupset
including current SPFILE in backupset
channel ORA_DISK_1: starting piece 1 at 20-AUG-04
channel ORA_DISK_1: finished piece 1 at 20-AUG-04
piece handle=/flash_recovery_area/ORCL/backupset/2004_08_20/o1_mf_ncsnf_TAG20040820T153256_0lczfrx8_.bkp comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:04
Finished backup at 20-AUG-04

Now, let's destroy one of the control files:

SQL> show parameters CONTROL_FILES

NAME            TYPE    VALUE
--------------- ------- ------------------------------
control_files   string  /oradata/orcl/control01.ctl,
                        /oradata/orcl/control02.ctl,
                        /oradata/orcl/control03.ctl

SQL> shutdown abort;
ORACLE instance shut down.
SQL> ! mv /oradata/orcl/control01.ctl /tmp/control01.ctl

Now, let's see if we can restore it. First we need to start the database in NOMOUNT mode:

SQL> startup NOMOUNT
ORACLE instance started.

Total System Global Area  289406976 bytes
Fixed Size                  1301536 bytes
Variable Size             262677472 bytes
Database Buffers           25165824 bytes
Redo Buffers                 262144 bytes

Now, from SQL*Plus, run the following PL/SQL block to restore the file:

DECLARE
  v_devtype   VARCHAR2(100);
  v_done      BOOLEAN;
  v_maxPieces NUMBER;
  TYPE t_pieceName IS TABLE OF VARCHAR2(255) INDEX BY BINARY_INTEGER;
  v_pieceName t_pieceName;
BEGIN
  -- Define the backup pieces... (names from the RMAN log file)
  v_pieceName(1) := '/flash_recovery_area/ORCL/backupset/2004_08_20/o1_mf_ncsnf_TAG20040820T153256_0lczfrx8_.bkp';
  v_pieceName(2) := '/flash_recovery_area/ORCL/backupset/2004_08_20/o1_mf_nnndf_TAG20040820T153256_0lczd9tf_.bkp';
  v_maxPieces := 2;
  -- Allocate a channel... (use type=>NULL for DISK, type=>'sbt_tape' for TAPE)
  v_devtype := DBMS_BACKUP_RESTORE.deviceAllocate(type=>NULL, ident=>'d1');
  -- Restore the first control file...
  DBMS_BACKUP_RESTORE.restoreSetDataFile;
  -- CFNAME must be the exact path and filename of a controlfile that was backed up
  DBMS_BACKUP_RESTORE.restoreControlFileTo(cfname=>'/app/oracle/oradata/orcl/control01.ctl');
  dbms_output.put_line('Start restoring '||v_maxPieces||' pieces.');
  FOR i IN 1..v_maxPieces LOOP
    dbms_output.put_line('Restoring from piece '||v_pieceName(i));
    DBMS_BACKUP_RESTORE.restoreBackupPiece(handle=>v_pieceName(i), done=>v_done, params=>NULL);
    EXIT WHEN v_done;
  END LOOP;
  -- Deallocate the channel...
  DBMS_BACKUP_RESTORE.deviceDeAllocate('d1');
EXCEPTION
  WHEN OTHERS THEN
    DBMS_BACKUP_RESTORE.deviceDeAllocate;
    RAISE;
END;
/

Let's see if the controlfile was restored:

SQL> ! ls -l /oradata/orcl/control01.ctl
-rw-r----- 1 oracle dba 3096576 Aug 20 16:45 /oradata/orcl/control01.ctl

We should now be able to MOUNT the database and continue recovery...

SQL> ! cp /oradata/orcl/control01.ctl /oradata/orcl/control02.ctl

SQL> ! cp /oradata/orcl/control01.ctl /oradata/orcl/control03.ctl

SQL> alter database mount;
Database altered.

SQL> recover database using backup controlfile;
ORA-00279: change 7917452 generated at 08/20/2004 16:40:59 needed for thread 1
ORA-00289: suggestion : /flash_recovery_area/ORCL/archivelog/2004_08_20/o1_mf_1_671_%u_.arc
ORA-00280: change 7917452 for thread 1 is in sequence #671

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/oradata/orcl/redo02.log
Log applied.
Media recovery complete.

SQL> alter database open resetlogs;
Database altered.

Oracle Database Performance Tuning FAQ

Why and when should one tune?

One of the biggest responsibilities of a DBA is to ensure that the Oracle database is tuned properly. The Oracle RDBMS is highly tunable and allows the database to be monitored and adjusted to increase its performance. One should do performance tuning for the following reasons:

The speed of computing might be wasting valuable human time (users waiting for response);
Enable your system to keep up with the speed at which business is conducted; and
Optimize hardware usage to save money (companies are spending millions on hardware).

Although this site is not overly concerned with hardware issues, one needs to remember that you cannot tune a Buick into a Ferrari.

Where should the tuning effort be directed?

Consider the following areas for tuning. The order in which the steps are listed should be maintained to prevent tuning side effects. For example, it is no good increasing the buffer cache if you can reduce I/O by rewriting a SQL statement.

Database Design (if it's not too late): Poor system performance usually results from a poor database design. One should generally normalize to the 3NF. Selective denormalization can provide valuable performance improvements. When designing, always keep the "data access path" in mind. Also look at proper data partitioning, data replication, aggregation tables for decision support systems, etc.
Application Tuning: Experience shows that approximately 80% of all Oracle system performance problems are resolved by coding optimal SQL. Also consider proper scheduling of batch tasks after peak working hours.

Memory Tuning: Properly size your database buffers (shared pool, buffer cache, log buffer, etc.) by looking at your wait events, buffer hit ratios, system swapping and paging, etc. You may also want to pin large objects into memory to prevent frequent reloads.

Disk I/O Tuning: Database files need to be properly sized and placed to provide maximum disk subsystem throughput. Also look for frequent disk sorts, full table scans, missing indexes, row chaining, data fragmentation, etc.

Eliminate Database Contention: Study database locks, latches and wait events carefully and eliminate them where possible.

Tune the Operating System: Monitor and tune operating system CPU, I/O and memory utilization. For more information, read the related Oracle FAQ dealing with your specific operating system.

What tools/utilities does Oracle provide to assist with performance tuning?

Oracle provides the following tools/utilities to assist with performance monitoring and tuning:
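For example, large PL/SQL packages can be pinned in the shared pool with the DBMS_SHARED_POOL package (the object named below is only an example of something you might pin):

```sql
-- DBMS_SHARED_POOL is created by $ORACLE_HOME/rdbms/admin/dbmspool.sql
-- 'P' keeps a package or procedure; other flags cover sequences, triggers, etc.
EXEC DBMS_SHARED_POOL.KEEP('SYS.STANDARD', 'P');
```

Pinned objects stay in the shared pool until the instance is restarted or they are explicitly unkept with DBMS_SHARED_POOL.UNKEEP.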

ADDM (Automated Database Diagnostics Monitor) - introduced in Oracle 10g
TKProf
Statspack
Oracle Enterprise Manager - Tuning Pack (cost option)
Old UTLBSTAT.SQL and UTLESTAT.SQL - begin and end stats monitoring

When is cost based optimization triggered?

It's important to have statistics on all tables for the CBO (Cost Based Optimizer) to work correctly. If one table involved in a statement does not have statistics, and optimizer dynamic sampling isn't performed, Oracle has to revert to rule-based optimization for that statement. So you really want all tables to have statistics right away; it won't help much to just have the larger tables analyzed.

Generally, the CBO can change the execution plan when you:

Change statistics of objects by doing an ANALYZE;
Change some initialization parameters (for example: hash_join_enabled, sort_area_size, db_file_multiblock_read_count).

How can one optimize %XYZ% queries?

It is possible to improve %XYZ% (wildcard search) queries by forcing the optimizer to scan all the entries from the index instead of the table. This can be done by specifying hints. If the index is physically smaller than the table (which is usually the case) it will take less time to scan the entire index than to scan the entire table.

Where can one find I/O statistics per table?

The STATSPACK and UTLESTAT reports show I/O per tablespace. However, they do not show which tables in the tablespace have the most I/O operations.
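As a sketch of the hint approach (table, column and index names below are hypothetical), an INDEX_FFS hint asks the optimizer to satisfy the wildcard filter with a fast full scan of the index rather than a full table scan:

```sql
-- Assumes an index emp_ename_idx exists on emp(ename) and the query
-- can be answered from the index alone
SELECT /*+ INDEX_FFS(e emp_ename_idx) */ ename
FROM   emp e
WHERE  ename LIKE '%SON%';
```

Verify with an explain plan that the hint actually changed the access path; the optimizer is free to ignore hints it cannot honour.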

The $ORACLE_HOME/rdbms/admin/catio.sql script creates a sample_io procedure and table to gather the required information. After executing the procedure, one can do a simple SELECT * FROM io_per_object; to extract the required information. For more details, look at the header comments in the catio.sql script.

My query was fine last week and now it is slow. Why?

The likely cause of this is that the execution plan has changed. Generate a current explain plan of the offending query and compare it to a previous one that was taken when the query was performing well. Usually the previous plan is not available.

Some factors that can cause a plan to change are:

Which tables are currently analyzed? Were they previously analyzed? (i.e. was the query using RBO and now CBO?)
Has OPTIMIZER_MODE been changed in INIT<SID>.ORA?
Has the DEGREE of parallelism been defined/changed on any table?
Have the tables been re-analyzed? Were the tables analyzed using estimate or compute? If estimate, what percentage was used?
Have the statistics changed?
Has the SPFILE/INIT<SID>.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT been changed?
Has the INIT<SID>.ORA parameter SORT_AREA_SIZE been changed?
Have any other INIT<SID>.ORA parameters been changed?
What do you think the plan should be? Run the query with hints to see if this produces the required performance.

It can also happen because of a very high high-water mark, typically when a table used to be big but now only contains a couple of records. Oracle still needs to scan through all the blocks to see if they contain data.

Does Oracle use my index or not?

One can use the index monitoring feature to check if indexes are used by an application or not. When the MONITORING USAGE property is set for an index, one can query v$object_usage to see if the index is being used or not. Here is an example:

SQL> CREATE TABLE t1 (c1 NUMBER);
Table created.

SQL> CREATE INDEX t1_idx ON t1(c1);
Index created.
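A current plan for the offending query can be captured with EXPLAIN PLAN and formatted with DBMS_XPLAN (available from Oracle 9i onwards; the query below is just a placeholder):

```sql
EXPLAIN PLAN FOR
  SELECT * FROM emp WHERE empno = 7839;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Keep a copy of the plan output whenever a critical query is performing well, so there is something to compare against when it is not.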
SQL> ALTER INDEX t1_idx MONITORING USAGE;
Index altered.

Note: this view should be consulted as the owner of the object of interest (e.g. SYSTEM will mostly see an empty view).

SQL> SELECT table_name, index_name, monitoring, used FROM v$object_usage;

TABLE_NAME  INDEX_NAME  MON USE
----------- ----------- --- ---
T1          T1_IDX      YES NO

SQL> SELECT * FROM t1 WHERE c1 = 1;
no rows selected

SQL> SELECT table_name, index_name, monitoring, used FROM v$object_usage;

TABLE_NAME  INDEX_NAME  MON USE
----------- ----------- --- ---
T1          T1_IDX      YES YES

To reset the values in the v$object_usage view, disable index monitoring and re-enable it:

ALTER INDEX indexname NOMONITORING USAGE;
ALTER INDEX indexname MONITORING USAGE;

Why is Oracle not using the darn index?

This problem normally only arises when the query plan is being generated by the Cost Based Optimizer (CBO). The usual cause is that the CBO calculates that executing a full table scan would be faster than accessing the table via the index. Fundamental things that can be checked are:

USER_TAB_COLUMNS.NUM_DISTINCT - This column defines the number of distinct values the column holds.
USER_TABLES.NUM_ROWS - If NUM_DISTINCT = NUM_ROWS then using an index would be preferable to doing a FULL TABLE SCAN. As NUM_DISTINCT decreases, the cost of using an index increases, making the index less desirable.
USER_INDEXES.CLUSTERING_FACTOR - This defines how ordered the rows are in the index. If CLUSTERING_FACTOR approaches the number of blocks in the table, the rows are ordered. If it approaches the number of rows in the table, the rows are randomly ordered. In such a case, it is unlikely that index entries in the same leaf block will point to rows in the same data blocks.
Decrease the INIT<SID>.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT - A higher value will make the cost of a FULL TABLE SCAN cheaper.

Remember that you MUST supply the leading column of an index for the index to be used (unless you use a FAST FULL SCAN or SKIP SCANNING).

There are many other factors that affect the cost, but sometimes the above can help to show why an index is not being used by the CBO. If, from checking the above, you still feel that the query should be using an index, try specifying an index hint.
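The statistics mentioned above can be inspected side by side with a query along these lines (the table, column and index names are placeholders):

```sql
SELECT t.num_rows,
       c.num_distinct,
       i.clustering_factor
FROM   user_tables      t,
       user_tab_columns c,
       user_indexes     i
WHERE  t.table_name  = 'EMP'
AND    c.table_name  = t.table_name
AND    c.column_name = 'ENAME'
AND    i.table_name  = t.table_name
AND    i.index_name  = 'EMP_ENAME_IDX';
```

If NUM_ROWS or NUM_DISTINCT is NULL, the objects have not been analyzed and the CBO has nothing to work with.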
Obtain an explain plan of the query, either using TKPROF with TIMED_STATISTICS so that one can see the CPU utilization, or with AUTOTRACE to see the statistics. Compare this to the explain plan when not using an index.

When should one rebuild an index?

You can run the ANALYZE INDEX <index> VALIDATE STRUCTURE command on the affected indexes - each invocation of this command creates a single row in the INDEX_STATS view. This row is overwritten by the next ANALYZE INDEX command, so copy the contents of the view into a local table after each ANALYZE. The 'badness' of the index can then be judged by the ratio of DEL_LF_ROWS to LF_ROWS. For example, you may decide that an index should be rebuilt if more than 20% of its rows are deleted:

select del_lf_rows * 100 / decode(lf_rows, 0, 1, lf_rows)
from   index_stats
where  name = 'index_name';
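The copy-after-each-ANALYZE step mentioned above can be sketched as follows (the index and history table names are hypothetical):

```sql
ANALYZE INDEX emp_ename_idx VALIDATE STRUCTURE;

-- INDEX_STATS holds a single row that the next ANALYZE overwrites,
-- so preserve each snapshot in a local history table
CREATE TABLE index_stats_hist AS SELECT * FROM index_stats WHERE 1 = 0;
INSERT INTO index_stats_hist SELECT * FROM index_stats;
```

Note that ANALYZE ... VALIDATE STRUCTURE locks the index against DML while it runs, so schedule it outside busy periods.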

db file sequential read: Tune SQL to do less I/O. Make sure all objects are analyzed. Redistribute I/O across disks.
buffer busy waits: Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i). Analyze contention from SYS.V$BH ("buffer busy waits" was replaced with "read by other session" in Oracle 10g).
log buffer space: Increase the LOG_BUFFER parameter or move log files to faster disks.
log file sync: If this event is in the top 5, you are committing too often (talk to your developers).
log file parallel write: Deals with flushing the redo log buffer out to disk. Your disks may be too slow or you have an I/O bottleneck.

Two useful sections in Oracle's Database Performance Tuning Guide:

Table of Wait Events and Potential Causes
Wait Events Statistics
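To see which of these events actually matter on your system, a query like the following lists the top five by time waited (a sketch; you may want to filter out more idle events than just the SQL*Net ones):

```sql
SELECT *
FROM  (SELECT event, total_waits, time_waited
       FROM   v$system_event
       WHERE  event NOT LIKE 'SQL*Net%'
       ORDER  BY time_waited DESC)
WHERE rownum <= 5;
```

These numbers are cumulative since instance startup, so compare two snapshots taken some time apart rather than reading a single result in isolation.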


What is the difference between db file sequential and scattered reads?

Both "db file sequential read" and "db file scattered read" events signify time waited for I/O read requests to complete. Time is reported in 100ths of a second for Oracle 8i releases and below, and in 1000ths of a second for Oracle 9i and above. Most people confuse these events with each other because they think of how data is read from disk. Instead, they should think of how data is read into the SGA buffer cache.

db file sequential read: A sequential read operation reads data into contiguous memory (usually a single-block read with p3=1, but it can be multiple blocks). Single-block I/Os are usually the result of using indexes. This event is also used for rebuilding the controlfile and reading datafile headers (P2=1). In general, this event is indicative of disk contention on index reads.

db file scattered read: Similar to db file sequential reads, except that the session is reading multiple data blocks and scattering them into different (discontinuous) buffers in the SGA. This statistic NORMALLY indicates disk contention on full table scans. Rarely, data from full table scans could fit into a contiguous buffer area; those waits would then show up as sequential reads instead of scattered reads.

The following query shows the average wait time for sequential versus scattered reads:

prompt "AVERAGE WAIT TIME FOR READ REQUESTS"
select a.average_wait "SEQ READ", b.average_wait "SCAT READ"
from   sys.v_$system_event a, sys.v_$system_event b
where  a.event = 'db file sequential read'
and    b.event = 'db file scattered read';

How does one tune the Redo Log Buffer?

The size of the redo log buffer is determined by the LOG_BUFFER parameter in your SPFILE/INIT.ORA file. The default setting is normally 512 KB or (128 KB * CPU_COUNT), whichever is greater. This is a static parameter, so its size cannot be modified after instance startup.
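To see which sessions are waiting on these two read events right now, one can query V$SESSION_WAIT (a sketch; the parameter meanings shown hold for these events):

```sql
-- p1 = file#, p2 = block#, p3 = number of blocks read
SELECT sid, event, p1 AS file#, p2 AS block#, p3 AS blocks
FROM   v$session_wait
WHERE  event IN ('db file sequential read', 'db file scattered read');
```

The file#/block# pair can then be mapped back to a segment via DBA_EXTENTS to find the object being read.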
SQL> show parameters log_buffer

NAME        TYPE     VALUE
----------- -------- -------
log_buffer  integer  262144

When a transaction is committed, information in the redo log buffer is written to a redo log file. In addition to this, the following conditions will trigger LGWR to write the contents of the log buffer to disk:

Whenever the log buffer is MIN(1/3 full, 1 MB) full; or
Every 3 seconds; or
When a DBWn process writes modified buffers to disk (checkpoint).

Larger LOG_BUFFER values reduce log file I/O, but may increase the time OLTP users have to wait for write operations to complete. In general, values between the default and 1 to 3 MB are optimal. However, you may want to make it bigger to accommodate bulk data loading, or a system with fast CPUs and slow disks. Nevertheless, if you set this parameter to a value beyond 10 MB, you should think twice about what you are doing.

SQL> SELECT name, value
  2  FROM   SYS.v_$sysstat
  3  WHERE  name IN ('redo buffer allocation retries',
  4                  'redo log space wait time');

NAME                                 VALUE
------------------------------------ -----
redo buffer allocation retries           3
redo log space wait time                 0

The statistic "redo buffer allocation retries" shows the number of times a user process waited for space in the redo log buffer. This value is cumulative, so monitor it over a period of time while your application is running. If this value is continuously increasing, consider increasing your LOG_BUFFER (but only if you do not see checkpointing and archiving problems).

"redo log space wait time" shows the cumulative time (in 10s of milliseconds) waited by all processes waiting for space in the log buffer. If this value is low, your log buffer size is most likely adequate.
