

Oracle Server Architecture
The Oracle server consists of two entities: an Oracle instance and an Oracle database. An Oracle instance is a set of Oracle background processes (on UNIX) or a single multithreaded process (on Windows), plus a shared memory area that those processes or threads, running on a single computer, share. An instance can exist without any disk storage whatsoever. An Oracle database is a collection of physical operating system files, or disks in the case of Automatic Storage Management (ASM) or raw partitions. The relationship between database and instance is that a database may be mounted (attached) and opened by many instances, while an instance may mount and open at most one database in its entire lifetime. The following figure depicts this graphically.

01-ORACLE Server Architecture

Let's check this out practically. We did a software-only installation of Oracle 10g, i.e., without a starter database.

1. Using the ps (process status) command, we can see all processes being run by the user ora10g (the Oracle software owner). There are no Oracle database processes whatsoever at this point.

2. Check inter-process communication resources, such as shared memory segments and semaphores (e.g., with the ipcs command).

Currently there are none in use on this system.

3. Now start up SQL*Plus and connect AS SYSDBA.

4. Again check the process status and inter-process communication resources. Our instance right now consists solely of the Oracle server process (oracleora10g). There is no shared memory allocated yet and no other processes.

5. Let's try to start the instance now.

The parameter file (initora10g.ora) is the sole file needed to start the instance. Create the parameter file and put into it the minimal information needed to actually start a database instance, i.e., the database name:

db_name = db

6. Once again, start the database in NOMOUNT state:
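Assuming the instance SID is ora10g and the parameter file above is in place, the startup might look like the following sketch (prompts and memory figures will vary by installation):

```sql
-- Connect as SYSDBA, e.g. from the shell:  sqlplus / as sysdba
-- Start only the instance; no database files are touched
SQL> startup nomount
-- The SGA is now allocated and the background processes are running,
-- but no control file, datafile, or redo log file exists yet.
```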

7. Now we have what I would call an instance. Through ps we can see the background processes; additionally, ipcs shows the allocated shared memory.

8. Up to this point, we don't have a database yet. We have the name of a database (in the parameter file we created), but no database whatsoever. Let's create it.

9. We can use a simple query against some Oracle dynamic views, specifically V$DATAFILE, V$LOGFILE, and V$CONTROLFILE, to list the files that make up this database.
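A sketch of such a query, run from SQL*Plus as SYSDBA (the file paths returned will, of course, depend on your installation):

```sql
-- All physical files that make up this database
SELECT name   FROM v$datafile
UNION ALL
SELECT member FROM v$logfile
UNION ALL
SELECT name   FROM v$controlfile;
```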

10. If we close this database and try to open it again, we'll discover that we can't. That's because an instance can mount and open at most one database in its life; we must discard this instance and create a new one in order to open this or any other database.

Oracle Instance
When an Oracle instance is started on a database server, the Oracle software allocates a shared memory area called the System Global Area (SGA) and a non-shared memory area called the Program Global Area (PGA), and starts several Oracle background processes. The following figure depicts the overall architecture of the Oracle instance.

01- Oracle Instance

System Global Area


The SGA contains data and control information for one Oracle database instance. Oracle automatically allocates memory for the SGA when you start an instance, and the operating system reclaims the memory when you shut the instance down. If multiple users are concurrently connected to the same instance, the data in the instance's SGA is shared among them; consequently, the SGA is sometimes called the shared global area.

All SGA components allocate and de-allocate space in units of granules. Granule size is determined by total SGA size: a single granule is an area of memory of either 4MB, 8MB, or 16MB. The granule is the smallest unit of allocation, so if you ask for a Java pool of 5MB and your granule size is 4MB, Oracle will actually allocate 8MB to the Java pool (8 being the smallest multiple of the granule size of 4 that is greater than or equal to 5).

In Oracle Database 10g, the Automatic Shared Memory Management feature simplifies SGA memory management significantly. A DBA can simply specify the total amount of SGA memory available to an instance using the SGA_TARGET initialization parameter, and the Oracle Database will automatically distribute this memory among the various subcomponents to ensure the most effective memory utilization. If SGA_TARGET is set to a value greater than SGA_MAX_SIZE at startup, then the latter is bumped up to accommodate SGA_TARGET. Note that some SGA components, such as DB_KEEP_CACHE_SIZE, DB_RECYCLE_CACHE_SIZE, and DB_nK_CACHE_SIZE, are not automatically managed.

The SGA contains the following memory components:
1) Database Buffer Cache
2) Redo Log Buffer Cache
3) Shared Pool
4) Large Pool
5) Java Pool
6) Streams Pool
7) Fixed SGA

The following dynamic views can be used to investigate information about the SGA:
V$SGAINFO
V$SGA_DYNAMIC_COMPONENTS
V$SGA_RESIZE_OPS
V$SGA_TARGET_ADVICE
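For example, the views above can be queried to see the granule size and the current distribution of SGA memory (a sketch; component names and sizes will vary by version and configuration):

```sql
-- Overall SGA layout; one row reports the granule size for this instance
SELECT name, bytes FROM v$sgainfo;

-- Current size of each dynamically managed SGA component
SELECT component, current_size
FROM   v$sga_dynamic_components;
```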

The following figure depicts the internals of SGA.

01-System Global Area

1) Database Buffer Cache


The database buffer cache is the portion of the SGA that holds copies of data blocks read from datafiles. All user processes concurrently connected to the instance share access to the database buffer cache.

Oracle supports multiple block sizes in a database. You specify the standard block size by setting the initialization parameter DB_BLOCK_SIZE; the allowed values for DB_BLOCK_SIZE range from 2K to 32K. The sizes and numbers of non-standard block size buffers are specified by the following parameters:
DB_2K_CACHE_SIZE
DB_4K_CACHE_SIZE
DB_8K_CACHE_SIZE
DB_16K_CACHE_SIZE
DB_32K_CACHE_SIZE

You can configure the database buffer cache with separate buffer pools that either keep data in the buffer cache or make the buffers available for new data immediately after the data blocks are used. The DEFAULT (DB_BLOCK_SIZE) buffer pool contains data blocks from schema objects that are not assigned to any buffer pool, as well as schema objects that are explicitly assigned to the DEFAULT pool. The KEEP (DB_KEEP_CACHE_SIZE) buffer pool retains schema objects' data blocks in memory. The RECYCLE (DB_RECYCLE_CACHE_SIZE) buffer pool eliminates data blocks from memory as soon as they are no longer needed.

Note: Multiple buffer pools are only available for the standard block size; non-standard block size caches have a single DEFAULT pool.

The buffers in the cache are organized in two lists: the write list and the least recently used (LRU) list. The write list holds dirty buffers, which contain data that has been modified but has not yet been written to disk. The LRU list holds free buffers, pinned buffers, and dirty buffers that have not yet been moved to the write list. Free buffers do not contain any useful data and are available for use. Pinned buffers are currently being accessed. When an Oracle process accesses a buffer, the process moves the buffer to the most recently used (MRU) end of the LRU list. As more buffers are continually moved to the MRU end of the LRU list, dirty buffers age toward the LRU end of the LRU list.

When configuring a new instance, it is impossible to know the correct size for the buffer cache. Typically, a database administrator makes a first estimate for the cache size, then runs a representative workload on the instance and examines the relevant statistics to see whether the cache is under- or over-configured.
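As a sketch of how the extra pools are configured and used (the table name lookup_codes is hypothetical, and the sizes are arbitrary):

```sql
-- Carve out KEEP and RECYCLE pools alongside the DEFAULT pool
ALTER SYSTEM SET db_keep_cache_size = 16M;
ALTER SYSTEM SET db_recycle_cache_size = 16M;

-- Pin a small, frequently read table in the KEEP pool
ALTER TABLE lookup_codes STORAGE (BUFFER_POOL KEEP);
```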

2) Redo Log Buffer Cache


When a server process changes data in the database buffer cache (via an insert, a delete, or an update), it generates redo data, which is recorded in the redo log buffer. The log writer process (LGWR) writes redo information from the redo log buffer in memory to the redo log files on disk.

The initialization parameter LOG_BUFFER determines the size (in bytes) of the redo log buffer. In general, larger values reduce log file I/O, particularly if transactions are long or numerous. The default setting is either 512 kilobytes (KB) or 128 KB times the setting of the CPU_COUNT parameter, whichever is greater. The redo log buffer is a circular buffer: the log writer process writes the redo entries from the redo log buffer to the redo log files, and server processes then write new redo entries over the entries that have been written to the redo log files. You only need a small redo log buffer, about 1MB or so. A larger redo log buffer will reduce your log file I/O (especially if you have large or many transactions), but your commits will take longer as well.

The log writer process writes the contents of the redo log buffer to disk under any of the following circumstances:
Every three seconds
Whenever someone commits
When LGWR is asked to switch log files
When the redo buffer gets one-third full or contains 1MB of cached redo log data
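To see the configured buffer size and how hard LGWR is working, one might query the standard V$ views along these lines (a sketch; statistic names as in Oracle 10g):

```sql
-- Configured redo log buffer size, in bytes
SELECT value FROM v$parameter WHERE name = 'log_buffer';

-- Redo generated so far, and how often sessions had to wait
-- for space in the redo log buffer
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('redo size', 'redo log space requests');
```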

3) Shared Pool
The shared pool portion of the SGA contains the library cache, the dictionary cache, buffers for parallel execution messages, and control structures. The total size of the shared pool is determined by the initialization parameter SHARED_POOL_SIZE. The default value of this parameter is 8MB on 32-bit platforms and 64MB on 64-bit platforms.

Library Cache
The library cache includes the shared SQL areas, private SQL areas (in the case of a shared server configuration), PL/SQL procedures and packages, and control structures such as locks and library cache handles. A shared SQL area contains the parse tree and execution plan for a given SQL statement. Oracle saves memory by using one shared SQL area for SQL statements run multiple times, which often happens when many users run the same application. Oracle processes PL/SQL program units much the same way it processes individual SQL statements. Oracle allocates a private area to hold values specific to the session that runs the program unit, including local, global, and package variables (also known as package instantiation) and buffers for executing SQL. If more than one user runs the same program unit, then a single, shared area is used by all users, while each user maintains a separate copy of his or her private SQL area, holding values specific to his or her session.
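The effect of shared SQL areas is easy to sketch with bind variables (the EMP table from Oracle's demo SCOTT schema is assumed here):

```sql
-- Each distinct literal produces its own shared SQL area...
SELECT ename FROM emp WHERE empno = 7369;
SELECT ename FROM emp WHERE empno = 7499;

-- ...whereas a bind variable lets every execution reuse one
-- shared SQL area, no matter how many values are supplied
VARIABLE n NUMBER
EXEC :n := 7369
SELECT ename FROM emp WHERE empno = :n;
```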

Dictionary Cache
The data dictionary is a collection of database tables and views containing reference information about the database, its structures, and its users. The data dictionary cache is also known as the row cache because it holds data as rows instead of buffers (which hold entire blocks of data). In general, any item (shared SQL area or dictionary row) in the shared pool remains until it is flushed according to a modified LRU algorithm.

4) Large Pool
An optional memory area, the large pool, can be configured to provide large memory allocations for:
Session memory for the shared server and the Oracle XA interface (used where transactions interact with more than one database)
I/O server processes
Oracle backup and restore operations

The large pool does not have an LRU list. It is different from reserved space in the shared pool, which uses the same LRU list as other memory allocated from the shared pool.

5) Java Pool
Java pool memory is used in server memory for all session-specific Java code and data within the JVM. The Java pool is used in different ways, depending on the mode in which the Oracle server is running. In dedicated server mode, the Java pool includes the shared part of each Java class, which is actually used per session. These are basically the read-only parts (execution vectors, methods, etc.) and are about 4KB to 8KB per class. Thus, in dedicated server mode (which will most likely be the case for applications using purely Java stored procedures), the total memory required for the Java pool is quite modest and can be determined based on the number of Java classes you will be using. The parameter JAVA_POOL_SIZE is used to fix the amount of memory allocated to the Java pool for all session-specific Java code and data.

6) Streams Pool
The Streams pool is a new SGA structure starting in Oracle 10g. The Streams pool is used to buffer queue messages used by the Streams process as it is moving/copying data from one database to another. The Streams pool will only be important in systems using the Streams database feature. In those environments, it should be set in order to avoid stealing 10 percent of the Shared pool for this feature.

7) Fixed SGA
The fixed SGA contains a set of variables that point to the other components of the SGA; it is like a bootstrap section of the SGA. This part is fixed for each release of Oracle and can't be altered by any parameter settings. The request and response queues are buffer areas used by the dispatcher processes in shared server mode to hold requests from, and responses for, the user processes.

Program Global Area


A program global area (PGA) is a memory region that holds data and control information for the dedicated server process that Oracle creates for each individual user. Unlike the SGA, the PGA is for the exclusive use of each server process and can't be shared by multiple processes; consequently, the PGA is sometimes called the private global area. The total PGA memory allocated by all the server processes attached to an Oracle instance is referred to as the aggregated PGA memory of the instance. You use automatic PGA memory management by setting the PGA_AGGREGATE_TARGET parameter.

The content of the PGA memory varies, depending on whether the instance is running the shared server option, but generally it consists of: 1) Session Memory and 2) Private SQL Area.

The following dynamic views can be used to investigate information about the PGA:
V$PGASTAT
V$PROCESS (columns PGA_USED_MEM, PGA_ALLOC_MEM, PGA_MAX_MEM)
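For instance, a sketch of how these views might be queried (statistic names as in Oracle 10g):

```sql
-- Instance-wide PGA statistics
SELECT name, value
FROM   v$pgastat
WHERE  name IN ('aggregate PGA target parameter',
                'total PGA allocated');

-- PGA memory used by each server and background process
SELECT program, pga_used_mem, pga_alloc_mem, pga_max_mem
FROM   v$process;
```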


The following figure depicts the internals of PGA.

01-Program Global Area

1) Session Memory
Session memory is the memory allocated to hold a session's variables (logon information) and other information related to the session. For a shared server, the session memory is shared and not private.

2) Private SQL Area


A private SQL area contains data such as bind information and runtime memory structures. Each session that issues a SQL statement has a private SQL area. Each user that submits the same SQL statement has his own private SQL area that uses a single shared SQL area; thus, many private SQL areas can be associated with the same shared SQL area.

The location of a private SQL area depends on the type of connection established for a session. If a session is connected through a dedicated server, private SQL areas are located in the server process's PGA. However, if a session is connected through a shared server, part of the private SQL area is kept in the SGA. The private SQL area of a cursor (a handle or name for a private SQL area) is itself divided into two areas whose lifetimes are different:

Persistent Area
The persistent area contains SQL variable bind information. It is freed only when the cursor is closed.

Runtime Area
The runtime area is created for a user session when the session issues a SELECT, INSERT, UPDATE, or DELETE statement. After an INSERT, DELETE, or UPDATE statement is run, or after the output of a SELECT statement is fetched, the runtime area is freed by Oracle. For complex queries (for example, decision-support queries), a big portion of the runtime area is dedicated to work areas allocated by memory-intensive operators such as Sort, Hash-Join, Bitmap Merge, and Bitmap Create. Statistics on the allocation and use of work area memory can be viewed in the following dynamic performance views:
V$SQL_WORKAREA
V$SQL_WORKAREA_ACTIVE

Oracle Processes
A process is a "thread of control": a mechanism in an operating system that can run a series of steps. A process normally has its own private memory area in which it runs. Oracle is a multiple-process (multi-user) database system, which uses several processes to run different parts of the Oracle code and additional processes for the users: either one process for each connected user, or one or more processes shared by multiple users. The processes in an Oracle system can be categorized into two major groups:
1) User processes, which run the application or Oracle tool code.
2) Oracle processes, which run the Oracle database server code; they include server processes and background processes.
The process structure varies for different Oracle configurations, depending on the operating system and the choice of Oracle options. The code for connected users can be configured as a dedicated server or a shared server.

With the dedicated server configuration, Oracle creates a new dedicated process for each incoming connection: there is a one-to-one mapping between a connection to the database and a server process or thread. With the shared server configuration, Oracle uses a pool of shared processes for a large community of users, and processes called dispatchers broker the communication between the user processes and the shared server processes.

1) User Processes
When a user runs an application program (such as a Pro*C program) or an Oracle tool (such as Enterprise Manager or SQL*Plus), Oracle creates a user process to run the user's application. Two terms, connection and session, are closely related to the user process. A connection is a physical path from a client to an Oracle instance, established either over a network (Oracle Net Services, when different computers run the database application and Oracle and communicate through a network) or over an IPC mechanism (on a computer that runs both the user process and Oracle). A session, on the other hand, is a logical entity in the instance, where a user process can execute SQL and so on. Many independent sessions can be associated with a single connection, and these sessions can even exist independently of a connection. A connection may have zero, one, or more sessions established on it. We can use SQL*Plus to see connections and sessions in action.
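As a sketch, the following query shows your sessions and the server process each is attached to; two sessions sharing one connection (for example, the extra session SQL*Plus opens when AUTOTRACE is enabled) report the same PADDR value:

```sql
-- List my sessions and the address of the server process serving each
SELECT username, sid, serial#, server, paddr
FROM   v$session
WHERE  username = USER;
```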

2) Oracle Processes
The two types of processes that run the Oracle database server code are: Server processes and Background processes.

Server Processes
The server process is the process that services an individual user process. In some situations when the application and Oracle operate on the same computer, it is possible to combine the user process and corresponding server process into a single process to reduce system overhead. However, when the application and Oracle operate on different computers, a user process always communicates with Oracle through a separate server process. The most common configuration for the server process is to assign each user a dedicated server process. However, Oracle provides for a more sophisticated means of servicing several users through the same server process, called the shared server architecture. The following self-explanatory figures depict the dedicated and shared server architecture.


01-Dedicated Server

01-Shared Server

Background Processes
The background processes are the real workhorses of the Oracle instance: they enable large numbers of users to concurrently and efficiently use the information stored in the database. Each of the Oracle background processes is in charge of a separate task, thus increasing the efficiency of the database instance. These processes are automatically created by Oracle when you start the database instance, and they terminate when the database is shut down. An Oracle instance can have many background processes, and not all are always present. You can query the V$BGPROCESS view for more information on the background processes.

The background processes in an Oracle instance can include the following:
Database Writer Process (DBWn)
Log Writer Process (LGWR)
Checkpoint Process (CKPT)
System Monitor Process (SMON)
Process Monitor Process (PMON)
Recoverer Process (RECO)
Archiver Processes (ARCn)
Job Queue Processes
Queue Monitor Processes (QMNn)
Other Background Processes
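Which of these are actually running in a given instance can be checked with a query along these lines (PADDR = '00' marks processes that are known to Oracle but not started):

```sql
-- Background processes that are currently alive in this instance
SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> '00'
ORDER  BY name;
```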

Figure below illustrates how each background process interacts with the different parts of an Oracle database.

Database Writer Process (DBWn)


The database writer process (DBWn) writes the contents of buffers to datafiles; the DBWn processes are responsible for writing modified (dirty) buffers in the database buffer cache to disk. Although one database writer process (DBW0) is adequate for most systems, you can configure additional processes (DBW1 through DBW9 and DBWa through DBWj) to improve write performance if your system modifies data heavily. These additional DBWn processes are not useful on uniprocessor systems.

When a buffer in the database buffer cache is modified, it is marked dirty. A cold buffer is a buffer that has not been recently used according to the least recently used (LRU) algorithm. The DBWn process writes cold, dirty buffers to disk so that user processes are able to find cold, clean buffers that can be used to read new blocks into the cache. As buffers are dirtied by user processes, the number of free buffers diminishes. If the number of free buffers drops too low, user processes that must read blocks from disk into the cache are not able to find free buffers. DBWn manages the buffer cache so that user processes can always find free buffers.

The initialization parameter DB_WRITER_PROCESSES specifies the number of DBWn processes; the maximum is 20. If it is not specified by the user during startup, Oracle determines how to set DB_WRITER_PROCESSES based on the number of CPUs and processor groups.

The database writer process writes dirty buffers to disk under the following conditions:
When the database issues a checkpoint
When a server process can't find a clean reusable buffer after checking a threshold number of buffers
Every 3 seconds

Log Writer Process (LGWR)


The log writer process (LGWR) is responsible for redo log buffer management: writing the redo log buffer to a redo log file on disk. LGWR writes all redo entries that have been copied into the buffer since the last time it wrote. The redo log buffer is a circular buffer: when LGWR writes redo entries from the redo log buffer to a redo log file, server processes can then copy new entries over the entries in the redo log buffer that have been written to disk. LGWR normally writes fast enough to ensure that space is always available in the buffer for new entries, even when access to the redo log is heavy. LGWR writes one contiguous portion of the buffer to disk.

LGWR writes:
A commit record, when a user process commits a transaction
Redo log buffers:
- Every three seconds
- When the redo log buffer is one-third full
- When a DBWn process writes modified buffers to disk, if necessary

Note: Before DBWn can write a modified buffer, all redo records associated with the changes to the buffer must be written to disk (the write-ahead protocol). When a user issues a COMMIT statement, LGWR puts a commit record in the redo log buffer and writes it to disk immediately, along with the transaction's redo entries; the corresponding changes to data blocks are deferred until it is more efficient to write them. This is called a fast commit mechanism. When a user commits a transaction, the transaction is assigned a system change number (SCN), which Oracle records along with the transaction's redo entries in the redo log. SCNs are recorded in the redo log so that recovery operations can be synchronized in Real Application Clusters and distributed databases.

Checkpoint Process (CKPT)


The checkpoint (CKPT) process is charged with telling the database writer process when to write the dirty data in the memory buffers to disk. After telling the database writer process to write the changed data, the checkpoint process updates the data file headers and the control file to indicate when the checkpoint was performed. The purpose of the checkpoint process is to synchronize the buffer cache information with the information on the database disks. A checkpointing process involves the following steps:
Flushing the contents of the redo log buffers to the redo log files
Writing a checkpoint record to the redo log file
Flushing the contents of the database buffer cache to disk
Updating the data file headers and the control files after the checkpoint completes

System Monitor Process (SMON)


The system monitor process (SMON) performs recovery, if necessary, at instance startup. SMON is also responsible for cleaning up temporary segments that are no longer in use and for coalescing contiguous free extents within dictionary managed tablespaces. If any terminated transactions were skipped during instance recovery because of file-read or offline errors, SMON recovers them when the tablespace or file is brought back online. SMON checks regularly to see whether it is needed. Other processes can call SMON if they detect a need for it.

With Real Application Clusters, the SMON process of one instance can perform instance recovery for a failed CPU or instance.

Process Monitor Process (PMON)


The process monitor (PMON) performs process recovery when a user process fails. PMON is responsible for cleaning up the database buffer cache and freeing resources that the user process was using. For example, it resets the status of the active transaction table, releases locks, and removes the process ID from the list of active processes. PMON periodically checks the status of dispatcher and server processes, and restarts any that have stopped running (but not any that Oracle has terminated intentionally). PMON also registers information about the instance and dispatcher processes with the network listener. Like SMON, PMON checks regularly to see whether it is needed and can be called if another process detects the need for it.

Recoverer Process (RECO)


The recoverer process (RECO) is a background process used with the distributed database configuration that automatically resolves failures involving distributed transactions. The RECO process of a node automatically connects to other databases involved in an in-doubt distributed transaction. When the RECO process reestablishes a connection between the involved database servers, it automatically resolves all in-doubt transactions, removing from each database's pending transaction table any rows that correspond to the resolved in-doubt transactions. If the RECO process fails to connect with a remote server, RECO automatically tries to connect again after a timed interval; however, RECO waits an increasing amount of time (growing exponentially) before it attempts another connection. The RECO process is present only if the instance permits distributed transactions. The number of concurrent distributed transactions is not limited.

Archiver Processes (ARCn)


The archiver process (ARCn) copies redo log files to a designated storage device after a log switch has occurred. ARCn processes are present only when the database is in ARCHIVELOG mode and automatic archiving is enabled. An Oracle instance can have up to 10 ARCn processes (ARC0 to ARC9). The LGWR process starts a new ARCn process whenever the current number of ARCn processes is insufficient to handle the workload; the alert log keeps a record of when LGWR starts a new ARCn process. If you anticipate a heavy workload for archiving, such as during bulk loading of data, you can specify multiple archiver processes with the initialization parameter LOG_ARCHIVE_MAX_PROCESSES. This parameter dynamically increases or decreases the number of ARCn processes. However, you do not need to change this parameter from its default value of 1, because the system determines how many ARCn processes are needed, and LGWR automatically starts more ARCn processes when the database workload requires them.

Job Queue Processes


Job queue processes are used for batch processing: they run user jobs. They can be viewed as a scheduler service that can be used to schedule jobs as PL/SQL statements or procedures on an Oracle instance. Given a start date and an interval, the job queue processes try to run the job at the next occurrence of the interval. Here's what happens:
1. The coordinator process (CJQ0) periodically selects jobs that need to be run from the system JOB$ table. New jobs selected are ordered by time.
2. The CJQ0 process dynamically generates job queue slave processes (J000 to J999) to run the jobs.

3. After the slave process finishes execution of a single job, it polls for more jobs. If no jobs are scheduled for execution, it enters a sleep state, from which it wakes up at periodic intervals and polls for more jobs. If the process does not find any new jobs, it aborts after a preset interval.
The initialization parameter JOB_QUEUE_PROCESSES represents the maximum number of job queue processes that can concurrently run on an instance.
Note: The coordinator process is not started if the initialization parameter JOB_QUEUE_PROCESSES is set to 0.

Queue Monitor Processes (QMNn)


The queue monitor process is an optional background process for Oracle Streams Advanced Queuing, which monitors the message queues. You can configure up to 10 queue monitor processes. These processes, like the job queue processes, are different from other Oracle background processes in that process failure does not cause the instance to fail.

Other Background Processes


There are several other background processes that might be running. These can include the following:
o The manageability monitor (MMON) process collects several types of statistics to help the database manage itself. For example, MMON collects the Automatic Workload Repository (AWR) snapshot information, which is the basis for the performance diagnostics capability of the Automatic Database Diagnostic Monitor (ADDM).
o The memory manager (MMAN) process coordinates the sizing of the memory components. MMAN keeps track of the sizes of the memory components and the pending resize operations. It observes the system and workload in order to determine the ideal distribution of memory, and it ensures that the needed memory is available.
o The rebalance master (RBAL) process coordinates disk rebalancing activity when you use an Automatic Storage Management (ASM) storage system.
o The ASM rebalance (ARBn) processes perform the disk rebalancing activity in an ASM instance.
o The ASM background (ASMB) process is present in all Oracle databases that use an ASM storage system. The ASMB process communicates with the ASM instance by logging into the ASM instance as a foreground process.
o Oracle starts the recovery writer (RVWR) process to write the flashback data from the flashback buffer to the flashback logs.


Oracle Database
The Oracle database has a logical layer and a physical layer. The physical layer consists of the files that reside on the disk; the components of the logical layer map the data to these physical components. The separation of the logical layer from the physical layer is a necessary part of the relational database paradigm, which states that programmers should address only logical structures and let the database manage the mapping to physical structures. Thus system administrators see physical datafiles; programmers see logical components. The physical layer of the database consists of the following types of files:
Data Files
Control Files
Online Redo Log Files
Archive Log Files
Parameter Files
Trace Files
Alert Files
Password File
Backup Files
Flashback Log Files (Optional)
Change Tracking Files (Optional)

The logical layer of the database consists of the following types of structures:
Tablespaces
Segments
Extents
Oracle Blocks

Figure below shows the ER diagram of logical structure and physical structure.

Note: A schema is a collection of database objects that are owned by a particular user. A schema has the same name as that user. Schema objects are the logical structures.


Data Files
Data files are the most important set of files in the database: this is where all of your data will ultimately be stored. Every database has at least two data files associated with it, and typically it will have many more than two. A datafile can be associated with only one tablespace and only one database, but a tablespace can span more than one datafile. The first tablespace in any database is always the SYSTEM tablespace, so Oracle automatically allocates the first datafiles of any database to the SYSTEM tablespace during database creation. A segment can exist in only one tablespace, but the tablespace can spread it across all the files making up the tablespace. This means that a table's size is not subject to any limitations imposed by the environment on maximum file size.

The Oracle block is the unit of I/O for the database. Datafiles are formatted into Oracle blocks, consecutively numbered. The size of the Oracle blocks is fixed for a tablespace (generally speaking, it is the same for all tablespaces in the database); the default (with release 11g) is 8 KB. The size of an Oracle block can range from 2 KB to 16 KB on Linux or Windows, and up to 32 KB on some other operating systems. The block size is controlled by the parameter DB_BLOCK_SIZE. Managing space one block at a time would be a crippling task, so blocks are grouped into extents: an extent is a set of consecutively numbered Oracle blocks within one datafile. Every segment consists of one or more extents, consecutively numbered. An operating system block, by contrast, is the unit of I/O for your file system. The operating system block size is configurable for some file systems (for example, when formatting an NTFS file system you can choose from 512 B to 64 KB), but typically system administrators leave it at the default (512 B for NTFS, 1 KB for ext3).

Note: Datafiles should not be stored on the same disk drive that stores the database redo log files.
You can investigate the data files by querying the following views: V$DATAFILE DBA_DATA_FILES V$TEMPFILE DBA_TEMP_FILES
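As an illustration, the dictionary views above can be joined to see each datafile's size alongside its tablespace's block size. This is a sketch against a standard 10g/11g data dictionary; the output naturally differs per database.

```sql
-- Each datafile, its tablespace, and that tablespace's Oracle block size
SELECT t.tablespace_name,
       t.block_size,                    -- block size for this tablespace
       f.file_name,
       f.bytes / 1024 / 1024 AS size_mb
FROM   dba_tablespaces t
JOIN   dba_data_files  f
       ON f.tablespace_name = t.tablespace_name
ORDER  BY t.tablespace_name;

-- Temporary tablespaces use tempfiles, which are listed separately
SELECT file_name, bytes / 1024 / 1024 AS size_mb
FROM   dba_temp_files;
```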


Control Files
The database control file is a small binary file necessary for the database to start and operate successfully. The control file is critical to the functioning of the database, and recovery is difficult without access to an up-to-date control file. If all control files of a database are permanently lost during operation, the instance is aborted and media recovery (a type of recovery that takes a backup and applies redo) is required. The control file contains the names and locations of the data files, redo log files, current log sequence numbers, backup set and backup piece details, checkpoint information, and the all-important system change number (SCN), which indicates the most recent version of committed changes in the database.

You specify control file names using the CONTROL_FILES initialization parameter in the database initialization parameter file. Every database has one control file, but due to the file's importance, multiple identical copies (usually three) are maintained, ideally on different physical disks; when the database writes to the control file, all copies of the file get written to. It is very important that you back up your control files: do so when the database is created, and again every time you change the physical structure of your database.

The following views display information about control files: V$CONTROLFILE V$PARAMETER
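For example, you can list the current copies and then multiplex the control file by pointing the CONTROL_FILES parameter at several locations. The paths below are illustrative; the change takes effect only after the file has been physically copied and the instance restarted.

```sql
-- Where the control file copies currently live
SELECT name FROM v$controlfile;

-- Register multiple copies in the spfile (paths are illustrative);
-- copy the file to these locations before the next startup
ALTER SYSTEM SET control_files =
  '/u01/oradata/db/control01.ctl',
  '/u02/oradata/db/control02.ctl'
  SCOPE = SPFILE;
```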

Online Redo Log Files


The online redo log files record all the changes made to the database, and they are vital during the recovery of a database. The online redo log consists of groups of online redo log files, each file being known as a member. An Oracle database requires at least two groups of at least one member each to function. These files are filled with redo records (also called redo entries). A redo record is made up of a group of change vectors, each of which is a description of a change made to a single block in the database. Redo entries record data that you can use to reconstruct all changes made to the database, including the undo segments. When you recover the database using redo data, the database reads the change vectors in the redo records and applies the changes to the relevant blocks.

Whenever a transaction is committed, LGWR writes the transaction's redo records from the redo log buffer of the SGA to a redo log file, and assigns a system change number (SCN) to identify the redo records for each committed transaction. Only when all redo records associated with a given transaction are safely on disk in the online logs is the user process notified that the transaction has been committed.

LGWR writes to redo log files in a circular fashion. When the current redo log file fills, LGWR begins writing to the next available redo log file. When the last available redo log file is filled, LGWR returns to the first redo log file and writes to it, starting the cycle again. Filled redo log files are available to LGWR for reuse depending on whether archiving is enabled.

If archiving is disabled (the database is in NOARCHIVELOG mode), a filled redo log file is available for reuse after the changes recorded in it have been written to the datafiles. If archiving is enabled (the database is in ARCHIVELOG mode), a filled redo log file is available to LGWR only after the changes recorded in it have been written to the datafiles and the file has been archived.

The point at which the database stops writing to one redo log file and begins writing to another is called a log switch. Each online or archived redo log file is uniquely identified by its log sequence number. During crash, instance, or media recovery, the database applies redo log files in ascending order by using the log sequence numbers of the necessary archived and redo log files. The following views provide information on redo logs: V$LOG V$LOGFILE
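A quick sketch of how these views are typically used, showing the group and member layout and how to force a log switch by hand:

```sql
-- One row per redo log group: sequence, size, member count, status
SELECT group#, sequence#, bytes / 1024 / 1024 AS size_mb, members, status
FROM   v$log;

-- One row per member file of each group
SELECT group#, member FROM v$logfile ORDER BY group#;

-- Trigger a log switch manually (e.g., before taking a backup)
ALTER SYSTEM SWITCH LOGFILE;
```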

Archive Log Files


An archived redo log file is a copy of one of the filled members of an online redo log group. The process of turning redo log files into archived redo log files is called archiving. This process is only possible if the database is running in ARCHIVELOG mode. You can choose automatic or manual archiving. The background process ARCn automates archiving operations when automatic archiving is enabled. The database starts multiple archiver processes as needed to ensure that the archiving of filled redo logs does not fall behind. The LOG_ARCHIVE_MAX_PROCESSES initialization parameter specifies the number of ARCn processes that the database initially invokes; the default is two processes.

If you want to archive only to a single destination, you specify that destination in the LOG_ARCHIVE_DEST initialization parameter. If you want to multiplex the archived logs, you can archive to up to ten locations using the LOG_ARCHIVE_DEST_n parameters. The two modes of transmitting archived logs to their destination are normal archiving transmission and standby transmission mode. Normal transmission involves transmitting files to a local disk. Standby transmission involves transmitting files through a network to either a local or remote standby database.

You can display information about the archived redo logs using the following sources: V$DATABASE V$ARCHIVE_DEST SQL*Plus command ARCHIVE LOG LIST
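Checking the archiving mode and multiplexing archived logs might look like the sketch below; the LOCATION paths are illustrative.

```sql
-- Summary of the archiving configuration, from SQL*Plus
ARCHIVE LOG LIST

-- Or query the mode directly
SELECT log_mode FROM v$database;

-- Multiplex the archived logs to two local destinations
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u01/arch' SCOPE = BOTH;
ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=/u02/arch' SCOPE = BOTH;
```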


Parameter Files
There are many different parameter files associated with an Oracle database, from a tnsnames.ora file on a client workstation (used to find a server on the network), to a listener.ora file on the server (for the network listener startup), to the sqlnet.ora, cman.ora, and ldap.ora files, to name a few. The most important parameter file, however, is the database's parameter file; without this, we cannot even get a database started. The parameter file for a database is commonly known as an init file, pfile, or spfile. The init file and pfile are text-based files, while the spfile (server parameter file) has a binary format. It is called a server parameter file because it must reside on the server, while the text-based parameter file can also be located on a client system. Because the spfile is always stored on the database server, it removes the proliferation of parameter files and removes the need to manually maintain parameter files using text editors outside of the database.

A parameter file contains a list of initialization parameters for an instance and a database. A parameter is a key/value pair. Initialization parameters tell Oracle the name of the database for which to start the instance, the amount of memory for the SGA, and the names and locations of the database control files. By default, init<SID>.ora can be found in the dbs directory (on Linux) or the database folder (on Windows), while the spfile is located in the dbs directory on both platforms. A parameter file need not be in a particular location; you can use the PFILE=name option with the STARTUP command. We can convert a pfile into an spfile and vice versa as shown below:

CREATE SPFILE FROM PFILE='/u01/oracle/dbs/init.ora';
CREATE SPFILE='/u01/oracle/dbs/test_spfile.ora' FROM PFILE='/u01/oracle/dbs/test_init.ora';

Spfiles are binary files, so what happens if one gets corrupted and the database won't start? At least the init.ora file was just text, so we could edit it and fix it. First, the amount of binary data in the spfile is very small.
If you are on a Linux platform, a simple strings command will extract all of your settings:
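A minimal demonstration of the idea, using a mock spfile-like file rather than a real one (a real spfile lives under $ORACLE_HOME/dbs, and the parameter values below are illustrative):

```shell
# Build a mock "spfile": parameter text surrounded by a little binary padding
printf '\001\002\003' > /tmp/spfileorcl.ora
printf "*.db_name='orcl'\n*.sga_target=600M\n" >> /tmp/spfileorcl.ora
printf '\004\005\006' >> /tmp/spfileorcl.ora

# strings keeps only the printable runs, i.e. the parameter settings
strings /tmp/spfileorcl.ora
```

On a genuine corrupted spfile the same command recovers the settings, which you can paste into a fresh text pfile and start the instance from that.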


In the event that the spfile has just gone missing, you can also restore the information for your parameter file from the database's alert log. Every time you start the database, the alert log will contain a section listing the initialization parameters in effect. You can display information about the parameters using the following sources: V$PARAMETER V$PARAMETER2 V$SPPARAMETER SQL*Plus command SHOW PARAMETERS


Trace Files
Each server and background process can write to an associated trace file. When a process detects an internal error, it dumps information about the error to its trace file. Trace files are a source of debugging information: the programmers who wrote the database kernel put the debugging code in, and they left it in on purpose. There are generally two types of trace file, and what we do with each kind is very different:

Trace files you expected and you want: for example, these are the result of enabling SQL_TRACE=TRUE. They contain diagnostic information about your session and will help you tune your application to optimize its performance and diagnose what bottlenecks it is experiencing.

Trace files you were not expecting but the server generated as the result of an ORA-00600 Internal Error, ORA-03113 End of file on communication channel, or ORA-07445 Exception Encountered error: these traces contain diagnostic information that is most useful to an Oracle Support analyst and, beyond showing us where in our application the internal error was raised, are of limited use to us.

All filenames of trace files associated with a process contain the name of the process that generated the trace file. The one exception to this is trace files generated by job queue processes (Jnnn). A trace file is generated on the database server machine in one of two locations: if you are using a dedicated server connection, the trace file will be generated in the directory specified by the USER_DUMP_DEST parameter; if you are using a shared server connection, the trace file will be generated in the directory specified by the BACKGROUND_DUMP_DEST parameter.

You can display information about the trace files using the following sources: V$PARAMETER SQL*Plus command SHOW PARAMETER DUMP_DEST


Alert Files
The alert file (also known as the alert log) is the diary of the database. It is a simple text file written to from the day the database is born (created) to the end of time (until you erase it). In this file, you will find a chronological history of your database: the log switches; the internal errors that might be raised; when tablespaces were created, taken offline, or put back online; and so on. The alert log can come in handy during troubleshooting: it is usually the first place you should check to get an idea of what was happening inside the database when a problem occurred. In fact, Oracle Support may ask you for a copy of the pertinent sections of the alert log during their analysis of database problems. Oracle puts the alert log (alert_db_name.log) in the location specified by the BACKGROUND_DUMP_DEST initialization parameter. V$ALERT_TYPES DBA_OUTSTANDING_ALERTS DBA_ALERT_HISTORY SQL*Plus command SHOW PARAMETER background_dump (to find out where the alert log is located)

To see if there are any Oracle-related errors in your alert log, simply issue the following command:
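Since every Oracle error message begins with the ORA- prefix, a simple grep does the job. Sketched below against a mock alert log with made-up entries; the real file is alert_<db_name>.log under BACKGROUND_DUMP_DEST:

```shell
# Mock alert log content; real entries look much like this
cat > /tmp/alert_db.log <<'EOF'
Thread 1 advanced to log sequence 42
ORA-00600: internal error code, arguments: [17182]
Completed: ALTER DATABASE OPEN
ORA-01555: snapshot too old: rollback segment number 3
EOF

# Every Oracle error starts with "ORA-", so grep surfaces them all
grep 'ORA-' /tmp/alert_db.log
```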


Password File
The password file is a file in which you can specify the names of database users who have been granted the special SYSDBA or SYSOPER administrative privileges. When you attempt to start up Oracle, there is no database available that can be consulted to verify passwords. When you start up Oracle on the local system (i.e., not over the network, but from the machine the database instance will reside on), Oracle will use the OS to perform the authentication. When Oracle was installed, the person performing the installation was asked to specify the group for the administrators. Normally, this group will be dba by default on UNIX/Linux and ORA_DBA on Windows; it can be any legitimate group name on that platform, however. That group is special, in that any user in that group can connect to Oracle as SYSDBA without specifying a username or password, for example:

However, suppose you wanted to perform these operations from another machine, over the network. In that case, you would attempt to connect using @tns-connect-string. However, this would fail:

OS authentication won't work over the network for SYSDBA, even if the very unsafe (for security reasons) parameter REMOTE_OS_AUTHENT is set to TRUE. This is where the password file comes to the rescue. For remote authentication, first we have to set REMOTE_LOGIN_PASSWORDFILE:

ALTER SYSTEM SET remote_login_passwordfile = exclusive|shared SCOPE=SPFILE;

Here SHARED means more than one database can use the same password file, while EXCLUSIVE means only one database can use a given password file. This setting cannot be changed dynamically while the instance is up and running, so we'll have to restart for it to take effect. The next step is to use the command-line tool named orapwd:

$ orapwd file=orapw$ORACLE_SID password=oracle entries=20

Now we can connect as SYSDBA over the network:

sqlplus sys/oracle@localhost.localdomain/orcl as sysdba

The password file resides in the $ORACLE_HOME/dbs directory on Linux (%ORACLE_HOME%\database on Windows).
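Once the password file exists and users have been granted the privileges, you can check who it lists; the view is standard, while the output depends entirely on your grants:

```sql
-- Users granted SYSDBA/SYSOPER through the password file
SELECT username, sysdba, sysoper FROM v$pwfile_users;
```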

Backup Files
Backup files are used for database recovery. A backup is a copy of data. This copy can include important parts of the database, such as the control file and datafiles.


Flashback Log Files


Flashback log files (or simply flashback logs) were introduced in Oracle 10g in support of the FLASHBACK DATABASE command. The FLASHBACK DATABASE command was introduced to speed up the otherwise slow process of a point-in-time database recovery. It can be used in place of a full database restore and a rolling forward using archive logs, and it is primarily designed to speed up the recovery from an accident. Flashback logs contain before images of modified database blocks that can be used to return the database to the way it was at some prior point in time.
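A hedged sketch of enabling flashback logging and rewinding the database; both steps assume ARCHIVELOG mode and a configured flash recovery area, the instance must be mounted (not open), and the one-hour window is illustrative:

```sql
-- Enable flashback logging (database mounted, in ARCHIVELOG mode)
ALTER DATABASE FLASHBACK ON;

-- Later, rewind the whole database by one hour
FLASHBACK DATABASE TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);
```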

Change Tracking Files


The change tracking file is a feature new to Oracle 10g Enterprise Edition. The sole purpose of this file is to track which blocks have been modified since the last incremental backup. In this fashion, the Recovery Manager (RMAN) tool can back up only the database blocks that have actually been modified, without having to read the entire database. The process of creating the change tracking file is simple and is accomplished via the ALTER DATABASE command:
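The enabling command takes the form below; the file path is illustrative:

```sql
-- Create the change tracking file and start tracking modified blocks
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/oradata/db/block_change_tracking.bct';
```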

To turn off and remove the block change tracking file, you would use the ALTER DATABASE command once again:
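The disabling form is simply:

```sql
-- Stop tracking and delete the change tracking file
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
```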

Note: This command will in fact erase the block change tracking file. It does not just disable the feature; it removes the file as well.

Tablespace
A database is divided into logical storage units called tablespaces, which group related logical structures (tables, indexes, etc.) together. One or more datafiles are explicitly created for each tablespace to physically store the data of all logical structures in a tablespace. The previous paragraph is graphically depicted in the following figure:

Tablespaces are divided into logical units of storage called segments, which are further divided into extents. The units of database space allocation are data blocks, extents, and segments. There is no hard and fast rule regarding the number of tablespaces you can have in a database. The following five tablespaces are generally the default tablespaces that all databases must have, even though it's possible to create and use a database with just the first two:

System tablespace: It always contains the data dictionary tables for the entire database. All data stored on behalf of stored PL/SQL program units (that is, procedures, functions, packages, and triggers) resides in the SYSTEM tablespace.

Sysaux tablespace: It is an auxiliary tablespace to the SYSTEM tablespace. The SYSAUX tablespace provides a centralized location for database metadata that does not reside in the SYSTEM tablespace.

Undo tablespace: It is used solely for storing undo information. You cannot create any other segment types (for example, tables or indexes) in undo tablespaces. In automatic undo management mode, each Oracle instance is assigned one (and only one) undo tablespace.

Temporary tablespace: It contains transient data that persists only for the duration of the session.

Default permanent tablespace: It contains user objects.

When a database has multiple tablespaces, you can: separate user data from data dictionary data to reduce I/O contention; separate the data of one application from the data of another to prevent multiple applications from being affected if a tablespace must be taken offline; store the datafiles of different tablespaces on different disk drives to reduce I/O contention; and back up individual tablespaces.

Tablespaces allocate space in extents. Tablespaces can use two different methods to keep track of their free and used space:

Locally managed tablespaces: Extent management is done by the tablespace itself. It maintains a bitmap in each datafile to keep track of the free or used status of blocks in that datafile. Changes do not generate rollback information because they do not update tables in the data dictionary (except for special cases such as tablespace quota information).

Dictionary managed tablespaces: Extent management is done by the data dictionary. Oracle updates the appropriate tables in the data dictionary whenever an extent is allocated or freed for reuse. Oracle also stores rollback information about each update of the dictionary tables.

The following data dictionary and dynamic performance views provide useful information about the tablespaces of a database. V$TABLESPACE DBA_TABLESPACES USER_TABLESPACES
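As an illustration (name, path, and size are made up), creating a locally managed tablespace and then inspecting the dictionary might look like:

```sql
-- A locally managed tablespace with a single datafile
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/db/app_data01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL;

-- See how each tablespace manages its extents
SELECT tablespace_name, extent_management, contents
FROM   dba_tablespaces;
```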


Segments
A segment is a set of extents that contains all the data for a specific logical storage structure within a tablespace. There are three types of segments in an Oracle database:

Data segments: A single data segment in an Oracle database holds all of the data for one of the following: a table that is not partitioned or clustered, a partition of a partitioned table, a cluster of tables, or a materialized view.

Index segments: Every nonpartitioned index in an Oracle database has a single index segment to hold all of its data. For a partitioned index, every partition has a single index segment to hold its data.

Temporary segments: When processing queries, Oracle often requires temporary workspace for intermediate stages of SQL statement parsing and execution. Oracle automatically allocates this disk space, called a temporary segment. Typically, Oracle requires a temporary segment as a database area for sorting, temporary tables, and their indexes.

The following data dictionary and dynamic performance views provide useful information about the segments. DBA_SEGMENTS USER_SEGMENTS V$SORT_SEGMENT V$TEMPSEG_USAGE
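A typical query against these views, sketching how to see the largest segments in your own schema:

```sql
-- Your segments, largest first
SELECT segment_name, segment_type, tablespace_name,
       bytes / 1024 AS size_kb, extents
FROM   user_segments
ORDER  BY bytes DESC;
```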

Extents
An extent is a logical unit of database storage space allocation made up of a number of contiguous data blocks. One or more extents in turn make up a segment. When you create a table, Oracle allocates to the table's data segment an initial extent of a specified number of data blocks. Although no rows have been inserted yet, the Oracle data blocks that correspond to the initial extent are reserved for that table's rows. If the data blocks of a segment's initial extent become full and more space is required to hold new data, Oracle automatically allocates an incremental extent for that segment. An incremental extent is a subsequent extent of the same or greater size than the previously allocated extent in that segment. For maintenance purposes, the header block of each segment contains a directory of the extents in that segment. A tablespace that manages its extents locally can have either uniform extent sizes or variable extent sizes that are determined automatically by the system. You can display information about the extents using the following views:

DBA_EXTENTS USER_EXTENTS DBA_FREE_SPACE USER_FREE_SPACE
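For example, the extent map of a single segment can be read from DBA_EXTENTS; the owner and table name below are illustrative:

```sql
-- Every extent of SCOTT.EMP: which datafile it sits in, where it starts,
-- and how many blocks it spans
SELECT extent_id, file_id, block_id, blocks
FROM   dba_extents
WHERE  owner = 'SCOTT' AND segment_name = 'EMP'
ORDER  BY extent_id;
```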

Oracle Blocks
Extents, in turn, consist of Oracle blocks. An Oracle block is the smallest unit of space allocation in Oracle. In contrast, at the physical, operating system level, all data is stored in bytes, and each operating system has its own block size. Oracle requests data in multiples of Oracle data blocks, not operating system blocks. The standard block size is specified by the DB_BLOCK_SIZE initialization parameter. In addition, you can specify up to five nonstandard block sizes. The data block sizes should be a multiple of the operating system's block size, within the maximum limit, to avoid unnecessary I/O. The Oracle data block format is similar regardless of whether the data block contains table, index, or clustered data. The figure below illustrates the format of a data block.

Header (Common and Variable): The header contains general block information, such as the block address and the type of segment, for example, data or index.

Table Directory: This portion of the data block contains information about the tables having rows in this block.

Row Directory: This portion of the data block contains information about the actual rows in the block (including addresses for each row piece in the row data area). After space has been allocated in the row directory of a data block's overhead, this space is not reclaimed when the row is deleted. Therefore, a block that is currently empty but had up to 50 rows at one time continues to have 100 bytes allocated in the header for the row directory. Oracle reuses this space only when new rows are inserted in the block.

Row Data: This portion of the data block contains table or index data. Rows can span blocks.

Free Space: Free space is allocated for insertion of new rows and for updates to rows that require additional space. In data blocks allocated for the data segment of a table or cluster, or for the index segment of an index, free space can also hold transaction entries. A transaction entry is required in a block for each INSERT, UPDATE, DELETE, and SELECT...FOR UPDATE statement accessing one or more rows in the block.

In two circumstances, the data for a row in a table may be too large to fit into a single data block. In the first case, the row is too large to fit into one data block when it is first inserted. In this case, Oracle stores the data for the row in a chain of data blocks (one or more) reserved for that segment. Row chaining most often occurs with large rows, such as rows that contain a column of datatype LONG or LONG RAW; row chaining in these cases is unavoidable. In the second case, however, a row that originally fit into one data block is updated so that the overall row length increases, and the block's free space is already completely filled. In this case, Oracle migrates the data for the entire row to a new data block, assuming the entire row can fit in a new block. Oracle preserves the original row piece of a migrated row to point to the new block containing the migrated row; the rowid of a migrated row does not change. When a row is chained or migrated, I/O performance associated with this row decreases, because Oracle must scan more than one data block to retrieve the information for the row.
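Chained and migrated rows can be detected with the ANALYZE command, sketched below; EMP is an illustrative table, and the CHAINED_ROWS table must first be created by the utlchain.sql script shipped under $ORACLE_HOME/rdbms/admin:

```sql
-- Record the rowids of chained/migrated rows of EMP into CHAINED_ROWS
ANALYZE TABLE emp LIST CHAINED ROWS INTO chained_rows;

-- Inspect the result; each row listed here costs extra I/O to read
SELECT table_name, head_rowid FROM chained_rows;
```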
