1. SQL*Net more data to dblink / Network
The server process is sending more data or messages to the remote node over a database link. SQL*Net waits occur because of network bottlenecks, the time taken to execute SQL on the remote node, and the number of round-trip messages, which can be reduced by using array fetches and array inserts. To alleviate network bottlenecks, try the following: tune the application to reduce round trips; explore options to reduce latency (for example, terrestrial lines as opposed to VSAT links); change the system configuration to move higher-traffic components to lower-latency links.

SQL*Net more data from client / Network
This event indicates that a server process is waiting for work from the client process.

2. db file sequential read / User I/O
db file sequential read waits are single-block reads, usually the result of index reads and only rarely of full table scans. They can be caused by non-selective indexes, poorly optimized SQL, or a suboptimal HASH_AREA_SIZE. They can be resolved by adding database block cache memory (server memory), by increasing the number of disks in the array and spreading the files that cause the reads across them, by increasing the amount of server memory through a server upgrade, or by decreasing read latency by moving to IBM FlashSystem.

3. db file scattered read / User I/O
db file scattered read is usually a multiblock read. It can occur for a fast full scan of an index in addition to a full table scan. It can be tuned by finding missing indexes and by speeding up full scans through parallel queries, partitioning, large-scale caching, or the use of IBM FlashSystem to reduce latency.

4. Replication Dequeue / Other

5. enq: TX contention / Other
High enqueue waits are related to Oracle internal locks. They occur when one user is updating or deleting a row that another session also wants to update or delete. The solution is to have the first session holding the lock perform a COMMIT or ROLLBACK. Investigate by querying the v$lock, v$transaction, and v$session views to see the exact queries that are causing the locks.
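The round-trip reduction via array fetches and array inserts mentioned for the SQL*Net events above can be sketched in PL/SQL with BULK COLLECT and FORALL. The table names, the batch size of 500, and the database link name remote_db are hypothetical:

```sql
-- Hypothetical sketch: copy rows from a remote table over the db link
-- "remote_db". BULK COLLECT pulls rows in batches of 500 instead of one
-- network round trip per row, and FORALL performs the inserts as a single
-- array operation.
DECLARE
  CURSOR c IS SELECT * FROM orders@remote_db;
  TYPE t_orders IS TABLE OF c%ROWTYPE;
  l_rows t_orders;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rows LIMIT 500;  -- array fetch
    EXIT WHEN l_rows.COUNT = 0;
    FORALL i IN 1 .. l_rows.COUNT                -- array insert
      INSERT INTO orders_local VALUES l_rows(i);
  END LOOP;
  CLOSE c;
  COMMIT;
END;
/
```

The LIMIT clause bounds memory use per batch; raising it trades memory for fewer round trips.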
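For the enq: TX contention investigation described above, a query of this shape (a sketch, not a definitive diagnostic) joins v$lock to v$session to show the blocking session and its waiters:

```sql
-- Find blocking and waiting sessions for TX enqueue contention.
-- v$lock.block = 1 marks the holder; waiters on the same (id1, id2)
-- pair show request > 0. Join to v$session to see who is involved.
SELECT s.sid, s.serial#, s.username, s.sql_id,
       DECODE(l.block, 1, 'BLOCKER', 'WAITER') AS role
FROM   v$lock l
       JOIN v$session s ON s.sid = l.sid
WHERE  l.type = 'TX'
AND    (l.block = 1 OR l.request > 0)
ORDER  BY l.id1, l.id2, l.block DESC;
```

The sql_id of the blocker can then be resolved against v$sql to see the exact statement holding the lock.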
6. log file parallel write / System I/O
Log file stress occurs when the log files are placed on the same physical disks as the data and index files. Modify the redo log buffer size. If the size of the log buffer is reasonable, ensure that the disks on which the online redo logs reside do not suffer from I/O contention. This can also be relieved by moving the logs to their own disk array section. However, if high wait times for log-related events occur, then moving the logs to IBM FlashSystem is preferred.
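To check whether the online redo logs share devices with data files, and whether the log-related events show high wait times, queries along these lines can be used (a sketch; column availability assumes Oracle 10g or later):

```sql
-- List the online redo log members and their locations, to verify they
-- are not on the same devices as the data and index files.
SELECT g.group#, g.status, f.member
FROM   v$log g
       JOIN v$logfile f ON f.group# = g.group#
ORDER  BY g.group#;

-- Compare cumulative wait times for the log-related events since startup.
SELECT event, total_waits, time_waited_micro / 1000 AS time_waited_ms
FROM   v$system_event
WHERE  event IN ('log file parallel write', 'log file sync');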
log file sync / Commit
When a user session commits (or rolls back), the session's redo information must be flushed to the redo log file by LGWR. The server process performing the COMMIT or ROLLBACK waits under this event for the write to the redo log to complete. Check the traces of the log writer (LGWR). Reduce other I/O activity on the disks containing the redo logs, or use dedicated disks. Alternate redo logs on different disks to minimize the effect of the archiver on the log writer. Move the redo logs to faster disks or a faster I/O subsystem (for example, switch from RAID 5 to RAID 1). Consider using raw devices (or simulated raw devices provided by disk vendors) to speed up the writes. Depending on the type of application, it might be possible to batch COMMITs by committing every N rows rather than every row, so that fewer log file syncs are needed.

7. PX Deq Credit: send blkd / Other
PX Deq Credit: send blkd is an idle parallel execution wait event in RAC. Tune parallel query execution, and optimize the SQL statement without parallel hints, if any.

8. control file sequential read / System I/O, control file parallel write / System I/O
control file sequential read waits occur if all the control files are on a disk with high disk I/O. Identify the control file locations and place them on faster or less active disks.

9. os thread startup / Concurrency
This event indicates the wait for an operating system process (thread) to start a query slave process for the execution of a parallel query. If the PARALLEL_MIN_SERVERS initialization parameter is set sufficiently high, this overhead can be slightly reduced.
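The COMMIT batching suggested for log file sync above can be sketched in PL/SQL. The table names staging_rows and target_rows and the batch size of 1000 are hypothetical:

```sql
-- Hypothetical sketch: commit every 1000 rows instead of every row,
-- so that LGWR flushes redo (one log file sync) far less often.
DECLARE
  l_batch CONSTANT PLS_INTEGER := 1000;
  l_done  PLS_INTEGER := 0;
BEGIN
  FOR r IN (SELECT * FROM staging_rows) LOOP
    INSERT INTO target_rows VALUES r;
    l_done := l_done + 1;
    IF MOD(l_done, l_batch) = 0 THEN
      COMMIT;                      -- one log file sync per 1000 rows
    END IF;
  END LOOP;
  COMMIT;                          -- final commit for the remainder
END;
/
```

Note that batching COMMITs changes transactional behavior: a failure mid-loop leaves earlier batches committed, so this is only appropriate where the application tolerates partial completion.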
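For items 8 and 9 above, the control file locations and the current parallel server floor can be checked with the following (the SHOW PARAMETER command assumes a SQL*Plus session):

```sql
-- Item 8: identify the control file locations so they can be moved to
-- faster or less active disks.
SELECT name FROM v$controlfile;

-- Item 9: check the current PARALLEL_MIN_SERVERS setting.
SHOW PARAMETER parallel_min_servers
```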