
data blocks consistent reads - undo records applied

This query runs for a long time almost every day.

Here is the SQL text. As an aside, CURSOR_SHARING is set to either SIMILAR or FORCE, as shown by the system-generated bind names like :SYS_B_00.

SQL Stats showed high IO waits and high DISK_READS. This snapshot was taken in the middle of the execution; the final numbers are much higher.
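
These SQL Stats figures correspond to the cursor-level counters in v$sql; a minimal sketch of pulling them directly, with &sql_id as a placeholder for the statement in question:

-- Sketch: cursor-level IO and time counters for one statement.
SELECT sql_id,
       child_number,
       executions,
       disk_reads,
       buffer_gets,
       ROUND(user_io_wait_time / 1e6) AS io_wait_sec,
       ROUND(elapsed_time / 1e6)      AS elapsed_sec
  FROM v$sql
 WHERE sql_id = '&sql_id'
 ORDER BY child_number;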

Here is the SQL plan.

Session Events showed the top wait for the session was db file sequential read.
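
The session-level wait picture comes from v$session_event; a sketch, with &sid as a placeholder for the session:

-- Sketch: cumulative wait events for one session, largest first.
SELECT event,
       total_waits,
       ROUND(time_waited_micro / 1e6, 1) AS seconds_waited
  FROM v$session_event
 WHERE sid = &sid
 ORDER BY time_waited_micro DESC;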

ASH Activity showed the top wait was db file sequential read.

The Session ASH summary showed that the top IO-waited object has object_id 0. We can see this in the Active Session tab under the db file sequential read wait, whose P1 (file#) is 2, normally a system-related data file.
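
For db file sequential read, P1 is the file# and P2 the block#, and ASH records them per sample; a sketch of grouping the samples for the session (&sid is a placeholder), which is roughly what the Active Session tab shows:

-- Sketch: where the single-block reads land, from ASH.
-- For 'db file sequential read': p1 = file#, p2 = block#.
-- current_obj# of 0 means the block does not belong to a data object,
-- which in this case is consistent with undo block reads.
SELECT p1 AS file#,
       current_obj#,
       COUNT(*) AS samples
  FROM v$active_session_history
 WHERE session_id = &sid
   AND event = 'db file sequential read'
 GROUP BY p1, current_obj#
 ORDER BY samples DESC;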

This group of Long Ops context menu entries can sometimes help with tracking progress.

For the SQL in question, there were two rounds of table scans on MD.CONVERSION_MAPPING, coming from the UNION operation. The first round took 12,238 seconds. The second round had already spent 1,143 seconds and was expected to need 2,072 more seconds at the current speed. The total work for such a scan is usually the table's block count.
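
The numbers above map directly onto v$session_longops columns; a sketch of the query behind this kind of progress check (&sid is a placeholder):

-- Sketch: progress of long-running operations for one session.
-- For a full table scan, totalwork is usually the table's block count.
SELECT opname,
       target,
       sofar,
       totalwork,
       elapsed_seconds,
       time_remaining
  FROM v$session_longops
 WHERE sid = &sid
 ORDER BY start_time;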

We can refresh occasionally to check how slowly the scan is progressing. The slowness is not in the table scan itself, but in the subsequent steps fed by the result of the scan.

v$sql_workarea showed the current status on 10g. On 11g, v$session_longops can show you the current step, and v$sql_plan_monitor is much more useful for tracing progress. In any case, Oracle was still working on step 26 and feeding its result to the next steps.
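
On 11g, a sketch of the v$sql_plan_monitor check could look like this (it requires the statement to be monitored, and &sql_id is a placeholder):

-- Sketch: which plan line is active and how many rows each step has produced.
SELECT plan_line_id,
       plan_operation,
       plan_options,
       plan_object_name,
       starts,
       output_rows
  FROM v$sql_plan_monitor
 WHERE sql_id = '&sql_id'
 ORDER BY plan_line_id;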

Table stats confirmed that MD.CONVERSION_MAPPING has 11,677 blocks, matching what we saw in v$session_longops.

The table MD.CREATIVE, which follows MD.CONVERSION_MAPPING in the plan, is not a huge table either. Note that the objects are listed here in the order they appear in the plan.

Even the next one, CREATIVE_IO_PLACEMENT_MV, is not a huge table, although it has more than 20M records.

And the index is not very big either.
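
The sizes quoted above come from optimizer statistics; a sketch of checking them from the standard dictionary views, assuming the MVIEW and its index also live in the MD schema:

-- Sketch: table sizes from optimizer statistics.
SELECT table_name, num_rows, blocks, last_analyzed
  FROM dba_tables
 WHERE owner = 'MD'
   AND table_name IN ('CONVERSION_MAPPING', 'CREATIVE', 'CREATIVE_IO_PLACEMENT_MV');

-- Sketch: index size for the MVIEW's index.
SELECT index_name, blevel, leaf_blocks, distinct_keys
  FROM dba_indexes
 WHERE owner = 'MD'
   AND table_name = 'CREATIVE_IO_PLACEMENT_MV';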

The MVIEW CREATIVE_IO_PLACEMENT_MV was refreshed at 00:32. This timestamp is misleading, since it is closer to the start time than to the end time. The refresh is a COMPLETE refresh.
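
A sketch of checking the refresh method and the last refresh timestamp (keeping in mind that the date recorded is closer to the refresh start than to its end):

-- Sketch: refresh method and last refresh time of the MVIEW.
SELECT owner,
       mview_name,
       refresh_method,
       last_refresh_type,
       last_refresh_date
  FROM dba_mviews
 WHERE mview_name = 'CREATIVE_IO_PLACEMENT_MV';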

Session Stats is another tool that can help us analyze what the query is doing.

There are huge values for data blocks consistent reads - undo records applied and consistent changes, and they are very close to consistent gets (LIO). This means Oracle has to apply UNDO to construct most of the consistent-read (CR) blocks, a well-known cause of query slowness. When using Session Stats, it is better to take several snapshots and look at the delta. So UNDO block reads must be the source of the large db file sequential read waits against object_id 0.
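
A sketch of pulling these counters directly (run it a few times for the same &sid and diff the values to get the delta per interval):

-- Sketch: consistent-read related statistics for one session.
SELECT sn.name, st.value
  FROM v$sesstat st
  JOIN v$statname sn ON sn.statistic# = st.statistic#
 WHERE st.sid = &sid
   AND sn.name IN ('consistent gets',
                   'consistent changes',
                   'data blocks consistent reads - undo records applied')
 ORDER BY sn.name;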

Here are the session stats delta and the average per second.

We can use AWR IO -> Table/Index LIO to check whether any table or index related to the query has high LIO. The surprise is that the MVIEW and its index used by the query not only have high LIO, but also huge db_block_changes, which is related to the use of UNDO to construct CR blocks.
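
The AWR view behind this segment-level check is dba_hist_seg_stat; a sketch, assuming Diagnostics Pack licensing and with the snapshot range as placeholders:

-- Sketch: top segments by logical reads, with their block changes, from AWR.
SELECT *
  FROM (SELECT o.owner,
               o.object_name,
               o.object_type,
               SUM(s.logical_reads_delta)    AS logical_reads,
               SUM(s.db_block_changes_delta) AS db_block_changes
          FROM dba_hist_seg_stat s
          JOIN dba_hist_seg_stat_obj o
               ON o.obj# = s.obj# AND o.dataobj# = s.dataobj#
         WHERE s.snap_id BETWEEN &begin_snap AND &end_snap
         GROUP BY o.owner, o.object_name, o.object_type
         ORDER BY logical_reads DESC)
 WHERE ROWNUM <= 20;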

We can further use the Top SQL tab to find long-running queries and other information. The Time tab can be used to find long-running queries, and Disk Reads can be used to find high-PIO queries. SQL History can be used to display a query's status snap by snap. In our case, both the MVIEW query and the long-running query in question showed up as high Disk Reads queries. Here is the text of the MVIEW refresh query.

The first row here is the MVIEW refresh procedure, executed between hour 00 and hour 04. The second row is the query that inserts data into the MVIEW. There is one query with 173,925,125 buffer_gets, which is the one that purges the MVIEW data by DELETE; this could be the first step of the complete refresh. The third one is the long-running query. Note that the long-running query started after hour 02, before the MVIEW refresh completed.
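
This snap-by-snap SQL history corresponds to dba_hist_sqlstat; a sketch of listing the high disk-read statements per snapshot (the range and threshold are placeholders):

-- Sketch: SQL with high physical reads per snapshot, from AWR.
SELECT snap_id,
       sql_id,
       disk_reads_delta,
       buffer_gets_delta,
       ROUND(elapsed_time_delta / 1e6) AS elapsed_sec
  FROM dba_hist_sqlstat
 WHERE snap_id BETWEEN &begin_snap AND &end_snap
   AND disk_reads_delta > 100000
 ORDER BY snap_id, disk_reads_delta DESC;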

Here is the MVIEW purge query.

Here is the MVIEW INSERT query.

Here are the snap-by-snap stats for data blocks consistent reads - undo records applied. The values become large after hour 02, when the long-running query started.
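
A sketch of producing this snap-by-snap series from AWR (the value is cumulative since instance startup, so the delta is taken with LAG):

-- Sketch: per-snapshot growth of the undo-application statistic.
SELECT snap_id,
       value - LAG(value) OVER (ORDER BY snap_id) AS delta
  FROM dba_hist_sysstat
 WHERE stat_name = 'data blocks consistent reads - undo records applied'
   AND snap_id BETWEEN &begin_snap AND &end_snap
 ORDER BY snap_id;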

Here is the data taken the next day, when the MVIEW refresh and the long-running query did not overlap and both ran much faster. SNAPs 48416 to 48419 are for 06/10: the MVIEW refresh took more than 3 hours. SNAPs 48440 to 48441 are for 06/11: the MVIEW refresh took less than 2 hours and completed before the query in question started. Note that 06/11 also used far fewer disk_reads.

Here are the timings related to MVIEW refresh.

Also, on 06/11 the query that usually runs for around 9 hours took only 8 minutes, because it started after the MVIEW refresh had completed, so no UNDO had to be applied for consistent reads.

Here is the two-day history for the query in question. The bad run started at snap 48418 and ended at 48427; the good one completed within a single snap. Also note the huge difference in LIO and PIO. While we don't know why the MV refresh runs past hour 02, we can safely assume that if this query is rescheduled to a later time, giving the MVIEW enough time to refresh, it will actually finish much earlier.
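
A sketch of pulling this two-day history for a single statement from AWR (&sql_id is a placeholder for the concerned query; the snapshot range is the one discussed above):

-- Sketch: per-snapshot LIO, PIO and elapsed time for one sql_id.
SELECT snap_id,
       buffer_gets_delta               AS lio,
       disk_reads_delta                AS pio,
       ROUND(elapsed_time_delta / 1e6) AS elapsed_sec
  FROM dba_hist_sqlstat
 WHERE sql_id = '&sql_id'
   AND snap_id BETWEEN 48416 AND 48441
 ORDER BY snap_id;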
