This document explains exactly what happens when using ALTER TABLESPACE ... BEGIN BACKUP and ALTER TABLESPACE ... END BACKUP, and why it is mandatory to use them when the online backup is done with a tool that is external to Oracle (such as OS backups using cp, tar, BCV, etc.). It also answers these frequent questions:
- Does Oracle write to data files while in hot backup mode?
- What about ALTER DATABASE BEGIN BACKUP?
- Why is it not used with RMAN backups?
- What if you do an online backup without setting tablespaces in backup mode?
- What if the instance crashes while a tablespace is in backup mode?
- How to check which datafiles are in backup mode?
- What are the minimal archive logs to keep with the hot backup?
- Why use OS backups instead of RMAN?
- Why does BEGIN BACKUP take a long time?
Description
Offline backup (Cold backup)
A cold OS backup is simple: the database has been cleanly shut down (not crashed, not shutdown abort) so that:
- all datafiles are consistent (same SCN) and no redo is needed in case of restore
- the datafiles are closed: they will not be updated during the copy operation
Thus, it can be restored entirely and the database can be opened without the need to recover.
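For illustration, here is a minimal sketch of such a cold backup sequence run from SQL*Plus, assuming a single-instance database; the paths /u01/oradata/DB and /backup/DB are hypothetical examples:

SHUTDOWN IMMEDIATE
-- the instance is cleanly stopped: all datafiles are consistent and closed
HOST cp /u01/oradata/DB/*.dbf /backup/DB/
HOST cp /u01/oradata/DB/*.ctl /backup/DB/
STARTUP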
A hot backup does the copy while the database is running. That means the copy is inconsistent and will need redo applied to be usable. Recovery is the process of applying redo log information in order to roll forward file modifications
as they were done in the original files. When the copy is done with Oracle (RMAN), Oracle copies the datafile blocks to a backupset so that it will be able to restore and recover them. When the copy is done from the OS (i.e. with a tool that is not aware of the Oracle file structure), several issues come up:
- Header inconsistency: nothing guarantees the order in which the blocks of a file are copied, thus the header of the copy may reflect the state of the file at the beginning or at the end of the copy.
- Fractured blocks: nothing guarantees that an Oracle block is read in one single I/O, so the two halves of a block may reflect its state at two different points in time.
- Backup consistency: as the copy runs while the datafile is updated, it reads blocks at different points in time. Recovery is able to roll forward blocks from the past, but cannot deal with blocks from the future, thus the copy must be recovered at least up to the SCN that was current at the end of the copy.
So it is all about consistency in the copy: consistency between datafiles, consistency within datafiles, and consistency within data blocks; and this consistency must be kept in the current files (obviously) as well as in the copy (as it will be needed for a restore/recovery).
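To make this concrete, here is a minimal sketch of an OS hot backup for one tablespace, run from SQL*Plus; the tablespace name and file paths are examples, and each tablespace is handled the same way:

ALTER TABLESPACE users BEGIN BACKUP;
-- copy the datafiles of the tablespace with an OS tool
HOST cp /u01/oradata/DB/users01.dbf /backup/DB/
ALTER TABLESPACE users END BACKUP;
-- make sure the redo generated during the copy is archived
ALTER SYSTEM ARCHIVE LOG CURRENT;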
Backup mode
The goal of ALTER TABLESPACE ... BEGIN BACKUP and ALTER TABLESPACE ... END BACKUP is to trigger special behaviour on the current database files in order to make their copy usable, without affecting current operations. Nothing needs to be changed in the current datafiles for their own sake, but, as the copy is done by an external tool, the only way to have something set in the copy is to set it in the current datafiles before the copy, and revert it at the end. This is all about having a copy that can be recovered, with no control over the program that does the copy, and with minimal impact on the current database. In order to deal with the 3 previous issues, the instance that will do the recovery of the restored datafiles has to know:
- that the files need recovery
- from which SCN, and up to which SCN they have to be recovered at least
- enough information to fix fractured blocks
During backup mode, here is what happens for each datafile in the tablespace:
1- When BEGIN BACKUP is issued:
- The hot backup flag in the datafile headers is set, so that the copy is identified as a hot backup copy. This manages the backup consistency issue when the copy is used for a recovery.
- A checkpoint is done for the tablespace, so that no dirty buffer remains from modifications done before that point. The BEGIN BACKUP command completes only when the checkpoint is done.
- The datafile header is frozen, so that whenever it is copied, it reflects the checkpoint SCN that was current at the beginning of the backup. Then, when the copy is restored, Oracle knows that it must start recovery at that SCN when applying the archived redo logs. This avoids the header inconsistency issue. It means that further checkpoints do not update the datafile header SCN (but they do update a 'backup' SCN).
- Each first modification to a block in the buffer cache writes the full block into the redo thread (in addition to the default behaviour that writes only the change vector). This avoids the fractured block issue: there may be a fractured block in the copy, but it will be overwritten during the recovery with the full block image.
2- While in backup mode, everything goes on as normal except for two operations:
- at checkpoint, the datafile header SCN is not updated
- when a block is updated for the first time since it came into the buffer cache, the whole before-image of the block is recorded in the redo
Note that direct path writes do not go through the buffer cache, but they always write full blocks, and the full block is written to the redo log (if not in nologging).
3- When END BACKUP is issued:
- A record that marks the end of backup is written to the redo thread, so that if the copy is restored and recovered, it cannot be recovered to a point earlier than that. This avoids the backup consistency issue.
- The hot backup flag in the datafile headers is unset.
- The header SCN is written with the current one.
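The frozen header SCN can be observed from V$DATAFILE_HEADER. Here is a sketch of such a check; file 4 is assumed to belong to the tablespace in backup mode:

SELECT file#, checkpoint_change#, fuzzy FROM v$datafile_header WHERE file# = 4;
ALTER TABLESPACE users BEGIN BACKUP;
ALTER SYSTEM CHECKPOINT;
-- checkpoint_change# is unchanged: the header is frozen at the begin backup SCN
SELECT file#, checkpoint_change#, fuzzy FROM v$datafile_header WHERE file# = 4;
ALTER TABLESPACE users END BACKUP;
-- the header SCN is now updated to the current one
SELECT file#, checkpoint_change#, fuzzy FROM v$datafile_header WHERE file# = 4;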
Remarks:
1. The fractured block is not frequent, as it happens only if the I/O for the copy is done at the same time on the same block as the I/O for the update. But the only means to avoid the problem is to do that full logging of the block for each block that will be written while the copy is occurring, just in case.
2. If the OS I/O size is a multiple of the Oracle block size (e.g. backup done with dd bs=1M), that supplemental logging is probably not needed because fractured blocks cannot happen.
3. The begin backup checkpoint is mandatory to manage the fractured block issue: if a dirty buffer remained from a modification done before the begin backup, it would have no full-image redo and could be subject to a fractured block when written to disk.
4. The supplemental logging occurs when the block is modified for the first time since it came into the buffer cache. If the same block is reloaded again into the buffer cache, supplemental logging will occur again. I haven't seen that point documented, but a testcase doing a 'flush buffer_cache' proves it (see the sketch below).
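Here is a sketch of the kind of testcase mentioned in remark 4, measuring the session 'redo size' statistic around updates; the table T is hypothetical and the exact figures will vary:

ALTER TABLESPACE users BEGIN BACKUP;
SELECT value FROM v$mystat JOIN v$statname USING (statistic#) WHERE name = 'redo size';
UPDATE t SET x = x + 1 WHERE id = 1;  -- first change to the block: full block image in redo
COMMIT;
SELECT value FROM v$mystat JOIN v$statname USING (statistic#) WHERE name = 'redo size';
UPDATE t SET x = x + 1 WHERE id = 1;  -- block still in cache: only the change vector
COMMIT;
SELECT value FROM v$mystat JOIN v$statname USING (statistic#) WHERE name = 'redo size';
ALTER SYSTEM FLUSH BUFFER_CACHE;
UPDATE t SET x = x + 1 WHERE id = 1;  -- block reloaded: full block image logged again
COMMIT;
SELECT value FROM v$mystat JOIN v$statname USING (statistic#) WHERE name = 'redo size';
ALTER TABLESPACE users END BACKUP;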
Frequent questions
Does Oracle write to data files while in hot backup mode?
Yes, of course: it would not be called an 'online' backup if that were not the case.
What if you do an online backup without setting tablespaces in backup mode?
- Header inconsistency: if the file copy is done from beginning to end, then the datafile header should reflect the right SCN.
- Fractured blocks: if the copy does I/O with a size that is a multiple of the Oracle block size, then you should not have fractured blocks.
- Backup consistency: if you take care to recover past the point in time of the end of the copy, you should not have inconsistency.
But there may be other internal mechanisms that are not documented, so we cannot be sure that this list of issues is exhaustive. And, as it is not supported, we cannot rely on a backup done like that. Note that you will have no message telling you that such a copy is unusable.
How to check which datafiles are in backup mode?
Some old documentation says to check the FUZZY column of V$DATAFILE_HEADER. This is because in previous versions (before 9i) begin backup unset the online fuzzy bit in the datafile header, and set it back when end backup was issued. Since 9i, the online fuzzy bit is unset only when the datafile is offline or read-only, not for backup mode.
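The reliable way is to query V$BACKUP, which exposes the backup mode status per datafile. Here is a minimal sketch, joined with V$DATAFILE to get the file names:

SELECT d.file#, d.name, b.status, b.change#, b.time
FROM v$backup b JOIN v$datafile d ON b.file# = d.file#
WHERE b.status = 'ACTIVE';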
What are the minimal archive logs to keep with the hot backup?
A backup done online is unusable unless you can restore at least the archive logs:
- from the log that was the current redo log when the backup started,
- up to the log that was archived just after the backup (of the whole database) ended.
That is sufficient to do an incomplete media recovery up to the point of 'end backup'. Subsequent archive logs will be needed to recover up to the point of failure.
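Here is a sketch of how to bracket those archive logs, assuming a single redo thread; the sequence numbers come from V$ARCHIVED_LOG:

-- before starting the backup, archive the current log and note the last sequence
ALTER SYSTEM ARCHIVE LOG CURRENT;
SELECT MAX(sequence#) FROM v$archived_log WHERE thread# = 1;
-- ... hot backup of the whole database ...
-- after the last END BACKUP, force the redo generated during the copy to be archived
ALTER SYSTEM ARCHIVE LOG CURRENT;
SELECT MAX(sequence#) FROM v$archived_log WHERE thread# = 1;
-- every sequence after the first value, up to the second value, must be kept with the backup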
Why use OS backups instead of RMAN?
OS backups are still used for very large databases, with OS tools that can copy an entire database in seconds using mirror splits (BCV, FlashCopy, etc.).