
Database Management Systems

Suseta Datta
Assistant Professor
Department : Computer Applications
University of Engineering & Management,
Kolkata
Concurrency Control
Concurrency Control
 Concurrency control is a
database management systems concept that is used to
address conflicts with the simultaneous accessing or
altering of data that can occur with a multi-user system.
 The objective of concurrency control is to ensure the
serializability of transactions in a multiuser database
environment.
 Concurrency control is important because the
simultaneous execution of transactions over a shared
database can create several data integrity and
consistency problems.
Lock-based Protocols
 A lock is a mechanism to control concurrent access to a
data item.
 Data items can be locked in two modes:
 exclusive (X) mode: the data item can be both read and written. An X-lock is requested using the lock-X instruction.
 shared (S) mode: the data item can only be read. An S-lock is requested using the lock-S instruction.
 Lock requests are made to the concurrency-control manager by the programmer. A transaction can proceed only after the request is granted.
Lock-based Protocols
 Lock-compatibility matrix (true means the two modes are compatible):

          S       X
    S     true    false
    X     false   false

 A transaction may be granted a lock on an item if the requested lock is compatible with locks already held on the item by other transactions.
 Any number of transactions can hold shared locks on an item, but if any transaction holds an exclusive lock on the item, no other transaction may hold any lock on that item.
 If a lock cannot be granted, the requesting transaction is made to wait till all incompatible locks held by other transactions have been released. The lock is then granted.
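As a minimal sketch (in Python; the names and data layout are illustrative assumptions, not any DBMS API), the grant decision reduces to a lookup in the compatibility matrix:

    # Lock-compatibility matrix from the slide: True means the modes can coexist.
    COMPATIBLE = {
        ("S", "S"): True,  ("S", "X"): False,
        ("X", "S"): False, ("X", "X"): False,
    }

    def can_grant(requested, held_modes):
        # Grant only if the request is compatible with every lock
        # currently held on the item by other transactions.
        return all(COMPATIBLE[(requested, held)] for held in held_modes)

    # Any number of shared locks can coexist...
    assert can_grant("S", ["S", "S"])
    # ...but an exclusive lock excludes everything else.
    assert not can_grant("X", ["S"]) and not can_grant("S", ["X"])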
Lock-based Protocols
 Example of a transaction performing locking:
T2: lock-S(A);
read (A);
unlock(A);
lock-S(B);
read (B);
unlock(B);
display(A+B)
 Locking as above is not sufficient to guarantee serializability — if A
and B get updated in-between the read of A and B, the displayed sum
would be wrong.
 A locking protocol is a set of rules followed by all transactions
while requesting and releasing locks. Locking protocols restrict the
set of possible schedules.
Two-Phase Locking Protocol
 This protocol ensures conflict-serializable schedules.
 Phase 1: Growing Phase
 Transaction may obtain locks
 Transaction may not release locks
 Phase 2: Shrinking Phase
 Transaction may release locks
 Transaction may not obtain locks
 The protocol assures serializability. It can be proved
that the transactions can be serialized in the order of
their lock points (i.e., the point where a transaction
acquired its final lock).
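As an illustration of the two phases, here is a minimal Python sketch (the class and method names are hypothetical, not part of any DBMS API): the first unlock marks the lock point, after which further lock requests violate the protocol.

    class TwoPhaseTxn:
        def __init__(self):
            self.shrinking = False   # False = growing phase
            self.locks = set()

        def lock(self, item):
            if self.shrinking:
                raise RuntimeError("2PL violation: lock after first unlock")
            self.locks.add(item)     # growing phase: may acquire locks

        def unlock(self, item):
            self.shrinking = True    # lock point passed; shrinking begins
            self.locks.discard(item)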
Lock Conversion
 Two-phase locking with lock conversions:
 First Phase:
 can acquire a lock-S on item
 can acquire a lock-X on item
 can convert a lock-S to a lock-X (upgrade)
 Second Phase:
 can release a lock-S
 can release a lock-X
 can convert a lock-X to a lock-S (downgrade)
 This protocol assures serializability, but it still relies on the programmer to insert the various locking instructions.
Pitfalls of Lock-based Protocol
 In a database, a deadlock is an unwanted situation in which two or more transactions are waiting indefinitely for one another to give up locks.
 Neither T3 nor T4 can make progress: executing lock-S(B) causes T4 to wait for T3 to release its lock on B, while executing lock-X(A) causes T3 to wait for T4 to release its lock on A (a reconstruction of this schedule is shown below).
 Such a situation is called a deadlock.
 To handle a deadlock, one of T3 or T4 must be rolled back and its locks released.
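The schedule figure is not reproduced in the text; a partial schedule consistent with the description above (reconstructed along the lines of the standard textbook example) is:

    T3                          T4
    lock-X(B)
    read(B)
    write(B)
                                lock-S(A)
                                read(A)
                                lock-S(B)   -- waits for T3 to release B
    lock-X(A)   -- waits for T4 to release A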
Pitfalls of Lock-based Protocol
 Starvation or livelock is the situation when a transaction has to wait for an indefinite period of time to acquire a lock.
 Starvation occurs if the concurrency control
manager is badly designed. For example:
A transaction may be waiting for an X-lock on an item, while
a sequence of other transactions request and are granted an S-
lock on the same item.
The same transaction is repeatedly rolled back due to
deadlocks.
 Concurrency control manager can be designed to
prevent starvation.
Solution for Starvation
• Increasing priority –
Starvation occurs when a transaction has to wait indefinitely. In this situation we can increase the priority of that particular transaction. The drawback of this solution is that other transactions may then have to wait longer until the highest-priority transaction arrives and proceeds.
• Modification of the victim-selection algorithm –
If a transaction has repeatedly been chosen as a victim, the selection algorithm can be modified to lower the priority of selecting it again over other transactions.
• First Come First Serve approach –
A fair scheduling approach, i.e. FCFS, can be adopted, in which transactions acquire a lock on an item in the order in which they requested it.
• Wait-die and wound-wait schemes –
These schemes use the timestamp-ordering mechanism of transactions (see Deadlock Prevention below).
Deadlock Detection
 Deadlocks can be described by a wait-for graph, which consists of a pair G = (V, E), where
 V is a set of vertices (all the transactions in the system)
 E is a set of edges; each element is an ordered pair Ti → Tj.
 If Ti → Tj is in E, then there is a directed edge from Ti to Tj, implying that Ti is waiting for Tj to release a data item.
 When Ti requests a data item currently being held by Tj, the edge Ti → Tj is inserted in the wait-for graph. This edge is removed only when Tj is no longer holding a data item needed by Ti.
 The system is in a deadlock state if and only if the wait-for graph has a cycle. A deadlock-detection algorithm must be invoked periodically to look for cycles.
Deadlock Detection
The wait-for graph is one of the methods for detecting a deadlock situation; a small cycle-detection sketch follows.
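A minimal Python sketch of periodic detection (the graph representation is an assumption of the sketch): depth-first search for a cycle in the wait-for graph.

    def has_cycle(wait_for):
        # wait_for maps each transaction to the set of transactions
        # it is waiting on; a cycle means the system is deadlocked.
        visiting, done = set(), set()

        def dfs(t):
            visiting.add(t)
            for u in wait_for.get(t, ()):
                if u in visiting:             # back edge: cycle found
                    return True
                if u not in done and dfs(u):
                    return True
            visiting.discard(t)
            done.add(t)
            return False

        return any(t not in done and dfs(t) for t in list(wait_for))

    # T3 waits for T4 and T4 waits for T3: deadlock.
    assert has_cycle({"T3": {"T4"}, "T4": {"T3"}})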
Deadlock Prevention
 To prevent any deadlock situation in the system, the DBMS aggressively
inspects all the operations, where transactions are about to execute. The
DBMS inspects the operations and analyzes if they can create a deadlock
situation. If it finds that a deadlock situation might occur, then that
transaction is never allowed to be executed.
 There are deadlock prevention schemes that use timestamp ordering
mechanism of transactions in order to predetermine a deadlock situation.
 Wait-Die Scheme - In this scheme, if a transaction requests to lock a resource
(data item), which is already held with a conflicting lock by another
transaction, then one of the two possibilities may occur −
 If TS(Ti) < TS(Tj), that is, Ti, which is requesting a conflicting lock, is older than Tj, then Ti is allowed to wait until the data item is available.
 If TS(Ti) > TS(Tj), that is, Ti is younger than Tj, then Ti dies. Ti is restarted later with a random delay but with the same timestamp.
 This scheme allows the older transaction to wait but kills the younger one.
Deadlock Prevention
 Wound-Wait Scheme - In this scheme, if a transaction
requests to lock a resource (data item), which is already
held with conflicting lock by some another transaction,
one of the two possibilities may occur −
 If TS(Ti) < TS(Tj), then Ti forces Tj to be rolled back, that is, Ti wounds Tj. Tj is restarted later with a random delay but with the same timestamp.
 If TS(Ti) > TS(Tj), then Ti is forced to wait until the resource is available.
 This scheme allows the younger transaction to wait; but when an older transaction requests an item held by a younger one, the older transaction forces the younger one to abort and release the item.
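Both prevention schemes reduce to a single timestamp comparison; the compact Python sketch below (function names are hypothetical) makes the contrast with wait-die explicit.

    # Smaller timestamp = older transaction.

    def wait_die(requester_ts, holder_ts):
        # Older requester waits; younger requester dies (rolls back).
        return "wait" if requester_ts < holder_ts else "die"

    def wound_wait(requester_ts, holder_ts):
        # Older requester wounds the holder (holder rolls back);
        # younger requester waits.
        return "wound holder" if requester_ts < holder_ts else "wait"

    assert wait_die(1, 2) == "wait" and wait_die(2, 1) == "die"
    assert wound_wait(1, 2) == "wound holder" and wound_wait(2, 1) == "wait"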
Deadlock Avoidance
 Rather than letting the system run into a deadlock and then restarting or aborting transactions, it is better to avoid the deadlock in the first place.
 The deadlock-avoidance method is suitable for smaller databases, whereas the deadlock-prevention method is suitable for larger databases.
 One method of avoiding deadlocks is to use application-consistent logic.
 Another method is to apply both row-level locking and the READ COMMITTED isolation level. However, this does not guarantee that deadlocks are removed completely.
Deadlock Recovery
 When a deadlock is detected:
 Some transaction will have to be rolled back (made a victim) to break the deadlock. Select as victim the transaction that will incur minimum cost.
 Rollback: determine how far to roll the transaction back.
 Total rollback: abort the transaction and then restart it.
 It is more effective to roll back the transaction only as far as necessary to break the deadlock.
 Starvation happens if the same transaction is always chosen as victim. Include the number of rollbacks in the cost factor to avoid starvation.
Deadlock
 When a deadlock occurs there is a possibility of
cascading roll-backs.
 Cascading roll-back is possible under two-phase
locking.
 To avoid this, follow a modified protocol called Strict
two-phase locking - a transaction must hold all its
exclusive locks till it commits/aborts.
 Rigorous two-phase locking is even stricter. Here, all
locks are held till commit/abort. In this protocol
transactions can be serialized in the order in which they
commit.
Implementation of Locking
 A lock manager can be implemented as a separate process
to which transactions send lock and unlock requests.
 The lock manager replies to a lock request by sending a lock-grant message (or a message asking the transaction to roll back, in case of a deadlock).
 The requesting transaction waits until its request is
answered.
 The lock manager maintains a data-structure called a lock
table to record granted locks and pending requests.
 The lock table is usually implemented as an in-memory
hash table indexed on the name of the data item being
locked.
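A minimal Python sketch of such a lock table (the record layout and function name are assumptions of this sketch; a real lock manager also handles lock conversions and deadlock handling):

    from collections import defaultdict, deque

    # Maps data-item name -> granted locks and a FIFO queue of waiters.
    lock_table = defaultdict(lambda: {"granted": [], "waiting": deque()})

    def request_lock(txn, item, mode):
        entry = lock_table[item]
        held = [m for _, m in entry["granted"]]
        # Only shared locks are mutually compatible.
        if all(mode == "S" and m == "S" for m in held):
            entry["granted"].append((txn, mode))
            return "granted"
        entry["waiting"].append((txn, mode))   # FIFO wait avoids starvation
        return "waiting"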
Timestamp-Based Protocols
 Each transaction is issued a timestamp when it enters the
system. If an old transaction Ti has time-stamp TS(Ti), a new
transaction Tj is assigned time-stamp TS(Tj) such that TS(Ti)
<TS(Tj).
 The protocol manages concurrent execution such that the
time-stamps determine the serializability order.
 In order to assure such behavior, the protocol maintains for each data item Q two timestamp values:
 W-timestamp(Q) is the largest time-stamp of any transaction that
executed write(Q) successfully.
 R-timestamp(Q) is the largest time-stamp of any transaction that
executed read(Q) successfully.
Timestamp-Based Protocols
 The timestamp ordering protocol ensures that any
conflicting read and write operations are executed
in timestamp order.
 Suppose a transaction Ti issues a read(Q)
 If TS(Ti) < W-timestamp(Q), then Ti needs to read a value of Q that was already overwritten.
Hence, the read operation is rejected, and Ti is rolled back.
 If TS(Ti) >= W-timestamp(Q), then the read operation is executed, and R-timestamp(Q) is set to max(R-timestamp(Q), TS(Ti)).
Timestamp-Based Protocols
 Suppose that transaction Ti issues write(Q).
 If TS(Ti) < R-timestamp(Q), then the value of Q that Ti
is producing was needed previously, and the system
assumed that that value would never be produced.
Hence, the write operation is rejected, and Ti is rolled back.
 If TS(Ti) < W-timestamp(Q), then Ti is attempting to
write an obsolete value of Q.
Hence, this write operation is rejected, and Ti is rolled back.
 Otherwise, the write operation is executed, and W-
timestamp(Q) is set to TS(Ti).
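The read and write tests fit in a few lines; in the Python sketch below, the dictionaries R and W standing in for R-timestamp and W-timestamp are assumptions of the sketch.

    def to_read(ts_ti, q, R, W):
        if ts_ti < W[q]:                 # Q already overwritten by a younger writer
            return "reject: roll back Ti"
        R[q] = max(R[q], ts_ti)          # execute the read
        return "read ok"

    def to_write(ts_ti, q, R, W):
        if ts_ti < R[q] or ts_ti < W[q]: # late write or obsolete write
            return "reject: roll back Ti"
        W[q] = ts_ti                     # execute the write
        return "write ok"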
Example of the Protocol
 A partial schedule for several data items for transactions with
timestamps 1, 2, 3, 4, 5
Correctness of Timestamp-Ordering Protocol

 The timestamp-ordering protocol guarantees serializability since all the arcs in the precedence graph are of the form Ti → Tj with TS(Ti) < TS(Tj).
 Thus, there will be no cycles in the precedence graph.
 The timestamp protocol ensures freedom from deadlock as no transaction ever waits.
 But the schedule may not be cascade-free, and may not even be recoverable.
Timestamp Ordering Protocol
 The timestamp-ordering protocol ensures serializability among transactions in their conflicting read and write operations. It is the responsibility of the protocol that conflicting pairs of operations execute according to the timestamp values of the transactions.
 The timestamp of transaction Ti is denoted as TS(Ti).
 The read time-stamp of data item X is denoted by R-timestamp(X).
 The write time-stamp of data item X is denoted by W-timestamp(X).
 The timestamp-ordering protocol works as follows:
 If a transaction Ti issues a read(X) operation:
 If TS(Ti) < W-timestamp(X): the operation is rejected and Ti is rolled back.
 If TS(Ti) >= W-timestamp(X): the operation is executed, and R-timestamp(X) is set to max(R-timestamp(X), TS(Ti)).
 If a transaction Ti issues a write(X) operation:
 If TS(Ti) < R-timestamp(X): the operation is rejected and Ti is rolled back.
 If TS(Ti) < W-timestamp(X): the operation is rejected and Ti is rolled back.
 Otherwise, the operation is executed and W-timestamp(X) is set to TS(Ti).
Advantages & Disadvantages of
Timestamp Ordering Protocol
 The TO protocol ensures serializability since every arc in the precedence graph is of the form Ti → Tj with TS(Ti) < TS(Tj).
 The TO protocol ensures freedom from deadlock, which means no transaction ever waits.
 But the schedule may not be recoverable and may not even be cascade-free.
Recoverability and Cascade Freedom
 Problem with timestamp-ordering protocol:
 Suppose Ti aborts, but Tj has read a data item written by Ti
 Then Tj must abort; if Tj had been allowed to commit earlier, the
schedule is not recoverable.
 Further, any transaction that has read a data item written by Tj must abort.
 This can lead to cascading rollback, that is, a chain of rollbacks.
 Solution 1:
 A transaction is structured such that its writes are all performed at the
end of its processing
 All writes of a transaction form an atomic action; no transaction may
execute while a transaction is being written
 A transaction that aborts is restarted with a new timestamp
 Solution 2: Limited form of locking: wait for data to be committed
before reading it
 Solution 3: Use commit dependencies to ensure recoverability
Thomas' Write Rule
 The basic timestamp-ordering rule states that if TS(Ti) < W-timestamp(X), then the operation is rejected and Ti is rolled back.
 The timestamp-ordering rules can be modified to make the schedule view serializable.
 Instead of rolling Ti back, the 'write' operation itself is ignored.
Thomas' Write Rule
 The basic Thomas write rules are as follows:
If TS(T) < R_TS(X), then transaction T is aborted and rolled back, and the operation is rejected.
If TS(T) < W_TS(X), then don't execute the W_item(X) operation of the transaction and continue processing.
If neither condition 1 nor condition 2 holds, then transaction T is allowed to execute the WRITE operation and W_TS(X) is set to TS(T).
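In code, Thomas' write rule differs from the basic write test only in its second branch; a minimal Python sketch (names assumed):

    def thomas_write(ts_t, x, R_TS, W_TS):
        if ts_t < R_TS[x]:
            return "reject: roll back T"  # a younger transaction already read X
        if ts_t < W_TS[x]:
            return "ignore write"         # obsolete write: skip it, T continues
        W_TS[x] = ts_t                    # execute the write
        return "write ok"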
Thomas' Write Rule
 Consider a schedule in which T1's read of a data item precedes T1's write of the same item, with T2's write of that item in between (e.g. read1(X); write2(X); write1(X)). This schedule is not conflict serializable.
 Thomas' write rule observes that T2's write is never seen by any transaction. If we delete the write operation of T2, the remaining schedule is conflict serializable.
Multiple Granularity
 Allow data items to be of various sizes and define a
hierarchy of data granularities, where the small granularities
are nested within larger ones
 Can be represented graphically as a tree.
 When a transaction locks a node in the tree explicitly, it implicitly locks all the node's descendants in the same mode.
 Granularity of locking (level in tree where locking is
done):
 fine granularity (lower in tree): high concurrency, high locking
overhead
 coarse granularity (higher in tree): low locking overhead, low
concurrency
Example of Granularity Hierarchy

 The levels, starting from the coarsest (top) level, are:
 database
 area
 file
 record
Intention Lock Modes
 In addition to S and X lock modes, there are three
additional lock modes with multiple granularity:
 intention-shared (IS): indicates explicit locking at a lower
level of the tree but only with shared locks.
 intention-exclusive (IX): indicates explicit locking at a
lower level with exclusive or shared locks
 shared and intention-exclusive (SIX): the sub-tree rooted
by that node is locked explicitly in shared mode and
explicit locking is being done at a lower level with
exclusive-mode locks.
 Intention locks allow a higher-level node to be locked in S or X mode without having to check all descendant nodes.
Compatibility Matrix with Intention
Lock Modes
 The compatibility matrix for all lock modes is:
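The matrix itself is reproduced below from the standard treatment of intention locking ('yes' means a lock in the row mode can be held together with a lock in the column mode):

            IS     IX     S      SIX    X
    IS      yes    yes    yes    yes    no
    IX      yes    yes    no     no     no
    S       yes    no     yes    no     no
    SIX     yes    no     no     no     no
    X       no     no     no     no     no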
Validation Based Protocol
 The validation-based protocol is also known as the optimistic concurrency control technique. In this protocol, a transaction is executed in the following three phases:
 Read phase: In this phase, the transaction T reads the values of the various data items and stores them in temporary local variables. It performs all its write operations on the temporary variables, without updating the actual database.
 Validation phase: In this phase, the temporary variable values are validated against the actual data to see whether serializability would be violated.
 Write phase: If the transaction passes validation, the temporary results are written to the database; otherwise the transaction is rolled back.
 Each transaction is associated with the following timestamps:
 Start(Ti): the time when Ti started its execution.
 Validation(Ti): the time when Ti finished its read phase and started its validation phase.
 Finish(Ti): the time when Ti finished its write phase.
 The timestamp used for serialization is the timestamp of the validation phase, since that is the phase which actually determines whether the transaction will commit or roll back. Hence TS(T) = Validation(T).
 Serializability is determined during the validation process; it cannot be decided in advance.
 While executing transactions, this approach yields a greater degree of concurrency and fewer conflicts.
 Thus fewer transactions need to be rolled back.
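A Python sketch of the validation test for a transaction Tj (the field and set names are assumptions of the sketch): Tj passes only if, for every older transaction Ti, either Ti finished before Tj started, or Ti finished before Tj's validation and Ti's writes do not touch Tj's read set.

    from types import SimpleNamespace as Txn   # minimal stand-in for a txn record

    def validate(tj, older):
        # older: every transaction Ti with TS(Ti) < TS(Tj)
        for ti in older:
            if ti.finish < tj.start:
                continue                       # Ti finished before Tj started
            if ti.finish < tj.validation and not (ti.write_set & tj.read_set):
                continue                       # Ti's writes invisible to Tj's reads
            return False                       # validation fails: roll Tj back
        return True

    # Example: Ti finished before Tj began, so Tj validates successfully.
    ti = Txn(finish=5, write_set={"A"})
    tj = Txn(start=6, validation=9, read_set={"A"})
    assert validate(tj, [ti])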
Schedule Produced by Validation
 Example of schedule produced using validation:
Insert and Delete Operations
 If two-phase locking is used :
 A delete operation may be performed only if the transaction
deleting the tuple has an exclusive lock on the tuple to be
deleted.
 A transaction that inserts a new tuple into the database is given
an X-mode lock on the tuple
 Insertions and deletions can lead to the phantom
phenomenon.
 A transaction that scans a relation
 (e.g., find sum of balances of all accounts in Perryridge)
 and a transaction that inserts a tuple in the relation
 (e.g., insert a new account at Perryridge)
 (conceptually) conflict in spite of not accessing any tuple in common.
 If only tuple locks are used, non-serializable schedules can result
 E.g. the scan transaction does not see the new account, but reads some
other tuple written by the update transaction
Insert and Delete Operations
 The transaction scanning the relation is reading information that indicates
what tuples the relation contains, while a transaction inserting a tuple
updates the same information.
 The conflict should be detected, e.g. by locking the information.
 One solution:
 Associate a data item with the relation, to represent the information about what
tuples the relation contains.
 Transactions scanning the relation acquire a shared lock on the data item,
 Transactions inserting or deleting a tuple acquire an exclusive lock on the data
item. (Note: locks on the data item do not conflict with locks on individual
tuples.)
 Above protocol provides very low concurrency for insertions/deletions.
 Index locking protocols provide higher concurrency while
preventing the phantom phenomenon, by requiring locks
on certain index buckets.
Recovery System
Failure Classification
 Transaction failure :
 Logical errors: transaction cannot complete due to some internal error
condition.
 System errors: the database system must terminate an active
transaction due to an error condition (e.g., deadlock).
 System crash: a power failure or other hardware or software failure causes
the system to crash.
 Fail-stop assumption: non-volatile storage contents are assumed to not
be corrupted by system crash.
 Database systems have numerous integrity checks to prevent
corruption of disk data.
 Disk failure: a head crash or similar disk failure destroys all or part of disk
storage.
 Destruction is assumed to be detectable: disk drives use checksums to
detect failures.
Recovery Algorithm
 Recovery algorithms are techniques to ensure
database consistency and transaction atomicity
and durability despite failures.
 Recovery algorithms have two parts:
 Actions taken during normal transaction processing to
ensure enough information exists to recover from
failures.
 Actions taken after a failure to recover the database
contents to a state that ensures atomicity, consistency
and durability.
Storage Structure
 Volatile storage:
does not survive system crashes
Examples: main memory, cache memory
 Nonvolatile storage:
survives system crashes
Examples: disk, tape, flash memory, non-volatile
(battery backed up) RAM
 Stable storage:
a mythical form of storage that survives all failures
approximated by maintaining multiple copies on
distinct nonvolatile media
Recovery and Atomicity
 When a system crashes, it may have several transactions being executed
and various files opened for them to modify the data items.
 According to ACID properties of DBMS, atomicity of transactions as a
whole must be maintained, that is, either all the operations are executed or
none.
 When a DBMS recovers from a crash, it should maintain the following −
 It should check the states of all the transactions, which were being
executed.
 A transaction may be in the middle of some operation; the DBMS must
ensure the atomicity of the transaction in this case.
 It should check whether the transaction can be completed now or it
needs to be rolled back.
 No transactions would be allowed to leave the DBMS in an
inconsistent state.
Data Access
 Physical blocks are those blocks residing on the disk.
 Buffer blocks are the blocks residing temporarily in main
memory.
 Block movements between disk and main memory are
initiated through the following two operations:
 input(B) transfers the physical block B to main memory.
 output(B) transfers the buffer block B to the disk, and replaces
the appropriate physical block there.
 Each transaction Ti has its private work-area in which local
copies of all data items accessed and updated by it are kept.
 Ti's local copy of a data item X is called xi.
 We assume, for simplicity, that each data item fits in, and is
stored inside, a single block.
Data Access
 Transaction transfers data items between system buffer blocks and
its private work-area using the following operations :
 read(X) assigns the value of data item X to the local variable xi.
 write(X) assigns the value of local variable xi to data item X in the buffer block.
 Both these commands may necessitate issuing an input(BX) instruction before the assignment, if the block BX in which X resides is not already in memory.
 Transactions
 Perform read(X) while accessing X for the first time;
 All subsequent accesses are to the local copy.
 After last access, transaction executes write(X).
 output(BX) need not immediately follow write(X). System can
perform the output operation when it deems fit.
Recovery and Atomicity
 There are two types of techniques, which can
help a DBMS in recovering as well as
maintaining the atomicity of a transaction −
Maintaining the logs of each transaction, and
writing them onto some stable storage before
actually modifying the database.
Maintaining shadow paging, where the changes
are done on a volatile memory, and later, the actual
database is updated.
Log-based Recovery
 A log is kept on stable storage.
 The log is a sequence of log records, and maintains a record of update
activities on the database.
 When transaction Ti starts, it registers itself by writing a <Ti start> log record.
 Before Ti executes write(X), a log record <Ti, X, V1, V2> is written, where V1 is the value of X before the write, and V2 is the value to be written to X.
 The log record notes that Ti has performed a write on data item Xj; Xj had value V1 before the write, and will have value V2 after the write.
 When Ti finishes its last statement, the log record <Ti commit> is written.
 We assume for now that log records are written directly to stable storage (that is, they are not buffered).
 Two approaches using logs
 Deferred database modification
 Immediate database modification
Deferred Database Modification
 The deferred database modification scheme records all
modifications to the log, but defers all the writes to after
partial commit.
 Assume that transactions execute serially
 Transaction starts by writing <Ti start> record to log.
 A write(X) operation results in a log record <Ti, X, V> being
written, where V is the new value for X
 Note: old value is not needed for this scheme
 The write is not performed on X at this time, but is deferred.
 When Ti partially commits, <Ti commit> is written to the log
 Finally, the log records are read and used to actually execute
the previously deferred writes.
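A minimal Python sketch of the recovery pass for this scheme (the log-record encoding is an assumption of the sketch): only transactions with both <start> and <commit> records are redone, in log order.

    def recover_deferred(log, db):
        # log records assumed as ("start", T), ("write", T, X, V), ("commit", T)
        committed = {rec[1] for rec in log if rec[0] == "commit"}
        for rec in log:                        # forward pass
            if rec[0] == "write" and rec[1] in committed:
                _, _, x, v = rec
                db[x] = v                      # redo: install the new value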
Deferred Database Modification
 During recovery after a crash, a transaction needs to be redone if
and only if both <Ti start> and<Ti commit> are there in the log.
 Redoing a transaction Ti ( redo Ti) sets the value of all data items
updated by the transaction to the new values.
 Crashes can occur while
 the transaction is executing the original updates, or
 while recovery action is being taken
 Example transactions T0 and T1 (T0 executes before T1):

   T0: read(A)              T1: read(C)
       A := A - 50              C := C - 100
       write(A)                 write(C)
       read(B)
       B := B + 50
       write(B)
Deferred Database Modification
 Below we show the log as it appears at three instances of time (reconstructed from the example above):

   (a) <T0 start>        (b) <T0 start>        (c) <T0 start>
       <T0, A, 950>          <T0, A, 950>          <T0, A, 950>
       <T0, B, 2050>         <T0, B, 2050>         <T0, B, 2050>
                             <T0 commit>           <T0 commit>
                             <T1 start>            <T1 start>
                             <T1, C, 600>          <T1, C, 600>
                                                   <T1 commit>

 If the log on stable storage at the time of the crash is as in case:
 (a) No redo actions need to be taken.
 (b) redo(T0) must be performed since <T0 commit> is present.
 (c) redo(T0) must be performed followed by redo(T1) since <T0 commit> and <T1 commit> are present.
Immediate Database Modification
 The immediate database modification scheme allows database
updates of an uncommitted transaction to be made as the writes are
issued
 since undoing may be needed, update logs must have both old value
and new value
 Update log record must be written before database item is written
 We assume that the log record is output directly to stable storage
 It can be extended to postpone log record output, so long as, prior to execution of an output(B) operation for a data block B, all log records corresponding to items in B are flushed to stable storage.
 Output of updated blocks can take place at any time before or after
transaction commit
 Order in which blocks are output can be different from the order in
which they are written.
Immediate Database Modification
   Log                      Write                Output

   <T0 start>
   <T0, A, 1000, 950>
   <T0, B, 2000, 2050>
                            A = 950
                            B = 2050
   <T0 commit>
   <T1 start>
   <T1, C, 700, 600>
                            C = 600
                                                 BB, BC
   <T1 commit>
                                                 BA

• Note: BX denotes the block containing X.
Immediate Database Modification
 Recovery procedure has two operations instead of one:
 undo(Ti) restores the value of all data items updated by Ti to
their old values, going backwards from the last log record for Ti
 redo(Ti) sets the value of all data items updated by Ti to the new
values, going forward from the first log record for Ti.
 Both operations must be idempotent.
 That is, even if the operation is executed multiple times the
effect is the same as if it is executed once.
 Needed since operations may get re-executed during recovery.
 When recovering after failure:
 Transaction Ti needs to be undone if the log contains the record
<Ti start>, but does not contain the record <Ti commit>.
 Transaction Ti needs to be redone if the log contains both the
record <Ti start> and the record <Ti commit>.
 Undo operations are performed first, then redo operations.
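A minimal Python sketch of the two passes (log-record encoding assumed as in the deferred case, but now carrying both old and new values):

    def recover_immediate(log, db):
        # records: ("start", T), ("write", T, X, old, new), ("commit", T)
        started = {r[1] for r in log if r[0] == "start"}
        committed = {r[1] for r in log if r[0] == "commit"}
        for r in reversed(log):                # undo pass, backwards
            if r[0] == "write" and r[1] in started - committed:
                db[r[2]] = r[3]                # restore the old value
        for r in log:                          # redo pass, forwards
            if r[0] == "write" and r[1] in committed:
                db[r[2]] = r[4]                # reinstall the new value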
Immediate Database Modification
 Below we show the log as it appears at three instances of time (reconstructed from the example above):

   (a) <T0 start>              (b) <T0 start>              (c) <T0 start>
       <T0, A, 1000, 950>          <T0, A, 1000, 950>          <T0, A, 1000, 950>
       <T0, B, 2000, 2050>         <T0, B, 2000, 2050>         <T0, B, 2000, 2050>
                                   <T0 commit>                 <T0 commit>
                                   <T1 start>                  <T1 start>
                                   <T1, C, 700, 600>           <T1, C, 700, 600>
                                                               <T1 commit>

 Recovery actions in each case above are:
 (a) undo(T0): B is restored to 2000 and A to 1000.
 (b) undo(T1) and redo(T0): C is restored to 700, and then A and B are set to 950 and 2050 respectively.
 (c) redo(T0) and redo(T1): A and B are set to 950 and 2050 respectively. Then C is set to 600.
Recovery with Concurrent Transactions
 When more than one transaction is being executed in parallel, the logs are interleaved. At the time of recovery, it would become hard for the recovery system to backtrack through all the logs and then start recovering. To solve this problem, the DBMS introduces a concept called 'checkpoints'.
 Checkpoint –
 Keeping and maintaining logs in real time and in a real environment may fill up all the memory space available in the system.
 As time passes, the log file may grow too big to be handled at all.
 A checkpoint is a mechanism where all the previous logs are removed from the system and stored permanently on a storage disk.
 A checkpoint declares a point before which the DBMS was in a consistent state and all the transactions were committed.
Recovery with Concurrent Transactions
 When a system with concurrent transactions crashes and recovers, it behaves in the following manner:
 The recovery system reads the logs backwards from the end to the last checkpoint.
 It maintains two lists, an undo-list and a redo-list.
 If the recovery system sees a log with <Tn, Start> and <Tn, Commit>, or just <Tn, Commit>, it puts the transaction in the redo-list.
 If the recovery system sees a log with <Tn, Start> but no commit or abort record, it puts the transaction in the undo-list.
 All the transactions in the undo-list are then undone and their logs are removed. All the transactions in the redo-list are redone, and their log records are retained.
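A minimal Python sketch of the backward scan that builds the two lists (the record encoding is an assumption; the checkpoint record carries no transaction id):

    def build_lists(log):
        undo_list, redo_list = set(), set()
        for rec in reversed(log):             # scan backwards from the end
            kind = rec[0]
            if kind == "commit":
                redo_list.add(rec[1])         # has <Tn, Commit>: redo it
            elif kind == "start" and rec[1] not in redo_list:
                undo_list.add(rec[1])         # started, never committed: undo
            elif kind == "checkpoint":
                break                         # records before this are irrelevant
        return undo_list, redo_list

    # Example: T1 committed after the checkpoint, T2 did not finish.
    log = [("checkpoint",), ("start", "T1"), ("commit", "T1"), ("start", "T2")]
    assert build_lists(log) == ({"T2"}, {"T1"})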
Shadow Paging
 Shadow paging is an alternative to log-based recovery; this scheme
is useful if transactions execute serially.
 Idea: maintain two page tables during the lifetime of a transaction –
the current page table, and the shadow page table.
 Store the shadow page table in nonvolatile storage, such that state of
the database prior to transaction execution may be recovered.
 Shadow page table is never modified during execution.
 To start with, both the page tables are identical. Only current page
table is used for data item accesses during execution of the
transaction.
 Whenever any page is about to be written for the first time:
 A copy of this page is made onto an unused page (which becomes the new current page).
 The current page table is then made to point to the copy.
 The update is performed on the copy.
Shadow Page Table
Shadow Paging
 To commit a transaction :
 Flush all modified pages in main memory to disk
 Output current page table to disk
 Make the current page table the new shadow page table, as follows:
 keep a pointer to the shadow page table at a fixed (known) location on
disk.
 to make the current page table the new shadow page table, simply
update the pointer to point to current page table on disk
 Once pointer to shadow page table has been written, transaction is
committed.
 No recovery is needed after a crash — new transactions can start
right away, using the shadow page table.
 Pages not pointed to from current/shadow page table should be freed
(garbage collected).
Shadow Paging
 Advantages of shadow-paging over log-based schemes
 no overhead of writing log records.
 recovery is trivial.
 Disadvantages :
 Copying the entire page table is very expensive.
 Can be reduced by using a page table structured like a B+-tree.
 No need to copy entire tree, only need to copy paths in the tree
that lead to updated leaf nodes.
 Commit overhead is high even with above extension.
 Need to flush every updated page, and page table.
 Data gets fragmented (related pages get separated on disk).
 After every transaction completion, the database pages containing old
versions of modified data need to be garbage collected.
 Hard to extend algorithm to allow transactions to run concurrently.
 Easier to extend log based schemes.
Thank You
