
Concurrency Control with Locking


Learning Objectives

After completing this topic, you should be able to


recognize the fundamentals of locking
describe advanced locking techniques
1. Fundamentals of locking
In real-world operation, multiple transactions happen concurrently, and this introduces a range of concurrency problems that
may affect data integrity. To maintain data integrity, Database Management Systems, often referred to as
DBMSs, use locking mechanisms.
Locking involves preventing the data items that a transaction accesses from being accessed by other
transactions. Other transactions are placed in a wait state until the first transaction releases the locks.
Consider two transactions, A and B, that need to update the Orders table in a database. If transaction A updates
the Orders table, the DBMS locks the table for transaction A until the transaction is committed. If
transaction B tries to update the table while transaction A is still in progress, it is placed in a wait state. After
transaction A commits, the DBMS automatically releases that lock and then locks the table again for
transaction B until it commits.
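As an illustration only, the sequence above might look like this in SQL, assuming an Orders table with hypothetical OrderID and Status columns:

-- Transaction A (session 1): the UPDATE locks the Orders table
START TRANSACTION;
UPDATE Orders SET Status = 'SHIPPED' WHERE OrderID = 101;

-- Transaction B (session 2): under table level locking, this UPDATE is
-- placed in a wait state until transaction A releases its lock
START TRANSACTION;
UPDATE Orders SET Status = 'CANCELLED' WHERE OrderID = 205;

-- Transaction A commits; the DBMS releases the lock and transaction B proceeds
COMMIT;
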
In a DBMS, locks can be implemented at five levels, which are

the database level


When using database level locking, the entire database gets locked, preventing other users from
performing any transactions. A database level lock can be good in some situations such as when a
transaction needs to update numerous data items in a database or when the database structure needs
to be modified. In other situations, it can cause poor performance, because other users have to wait to
access any objects in the database.
the table level
With a table level lock, the table that a transaction will access is locked. However, this lock reduces
performance if several transactions need to update data in the same table at the same time, because
other users will need to wait to access the table.
the page level
In page level locking, individual data blocks that the transaction is accessing are locked on the disk
where the data resides. This lock allows many simultaneous transactions on different pages in the
database. Many older DBMSs use this type of lock.
the row level, and
A row level lock locks a specific row in the table for each transaction. At the same time, another
transaction can obtain a lock over a different row and update data, even within the same table (see the
sketch after this list). This type of lock is now used instead of page level locking.
the data item level
Individual transactions can lock individual data items in a row, allowing multiple concurrent transactions
on the same row as long as they access different columns. However, commercial DBMSs don't use this
level of locking because it causes high system overhead.
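The sketch below, for illustration only, contrasts row level locking with the table level example shown earlier; the Orders columns are hypothetical.

-- Session 1: locks only the row for order 101
START TRANSACTION;
UPDATE Orders SET Status = 'SHIPPED' WHERE OrderID = 101;

-- Session 2: with row level locking this runs immediately because it locks
-- a different row in the same table; with table level locking it would wait
START TRANSACTION;
UPDATE Orders SET Status = 'SHIPPED' WHERE OrderID = 102;
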
In addition to the different levels of locks, DBMSs implement two types of locking schemes to provide
concurrency. These are shared and exclusive locks.
A shared lock is used to provide read-only access to a database entity. Multiple transactions can simultaneously
obtain shared locks over the same database item.
An exclusive lock is used when a transaction needs to update data. An exclusive lock is only released after a
transaction is completed. So, no other transaction can lock a data item with either a shared or exclusive lock
until the exclusive lock is released.
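A minimal sketch of the two lock types, assuming a DBMS that supports the FOR SHARE and FOR UPDATE clauses (the syntax differs between products, and the Orders columns are hypothetical):

-- Shared lock: several transactions can hold this on the same row at once,
-- but the row cannot be updated while any shared lock is held
SELECT Status FROM Orders WHERE OrderID = 101 FOR SHARE;

-- Exclusive lock: taken before an update; no other transaction can obtain
-- a shared or exclusive lock on the row until this transaction completes
SELECT Status FROM Orders WHERE OrderID = 101 FOR UPDATE;
UPDATE Orders SET Status = 'SHIPPED' WHERE OrderID = 101;
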
If two or more transactions need to update the same data items in a database and try to lock the data at the
same time, deadlocks may occur. Consider two transactions, A and B, that each need to update both the Orders and Store
tables. Transaction A updates the Orders table first, acquiring a lock on it. Meanwhile, transaction B updates the Store table,
locking that table. When transaction A then tries to update the Store table, it waits for transaction B to release its lock.
Similarly, when transaction B tries to update the Orders table, it waits for transaction A to release its lock. Both transactions
may endlessly wait for the other to end. This is called a deadlock.
DBMSs usually check for deadlocks at predefined intervals. If a deadlock occurs, the DBMS ends one of the
transactions and rolls it back. The user who issued that transaction is notified of the deadlock and rollback. The
transaction that "won" the deadlock continues and completes as usual.
Deadlocks can also be avoided by setting transactions that update the same data to proceed in a predetermined
order.
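For illustration, the deadlock above could arise from a sequence like this (the table and column names are hypothetical):

-- Transaction A (session 1): locks the Orders table
START TRANSACTION;
UPDATE Orders SET Status = 'SHIPPED' WHERE OrderID = 101;

-- Transaction B (session 2): locks the Store table
START TRANSACTION;
UPDATE Store SET Region = 'North' WHERE StoreID = 7;

-- Transaction A now tries to update Store and waits for B's lock
UPDATE Store SET Region = 'South' WHERE StoreID = 7;

-- Transaction B now tries to update Orders and waits for A's lock:
-- a deadlock, so the DBMS ends one of the transactions and rolls it back
UPDATE Orders SET Status = 'CANCELLED' WHERE OrderID = 101;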

Question
Two transactions want to update the same table in a DBMS that is using table level locks.
How can the DBMS ensure that both transactions are successful?
Options:
1. By granting shared locks over the table to both transactions
2. By locking the table for one transaction until it is committed
3. By locking the individual rows that each transaction updates
4. By granting exclusive locks over the table simultaneously for both transactions

Answer
Option 1: This option is incorrect. A transaction cannot update a table with a shared lock. So this
measure does not ensure that both transactions are successful.
Option 2: This option is correct. Though locking the entire table for a transaction reduces
performance, the DBMS can ensure that both transactions are successful.
Option 3: This option is incorrect. Because the DBMS uses table level locks, individual rows cannot
be locked for their respective transactions.
Option 4: This option is incorrect. Two transactions cannot obtain exclusive locks simultaneously
over the same table. So this does not ensure that the transactions are successful.
Correct answer(s):
2. By locking the table for one transaction until it is committed

2. Advanced locking techniques


In addition to the locking features in the SQL standard, commercial DBMSs offer advanced locking
features. These include explicit locking, isolation levels, and locking parameters. These features help to define the
desired locking settings for a transaction.
If a transaction needs to repeatedly access rows in a table, it will repeatedly obtain individual locks
on those rows, which increases system overhead. Small transactions may also place a bulk update in a wait
state, which may cause a deadlock and roll the bulk update back. These problems can be avoided if the bulk
update locks the entire table until the transaction is complete by using an explicit lock. Before the transaction
takes place, the user specifies that they want to acquire the explicit lock. This is done differently in
different DBMSs, but most use the LOCK TABLE statement.
Some DBMSs, such as SQL Server, do not support explicit locking.
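In products that do accept a LOCK TABLE statement, such as Oracle or PostgreSQL, a bulk update behind an explicit lock might be sketched as follows; the lock mode name and the Orders columns shown are illustrative assumptions.

START TRANSACTION;

-- Take one table level lock up front instead of many row level locks
LOCK TABLE Orders IN EXCLUSIVE MODE;

-- The bulk update now runs without competing for individual row locks
UPDATE Orders SET Status = 'ARCHIVED' WHERE OrderDate < DATE '2012-01-01';

-- The explicit lock is released when the transaction ends
COMMIT;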

Question
Identify the benefits of using explicit locks.
Options:
1. Increasing system overhead because of numerous small locks
2. Avoiding bulk updates being rolled back
3. Allowing several transactions to update data simultaneously
4. Processing bulk updates quickly

Answer
Option 1: This option is incorrect. An explicit lock is used to replace the numerous small locks
required by a particular transaction. System overhead can be reduced if all small locks are
replaced by a single, large lock.
Option 2: This option is correct. If a deadlock occurs, a bulk update may be terminated by the
DBMS and rolled back. Locking the data required for the bulk update helps to avoid rolling back
bulk updates.
Option 3: This option is incorrect. An explicit lock is used to lock data until a transaction is
complete. So it is not possible for multiple transactions to update data simultaneously.
Option 4: This option is correct. Because the explicit locking feature is used to lock the required
data, bulk updates that need to update several rows are processed quickly.
Correct answer(s):
2. Avoiding bulk updates being rolled back
4. Processing bulk updates quickly

A transaction may run the same statement multiple times while it's in progress. Ideally, the query should yield the
same results each time. But if the transaction doesn't have an exclusive lock, other transactions can update the data. In
such a case, query results might contain inconsistent data, causing a lost update, dirty read, nonrepeatable
read, or phantom read.

Graphic
The image illustrates an example where a query yields different results. First, transaction A
executes the statement SELECT PROD_QTY FROM PRODUCTS. This returns 85. When
transaction A executes the same statement again, it returns 50.
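For illustration, the scenario in the graphic can be written out as follows, assuming a low isolation level such as READ COMMITTED and the PRODUCTS table shown above:

-- Transaction A
START TRANSACTION;
SELECT PROD_QTY FROM PRODUCTS;   -- returns 85

-- Meanwhile, another transaction updates the quantity and commits:
-- UPDATE PRODUCTS SET PROD_QTY = 50; COMMIT;

SELECT PROD_QTY FROM PRODUCTS;   -- now returns 50: a nonrepeatable read
COMMIT;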

The ability of a DBMS to provide identical answers to the same query executed in a single transaction is known
as its isolation level. This depends on how well the transaction is isolated from all other transactions being
performed concurrently.

Graphic
The image illustrates an example where two transactions, A and B, are in progress. Transaction A
executes the statement SELECT PROD_QTY FROM PRODUCTS and returns 85. A little later, while
transaction A is still in progress, another transaction B executes the statement UPDATE
PROD_QTY. This transaction is placed in the wait state because transaction A has not yet been
committed.

The DBMS can ensure that every row remains unchanged within a transaction. This is the highest level of isolation.
However, it reduces performance because of the loss of concurrency. The number of locks can be reduced if
the DBMS knows how a transaction works.
The different levels of isolation that can be set include

SERIALIZABLE
When a DBMS uses the SERIALIZABLE isolation level, the results of two similar queries during a
transaction will be the same as if the transaction occurs in isolation and not concurrently with other
transactions. This is the highest level of isolation for transactions.
In the SERIALIZABLE isolation level, all concurrency problems will be prevented by the DBMS, including
the lost update, dirty read, nonrepeatable read, and phantom read problems.
REPEATABLE READ
REPEATABLE READ is the second highest level of isolation provided by DBMSs. Using this isolation
level, a transaction can't access data updates from other transactions that are either committed or
uncommitted. However, insertions by other simultaneous transactions may be available in queries.
When the REPEATABLE READ isolation level is used, the phantom read problem may occur. Other
problems, such as lost update, dirty read, and nonrepeatable read, are prevented by the DBMS.
READ COMMITTED, and
READ COMMITTED is the third level of isolation. Though data from uncommitted transactions aren't
available in this level, data from other committed transactions will be available when statements are
executed. This isolation level can be used when a transaction doesn't need to modify data.
When the READ COMMITTED isolation level is used, the phantom read and nonrepeatable read
problems may occur. However, other problems such as lost update and dirty read will be prevented by
the DBMS.
READ UNCOMMITTED
READ UNCOMMITTED is the lowest level of isolation. Data from both committed and uncommitted
transactions will be available for a transaction in this level. This means queries may yield updated values
when they're executed at different points within a transaction.
When the READ UNCOMMITTED isolation level is used, only the lost update problem will be prevented
by the DBMS. All other concurrency problems, including dirty read, nonrepeatable read, and phantom
read problems may occur.
You can define the level of isolation by using the START TRANSACTION and SET TRANSACTION statements. By
default, the SERIALIZABLE isolation level is defined for all transactions.
If you are defining the isolation level for a transaction that needs to update data, you can't specify the READ
UNCOMMITTED isolation level. You should also set the access level as READ WRITE to allow data updates.
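A minimal sketch using the standard SQL syntax (support for the individual clauses and the default isolation level vary between products; the PRODUCTS table is the one from the earlier example):

-- Set the isolation level and access mode for the next transaction
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ, READ WRITE;

START TRANSACTION;
UPDATE PRODUCTS SET PROD_QTY = PROD_QTY - 10;
COMMIT;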

Question
Match the isolation level with its description.
Options:
A. The DBMS can prevent all concurrency problems from occurring
B. The only problem that can't be avoided at this level is the phantom read problem
C. Phantom read and nonrepeatable read problems can occur
D. The DBMS can only prevent the lost update problem
Targets:
1. SERIALIZABLE
2. REPEATABLE READ
3. READ COMMITTED
4. READ UNCOMMITTED

Answer
The SERIALIZABLE isolation level is the highest level of isolation. In this level, all concurrency
problems can be avoided. And within a transaction, a query will yield the same results when
executed at different points.
If you use the REPEATABLE READ isolation level, the transaction cannot access updates by other
committed or uncommitted transactions. However, the transaction can access insertions by other
transactions. So the phantom read problem may occur.
In the READ COMMITTED isolation level, transactions can access updates and additions by other
committed transactions. So the DBMS can prevent lost update and dirty read problems. However,
phantom read and nonrepeatable read problems can occur.
READ UNCOMMITTED is the lowest level of isolation. Transactions can access insertions and updates
from other committed and uncommitted transactions. Though the DBMS prevents the lost update
problem, other concurrency problems can occur.
Correct answer(s):

Target 1 = Option A
Target 2 = Option B
Target 3 = Option C
Target 4 = Option D

As the needs of different organizations vary, having the option of changing the parameters used for locking is
beneficial, so certain DBMSs allow users to manually update the locking parameters. Database Administrators,
abbreviated to DBAs, can configure certain locking parameters. Other, newer DBMSs don't provide this feature
because a faulty configuration may lead to numerous problems.
The parameters that can be set by DBAs include

lock size
DBMSs offer different lock sizes, including row level, table level, and page level locks. A DBA can set
this depending on the needs of a specific application or transaction.
number of locks
The DBA can define the maximum number of locks that a transaction can obtain. Complex transactions
may require many small locks, which can cause poor system performance. If the
number of locks required exceeds the number of locks allowed, lock escalation can occur.
lock escalation, and
If there are many small locks, the overhead may increase. So the DBA can escalate many smaller locks
into a single, large lock. For example, if one transaction is obtaining multiple row level locks on one
table, the small locks can be escalated to a table lock instead. In addition, DBMSs are capable of
automatically escalating locks.
lock timeout
Transactions may sometimes have to wait a long time for other transactions to release locks, and
sometimes deadlocks may occur. The DBA can use the lock timeout feature to define a time limit
for releasing a lock. This means that if a transaction doesn't release the lock within the specified
time, the DBMS will step in and override the lock (see the sketch after this list).
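Lock timeout settings are product specific; in many DBMSs the timeout instead bounds how long a statement waits for a lock before it is cancelled. Two illustrative forms, assuming SQL Server and PostgreSQL respectively:

-- SQL Server: wait at most 5000 milliseconds for a lock (-1 means wait forever)
SET LOCK_TIMEOUT 5000;

-- PostgreSQL: abandon a lock request that has waited longer than 5 seconds
SET lock_timeout = '5s';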

Question
Match the locking parameter with its description.
Options:
A. The specific lock required for a DBMS, such as row, page, or table level lock, can be
defined by using this parameter
B. The locks for transactions are specified to prevent poor performance
C. The system overhead due to handling a large number of small locks can be reduced
D. You can avoid transactions being placed in the wait state for extended time periods
Targets:
1. Lock size
2. Number of locks
3. Lock escalation
4. Lock timeout

Answer

The lock size parameter can be used to define the locks required for a DBMS, such as row, page,
or table level locks. If a different lock size is required, it may also be defined by the database
administrator.
If the maximum number of locks for a complex transaction is defined, the DBMS may escalate the
locks and establish a large lock instead of many small locks.
DBMSs can automatically escalate many smaller locks and provide a single, large lock for a
transaction. By doing this, system overhead due to a large number of small locks can be reduced.
The lock timeout parameter helps a DBMS to release locks if the respective transactions do not
release them within a specified time. This reduces the wait time for transactions.
Correct answer(s):

Target 1 = Option A
Target 2 = Option B
Target 3 = Option C
Target 4 = Option D

Summary
In real-world operation, DBMSs handle several concurrent transactions that may need to access or update the same data.
SQL offers several standard locking features. You can define locks at the required level, such as the table or row level. To
update data, exclusive locks can be used. And to access data, shared locks can be simultaneously granted to
several transactions.
In addition to this, several commercial DBMSs offer advanced locking techniques, such as explicit locking. The
DBA can also set certain locking parameters, including lock escalation and lock timeout. These techniques help
you to choose and configure the right locking mechanism for a DBMS.

2013 SkillSoft Ireland Limited
