
B-Trees, Part 2 Hash-Based Indexes

R&G Chapter 10 Lecture 10

Administrivia
The new Homework 3 is now available, due one week from Sunday. Homework 4 will be available the week after.
Midterm exams are available here.

Review
Last time we discussed:
- File organization: unordered heap files, sorted files, clustered trees, unclustered trees, unclustered hash tables
- Indexes:
  - B-Trees: dynamic, good for changing data and for range queries
  - Hash tables: fastest for equality queries, useless for range queries

Review (2)
For any index, there are 3 alternatives for data entries k*:
1. Data record with key value k
2. <k, rid of data record with search key value k>
3. <k, list of rids of data records with search key k>
The choice is orthogonal to the indexing technique.

Today:
Indexes: composite keys, index-only plans
B-Trees: details of insertion and deletion
Hash indexes: how to implement them with changing data sets

Inserting a Data Entry into a B+ Tree


Find the correct leaf L and put the data entry onto L.
- If L has enough space, done!
- Else, must split L (into L and a new node L2):
  - Redistribute entries evenly, copy up the middle key.
  - Insert an index entry pointing to L2 into the parent of L.

This can happen recursively. To split an index node, redistribute entries evenly, but push up the middle key. (Contrast with leaf splits, which copy it up.) Splits "grow" the tree; a root split increases the height. Tree growth: it gets wider, or one level taller at the top. See the sketch below.
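A minimal sketch of the two split rules in Python (the Leaf/IndexNode structures and function names are illustrative, not from the text; a real node also carries sibling and parent pointers):

  from dataclasses import dataclass
  from typing import Any, List, Tuple

  @dataclass
  class Leaf:
      entries: List[Tuple[Any, Any]]   # sorted (key, rid) pairs

  @dataclass
  class IndexNode:
      keys: List[Any]
      children: List[Any]              # len(children) == len(keys) + 1

  def split_leaf(leaf):
      """Split an overfull leaf L into (L, L2); return (L2, key to COPY up)."""
      mid = len(leaf.entries) // 2
      right = Leaf(entries=leaf.entries[mid:])   # new node L2
      leaf.entries = leaf.entries[:mid]
      # Leaf split: the middle key is copied up AND kept in L2, because
      # every data entry must remain in some leaf.
      return right, right.entries[0][0]

  def split_index_node(node):
      """Split an overfull index node; return (new node, key to PUSH up)."""
      mid = len(node.keys) // 2
      push_up = node.keys[mid]
      # Index split: the middle key is pushed up to the parent and kept in
      # neither half -- it only directs the search.
      right = IndexNode(keys=node.keys[mid+1:], children=node.children[mid+1:])
      node.keys, node.children = node.keys[:mid], node.children[:mid+1]
      return right, push_up

  leaf = Leaf(entries=[(2, 'r1'), (3, 'r2'), (5, 'r3'), (7, 'r4'), (8, 'r5')])
  l2, copy_up = split_leaf(leaf)
  print(copy_up, leaf.entries, l2.entries)   # 5, first 2 entries, last 3 entries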

Indexes with Composite Search Keys


Composite search keys: search on a combination of fields.
- Equality query: every field value is equal to a constant. E.g., w.r.t. a <sal,age> index: age=20 and sal=75.
- Range query: some field value is not a constant. E.g., w.r.t. a <sal,age> index: age=20; or age=20 and sal>10.
Data entries in the index are sorted by search key to support range queries: lexicographic order, or spatial order.

Example of composite key indexes using lexicographic order, over data records sorted by name:

  name  age  sal
  bob    12   10
  cal    11   80
  joe    12   20
  sue    13   75

  <age,sal> entries: 11,80  12,10  12,20  13,75
  <age> entries:     11  12  12  13
  <sal,age> entries: 10,12  20,12  75,13  80,11
  <sal> entries:     10  20  75  80

Composite Search Keys


To retrieve Emp records with age=30 AND sal=4000, an index on <age,sal> is better than an index on age alone or on sal alone. The choice of index key is orthogonal to clustering etc.
- If the condition is 20<age<30 AND 3000<sal<5000: a clustered tree index on <age,sal> or <sal,age> is best.
- If the condition is age=30 AND 3000<sal<5000: a clustered <age,sal> index is much better than a <sal,age> index! With <age,sal>, the matching entries are contiguous; with <sal,age>, they are scattered across the sal range (see the sketch below).
Composite indexes are larger and updated more often.
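To see why, here is a small sketch (pure Python with bisect; modeling the index's data entries as a sorted list of key tuples is an illustrative simplification):

  import bisect

  # Data entries of a composite index, modeled as a sorted list of key tuples.
  age_sal = sorted([(11, 80), (12, 10), (12, 20), (13, 75)])

  # Condition: age = 12 AND 10 < sal < 80.
  # With an <age,sal> key, the matches form one contiguous run of entries:
  lo = bisect.bisect_right(age_sal, (12, 10))   # first entry after (12, 10)
  hi = bisect.bisect_left(age_sal, (12, 80))    # first entry at/after (12, 80)
  print(age_sal[lo:hi])                         # [(12, 20)]

  # With a <sal,age> key, the age=12 entries are scattered across the sal
  # range, so the scan must touch every entry with 10 < sal < 80.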

Index-Only Plans
A number of queries can be answered without retrieving any tuples from one or more of the relations involved, if a suitable index is available:

  SELECT D.mgr                       -- index on <E.dno>
  FROM Dept D, Emp E
  WHERE D.dno=E.dno

  SELECT D.mgr, E.eid                -- tree index on <E.dno,E.eid>
  FROM Dept D, Emp E
  WHERE D.dno=E.dno

  SELECT E.dno, COUNT(*)             -- index on <E.dno>
  FROM Emp E
  GROUP BY E.dno

  SELECT E.dno, MIN(E.sal)           -- tree index on <E.dno,E.sal>
  FROM Emp E
  GROUP BY E.dno

  SELECT AVG(E.sal)                  -- tree index on <E.age,E.sal>
  FROM Emp E                         --   or <E.sal,E.age>
  WHERE E.age=25
    AND E.sal BETWEEN 3000 AND 5000

Index-Only Plans (Contd.)


Index-only plans are possible if the key is <dno,age>, or if we have a tree index with key <age,dno>. Which is better? What if we consider the second query?

  SELECT E.dno, COUNT(*)
  FROM Emp E
  WHERE E.age=30
  GROUP BY E.dno

  SELECT E.dno, COUNT(*)
  FROM Emp E
  WHERE E.age>30
  GROUP BY E.dno
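As a sketch of what answering the first query index-only looks like (assuming the data entries are <age,dno> key pairs kept in sorted order; the data values here are illustrative):

  from collections import Counter

  # Data entries of a tree index on <age, dno>, kept in sorted order.
  entries = sorted([(30, 1), (30, 2), (30, 1), (31, 2), (25, 3), (30, 2)])

  # SELECT E.dno, COUNT(*) FROM Emp E WHERE E.age=30 GROUP BY E.dno
  # With key <age,dno>, all age=30 entries are contiguous: scan just that run.
  counts = Counter(dno for age, dno in entries if age == 30)
  print(dict(counts))   # {1: 2, 2: 2} -- no Emp tuples were fetched

  # The age>30 query also works with <age,dno>, scanning the tail of the
  # index. With key <dno,age> we would scan the whole leaf level instead.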

B-Trees: Insertion and Deletion


Insertion:
- Find the leaf where the new record belongs.
- If the leaf is full, redistribute.*
- If the siblings are also too full, split and copy the middle key up.
- If the index node is too full, redistribute.*
- If the index siblings are also full, split and push the middle key up.
Deletion:
- Find the leaf where the record exists and remove it.
- If the leaf is now less than 50% full, redistribute.*
- If the siblings are also too empty, merge and remove the key above.
- If the index node above is now too empty, redistribute.*
- If the index siblings are also too empty, merge and move the key above down.

B-Trees: For Homework 2


Same algorithm as on the previous slide, except that for Homework 2 you skip the redistribution steps (marked * above): on insertion, a full node is simply split (copying the middle key up at a leaf, pushing it up at an index node); on deletion, the entry is simply removed, with no redistribution or merging.

This means that after deletion, nodes will often be less than 50% full.

B-Trees: For Homework 2 (cont)


Splits: when splitting a node, choose the middle key; if there is an even number of keys, choose the middle.

Your code must handle duplicate keys. We promise that there will never be more than one page of duplicate values. Thus, when splitting, if the middle key is identical to the key to its left, you must find the closest splittable key to the middle, as sketched below.
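A minimal sketch of that split-point rule (pure Python; find_split_index is an illustrative name, not part of the assignment's API):

  def find_split_index(keys):
      """Pick a split index near the middle such that keys[i-1] != keys[i],
      so equal keys are never separated across the two nodes."""
      mid = len(keys) // 2
      # Fan out from the middle, preferring the closest splittable position.
      for offset in range(len(keys)):
          for i in (mid - offset, mid + offset):
              if 0 < i < len(keys) and keys[i - 1] != keys[i]:
                  return i
      return None  # all keys equal: cannot split (use an overflow page)

  print(find_split_index([1, 2, 2, 2, 3]))  # 1: split between 1 and the 2s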

B-Tree Demo

Hashing
Static and dynamic hashing techniques exist; the trade-offs depend on how much the data changes over time.
- Static hashing: good if the data never changes.
- Extendible hashing: uses a directory to handle changing data.
- Linear hashing: avoids a directory, usually faster.

Static Hashing
The number of primary pages is fixed; they are allocated sequentially and never de-allocated, with overflow pages if needed. h(key) mod N = the bucket to which the data entry with key k belongs (N = # of buckets).

[Figure: key -> h -> h(key) mod N selects one of the primary bucket pages 0 .. N-1; each primary page may chain to overflow pages.]

Static Hashing (Contd.)


Buckets contain data entries. The hash function works on the search key field of record r and must distribute values over the range 0 ... N-1. h(key) = (a * key + b) usually works well; a and b are constants, and a lot is known about how to tune h. Long overflow chains can develop and degrade performance. Extendible and Linear Hashing are dynamic techniques that fix this problem.
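A minimal sketch of a static hashed file (Python; the constants and the bucket-as-list simplification are illustrative):

  N = 4                     # number of primary buckets, fixed for the file
  a, b = 31, 7              # tunable constants for h
  h = lambda key: a * key + b

  buckets = [[] for _ in range(N)]   # each bucket: a list of data entries
                                     # (overflow pages implicit in the list)

  def insert(key):
      buckets[h(key) % N].append(key)    # h(key) mod N picks the bucket

  def search(key):
      return key in buckets[h(key) % N]  # equality search reads one chain

  for k in (4, 12, 32, 16, 1, 5):
      insert(k)
  print(search(12), search(9))   # True False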

Extendible Hashing
Situation: a bucket (primary page) becomes full. Why not reorganize the file by doubling the number of buckets? Because reading and writing all the pages is expensive!

Idea: use a directory of pointers to buckets. Double the number of buckets by doubling the directory, splitting just the bucket that overflowed!
- The directory is much smaller than the file, so doubling it is much cheaper.
- No overflow pages!
- The trick lies in how the hash function is adjusted.

Extendible Hashing Details

- Need a directory with a pointer to each bucket.
- Need a hash function whose range can incrementally double: we can just use increasingly more LSBs of h(key).
- Must keep track of the global depth: how many times the directory has doubled so far.
- Must keep track of each bucket's local depth: how many times that bucket has been split.
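In code, the directory lookup is just a mask of the low bits (a sketch; the names are illustrative):

  def directory_slot(h_key, global_depth):
      """Index into the directory: the last global_depth bits of h(key)."""
      return h_key & ((1 << global_depth) - 1)

  # With global depth 2, h(r) = 5 = 0b101 lands in directory slot 01:
  print(bin(directory_slot(5, 2)))   # 0b1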

Example
The directory is an array of size 4. To find the bucket for r, take the last `global depth' bits of h(r); we denote r by h(r) below. E.g., if h(r) = 5 = binary 101, it is in the bucket pointed to by 01.

  GLOBAL DEPTH = 2
  DIRECTORY
    00 -> Bucket A: 4* 12* 32* 16*   (local depth 2)
    01 -> Bucket B: 1* 5* 21* 13*    (local depth 2)
    10 -> Bucket C: 10*              (local depth 2)
    11 -> Bucket D: 15* 7* 19*       (local depth 2)
  (Directory entries point to the data pages; buckets hold data entries.)

Insert: if the bucket is full, split it (allocate a new page, redistribute). If necessary, double the directory. (As we will see, splitting a bucket does not always require doubling; we can tell by comparing the global depth with the local depth of the split bucket.) A runnable sketch of the whole insert path follows.
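A compact sketch of extendible-hashing insert (Python; CAPACITY, the class names, and the lack of duplicate-overflow handling are illustrative simplifications -- with a full page of equal hash values this would recurse forever, which is exactly the duplicates problem noted later):

  class Bucket:
      def __init__(self, local_depth):
          self.local_depth = local_depth   # times this bucket has been split
          self.keys = []

  class ExtendibleHash:
      CAPACITY = 4                         # data entries per bucket page

      def __init__(self):
          self.global_depth = 2            # times the directory has doubled
          self.dir = [Bucket(2) for _ in range(4)]

      def _slot(self, h):
          return h & ((1 << self.global_depth) - 1)  # last global_depth bits

      def insert(self, h):
          b = self.dir[self._slot(h)]
          if len(b.keys) < self.CAPACITY:
              b.keys.append(h)
              return
          if b.local_depth == self.global_depth:     # must double directory
              self.dir = self.dir + self.dir[:]      # double by copying!
              self.global_depth += 1
          b.local_depth += 1
          image = Bucket(b.local_depth)              # `split image' of b
          # Re-point every directory slot of b whose new bit is 1 at the image.
          for i, bkt in enumerate(self.dir):
              if bkt is b and (i >> (b.local_depth - 1)) & 1:
                  self.dir[i] = image
          old, b.keys = b.keys, []
          for k in old:                              # redistribute, then retry
              self.dir[self._slot(k)].keys.append(k)
          self.insert(h)

  eh = ExtendibleHash()
  for k in (4, 12, 32, 16, 1, 5, 21, 13, 10, 15, 7, 19):
      eh.insert(k)
  eh.insert(20)            # slot 00 full, local depth == global depth: doubles
  print(eh.global_depth)   # 3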

Insert h(r)=20 (Causes Doubling)


Before: GLOBAL DEPTH = 2; Bucket A (00) = 4* 12* 32* 16* is full, and its local depth equals the global depth. After doubling the directory and splitting Bucket A:

  GLOBAL DEPTH = 3
  DIRECTORY
    000 -> Bucket A:  32* 16*        (local depth 3)
    001 -> Bucket B:  1* 5* 21* 13*  (local depth 2)
    010 -> Bucket C:  10*            (local depth 2)
    011 -> Bucket D:  15* 7* 19*     (local depth 2)
    100 -> Bucket A2: 4* 12* 20*     (local depth 3; `split image' of Bucket A)
    101 -> Bucket B
    110 -> Bucket C
    111 -> Bucket D

Now Insert h(r)=9 (Causes Split Only)


Bucket B (01) was full, but its local depth (2) is less than the global depth (3), so the directory does not double. B splits into B and B2:

  GLOBAL DEPTH = 3
  DIRECTORY
    000 -> Bucket A:  32* 16*        (local depth 3)
    001 -> Bucket B:  1* 9*          (local depth 3)
    010 -> Bucket C:  10*            (local depth 2)
    011 -> Bucket D:  15* 7* 19*     (local depth 2)
    100 -> Bucket A2: 4* 12* 20*     (local depth 3)
    101 -> Bucket B2: 5* 21* 13*     (local depth 3; `split image' of Bucket B)
    110 -> Bucket C
    111 -> Bucket D

Points to Note
20 = binary 10100. The last 2 bits (00) tell us r belongs in A or A2; the last 3 bits are needed to tell which one.
- Global depth of the directory: max # of bits needed to tell which bucket an entry belongs to.
- Local depth of a bucket: # of bits used to determine whether an entry belongs to this bucket.
When does a bucket split cause directory doubling? If, before the insert, the local depth of the bucket equals the global depth. The insert would make the local depth exceed the global depth, so the directory is doubled by copying it over and `fixing' the pointer to the split image page. (Use of least significant bits enables efficient doubling via copying of the directory!)

Directory Doubling
Why use least significant bits in the directory? Because it allows doubling via copying!

[Figure: a global-depth-2 directory (slots 00, 01, 10, 11) pointing at four buckets doubles to a global-depth-3 directory (slots 000 .. 111) by appending a copy of itself; each new slot initially points to the same bucket as the slot it shadows, and only the pointer for the bucket being split needs fixing.]

Deletion
Delete: if removing a data entry leaves a bucket empty, the bucket can be merged with its `split image'. If each directory element then points to the same bucket as its split image, the directory can be halved.

[Example: Delete 10*. Before, the global depth is 2 and the directory (00, 01, 10, 11) points to four buckets: 4* 12* 32* 16*, 1* 5* 21* 13*, 10*, and 15*, each with local depth 2. Deleting 10* empties its bucket, which merges with its split image 4* 12* 32* 16*; the merged bucket's local depth drops to 1, and the directory is unchanged.]

Deletion (cont)
Continuing: if each directory element points to the same bucket as its split image, we can halve the directory.

[Example: Delete 15*. This empties the last local-depth-2 bucket, which merges with its split image 1* 5* 21* 13* (local depth drops to 1). Now every directory element points to the same bucket as its split image, so the directory is halved to two elements: 0 -> 4* 12* 32* 16* and 1 -> 1* 5* 21* 13*, with global depth 1.]
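A sketch of the merge-then-halve checks (written as an extra method for the illustrative ExtendibleHash class from the insert sketch above; not a full implementation):

  def delete(self, h):
      slot = self._slot(h)
      b = self.dir[slot]
      b.keys.remove(h)
      if b.keys or b.local_depth == 0:
          return
      # Empty bucket: its `split image' differs only in bit (local_depth - 1).
      image = self.dir[slot ^ (1 << (b.local_depth - 1))]
      if image is b or image.local_depth != b.local_depth:
          return                      # no clean buddy to merge with
      image.local_depth -= 1          # merge: image absorbs b's (empty) range
      for i, bkt in enumerate(self.dir):
          if bkt is b:
              self.dir[i] = image
      # Halve the directory while every element points to the same bucket
      # as its split image.
      while self.global_depth > 0:
          half = len(self.dir) // 2
          if all(self.dir[i] is self.dir[i + half] for i in range(half)):
              self.dir = self.dir[:half]
              self.global_depth -= 1
          else:
              break

  ExtendibleHash.delete = delete      # attach the method (sketch only)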

Comments on Extendible Hashing


If the directory fits in memory, an equality search is answered with one disk access; otherwise, two. A 100MB file with 100-byte records and 4K pages contains 1,000,000 records (as data entries) and 25,000 directory elements; chances are high that the directory will fit in memory. The directory grows in spurts, and, if the distribution of hash values is skewed, it can grow large.

Biggest problem: multiple entries with the same hash value cause trouble! If a bucket is already full of entries with the same hash value, splitting will keep doubling the directory forever. So overflow buckets must be used if there are duplicates.

Linear Hashing
This is another dynamic hashing scheme, an alternative to Extendible Hashing.

LH handles the problem of long overflow chains without using a directory, and handles duplicates.
Idea: use a family of hash functions h0, h1, h2, ...
- hi(key) = h(key) mod (2^i * N), where N is the initial # of buckets and h is some hash function (its range is not just 0 to N-1).
- If N = 2^d0 for some d0, then hi consists of applying h and looking at the last di bits, where di = d0 + i.
- hi+1 doubles the range of hi (similar to directory doubling).
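The hash family is tiny in code (a sketch; the choices of N and h are illustrative):

  N = 4                          # initial number of buckets (N = 2^2, d0 = 2)
  h = lambda key: key            # some hash function; identity for illustration

  def h_i(i, key):
      """i-th function in the family: h(key) mod (2^i * N),
      i.e. the last (d0 + i) bits of h(key)."""
      return h(key) % (2**i * N)

  print(h_i(0, 21), h_i(1, 21))  # 1 5 -- h1 doubles the range of h0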

Linear Hashing Example


Let's start with N = 4 buckets, in round 0 with Next = 0; a round begins with N_round = N * 2^round buckets. Each time any bucket fills, split the Next bucket (round-robin, not necessarily the bucket that filled!).

Start (round 0, Next = 0, N_round = 4):
  bucket 0: 4* 12* 32* 16*    <- Next
  bucket 1: 1* 5* 21* 13*
  bucket 2: 10*
  bucket 3: 15*

Add 9*: h0(9) = 1; bucket 1 fills (9* goes to an overflow page), so bucket Next = 0 is split using h1, and Next advances to 1:
  bucket 0: 32* 16*
  bucket 1: 1* 5* 21* 13* | 9*   (overflow)
  bucket 2: 10*
  bucket 3: 15*               <- last of the N_round original buckets
  bucket 4: 4* 12*            (`split image' of bucket 0)

Add 20*: h0(20) = 0, but 0 < Next, so bucket 0 has already been split; use h1(20) = 4 instead:
  bucket 4: 4* 12* 20*

In general: if 0 <= h_round(key) < Next, use h_round+1(key) instead.

Linear Hashing Example (cont)


Overflow chains do exist, but they eventually get split. Instead of doubling, new buckets are added one at a time.

Add 6*: h0(6) = 2, so bucket 2 becomes 10* 6*; nothing fills, so nothing is split.

Add 17*: h0(17) = 1; bucket 1 fills further, so bucket Next = 1 is split using h1, and Next advances to 2:
  bucket 0: 32* 16*
  bucket 1: 1* 9* 17*
  bucket 2: 10* 6*
  bucket 3: 15*               <- last of the N_round original buckets
  bucket 4: 4* 12* 20*
  bucket 5: 5* 21* 13*        (`split image' of bucket 1)

Linear Hashing (Contd.)


The directory is avoided in LH by using overflow pages and choosing the bucket to split round-robin.
- Splitting proceeds in `rounds'. A round ends when all N_R initial buckets (for round R) have been split.
- Buckets 0 to Next-1 have been split; Next to N_R have yet to be split.
- The current round number is also called the Level.
Search: to find the bucket for data entry r, compute h_round(r):
- If h_round(r) is in the range `Next to N_R', r belongs there.
- Else, r could be in bucket h_round(r) or in bucket h_round(r) + N_R; must apply h_round+1(r) to find out.

Overview of LH File
In the middle of a round:

[Figure: the file as a vertical array of buckets. Buckets 0 to Next-1 were split earlier in this round; `Next' marks the bucket to be split next; buckets from Next through N_R - 1 existed at the beginning of the round (this is the range of h_Level); after them come the `split image' buckets created through splitting of other buckets in this round. If h_Level(search key value) lands in the already-split range, h_Level+1(search key value) must be used to decide whether the entry is in the `split image' bucket.]

Linear Hashing (Contd.)


Insert: find the bucket by applying h_round or h_round+1. If the bucket to insert into is full:
- Add an overflow page and insert the data entry.
- (Maybe) split the Next bucket and increment Next.
Any criterion can be chosen to `trigger' a split. Since buckets are split round-robin, long overflow chains don't develop!

The doubling of the directory in Extendible Hashing is similar; here, the switching of hash functions is implicit in how the # of bits examined is increased. A sketch of search and insert follows.
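A compact sketch of LH search and insert (Python; the capacity and the split trigger "split whenever any bucket overflows" are illustrative choices):

  class LinearHash:
      def __init__(self, n=4, capacity=4):
          self.N, self.capacity = n, capacity
          self.round, self.next = 0, 0
          self.buckets = [[] for _ in range(n)]   # overflow pages implicit

      def _bucket(self, key):
          n_round = self.N * 2 ** self.round
          i = key % n_round                 # h_round(key)
          if i < self.next:                 # that bucket was already split:
              i = key % (2 * n_round)       # use h_round+1(key) instead
          return i

      def search(self, key):
          return key in self.buckets[self._bucket(key)]

      def insert(self, key):
          i = self._bucket(key)
          self.buckets[i].append(key)       # may land on an overflow page
          if len(self.buckets[i]) > self.capacity:  # trigger: overflow
              self._split()                 # split Next, not bucket i!

      def _split(self):
          n_round = self.N * 2 ** self.round
          old = self.buckets[self.next]
          self.buckets.append([])           # split image: bucket next + n_round
          self.buckets[self.next] = [k for k in old
                                     if k % (2 * n_round) == self.next]
          self.buckets[-1] = [k for k in old
                              if k % (2 * n_round) != self.next]
          self.next += 1
          if self.next == n_round:          # all N_R buckets split: new round
              self.round, self.next = self.round + 1, 0

  lh = LinearHash()
  for k in (32, 44, 36, 9, 25, 5, 14, 18, 10, 30, 31, 35, 7, 11):
      lh.insert(k)
  lh.insert(43)                 # overflows bucket 11; bucket 0 splits instead
  print(lh.buckets[0], lh.buckets[4], lh.next)   # [32] [44, 36] 1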

Another Example of Linear Hashing


On a split, h_Level+1 is used to redistribute entries.

Round = 0, N = 4, Next = 0. The h1/h0 columns are for illustration only; the primary and overflow pages are the actual contents of the linear hashed file. A data entry r with h(r)=5 sits on the primary bucket page for 01.

  h1   h0    primary pages
  000  00    32* 44* 36*      <- Next = 0
  001  01    9* 25* 5*
  010  10    14* 18* 10* 30*
  011  11    31* 35* 7* 11*

After inserting 43* (h0(43) = 11; that bucket is full, so 43* goes to an overflow page and bucket 0 is split; Next = 1):

  h1   h0    primary pages    overflow pages
  000  00    32*
  001  01    9* 25* 5*
  010  10    14* 18* 10* 30*
  011  11    31* 35* 7* 11*   43*
  100  00    44* 36*

Example: End of a Round


Insert h(r) = 50. Before (Round = 0, Next = 3):

  h1   h0    primary pages    overflow pages
  000  00    32*
  001  01    9* 25*
  010  10    66* 18* 10* 34*
  011  11    31* 35* 7* 11*   43*    <- Next = 3
  100  00    44* 36*
  101  01    5* 37* 29*
  110  10    14* 30* 22*

h0(50) = 10, which is below Next, so h1(50) = 010 applies; that bucket is full, so 50* goes to an overflow page. The triggered split of bucket Next = 3 ends the round (Round = 1, Next = 0):

  h1    primary pages        overflow pages
  000   32*
  001   9* 25*
  010   66* 18* 10* 34*      50*
  011   43* 35* 11*
  100   44* 36*
  101   5* 37* 29*
  110   14* 30* 22*
  111   31* 7*

LH Described as a Variant of EH
The two schemes are actually quite similar:
- Begin with an EH index where the directory has N elements.
- Use overflow pages, and split buckets round-robin.
- The first split is at bucket 0. (Imagine the directory being doubled at this point.) But elements <1,N+1>, <2,N+2>, ... are the same, so we need only create directory element N, which now differs from element 0.
- When bucket 1 splits, create directory element N+1, etc.

So, the directory can double gradually. Also, primary bucket pages are created in order. If they are allocated in sequence too (so that finding the i-th is easy), we actually don't need a directory. Voila: LH!

Summary
Hash-based indexes: best for equality searches; cannot support range searches. Static hashing can lead to long overflow chains.

Extendible hashing avoids overflow pages by splitting a full bucket when a new data entry is to be added to it. (Duplicates may still require overflow pages.) A directory keeps track of the buckets and doubles periodically; it can get large with skewed data, costing an additional I/O if it does not fit in main memory.

Summary (Contd.)
Linear hashing avoids a directory by splitting buckets round-robin and using overflow pages. Overflow chains are not likely to get long, and duplicates are handled easily. Space utilization can be lower than in extendible hashing, since splits are not concentrated on `dense' data areas; the criterion for triggering splits can be tuned to trade slightly longer chains for better space utilization. For hash-based indexes, a skewed data distribution is one in which the hash values of data entries are not uniformly distributed!
