
William Stallings
Computer Organization
and Architecture
10th Edition
© 2016 Pearson Education, Inc., Hoboken,
NJ. All rights reserved.
Chapter 4
Cache Memory



(Figure: a computer comprises the CPU, main memory, and I/O, connected by the system bus; the CPU comprises registers, the ALU, and the control unit, connected by an internal bus; the control unit comprises sequencing logic, control unit registers and decoders, and control memory.)

Figure 1.1 A Top-Down View of a Computer



Capacity and Performance: the two most important characteristics of memory
Three performance parameters are used:

• Access time (latency): For random-access memory, the time it takes to perform a read or write operation. For non-random-access memory, the time it takes to position the read-write mechanism at the desired location.

• Memory cycle time: Access time plus any additional time required before a second access can commence. Additional time may be required for transients to die out on signal lines or to regenerate data if they are read destructively. Cycle time is concerned with the system bus, not the processor.

• Transfer rate: The rate at which data can be transferred into or out of a memory unit. For random-access memory it is equal to 1/(cycle time).



Memory Hierarchy

 Design constraints on a computer's memory can be summed up by three questions:
 How much, how fast, how expensive

 There is a trade-off among capacity, access time, and cost


 Faster access time, greater cost per bit
 Greater capacity, smaller cost per bit
 Greater capacity, slower access time

 The way out of the memory dilemma is not to rely on a single memory component or technology, but to employ a memory hierarchy



The Need for a Memory Hierarchy

The widening speed gap between CPU and main memory:
• Processor operations take on the order of 1 ns (e.g., an Intel Core i9-7980XE clocked at 2.6 GHz has a 0.39 ns period)
• Memory access requires tens or even hundreds of ns (e.g., a DDR4 memory module has a row cycle time of 45 ns)

Memory bandwidth limits the instruction execution rate:
• Each instruction executed involves at least one memory access
• Hence, a few to 100s of MIPS is the best that can be achieved
• A fast buffer memory can help bridge the CPU-memory gap
• The fastest memories are expensive and thus not very large
• A second (third?) intermediate cache level is thus often used
(Figure: the memory hierarchy as a pyramid; inboard memory: registers, cache, main memory; outboard storage: magnetic disk, CD-ROM, CD-RW, DVD-RW, DVD-RAM, Blu-Ray; off-line storage: magnetic tape.)

Figure 4.1 The Memory Hierarchy



(Figure: (a) single cache: the CPU exchanges words with a fast cache, which exchanges blocks with slow main memory; (b) three-level cache organization: the CPU is served by an L1 cache (fastest), backed by L2 (fast), L3 (less fast), and main memory (slow).)

Figure 4.3 Cache and Main Memory



What Makes a Cache Work?

• Temporal locality
• Spatial locality

(Figure: a 9-instruction program loop in main memory is mapped, many-to-one, onto cache lines/blocks, the unit of transfer between main and cache memories.)

Assuming no conflict in address mapping, the cache will hold a small program loop in its entirety, leading to fast execution.
Desktop, Drawer, and File Cabinet Analogy

(Figure: access the desktop (register file) in 2 s, the drawer (cache memory) in 5 s, and the file cabinet (main memory) in 30 s.)

Once the "working set" is in the drawer, very few trips to the file cabinet are needed. Items on a desktop (register) or in a drawer (cache) are more readily accessible than those in a file cabinet (main memory).
Temporal and Spatial Localities

(Figure: memory addresses accessed over time, showing clustered accesses and a working set; from Peter Denning's CACM paper, July 2005, Vol. 48, No. 7, pp. 19-24.)

• Temporal: accesses to the same address are typically clustered in time
• Spatial: when a location is accessed, nearby locations tend to be accessed also
Cache read operation:
1. START: receive address RA from the CPU.
2. If the block containing RA is in the cache, fetch the RA word, deliver it to the CPU, and DONE.
3. Otherwise, access main memory for the block containing RA, allocate a cache line for the block, load the block into the cache line, deliver the RA word to the CPU, and DONE.

Figure 4.5 Cache Read Operation



Suppose that the processor has access to two levels of memory. Level 1 contains 1000 words and has an access time of 0.01 μs; level 2 contains 100,000 words and has an access time of 0.1 μs. Assume that if a word to be accessed is in level 1, then the processor accesses it directly. If it is in level 2, then the word is first transferred to level 1 and then accessed by the processor. For simplicity, we ignore the time required for the processor to determine whether the word is in level 1 or level 2. Figure 4.2 shows the general shape of the curve that covers this situation. The figure shows the average access time to a two-level memory as a function of the hit ratio H, where H is defined as the fraction of all memory accesses that are found in the faster memory (e.g., the cache), T1 is the access time to level 1, and T2 is the access time to level 2. As can be seen, for high percentages of level 1 access, the average total access time is much closer to that of level 1 than that of level 2.

In our example, suppose 95% of the memory accesses are found in level 1. Then the average time to access a word can be expressed as

(0.95)(0.01 μs) + (0.05)(0.01 μs + 0.1 μs) = 0.0095 + 0.0055 = 0.015 μs

The average access time is much closer to 0.01 μs than to 0.1 μs, as desired.
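The same calculation can be scripted. A minimal sketch in Python (the helper name and structure are mine, not from the text):

    def avg_access_time(h, t1, t2):
        # h: hit ratio (fraction of accesses found in level 1)
        # t1, t2: access times of levels 1 and 2, in the same unit
        # A miss pays the level-2 transfer plus the level-1 access.
        return h * t1 + (1 - h) * (t1 + t2)

    print(avg_access_time(0.95, 0.01, 0.1))  # 0.015 (μs), matching the text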
(Figure: average access time falls from T1 + T2 at H = 0 toward T1 as the hit ratio H, the fraction of accesses involving only level 1, approaches 1.)

Figure 4.2 Performance of a Simple Two-Level Memory



The Need for a Cache

One level of cache with hit rate h:
Ceff = h·Cfast + (1 – h)(Cslow + Cfast) = Cfast + (1 – h)·Cslow

(Figure: (a) level 2 cache between level 1 and main memory; (b) level 2 cache connected to a "backside" bus.)

Cache memories act as intermediaries between the superfast processor and the much slower main memory.

Two levels of cache with hit rates h1 and h2, respectively:
Ceff = Cfast + (1 – h1)[Cmedium + (1 – h2)·Cslow]
Performance of a Two-Level Cache System

The average CPI for the L1/L2 cache system with no misses is 1.0. L1 has a local hit rate of 95%; L2 has a local hit rate of 80%. The miss penalty for L1 is 8 cycles and for L2 is 60 cycles. One cycle takes 0.1 ns. What is the average access time, Ceffective?

Level   Local hit rate   Miss penalty
L1      95%              8 cycles
L2      80%              60 cycles

Solution
Ceff = Cfast + (1 – h1)[Cmedium + (1 – h2)·Cslow]
Cfast = 0.1 ns (CPI of 1.0 with no miss)
Cmedium = 0.8 ns (8 cycles to access L2)
Cslow = 6 ns (60 cycles to access main memory)
Ceff = 0.1 + 0.05 × [0.8 + 0.2 × 6] = 0.2 ns
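A quick numeric check of this formula in Python (a sketch; the helper name is mine):

    def c_eff(h1, h2, c_fast, c_medium, c_slow):
        # L1 misses pay the L2 access; L2 misses additionally pay main memory.
        return c_fast + (1 - h1) * (c_medium + (1 - h2) * c_slow)

    cycle = 0.1  # ns
    print(c_eff(0.95, 0.80, 1 * cycle, 8 * cycle, 60 * cycle))  # 0.2 ns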



Cache Sizes of Some Processors

Processor              Type                  Year  L1 Cache (a)       L2 Cache         L3 Cache
IBM 360/85             Mainframe             1968  16 to 32 kB        —                —
PDP-11/70              Minicomputer          1975  1 kB               —                —
VAX 11/780             Minicomputer          1978  16 kB              —                —
IBM 3033               Mainframe             1978  64 kB              —                —
IBM 3090               Mainframe             1985  128 to 256 kB      —                —
Intel 80486            PC                    1989  8 kB               —                —
Pentium                PC                    1993  8 kB/8 kB          256 to 512 KB    —
PowerPC 601            PC                    1993  32 kB              —                —
PowerPC 620            PC                    1996  32 kB/32 kB        —                —
PowerPC G4             PC/server             1999  32 kB/32 kB        256 KB to 1 MB   2 MB
IBM S/390 G6           Mainframe             1999  256 kB             8 MB             —
Pentium 4              PC/server             2000  8 kB/8 kB          256 KB           —
IBM SP                 High-end server/      2000  64 kB/32 kB        8 MB             —
                       supercomputer
CRAY MTA (b)           Supercomputer         2000  8 kB               2 MB             —
Itanium                PC/server             2001  16 kB/16 kB        96 KB            4 MB
Itanium 2              PC/server             2002  32 kB              256 KB           6 MB
IBM POWER5             High-end server       2003  64 kB              1.9 MB           36 MB
CRAY XD-1              Supercomputer         2004  64 kB/64 kB        1 MB             —
IBM POWER6             PC/server             2007  64 kB/64 kB        4 MB             32 MB
IBM z10                Mainframe             2008  64 kB/128 kB       3 MB             24-48 MB
Intel Core i7 EE 990   Workstation/server    2011  6 × 32 kB/32 kB    1.5 MB           12 MB
IBM zEnterprise 196    Mainframe/server      2011  24 × 64 kB/128 kB  24 × 1.5 MB      24 MB L3,
                                                                                       192 MB L4

(a) Two values separated by a slash refer to instruction and data caches.
(b) Both caches are instruction only; no data caches.
Table 4.2 Elements of Cache Design

Cache Addresses: Logical; Physical
Cache Size
Mapping Function: Direct; Associative; Set Associative
Replacement Algorithm: Least recently used (LRU); First in first out (FIFO); Least frequently used (LFU); Random
Write Policy: Write through; Write back
Line Size
Number of Caches: Single or two level; Unified or split
Cache Memory Design Parameters

Cache size (in bytes or words). A larger cache can hold more of the program's useful data but is more costly and likely to be slower.

Block or cache-line size (unit of data transfer between cache and main). With a larger cache line, more data is brought into the cache with each miss. This can improve the hit rate, but it may also bring in low-utility data.

Placement policy. Determines where an incoming cache line is stored. More flexible policies imply higher hardware cost and may or may not have performance benefits (due to more complex data location).

Replacement policy. Determines which of several existing cache blocks (into which a new cache line can be mapped) should be overwritten. Typical policies: choosing a random or the least recently used block.

Write policy. Determines whether updates to cache words are immediately forwarded to main memory (write-through) or modified blocks are copied back to main memory if and when they must be replaced (write-back or copy-back).
Mapping Function
 Because there are fewer cache lines than main memory
blocks, an algorithm is needed for mapping main memory
blocks into cache lines

 Three techniques can be used:

Direct
• The simplest technique
• Maps each block of main memory into only one possible cache line

Associative
• Permits each main memory block to be loaded into any line of the cache
• The cache control logic interprets a memory address simply as a Tag and a Word field
• To determine whether a block is in the cache, the cache control logic must simultaneously examine every line's Tag for a match

Set Associative
• A compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages



(Figure: the cache consists of C lines, each holding a tag and a block of K words; main memory consists of M = 2^n / K blocks of K words each, numbered 0 through M – 1.)

Figure 4.4 Cache/Main-Memory Structure



e.g., cache size = 128 kbytes, main memory size = 16 Mbytes
Line size = 4 words = 8 bytes, so the cache holds 16 klines:
cache size / line size = 128 kbytes / 8 bytes = 16 klines
log2(4 words) = 2 bits for the word offset (w)
log2(16 klines) = 14 bits for the cache line index (r)
The remaining upper bits: 24 – 14 – 2 = 8 bits for the tag (s – r)
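These field widths can be computed mechanically. A small Python sketch (the helper is mine, not from the text; it follows the slide's convention of a word-granularity offset):

    import math

    def direct_map_fields(addr_bits, num_lines, words_per_line):
        # Returns (tag, line index, word offset) bit widths.
        w = int(math.log2(words_per_line))   # word offset bits
        r = int(math.log2(num_lines))        # line index bits
        return addr_bits - r - w, r, w       # tag gets the remaining bits

    print(direct_map_fields(24, 16 * 1024, 4))  # (8, 14, 2), as in the example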
(Figure: the (s + w)-bit memory address is split into Tag (s – r bits), Line (r bits), and Word (w bits) fields; the Line field selects one cache line, whose stored tag is compared with the address's Tag field; a match is a hit in the cache, and on a miss the containing block is fetched from main memory.)

Figure 4.9 Direct-Mapping Cache Organization


(Figure: a 16-MByte main memory, with addresses split into an 8-bit tag, 14-bit line, and 2-bit word field, mapped into a 16-Kline cache. For example, line 0000 holds data 13579246 with tag 00; line 0001 holds 11235813 with tag 16; line 0CE7 holds FEDCBA98 with tag 16; line 3FFE holds 11223344 with tag FF; line 3FFF holds 12345678 with tag 16. Memory address values are in binary; other values are in hexadecimal.)

Main memory address = Tag (8 bits) | Line (14 bits) | Word (2 bits)

Figure 4.10 Direct Mapping Example



For the hexadecimal main memory address DDDDDD, the binary form is

1101 1101 1101 1101 1101 1101

Split into the Tag | Line | Word format above, this becomes

1101 1101 | 11 0111 0111 0111 | 01

that is, tag DD, line 3777, and word 1 in hex.
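A quick check of the split (my own verification, not from the slides):

    addr = 0xDDDDDD
    word = addr & 0x3             # low 2 bits    -> 0x1
    line = (addr >> 2) & 0x3FFF   # next 14 bits  -> 0x3777
    tag  = addr >> 16             # top 8 bits    -> 0xdd
    print(hex(tag), hex(line), hex(word))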


Direct-Mapped Cache

(Figure: a direct-mapped cache holding 32 words within eight 4-word lines; a 2-bit word offset selects a word within a line and a 3-bit line index selects the line; the stored tag and valid bit are read along with the specified word, and a comparator signals a cache miss if the stored tag does not equal the address tag. Each line is associated with a tag and a valid bit.)
Direct Mapping Summary

 Address length = (s + w) bits

 Number of addressable units = 2^(s+w) words or bytes

 Block size = line size = 2^w words or bytes

 Number of blocks in main memory = 2^(s+w) / 2^w = 2^s

 Number of lines in cache = m = 2^r

 Size of tag = (s – r) bits



(Figure: in a fully associative cache the (s + w)-bit address is split into a Tag (s bits) and a Word (w bits) field; the Tag is compared simultaneously against the stored tag of every cache line; a match is a hit in the cache, and on a miss block Bj is fetched from main memory.)

Figure 4.11 Fully Associative Cache Organization


e.g., cache size = 128 kbytes, main memory size = 16 Mbytes
Line size = 4 words = 8 bytes, so the cache holds 16 klines:
cache size / line size = 128 kbytes / 8 bytes = 16 klines
log2(4 words) = 2 bits for the word offset (w)
The remaining upper bits: 24 – 2 = 22 bits for the tag (s)



(Figure: a 16-MByte main memory, with addresses split into a 22-bit tag and a 2-bit word field, mapped into a 16-Kline cache. For example, line 0000 holds data 11223344 with tag 3FFFFE; line 0001 holds FEDCBA98 with tag 058CE7; line 3FFD holds 33333333 with tag 3FFFFD; line 3FFE holds 13579246 with tag 000000; line 3FFF holds 24682468 with tag 3FFFFF. Memory address values are in binary; other values are in hexadecimal.)

Main memory address = Tag (22 bits) | Word (2 bits)

Figure 4.12 Associative Mapping Example


For the hexadecimal main memory address 777777, the binary form is

0111 0111 0111 0111 0111 0111

Split into the Tag | Word format above, this becomes

01 1101 1101 1101 1101 1101 | 11

that is, tag 1DDDDD and word 3 in hex (the 22-bit tag is simply the address shifted right by two bits).
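The same check in Python (my own verification, not from the slides):

    addr = 0x777777
    word = addr & 0x3   # low 2 bits   -> 0x3
    tag  = addr >> 2    # top 22 bits  -> 0x1ddddd
    print(hex(tag), hex(word))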


Associative Mapping Summary

 Address length = (s + w) bits

 Number of addressable units = 2^(s+w) words or bytes

 Block size = line size = 2^w words or bytes

 Number of blocks in main memory = 2^(s+w) / 2^w = 2^s

 Number of lines in cache = undetermined

 Size of tag = s bits



Set Associative Mapping

 A compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages

 Cache consists of a number of sets

 Each set contains a number of lines

 A given block maps to any line in a given set

 e.g., with 2 lines per set:
 2-way set-associative mapping
 A given block can be in one of 2 lines in only one set



(Figure: in a k-way set-associative cache the (s + w)-bit address is split into Tag (s – d bits), Set (d bits), and Word (w bits) fields; the Set field selects one of v = 2^d sets, and the Tag is compared against the tags of the k lines in that set; a match is a hit in the cache, and on a miss block Bj is fetched from main memory.)

Figure 4.14 k-Way Set Associative Cache Organization


e.g., cache size = 128 kbytes, main memory size = 16 Mbytes
Line size = 4 words = 8 bytes, so the cache holds 16 klines:
cache size / line size = 128 kbytes / 8 bytes = 16 klines
log2(4 words) = 2 bits for the word offset (w)
Number of lines per set = 2
Number of sets = number of lines in cache / number of lines per set = 16 klines / 2 = 8 ksets
log2(8 ksets) = 13 bits for the set index (d)
The remaining upper bits: 24 – 13 – 2 = 9 bits for the tag (s – d)
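The same bookkeeping as a helper (a sketch; the function is mine, again using the slide's word-offset convention):

    import math

    def set_assoc_fields(addr_bits, num_lines, ways, words_per_line):
        # Returns (tag, set index, word offset) bit widths.
        w = int(math.log2(words_per_line))     # word offset bits
        d = int(math.log2(num_lines // ways))  # set index bits
        return addr_bits - d - w, d, w         # tag gets the rest

    print(set_assoc_fields(24, 16 * 1024, 2, 4))  # (9, 13, 2)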



Main memory address = Tag (9 bits) | Set (13 bits) | Word (2 bits)

(Figure: a 16-MByte main memory mapped into a 16-Kline, two-way set-associative cache (8K sets). For example, set 0000 holds data 13579246 with tag 000 and 77777777 with tag 02C; set 0001 holds 11235813 with tag 02C; set 0CE7 holds FEDCBA98 with tag 02C; set 1FFE holds 11223344 with tag 1FF; set 1FFF holds 12345678 with tag 02C and 24682468 with tag 1FF. Memory address values are in binary; other values are in hexadecimal.)

Figure 4.15 Two-Way Set Associative Mapping Example



Set-Associative Cache

(Figure: a two-way set-associative cache holding 32 words of data within 4-word lines and 2-line sets; a 2-bit word offset and a 2-bit set index select the tag and specified word from each of the two options, the valid bits and tags of both options are read out, and two comparators decide between option 0, option 1, or a cache miss.)
Accessing a Set-Associative Cache

Show the cache addressing scheme for a byte-addressable memory with 32-bit addresses. Cache line width 2^W = 16 B. Set size 2^S = 2 lines. Cache size 2^L = 4096 lines (64 KB).

Solution
The byte offset in a line is log2 16 = 4 bits. The cache set index is log2(4096/2) = 11 bits. This leaves 32 – 11 – 4 = 17 bits for the tag.

(Figure: components of the 32-bit address in an example two-way set-associative cache: a 17-bit line tag, an 11-bit set index, and a 4-bit byte offset in the line; the address in cache is used to read out two candidate items and their control info.)
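Checking the arithmetic (a byte-offset convention this time, since the memory is byte-addressable):

    import math

    line_bytes, ways, total_lines, addr_bits = 16, 2, 4096, 32
    offset = int(math.log2(line_bytes))               # 4
    index  = int(math.log2(total_lines // ways))      # 11
    print(offset, index, addr_bits - index - offset)  # 4 11 17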
Set Associative Mapping Summary

 Address length = (s + w) bits

 Number of addressable units = 2^(s+w) words or bytes

 Block size = line size = 2^w words or bytes

 Number of blocks in main memory = 2^(s+w) / 2^w = 2^s

 Number of lines in set = k

 Number of sets = v = 2^d

 Number of lines in cache = m = kv = k × 2^d

 Size of cache = k × 2^(d+w) words or bytes

 Size of tag = (s – d) bits



Improving Cache Performance

For a given cache size, the following design issues and tradeoffs exist:

Line width (2^W). Too small a value for W causes a lot of main memory accesses; too large a value increases the miss penalty and may tie up cache space with low-utility items that are replaced before being used.

Set size or associativity (2^S). Direct mapping (S = 0) is simple and fast; greater associativity leads to more complexity, and thus slower access, but tends to reduce conflict misses.

Line replacement policy. Usually the LRU (least recently used) algorithm or some approximation thereof; not an issue for direct-mapped caches. Somewhat surprisingly, random selection works quite well in practice.

Write policy. Modern caches are very fast, so write-through is seldom a good choice. We usually implement write-back or copy-back, using write buffers to soften the impact of main memory latency.
Effect of Associativity on Cache Performance

(Figure: miss rate, on a scale of 0 to 0.3, versus associativity from direct-mapped through 2-, 4-, 8-, 16-, 32-, and 64-way; performance improvement of caches with increased associativity.)

(Figure: hit ratio, on a scale of 0.0 to 1.0, versus cache size from 1k to 1M bytes, plotted for direct-mapped and 2-, 4-, 8-, and 16-way set-associative caches.)

Figure 4.16 Varying Associativity over Cache Size


Replacement Algorithms

 Once the cache has been filled, when a new block is brought into the cache, one of the existing blocks must be replaced

 For direct mapping there is only one possible line for any particular block, so no choice is possible

 For the associative and set-associative techniques a replacement algorithm is needed

 To achieve high speed, the algorithm must be implemented in hardware



The most common replacement algorithms are:
 Least recently used (LRU)
 Most effective
 Replace that block in the set that has been in the cache longest with
no reference to it
 Because of its simplicity of implementation, LRU is the most popular
replacement algorithm

 First-in-first-out (FIFO)
 Replace that block in the set that has been in the cache longest
 Easily implemented as a round-robin or circular buffer technique

 Least frequently used (LFU)


 Replace that block in the set that has experienced the fewest
references
 Could be implemented by associating a counter with each line
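Although real caches implement replacement in hardware, LRU is easy to express in software. A minimal sketch of one k-line set (my own illustration, not from the text):

    from collections import OrderedDict

    class LRUCacheSet:
        # One k-line cache set with least-recently-used replacement.
        def __init__(self, k):
            self.k = k
            self.lines = OrderedDict()  # tag -> block, oldest first

        def access(self, tag):
            if tag in self.lines:            # hit: mark most recently used
                self.lines.move_to_end(tag)
                return True
            if len(self.lines) >= self.k:    # miss in a full set: evict LRU
                self.lines.popitem(last=False)
            self.lines[tag] = "block"        # load the new block
            return False

    s = LRUCacheSet(2)
    print([s.access(t) for t in (1, 2, 1, 3, 2)])  # [False, False, True, False, False]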



Write Policy

When a block that is resident in the cache is to be replaced, there are two cases to consider:

• If the old block in the cache has not been altered, it may be overwritten with a new block without first writing out the old block.
• If at least one write operation has been performed on a word in that line of the cache, main memory must be updated by writing the line of cache out to the block of memory before bringing in the new block.

There are two problems to contend with:

• More than one device may have access to main memory.
• A more complex problem occurs when multiple processors are attached to the same bus and each processor has its own local cache: if a word is altered in one cache, it could conceivably invalidate a word in other caches.



Write Through and Write Back
 Write through
 Simplest technique
 All write operations are made to main memory as well as to the cache
 The main disadvantage of this technique is that it generates substantial
memory traffic and may create a bottleneck

 Write back
 Minimizes memory writes
 Updates are made only in the cache
 Portions of main memory are invalid and hence accesses by I/O
modules can be allowed only through the cache
 This makes for complex circuitry and a potential bottleneck
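The two policies differ only in when main memory is updated. A toy sketch of the bookkeeping (mine, greatly simplified to one word per line and writes only):

    class TinyCache:
        def __init__(self, policy):
            self.policy = policy          # "through" or "back"
            self.data, self.dirty = {}, set()
            self.mem_writes = 0           # main-memory updates performed

        def write(self, addr, value):
            self.data[addr] = value
            if self.policy == "through":
                self.mem_writes += 1      # every write also goes to memory
            else:
                self.dirty.add(addr)      # defer the update until eviction

        def evict(self, addr):
            if addr in self.dirty:        # write-back pays only here
                self.mem_writes += 1
                self.dirty.discard(addr)
            self.data.pop(addr, None)

    wb = TinyCache("back")
    for _ in range(100):
        wb.write(0x40, 1)                 # 100 writes to the same word
    wb.evict(0x40)
    print(wb.mem_writes)                  # 1; write-through would pay 100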



Line Size

When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved.

• As the block size increases, the hit ratio will at first increase because of the principle of locality: more useful data are brought into the cache.
• The hit ratio will begin to decrease as the block becomes bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced.

Two specific effects come into play:

• Larger blocks reduce the number of blocks that fit into a cache.
• As a block becomes larger, each additional word is farther from the requested word.
Multilevel Caches
 As logic density has increased it has become possible to have a cache
on the same chip as the processor

 The on-chip cache reduces the processor’s external bus activity and
speeds up execution time and increases overall system performance
 When the requested instruction or data is found in the on-chip cache, the bus
access is eliminated
 On-chip cache accesses will complete appreciably faster than would even
zero-wait state bus cycles
 During this period the bus is free to support other transfers

 Two-level cache:
 Internal cache designated as level 1 (L1)
 External cache designated as level 2 (L2)

 Potential savings due to the use of an L2 cache depends on the hit rates
in both the L1 and L2 caches

 The use of multilevel caches complicates all of the design issues related
to caches, including size, replacement algorithm, and write policy
(Figure: total hit ratio, on a scale of roughly 0.78 to 0.98, versus L2 cache size from 1k to 2M bytes, plotted for L1 = 8k and L1 = 16k.)

Figure 4.17 Total Hit Ratio (L1 and L2) for 8 Kbyte and 16 Kbyte L1



Unified Versus Split Caches

 It has become common to split the cache:
 One cache dedicated to instructions
 One cache dedicated to data
 Both exist at the same level, typically as two L1 caches

 Advantages of a unified cache:
 Higher hit rate
 Balances the load of instruction and data fetches automatically
 Only one cache needs to be designed and implemented

 Advantages of a split cache:
 Eliminates cache contention between the instruction fetch/decode unit and the execution unit
 Important in pipelining

 The trend is toward split caches at L1 and unified caches for higher levels



Table 4.4 Intel Cache Evolution (page 150 in the textbook)

Problem: External memory slower than the system bus.
Solution: Add external cache using faster memory technology.
First appears in: 386

Problem: Increased processor speed results in the external bus becoming a bottleneck for cache access.
Solution: Move the external cache on-chip, operating at the same speed as the processor.
First appears in: 486

Problem: The internal cache is rather small, due to limited space on the chip.
Solution: Add an external L2 cache using faster technology than main memory.
First appears in: 486

Problem: Contention occurs when both the instruction prefetcher and the execution unit simultaneously require access to the cache; the prefetcher is stalled while the execution unit's data access takes place.
Solution: Create separate data and instruction caches.
First appears in: Pentium

Problem: Increased processor speed results in the external bus becoming a bottleneck for L2 cache access.
Solution: Create a separate back-side bus (BSB) that runs at a higher speed than the main (front-side) external bus; the BSB is dedicated to the L2 cache.
First appears in: Pentium Pro
Solution: Move the L2 cache onto the processor chip.
First appears in: Pentium II

Problem: Some applications deal with massive databases and must have rapid access to large amounts of data; the on-chip caches are too small.
Solution: Add an external L3 cache.
First appears in: Pentium III
Solution: Move the L3 cache on-chip.
First appears in: Pentium 4
Summary: Cache Memory (Chapter 4)

 Computer memory system overview
 Characteristics of memory systems
 Memory hierarchy
 Cache memory principles
 Pentium 4 cache organization
 Elements of cache design
 Cache addresses
 Cache size
 Mapping function
 Replacement algorithms
 Write policy
 Line size
 Number of caches
