
Chapter 5

Large and Fast: Exploiting Memory Hierarchy
5.1 Introduction
Memory Technology
Static RAM (SRAM)
0.5ns–2.5ns, $2000–$5000 per GB
Dynamic RAM (DRAM)
50ns–70ns, $20–$75 per GB
Magnetic disk
5ms–20ms, $0.20–$2 per GB
Ideal memory
Access time of SRAM
Capacity and cost/GB of disk

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 2


The Principle of Locality

Observation: A program accesses a relatively small portion of its address
space during any short period of its execution time.
Two Different Types of Locality:
Temporal Locality (Locality in Time): If an item is
referenced, it tends to be referenced again soon (e.g.,
instructions of a loop, scalar data items reused)
Spatial Locality (Locality in Space): If an item is
referenced, items whose addresses are close by tend to
be referenced soon
(e.g., instructions of straight-line code, array data items)
Locality is a property of programs which is exploited in
computer design
Taking Advantage of Locality
The memory of a computer can be organized
in a hierarchy

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 4


Levels of the Memory Hierarchy

[Figure: levels of the memory hierarchy, from the upper level (smaller,
faster) to the lower level (larger, slower)]
Registers (transfer unit: instructions, operands)
Cache (transfer unit: blocks, a.k.a. lines)
Main Memory (transfer unit: pages)
Disk (transfer unit: files)
Tape

Taking Advantage of Locality
Organize the computer memory system in
a hierarchy
Store everything on disk
Copy recently accessed (and nearby)
items from disk to smaller DRAM memory
Main memory
Copy more recently accessed (and
nearby) items from DRAM to smaller
SRAM memory
Cache memory attached to CPU
Chapter 5 Large and Fast: Exploiting Memory Hierarchy 6
Memory Hierarchy Levels
Block (aka line): unit of copying
May be multiple words

If accessed data is present in upper level
Hit: access satisfied by upper level

Hit ratio: hits/accesses


If accessed data is absent
Miss: block copied from lower level

Time taken: miss penalty


Miss ratio: misses/accesses = 1 - hit ratio
Then accessed data supplied from
upper level
Chapter 5 Large and Fast: Exploiting Memory Hierarchy 7
5.2 The Basics of Caches
Cache Memory
Cache memory
The level of the memory hierarchy closest to the CPU
Given accesses X1, ..., Xn-1, Xn
How do we know if the data is present?
Where do we look?

Fig. 5.4 The cache just before and just after a reference to a word Xn
that is not initially in the cache
Chapter 5 Large and Fast: Exploiting Memory Hierarchy 8
Direct Mapped Cache Example
5 memory address bits; Block = 1 word; cache size = 8 blocks
Block location in cache determined by its address
Only one choice for mapping a memory block to a cache

block: cache block# = (Memory block address) modulo


(#Blocks in cache)
Memory block 00001
mapped to (00001)mod8 =
cache block 001
Memory block 01001
mapped to (01001)mod8 =
cache block 001
..
Memory block 10101
mapped to (10101)mod8 =
cache block 101
Chapter 5 Large and Fast: Exploiting Memory Hierarchy 10
Direct Mapped Cache Example (cont.)
# of cache blocks is a power of 2 (i.e. 8)
Use low-order address bits in mapping memory
blocks to cache blocks
00001 maps to 001
00101 maps to 101
01001 maps to 001
01101 maps to 101
10001 maps to 001
10101 maps to 101
11001 maps to 001
11101 maps to 101

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 11
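A minimal C sketch of this mapping (not from the textbook): with a power-of-two
number of cache blocks, the modulo operation reduces to keeping the low-order
bits of the memory block address.

    #include <stdio.h>

    /* Print the low n bits of v, most significant first. */
    static void print_bits(unsigned v, int n) {
        for (int i = n - 1; i >= 0; i--)
            putchar(((v >> i) & 1) ? '1' : '0');
    }

    int main(void) {
        const unsigned n_blocks = 8;                        /* power of two */
        unsigned blocks[] = {1, 5, 9, 13, 17, 21, 25, 29};  /* 5-bit memory block addresses */
        for (int i = 0; i < 8; i++) {
            unsigned idx = blocks[i] % n_blocks;            /* same as blocks[i] & (n_blocks - 1) */
            print_bits(blocks[i], 5);
            printf(" maps to cache block ");
            print_bits(idx, 3);
            putchar('\n');
        }
        return 0;
    }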


Tags and Valid Bits
How do we know which particular memory
block is stored in a cache block?
Store in the cache block the block memory
address as well as its data
Actually, only need to store the high-order bits of
the address (in the example 2 bits)
Called the tag
What if there is no data in a cache block?
Valid bit: 1 = data present, 0 = data not present
Initially 0

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 12
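The following C sketch (illustrative only, not the book's hardware) shows how a
valid bit and a tag decide whether a direct-mapped lookup hits; it assumes the
8-block, 5-bit-address example above, so the tag is the high-order 2 bits.

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        bool     valid;   /* initially 0: no data present         */
        uint32_t tag;     /* high-order bits of the block address */
        uint32_t data;    /* the cached word                      */
    } cache_line;

    /* Hit only if the line is valid and its tag matches the high-order
       bits of the requested 5-bit block address. */
    bool lookup(cache_line cache[8], uint32_t block_addr, uint32_t *word) {
        uint32_t index = block_addr & 0x7;   /* low 3 bits select the line   */
        uint32_t tag   = block_addr >> 3;    /* remaining 2 bits are the tag */
        if (cache[index].valid && cache[index].tag == tag) {
            *word = cache[index].data;
            return true;
        }
        return false;
    }

    int main(void) {
        cache_line cache[8] = {{0}};
        uint32_t word;
        /* preload block 10110 (address 22): index 110, tag 10 */
        cache[6] = (cache_line){ .valid = true, .tag = 2, .data = 1234 };
        return lookup(cache, 22, &word) ? 0 : 1;   /* expect a hit */
    }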


Cache Example
8-blocks, 1 word/block, direct mapped
Initial state

Index V Tag Data


000 N
001 N
010 N
011 N
100 N
101 N
110 N
111 N

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 13


Cache Example
Word addr Binary addr Hit/miss Cache block
22 10 110 Miss 110

Index V Tag Data


000 N
001 N
010 N
011 N
100 N
101 N
110 Y 10 Mem[10110]
111 N

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 14


Cache Example
Word addr Binary addr Hit/miss Cache block
26 11 010 Miss 010

Index V Tag Data


000 N
001 N
010 Y 11 Mem[11010]
011 N
100 N
101 N
110 Y 10 Mem[10110]
111 N

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 15


Cache Example
Word addr Binary addr Hit/miss Cache block
22 10 110 Hit 110
26 11 010 Hit 010

Index V Tag Data


000 N
001 N
010 Y 11 Mem[11010]
011 N
100 N
101 N
110 Y 10 Mem[10110]
111 N

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 16


Cache Example
Word addr Binary addr Hit/miss Cache block
16 10 000 Miss 000
3 00 011 Miss 011
16 10 000 Hit 000

Index V Tag Data


000 Y 10 Mem[10000]
001 N
010 Y 11 Mem[11010]
011 Y 00 Mem[00011]
100 N
101 N
110 Y 10 Mem[10110]
111 N

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 17


Cache Example
Word addr Binary addr Hit/miss Cache block
18 10 010 Miss 010

Index V Tag Data


000 Y 10 Mem[10000]
001 N
010 Y 10 Mem[10010]
011 Y 00 Mem[00011]
100 N
101 N
110 Y 10 Mem[10110]
111 N

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 18
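The hit/miss behavior on the preceding slides can be reproduced with a short
simulation; this C sketch assumes the same 8-block, 1-word-per-block,
direct-mapped cache and 5-bit word addresses.

    #include <stdio.h>
    #include <stdbool.h>

    typedef struct { bool valid; unsigned tag; } line;

    int main(void) {
        line cache[8] = {{0}};                  /* all valid bits start at 0 */
        unsigned refs[] = {22, 26, 22, 26, 16, 3, 16, 18};
        for (int i = 0; i < 8; i++) {
            unsigned index = refs[i] % 8;       /* low 3 address bits  */
            unsigned tag   = refs[i] / 8;       /* high 2 address bits */
            bool hit = cache[index].valid && cache[index].tag == tag;
            if (!hit) {                         /* miss: fetch block, record its tag */
                cache[index].valid = true;
                cache[index].tag   = tag;
            }
            printf("addr %2u -> block %u: %s\n", refs[i], index, hit ? "hit" : "miss");
        }
        return 0;
    }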


Address Subdivision

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 19


Example: Larger Block Size
64 blocks, 16 bytes/block
To what cache block number does the byte
memory address 1200 map?
Memory block address = floor(1200/16) = 75
Cache block number = 75 modulo 64 = 11

31 10 9 4 3 0
Tag Index Offset
22 bits 6 bits 4 bits

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 20
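A small C sketch of the address split above, assuming the slide's parameters
(16-byte blocks, 64 blocks, byte address 1200):

    #include <stdio.h>

    int main(void) {
        unsigned addr   = 1200;                /* byte address               */
        unsigned offset = addr & 0xF;          /* bits 3..0  (16-byte block) */
        unsigned index  = (addr >> 4) & 0x3F;  /* bits 9..4  (64 blocks)     */
        unsigned tag    = addr >> 10;          /* bits 31..10                */
        printf("memory block %u -> cache block %u (offset %u, tag %u)\n",
               addr / 16, index, offset, tag);
        return 0;
    }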


Block Size Considerations
Larger blocks should reduce miss rate
Due to spatial locality
But in a fixed-sized cache
Larger blocks → fewer of them
More competition → increased miss rate
Larger miss penalty
Can override benefit of reduced miss rate
Early restart and critical-word-first can help

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 21


Cache Misses
On cache hit, CPU proceeds normally
On cache miss
Stall the CPU pipeline
Fetch block from next level of hierarchy
Instruction cache miss
Restart instruction fetch
Data cache miss
Complete data access

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 22


Write-Through Policy
On data-write hit, could just update the block in
cache
But then cache and memory would be inconsistent
Write through policy: also update memory
But write through makes writes take longer
e.g., if base CPI = 1, 10% of instructions are stores, and a write to
memory takes 100 cycles
Effective CPI = 1 + 0.1 × 100 = 11
Solution: write buffer
Holds data waiting to be written to memory
CPU continues immediately
CPU only stalls on write if write buffer is already full

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 23


Write-Back Policy
Alternative: On data-write hit, just update
the block in cache
Keep track of whether each block is dirty
When a dirty block is replaced
Write it back to memory
Can use a write buffer to allow replacing block
to be read first

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 24


Write Allocation
What should happen on a write miss?
For write-back caches
Allocate the cache block; fetch the block from memory

For write-through caches: 2 alternatives
1. Allocate the cache block: fetch the block from memory,
then write to the cache and the write buffer
2. Write around: don't fetch or allocate the block;
just write to memory (via the write buffer)
This often works well since programs often write a whole
block before reading it (e.g., initialization)
Example: Intrinsity FastMATH
Embedded MIPS processor
12-stage pipeline
Instruction and data access on each cycle
Split cache: separate I-cache and D-cache
Each 16KB: 256 blocks × 16 words/block
D-cache: write-through or write-back
SPEC2000 miss rates
I-cache: 0.4%
D-cache: 11.4%
Weighted average: 3.2%

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 26


Example: Intrinsity FastMATH

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 27


Main Memory Supporting Caches
Use DRAMs for main memory
Fixed width main memory (e.g., 1 word)
Connected by fixed-width clocked bus
Bus clock is typically slower than CPU clock

Example cache block read


1 bus cycle for address transfer
15 bus cycles per DRAM word access
1 bus cycle per DRAM word transfer
For 4-word block, 1-word-wide DRAM
Miss penalty = 1 + 4×15 + 4×1 = 65 bus cycles
Bandwidth = 16 bytes / 65 bus cycles = 0.25 B/bus cycle

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 28


Increasing Memory Bandwidth

4-word wide memory


Miss penalty = 1 + 15 + 1 = 17 bus cycles
Bandwidth = 16 bytes / 17 cycles = 0.94 B/cycle
4-bank interleaved memory
Miss penalty = 1 + 15 + 4×1 = 20 bus cycles
Bandwidth = 16 bytes / 20 cycles = 0.8 B/cycle

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 29
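The miss-penalty and bandwidth arithmetic for the three organizations can be
checked with a short C program; the cycle counts are the slides' assumptions
(1 bus cycle for the address, 15 cycles per DRAM access, 1 cycle per word
transferred, 4-word blocks).

    #include <stdio.h>

    int main(void) {
        const int words = 4, bytes = 16;
        int narrow      = 1 + words * 15 + words * 1;  /* 1-word-wide DRAM           */
        int wide        = 1 + 15 + 1;                  /* 4-word-wide memory         */
        int interleaved = 1 + 15 + words * 1;          /* 4 banks, overlapped access */
        printf("narrow:      %d cycles, %.2f B/cycle\n", narrow,      (double)bytes / narrow);
        printf("wide:        %d cycles, %.2f B/cycle\n", wide,        (double)bytes / wide);
        printf("interleaved: %d cycles, %.2f B/cycle\n", interleaved, (double)bytes / interleaved);
        return 0;
    }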


Advanced DRAM Organization
Bits in a DRAM are organized as a
rectangular array
DRAM accesses an entire row of bits
Burst mode: supply successive words from a
row with reduced latency
Double data rate (DDR) DRAM
Transfer on rising and falling clock edges
Quad data rate (QDR) DRAM
Separate DDR inputs and outputs

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 30


DRAM Generations

Year   Capacity   $/GB
1980   64Kbit     $1,500,000
1983   256Kbit    $500,000
1985   1Mbit      $200,000
1989   4Mbit      $50,000
1992   16Mbit     $15,000
1996   64Mbit     $10,000
1998   128Mbit    $4,000
2000   256Mbit    $1,000
2004   512Mbit    $250
2007   1Gbit      $50

[Chart: DRAM access times Trac and Tcac (ns, scale 0-300) falling from 1980 to 2007]

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 31


5.3 Measuring and Improving Cache Performance
Measuring Cache Performance
Components of CPU time
Program execution cycles
Includes cache hit time
Memory stall cycles
Mainly because of cache misses
With simplifying assumptions:
Memory stall cycles
  = (Memory accesses / Program) × Miss rate × Miss penalty
  = (Instructions / Program) × (Misses / Instruction) × Miss penalty
Chapter 5 Large and Fast: Exploiting Memory Hierarchy 32
Cache Performance Example
Given
I-cache miss rate = 2%
D-cache miss rate = 4%
Miss penalty = 100 cycles
Base CPI (ideal cache) = 2
Loads & stores are 36% of instructions
Miss cycles per instruction
I-cache: 0.02 × 100 = 2
D-cache: 0.36 × 0.04 × 100 = 1.44
Actual CPI = 2 + 2 + 1.44 = 5.44
Ideal CPU is 5.44/2 = 2.72 times faster

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 33
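A C sketch of this CPI calculation, using the values given on the slide:

    #include <stdio.h>

    int main(void) {
        double base_cpi = 2.0, miss_penalty = 100.0;
        double icache_miss = 0.02, dcache_miss = 0.04, mem_frac = 0.36;
        double stall_i = icache_miss * miss_penalty;             /* 2.00 cycles/instr */
        double stall_d = mem_frac * dcache_miss * miss_penalty;  /* 1.44 cycles/instr */
        double cpi = base_cpi + stall_i + stall_d;               /* 5.44 */
        printf("actual CPI = %.2f, ideal CPU is %.2f times faster\n",
               cpi, cpi / base_cpi);
        return 0;
    }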


Average Access Time
Hit time is also important for performance
Average memory access time (AMAT)
AMAT = Hit time + Miss rate × Miss penalty
Example
CPU with 1ns clock, hit time = 1 cycle, miss
penalty = 20 cycles, I-cache miss rate = 5%
AMAT = 1 + 0.05 × 20 = 2ns
2 cycles per instruction

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 34


Performance Summary
When CPU performance increases
Miss penalty becomes more significant
Decreasing base CPI
Greater proportion of time spent on memory
stalls
Increasing clock rate
Memory stalls account for more CPU cycles
Can't neglect cache behavior when
evaluating system performance

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 35


Fully Associative Caches
Allow a memory block to go in any cache
block
Requires all cache blocks to be searched
simultaneously
Comparator per cache block is needed
(expensive)

36
n-way Set Associative Caches
Organize the cache into sets (groups) of blocks
Each set contains n cache blocks
Main memory block number is used to
determine which set in the cache will store the
memory block
(Memory block number) modulo (#Sets in cache)
Search all cache blocks in a given set
simultaneously
n comparators are needed (less expensive
than a fully associative cache organization)
37
Associative Cache Example

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 38


Spectrum of Associativity
For a cache with 8 blocks

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 39


Associativity Example
Compare 4-block caches
Direct mapped, 2-way set associative,
fully associative
Memory blocks access sequence: 0, 8, 0, 6, 8
Direct mapped: (000 00, 010 00, 000 00, 001 10,
010 00)
Number of misses = 5
Memory block #   Cache index   Hit/miss   Cache content after access (blocks 0-3)
0 0 miss Mem[0]
8 0 miss Mem[8]
0 0 miss Mem[0]
6 2 miss Mem[0] Mem[6]
8 0 miss Mem[8] Mem[6]
Chapter 5 Large and Fast: Exploiting Memory Hierarchy 40
Associativity Example (cont. I)
2-way set associative, 4-block cache (2 sets)
Memory blocks access sequence: 0, 8, 0, 6, 8
Mapping function:
(Memory block number) modulo (#Sets in cache)
0 modulo 2= 0 ; 8 modulo 2= 0 ; 6 modulo 2= 0
Hence cache sets accessed: 0, 0, 0, 0, 0
Number of misses = 4
Block address   Cache set   Hit/miss   Cache content after access (Set 0 | Set 1)
0 0 miss Mem[0]
8 0 miss Mem[0] Mem[8]
0 0 hit Mem[0] Mem[8]
6 0 miss Mem[0] Mem[6]
8 0 miss Mem[8] Mem[6]

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 41


Associativity Example (cont. II)
Fully associative, 4-block cache
Memory blocks access sequence: 0, 8, 0, 6, 8
Mapping function: a memory block can go in
any cache block

Number of misses = 3
Block address   Hit/miss   Cache content after access
0 miss Mem[0]
8 miss Mem[0] Mem[8]
0 hit Mem[0] Mem[8]
6 miss Mem[0] Mem[8] Mem[6]
8 hit Mem[0] Mem[8] Mem[6]

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 42
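A C sketch (not the book's code) that replays the sequence 0, 8, 0, 6, 8 on a
4-block cache for the three organizations, using LRU replacement within a set;
it reproduces the 5, 4, and 3 misses shown on the preceding slides.

    #include <stdio.h>

    /* assoc = 1 (direct mapped), 2 (2-way set associative), 4 (fully associative) */
    static int count_misses(int assoc) {
        const int n_blocks = 4;
        const int n_sets = n_blocks / assoc;
        int tag[4], last_use[4];
        int refs[] = {0, 8, 0, 6, 8};
        int misses = 0;

        for (int b = 0; b < n_blocks; b++) { tag[b] = -1; last_use[b] = -1; }

        for (int t = 0; t < 5; t++) {
            int set = refs[t] % n_sets;
            int hit = -1, victim = set * assoc;
            for (int w = 0; w < assoc; w++) {
                int b = set * assoc + w;
                if (tag[b] == refs[t]) hit = b;                 /* block already cached  */
                if (last_use[b] < last_use[victim]) victim = b; /* LRU (or empty) block  */
            }
            if (hit < 0) { misses++; hit = victim; tag[hit] = refs[t]; }
            last_use[hit] = t;                                  /* record most recent use */
        }
        return misses;
    }

    int main(void) {
        printf("direct mapped:     %d misses\n", count_misses(1));
        printf("2-way set assoc.:  %d misses\n", count_misses(2));
        printf("fully associative: %d misses\n", count_misses(4));
        return 0;
    }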


How Much Associativity
Increased associativity decreases miss
rate
But with diminishing returns
Simulation of a system with 64KB
D-cache, 16-word blocks, SPEC2000 miss
rates:
1-way: 10.3%
2-way: 8.6%
4-way: 8.3%
8-way: 8.1%
Chapter 5 Large and Fast: Exploiting Memory Hierarchy 43
Set Associative Cache Organization Example
4-way set associative cache, cache size = 4KB, block size = 1 word
# address bits = 32
Byte offset = 2 bits
Memory block number is the most significant 30 bits
# of cache sets = (4 × 1024 bytes) / (4 blocks/set × 4 bytes/block) = 256
Mapping function:
(Memory block number) modulo (#Sets in cache)
(Memory block number) modulo (256)
# of cache sets is a power of 2 (i.e. 256)
Use low-order address bits in mapping memory blocks to cache
blocks

44
Set Associative Cache Organization

45
Block Replacement Policy
Direct mapped: no choice
Set associative
Prefer non-valid block, if there is one
Otherwise, choose among blocks in the set
Least-recently used (LRU)
Choose the block unused for the longest time
Simple for 2-way, manageable for 4-way, too hard
beyond that
Random
Gives approximately the same performance
as LRU for high associativity

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 46


Multilevel Caches
Primary cache attached to CPU
Small, but fast
Level-2 cache services misses from
primary cache
Larger than primary cache
Slower than primary cache
Faster than main memory
Main memory services L-2 cache misses
Some high-end systems include L-3 cache
Chapter 5 Large and Fast: Exploiting Memory Hierarchy 47
Multilevel Cache Example
Given
CPU base CPI = 1, clock rate = 4GHz
Miss rate/instruction = 2%
Main memory access time = 100ns
With just primary cache
Miss penalty = 100ns/0.25ns = 400 cycles
Effective CPI = 1 + 0.02 × 400 = 9

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 48


Example (cont.)
Now add L-2 cache
Access time = 5ns
Global miss rate to main memory = 0.5%
Primary miss with L-2 hit
Penalty = 5ns/0.25ns = 20 cycles
Primary miss with L-2 miss
Extra penalty = 400 cycles
CPI = 1 + 0.02 × 20 + 0.005 × 400 = 3.4
Performance ratio = 9/3.4 = 2.6
Chapter 5 Large and Fast: Exploiting Memory Hierarchy 49
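A C sketch of the two-level CPI arithmetic, using the slides' assumptions
(4GHz clock, 2% primary miss rate, 0.5% global miss rate, 5ns L-2 access,
100ns main memory access):

    #include <stdio.h>

    int main(void) {
        double clock_ns = 0.25;                 /* 4 GHz CPU */
        double base_cpi = 1.0;
        double l1_miss = 0.02, global_miss = 0.005;
        double l2_hit_ns = 5.0, mem_ns = 100.0;

        double l2_hit_penalty = l2_hit_ns / clock_ns;   /*  20 cycles */
        double mem_penalty    = mem_ns / clock_ns;      /* 400 cycles */

        double cpi_l1_only = base_cpi + l1_miss * mem_penalty;          /* 9.0 */
        double cpi_with_l2 = base_cpi + l1_miss * l2_hit_penalty
                                      + global_miss * mem_penalty;      /* 3.4 */
        printf("L1 only: CPI = %.1f, with L2: CPI = %.1f, speedup = %.1fx\n",
               cpi_l1_only, cpi_with_l2, cpi_l1_only / cpi_with_l2);
        return 0;
    }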
Multilevel Cache Considerations
Primary cache
Focus on minimal hit time
L-2 cache
Focus on low miss rate to avoid main memory
access
Hit time has less overall impact
Results
L-1 cache usually smaller than a single cache
L-1 block size smaller than L-2 block size

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 50


Interactions with Software

[Figure: plots of instructions/item, clock cycles/item, and cache misses/item]
Misses depend on memory access patterns
Algorithm behavior
Compiler optimization for memory access

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 51


5.4 Virtual Memory
Virtual Memory
Use main memory as a cache for
secondary (disk) storage
Managed jointly by CPU hardware and the
operating system (OS)
Programs share main memory
Each gets a private virtual address space
holding its frequently used code and data
Protected from other programs
CPU and OS translate virtual addresses to
physical addresses
VM block is called a page
VM miss is called a page fault
Chapter 5 Large and Fast: Exploiting Memory Hierarchy 52
Address Translation
Fixed-size pages (e.g., 4K)

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 53


Page Fault Penalty
On page fault, the page must be fetched
from disk
Takes millions of clock cycles
Handled by OS code
Try to minimize page fault rate
Fully associative placement
Smart replacement algorithms

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 54


Virtual to Physical Address Translation
How to translate a virtual address into a
physical address?

Use a page table stored in memory and a


base register pointing to the address of the
first entry of the page table

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 55


Page Tables
Stores virtual address translation information
Array of page table entries, indexed by virtual
page number
Page table register in CPU points to page table
in physical memory
If page is present in memory
Page Table Entry (PTE) stores the physical
page number
Plus other status bits (referenced, dirty, ...)
If page is not present
PTE can refer to location in swap space on disk
Chapter 5 Large and Fast: Exploiting Memory Hierarchy 56
Example Virtual and Physical Addresses
Virtual address: 32 bits (virtual memory size = 4GB)
Page size = 4KB → 12-bit page offset
20-bit virtual page # → 2^20 virtual pages
Physical address: 30 bits (physical memory size = 1GB)
12-bit page offset
18-bit physical page # → 2^18 physical pages

Virtual address layout:  bits 31..12 = virtual page #,  bits 11..0 = page offset
Physical address layout: bits 29..12 = physical page #, bits 11..0 = page offset


Chapter 5 Large and Fast: Exploiting Memory Hierarchy 57
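A C sketch of the translation step (illustrative only: the flat page_table
array and its contents are made-up placeholders, and a real system adds
protection, reference, and dirty bits). It assumes the 32-bit virtual /
30-bit physical example above with 4KB pages.

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_OFFSET_BITS 12
    #define NUM_VPAGES       (1u << 20)    /* 2^20 virtual pages */

    typedef struct {
        uint32_t valid : 1;
        uint32_t ppn   : 18;               /* physical page number (2^18 pages) */
    } pte_t;

    static pte_t page_table[NUM_VPAGES];   /* flat table, indexed by virtual page # */

    /* Return the 30-bit physical address, or -1 to signal a page fault. */
    int64_t translate(uint32_t vaddr) {
        uint32_t vpn    = vaddr >> PAGE_OFFSET_BITS;
        uint32_t offset = vaddr & ((1u << PAGE_OFFSET_BITS) - 1);
        if (!page_table[vpn].valid)
            return -1;                     /* page fault: OS fetches the page */
        return ((int64_t)page_table[vpn].ppn << PAGE_OFFSET_BITS) | offset;
    }

    int main(void) {
        page_table[0x12345].valid = 1;     /* made-up mapping for illustration */
        page_table[0x12345].ppn   = 0xABCD;
        printf("0x12345678 -> 0x%llx\n",
               (unsigned long long)translate(0x12345678));
        return 0;
    }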
Translation Using a Page Table

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 58


Mapping Pages to Storage

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 59


Replacement and Writes
To reduce page fault rate, prefer least-
recently used (LRU) replacement
Reference bit (aka use bit) in PTE set to 1 on
access to page
Periodically cleared to 0 by OS
A page with reference bit = 0 has not been
used recently
Disk writes take millions of cycles
Write a page to disk, not individual words
Write through is impractical
Use write-back
Dirty bit in PTE set when page is written
Chapter 5 Large and Fast: Exploiting Memory Hierarchy 60
Fast Translation Using a TLB
Address translation would appear to require
extra memory references
One to access the Page Table Entry, PTE
Then the actual memory access
But access to page tables has good locality
So use a fast cache of PTEs within the CPU
Called a Translation Look-aside Buffer (TLB)
Typical: 16–512 PTEs, 0.5–1 cycle for hit, 10–100
cycles for miss, 0.01%–1% miss rate
Misses could be handled by hardware or software

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 61
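A C sketch of a tiny fully associative TLB consulted before the page table;
the size and the sequential search loop are illustrative only (a hardware TLB
compares all entries in parallel).

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define TLB_ENTRIES 16

    typedef struct {
        bool     valid;
        uint32_t vpn;    /* virtual page number (the tag) */
        uint32_t ppn;    /* physical page number          */
    } tlb_entry;

    static tlb_entry tlb[TLB_ENTRIES];

    /* True on a TLB hit; on a miss, the PTE would be fetched from the
       page table by hardware or an exception handler, then retried. */
    bool tlb_lookup(uint32_t vpn, uint32_t *ppn) {
        for (int i = 0; i < TLB_ENTRIES; i++) {       /* hardware compares all */
            if (tlb[i].valid && tlb[i].vpn == vpn) {  /* entries in parallel   */
                *ppn = tlb[i].ppn;
                return true;
            }
        }
        return false;                                 /* TLB miss */
    }

    int main(void) {
        uint32_t ppn;
        tlb[0] = (tlb_entry){ .valid = true, .vpn = 0x12345, .ppn = 0xABCD };
        printf("%s\n", tlb_lookup(0x12345, &ppn) ? "TLB hit" : "TLB miss");
        return 0;
    }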


Fast Translation Using a TLB

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 62


TLB Misses
If page is in memory
Load the PTE from memory and retry
Could be handled in hardware
Can get complex for more complicated page table
structures
Or in software
Raise a special exception, with optimized handler
If page is not in memory (page fault)
OS handles fetching the page and updating
the page table
Then restart the faulting instruction

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 63


TLB Miss Handler
TLB miss indicates either
Page is present in main memory, but the
associated PTE is not in TLB
Page not present (TLB is a cache for PTEs of
pages residing in memory)
Must recognize TLB miss before destination
register overwritten
Raise exception
Handler copies PTE from memory to TLB
Then restarts instruction
If page not present, page fault will occur
Page Fault Handler
Use faulting virtual address to find PTE
Locate page on disk
Choose physical page to replace
If dirty, write to disk first
Read page from disk into memory and
update page table
Make the process that caused the page
fault runnable again
Restart from faulting instruction

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 65


TLB and Cache Interaction
If cache tag uses
physical address
Need to translate
before cache lookup
Alternative: use virtual
address tag

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 66


5.5 A Common Framework for Memory Hierarchies
The Memory Hierarchy
The BIG Picture

Common principles apply at all levels of


the memory hierarchy
Based on notions of caching
At each level in the hierarchy
Block placement
Finding a block
Replacement on a miss
Write policy

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 67


Block Placement
Determined by associativity
Direct mapped (1-way associative)
One choice for placement
n-way set associative
n choices within a set
Fully associative
Any location
Higher associativity reduces miss rate
Increases complexity, cost, and access time

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 68


Finding a Block
Associativity        Location method                          Tag comparisons
Direct mapped        Index                                    1
n-way set assoc.     Set index, then search entries in set    n
Fully associative    Search all entries                       #entries
                     Full lookup table                        0

Hardware caches
Reduce comparisons to reduce cost
Virtual memory
Full table lookup makes full associativity feasible
Benefit in reduced miss rate

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 69


Replacement
Choice of entry to replace on a miss
Least recently used (LRU)
Complex and costly hardware for high associativity
Random
Close to LRU, easier to implement
Virtual memory
LRU approximation with hardware support

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 70


Write Policy
Write-through
Update both upper and lower levels
Simplifies replacement, but may require write
buffer
Write-back
Update upper level only
Update lower level when block is replaced
Need to keep more state
Virtual memory
Only write-back is feasible, given disk write
latency

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 71


Sources of Misses
Compulsory misses (aka cold start misses)
First access to a block
Capacity misses
Due to finite cache size
A replaced block is later accessed again
Conflict misses (aka collision misses)
In a non-fully associative cache
Due to competition for entries in a set
Would not occur in a fully associative cache of
the same total size

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 72


Cache Design Trade-offs

Design change            Effect on miss rate           Negative performance effect
Increase cache size      Decrease capacity misses      May increase access time
Increase associativity   Decrease conflict misses      May increase access time
Increase block size      Decrease compulsory misses    Increases miss penalty. For very
                                                       large block size, may increase
                                                       miss rate due to pollution.

Remember the types of misses: {Compulsory, Capacity, Conflict}
Cache pollution: an executing program loads data into the cache unnecessarily,
causing other needed data to be evicted from the cache and degrading
performance.
Skip section 5.6 Virtual Machines

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 74


Next: Section 5.7 Using a Finite State
Machine to Control A Simple Cache

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 75


5.7 Using a Finite State Machine to Control A Simple Cache
Cache Control (Design of Controller)
Example cache characteristics
Direct-mapped, write-back, write allocate (i.e. policy
on a write miss: allocate block followed by the write-hit
action)
32-bit byte addresses
Block size: 4 words (16 bytes)
Cache size: 16 KB (1024 blocks)
Valid bit and dirty bit per block
Blocking cache
CPU waits until access is complete

31 14 13 4 3 0
Tag Index Offset
18 bits 10 bits 4 bits
Chapter 5 Large and Fast: Exploiting Memory Hierarchy 76
Interface Signals
[Figure: interface signals between the CPU and the cache, and between the
cache and memory]
CPU–Cache signals: Read/Write, Valid (set to 1 to request a cache operation),
  Address (32 bits), Write Data (32 bits), Read Data (32 bits),
  Ready (set to 1 when the cache operation is complete)
Cache–Memory signals: Read/Write, Valid (set to 1 to request a memory operation),
  Address (32 bits), Write Data (128 bits), Read Data (128 bits),
  Ready (set to 1 when the memory operation is complete)
Chapter 5 Large and Fast: Exploiting Memory Hierarchy 77
Review: Finite State Machines
Use an FSM to
sequence control steps
(design the controller)
Set of states, transition
on each clock edge
State values are binary
encoded
Current state stored in a
register
Next state
= fn (current state,
current inputs)
Control output signals
= fo (current state)
Chapter 5 Large and Fast: Exploiting Memory Hierarchy 78
Four State Cache Controller

[Figure: four-state cache controller FSM]
Idle: wait for a valid CPU request, then go to Compare Tag.
Compare Tag: check whether the request hits (Valid && Hit). On a hit, set
  valid, set the tag, set the dirty bit if the access is a write, mark the
  cache ready, and return to Idle. On a cache miss with a clean old block,
  go to Allocate; on a cache miss with a dirty old block, go to Write-Back.
Allocate: read the new block from memory; stay here while memory is not
  ready, then return to Compare Tag.
Write-Back: write the old (dirty) block to memory; stay here while memory is
  not ready, then go to Allocate.

79
CSE431 Chapter 5A.79 Irwin, PSU, 2008
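A C sketch of the next-state function for this controller; the input
predicates (cpu_request, hit, old_block_dirty, memory_ready) are placeholders
standing in for the real hardware signals, not an actual interface.

    #include <stdio.h>

    typedef enum { IDLE, COMPARE_TAG, ALLOCATE, WRITE_BACK } state_t;

    state_t next_state(state_t s, int cpu_request, int hit,
                       int old_block_dirty, int memory_ready) {
        switch (s) {
        case IDLE:                          /* wait for a valid CPU request      */
            return cpu_request ? COMPARE_TAG : IDLE;
        case COMPARE_TAG:                   /* hit: done; miss: handle old block */
            if (hit) return IDLE;
            return old_block_dirty ? WRITE_BACK : ALLOCATE;
        case ALLOCATE:                      /* read new block from memory        */
            return memory_ready ? COMPARE_TAG : ALLOCATE;
        case WRITE_BACK:                    /* write old block to memory         */
            return memory_ready ? ALLOCATE : WRITE_BACK;
        }
        return IDLE;
    }

    int main(void) {
        /* one clean miss: Idle -> Compare Tag -> Allocate -> Compare Tag (hit) -> Idle */
        state_t s = IDLE;
        s = next_state(s, 1, 0, 0, 0);
        s = next_state(s, 1, 0, 0, 0);
        s = next_state(s, 1, 0, 0, 1);
        s = next_state(s, 1, 1, 0, 0);
        printf("%s\n", s == IDLE ? "back to Idle" : "unexpected");
        return 0;
    }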
5.8 Parallelism and Memory Hierarchies: Cache Coherence

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 80
Cache Coherence in Multicores
In future multicore processors it's likely that the cores will
share a common physical address space, causing a
cache coherence problem

[Figure: two cores with private L1 caches above a unified (shared) L2. Both
cores read X (= 0) into their L1 data caches; Core 1 then writes 1 to X, so
Core 1's L1 holds X = 1 while Core 2's L1 and the shared L2 still hold the
stale value X = 0]
82
CSE431 Chapter 5A.82 Irwin, PSU, 2008
Coherence Defined
Informally: If caches are coherent, then
reads return most recently written value
Formally:
If core1 writes X and then core1 reads X (with
no intervening writes)
the read operation returns the written value
Core1 writes X then core2 reads X (sufficiently
later)
the read operation returns the written value
core1 writes X then core2 writes X
cores 1 & 2 see writes in the same order
End up with the same final value for X
Chapter 5 Large and Fast: Exploiting Memory Hierarchy 83
A Coherent Memory System
Any read of a data item should return the most
recently written value of the data item
Coherence defines what values can be returned by a
read
- Writes to the same location are serialized (two
writes to the same location must be seen in the
same order by all cores)
Consistency determines when a written value will be
returned by a read

84
CSE431 Chapter 5A.84 Irwin, PSU, 2008
A Coherent Memory System (cont.)
To enforce coherence, caches must provide
Replication of shared data items in multiple cores'
caches (e.g., without accessing the L2 cache in our example)
Replication reduces both latency and contention for a
read of a shared data item
Migration of shared data items to a core's local cache
Migration reduces the latency of accessing the data
and the bandwidth demand on the shared memory
(L2 in our example)

85
CSE431 Chapter 5A.85 Irwin, PSU, 2008
Cache Coherence Protocols
Need a hardware protocol to ensure cache
coherence
The most popular cache coherence protocol is snooping
The cache controllers monitor (snoop) the broadcast medium (e.g., a bus)
to determine whether their cache has a copy of a requested block
Write invalidate protocol: writes require exclusive access and invalidate
all other copies
Exclusive access ensures that no other readable or writable copies of an
item exist

86
CSE431 Chapter 5A.86 Irwin, PSU, 2008
Cache Coherence Protocols (cont.)
If two cores attempt to write the same data at the
same time, one of them wins the race, causing
the other core's copy to be invalidated.
For the other core to complete its write, it must obtain a
new copy of the data, which now contains
the updated value, thus enforcing write
serialization

87
CSE431 Chapter 5A.87 Irwin, PSU, 2008
Example of Snooping Invalidation

[Figure: snooping write-invalidate example. Both cores read X (= 0) into
their L1 data caches. Core 1 writes 1 to X and an invalidate is broadcast,
so Core 2's copy of X becomes invalid and the shared L2 copy is stale;
Core 2 then reads X again and misses]
When the second miss (the second read) by Core 2 occurs, Core 1 responds
with the value, canceling the response from the L2 cache (and also updating
the L2 copy)
91
CSE431 Chapter 5A.91 Irwin, PSU, 2008
Invalidating Snooping Protocols (cont.)

Core activity          Bus activity        Core 1's L1   Core 2's L1   L2 cache (X)
(initially)                                                            0
Core 1 reads X         Cache miss for X    0                           0
Core 2 reads X         Cache miss for X    0             0             0
Core 1 writes 1 to X   Invalidate for X    1             Invalid       Invalid
Core 2 reads X         Cache miss for X    1             1             1

When the second miss (the second read) by Core 2 occurs, Core 1 responds
with the value, canceling the response from the L2 cache (and also updating
the L2 copy)
92
Invalidating Snooping Protocols
A processor's cache gets exclusive access
to a block when that processor writes to the block
The exclusive access is achieved by
broadcasting an invalidate message on the
bus, which makes copies of the block in
other processors' caches invalid.
A subsequent read by another processor
produces a cache miss
Owning cache supplies the updated value

93
Skip sections 5.9, 5.10, 5.11

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 94


5.12 Concluding Remarks

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 95


Concluding Remarks
Fast memories are small, large memories are
slow
We really want fast, large memories
Caching gives this illusion
Principle of locality
Programs use a small part of their memory space
frequently
Memory hierarchy
L1 cache → L2 cache → DRAM memory → disk
Memory system design is critical for
multiprocessors (e.g. multicore microprocessors)

Chapter 5 Large and Fast: Exploiting Memory Hierarchy 96
