Caches and Virtual Memory
• Cache Operations
– Placement strategy
– Replacement strategy
– Read and write policy
Cache Operations
• Placement strategy
– Where to place an incoming block in the cache
• Read and write policies
– How to handle reads and writes on cache hits and misses
• Replacement strategy
– Which block to replace on cache miss
• Write-back
– Write to cache only. Update main memory when block is replaced (needs dirty bit per cache block)
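The write-back policy above can be sketched in a few lines. This is a toy model, not the slides' material: the `CacheLine` class, the dict-based main memory, and all names are illustrative.

```python
# Sketch of a write-back policy with a per-line dirty bit (illustrative names).

class CacheLine:
    def __init__(self, tag, data):
        self.tag = tag
        self.data = data      # words in the cached block
        self.dirty = False    # set on write; checked on eviction

memory = {}                   # toy main memory: tag -> block contents

def write_word(line, offset, value):
    """Write-back: update the cache only and mark the line dirty."""
    line.data[offset] = value
    line.dirty = True         # main memory is now stale

def evict(line):
    """On replacement, write the block back only if it is dirty."""
    if line.dirty:
        memory[line.tag] = list(line.data)
        line.dirty = False

line = CacheLine(tag=0x1A, data=[0, 0, 0, 0])
write_word(line, 2, 99)       # no memory traffic yet
evict(line)                   # block written back now
print(memory[0x1A])           # [0, 0, 99, 0]
```

The dirty bit is what lets repeated writes to the same block cost only one memory update, at replacement time.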
Read Miss Policies
• Read block in from main memory.
– Two approaches:
• Forward desired word first (faster, but more hardware needed)
• Delay until entire block has been moved to cache

Write Miss Policies
• Write allocate
– Bring block to cache from memory, then update
• Write-no allocate
– Only update main memory (don't bring into cache)

Write-through caches usually use write-no allocate.
Write-back caches usually use write allocate.
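The two write-miss policies can be contrasted with a toy sketch (the dict-based cache and memory, the addresses, and the function names are all made up for illustration):

```python
# Sketch contrasting write allocate vs. write-no allocate on a write miss.

memory = {0x40: [1, 2, 3, 4], 0x80: [5, 6, 7, 8]}   # toy main memory
cache = {}                                          # block addr -> block copy

def write_miss_allocate(addr, offset, value):
    """Write allocate: bring the block into the cache, then update it there."""
    cache[addr] = list(memory[addr])
    cache[addr][offset] = value

def write_miss_no_allocate(addr, offset, value):
    """Write-no allocate: update main memory directly; cache is untouched."""
    memory[addr][offset] = value

write_miss_allocate(0x40, 0, 9)
print(cache[0x40])        # [9, 2, 3, 4] -- block now resident in the cache

write_miss_no_allocate(0x80, 3, 0)
print(memory[0x80])       # [5, 6, 7, 0] -- block stays out of the cache
```

With write allocate, later writes to the same block hit in the cache; with write-no allocate, each write goes to memory, which pairs naturally with write-through.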
The Memory Hierarchy
• L0: registers – CPU registers hold words retrieved from L1 cache.
• L1: on-chip L1 cache (SRAM) – L1 cache holds cache lines retrieved from the L2 cache.
• L2: off-chip L2 cache (SRAM) – L2 cache holds cache lines retrieved from main memory.
• L3: main memory (DRAM) – Main memory holds disk blocks retrieved from local disks.
• L4: local secondary storage (local disks) – Local disks hold files retrieved from disks on remote network servers.
(Toward the top: smaller, faster, and costlier per byte storage devices. Toward the bottom: larger, slower, and cheaper per byte storage devices.)

Why Use Disks?
• Multiprogramming
– More than one application running at one time
• May need more than available main memory to store all needed programs and data
• May want to share data between applications
• Cheap, large (but slow) storage

Overlays:
• Tedious to do by hand
• Difficult to determine overlay set, especially when considering multiprogramming
⇒ virtual memory
Virtual Memory Terminology
• Effective address – address computed by processor (as viewed inside CPU)
• Logical address – same as effective address (as viewed outside CPU)
• Virtual address – generated by MMU
• Physical address – address in physical memory (main memory)

[Diagram: CPU → MMU → Main Memory → Disk Storage. (1) CPU sends logical address to MMU; (2) MMU generates virtual address, THEN translates it to physical address; (3) physical address sent to main memory; (4) on a miss, the logical page is requested from disk storage.]
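The miss path in the diagram (steps 1–4) can be walked through with a toy translation routine. The page size, page-table contents, and the "free frame 7" choice below are invented illustration values, not from the slides:

```python
# Toy walk-through of the address flow: logical address -> MMU translation,
# with a disk request when the page is not resident.

PAGE_SIZE = 256
page_table = {0: 3}            # virtual page -> physical frame (page 0 resident)
disk = {1: "page-1-contents"}  # pages currently only on disk

def translate(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    if page in page_table:                 # resident: translate directly
        return page_table[page] * PAGE_SIZE + offset
    _ = disk[page]                         # miss: request logical page from disk
    page_table[page] = 7                   # assume the OS picks free frame 7
    return page_table[page] * PAGE_SIZE + offset

print(translate(10))    # page 0 is resident: 3*256 + 10 = 778
print(translate(300))   # page 1 faults in:   7*256 + 44 = 1836
```

The key point is that the CPU-side (logical) address and the memory-side (physical) address are different namespaces, with the MMU mediating between them.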
Segmentation
• Divide memory into segments (varying sizes)
• Problem: external fragmentation
• Q: How can the OS switch processes?

[Diagram: address translation in segmented memory.]
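A minimal sketch of segmented address translation, assuming the usual base-and-limit form (the segment numbers, bases, and limits are made-up values):

```python
# Sketch of segmented translation: each segment has a base and a limit.
# Varying segment sizes are what lead to external fragmentation.

segments = {              # segment number -> (base, limit)
    0: (0x1000, 0x400),   # e.g. code: 1 KiB starting at 0x1000
    1: (0x8000, 0x100),   # e.g. stack: 256 B starting at 0x8000
}

def translate(seg, offset):
    base, limit = segments[seg]
    if offset >= limit:
        raise MemoryError("offset beyond segment limit")
    return base + offset

print(hex(translate(0, 0x10)))   # 0x1010
```

Switching processes then amounts to switching the segment table the MMU consults, which is one answer to the Q above.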
Paging
• Problem
– Internal fragmentation
• Last page is unlikely to be full
• Page Placement
– Page table is direct-mapped
• Page Replacement
– Use bits

[Diagram: address translation in a paged MMU.]
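The "Use bits" bullet is typically realized as the clock (second-chance) algorithm; a sketch under that assumption, with made-up frame contents:

```python
# Sketch of use-bit ("clock" / second-chance) page replacement.

frames = [{"page": p, "use": 1} for p in (4, 9, 2)]  # resident pages
hand = 0                                             # clock hand position

def touch(page):
    """Hardware sets the use bit whenever a resident page is referenced."""
    for f in frames:
        if f["page"] == page:
            f["use"] = 1

def choose_victim():
    """Sweep the hand, clearing use bits; evict the first frame with use == 0."""
    global hand
    while True:
        if frames[hand]["use"] == 0:
            victim = hand
            hand = (hand + 1) % len(frames)
            return victim
        frames[hand]["use"] = 0          # give the page a second chance
        hand = (hand + 1) % len(frames)

v = choose_victim()       # all bits start at 1: one full sweep clears them,
print(frames[v]["page"])  # then frame 0 (page 4) is the victim
```

Pages referenced since the last sweep keep their use bit set and survive one more pass, approximating LRU with a single bit per frame.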
Summary
• Cache Operations
– Placement strategy
– Replacement strategy
– Read and write policy
• Virtual Memory
– Why?
– General overview
– Lots of terminology