
Time…

is a great teacher;
unfortunately
it kills all its pupils…!
CHAPTER 4

THE MEMORY
SYSTEM

Chapter Objectives

• Basic memory circuits


• Organization of the main memory
• Cache memory concept
• Virtual memory mechanism
• Secondary storage devices

Memory

• Ideally,
– Fast
– Large
– Inexpensive
• Is it possible to meet all 3 requirements
simultaneously?

Some basic concepts
• What is the max. size of memory?
• Address space (see the sketch below)
– 16-bit : 2^16 = 64K memory locations
– 32-bit : 2^32 = 4G memory locations
– 40-bit : 2^40 = 1T memory locations
• What is byte addressability?
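A minimal sketch (not from the slides; the variable names are illustrative) showing how the number of addressable locations follows from the address width k:

/* Number of addressable locations for a k-bit address: 2^k.
   Illustrative sketch; assumes the memory is byte-addressable. */
#include <stdio.h>

int main(void) {
    int widths[] = {16, 32, 40};
    for (int i = 0; i < 3; i++) {
        int k = widths[i];
        unsigned long long locations = 1ULL << k;   /* 2^k locations */
        printf("%2d-bit address -> %llu locations\n", k, locations);
    }
    return 0;
}

For k = 16, 32 and 40 this prints 65536, 4294967296 and 1099511627776 locations, i.e. the 64K, 4G and 1T figures above.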

Figure 1. Connection of the memory to the processor. The processor's MAR drives a k-bit address bus (up to 2^k addressable locations) and its MDR connects to an n-bit data bus (word length = n bits); control lines carry R/W, MFC, etc.

Terminology
• Memory access time – time between the initiation of a Read operation and the MFC (Memory Function Completed) signal
• Memory cycle time – min. time delay between
initiation of two successive memory
operations

• Serial access vs. Random access memory


• Cache memory
• Virtual memory

Semiconductor RAM memories

• Cycle times range from about 100 ns down to less than 10 ns


• Introduced in the late 1960s
• Fabricated using VLSI technology

Internal Organization of
memory chips

• Form of an array
• Word line & bit lines

• 16 × 8 organization: 16 words of 8 bits each

Figure 2. Organization of bit cells in a memory chip (16 × 8 organization). Address lines A0–A3 feed an address decoder that selects one of the word lines W0–W15; each word line activates a row of flip-flop (FF) memory cells; each column has a pair of bit lines (b, b′) connected to a Sense/Write circuit; the Sense/Write circuits, controlled by R/W and CS, drive the data input/output lines b7–b0.
Figure 3. Organization of a 1K × 1 memory chip. The 10-bit address is split into a 5-bit row address and a 5-bit column address: a 5-bit decoder selects one of the word lines W0–W31 of a 32 × 32 memory cell array, and the column address drives a 32-to-1 output multiplexer and input demultiplexer; Sense/Write circuitry, controlled by R/W and CS, connects the array to the single data input/output line.
Static memories
• Circuits capable of retaining their state as
long as power is applied
• Static RAM (SRAM)
– volatile

Figure 4. A static RAM cell. Two cross-coupled inverters latch the cell state at points X and Y; transistors T1 and T2 act as switches that connect the cell to the bit lines b and b′ when the word line is activated.
Figure 5. An example of a CMOS memory cell. Transistors T1–T4 (connected to Vsupply) form two cross-coupled CMOS inverters that hold the state at X and Y; T5 and T6 connect the cell to the bit lines b and b′ when the word line is activated.
Asynchronous DRAMs
• How is information stored?
• Charge on a capacitor
• Needs “Refreshing”

Figure 6. A single-transistor dynamic memory cell. A transistor T connects the storage capacitor C to the bit line when the word line is activated; the information is stored as charge on C.
Figure 7. Internal organization of a 2M × 8 dynamic memory chip. The multiplexed address lines carry the row address (A20-9), captured in the row address latch on RAS and decoded to select one row of the 4096 × (512 × 8) cell array, and the column address (A8-0), captured in the column address latch on CAS and decoded to select one byte; Sense/Write circuits, controlled by CS and R/W, transfer the selected byte on data lines D7–D0.
Asynchronous DRAMs (cont.)
• 16-megabit DRAM chip configured as 2M × 8
• Fast page mode
  – Block transfer capability

Synchronous DRAMs

• Synchronized with a clock signal

Figure 8. Synchronous DRAM. The row/column address lines feed a row address latch (alongside a refresh counter) and a column address counter; the row and column decoders access the cell array through Read/Write circuits and latches; data passes through separate data input and data output registers; a mode register and timing-control block, driven by the clock, RAS, CAS, R/W, and CS, sequences all operations.
Synchronous DRAMs (cont.)
• Memory latency
• Bandwidth
• Double-Data-Rate SDRAM
– Interleaving of words

Structure of larger memories
• Static Memory Systems

• Dynamic Memory Systems


– SIMMs (Single In-line Memory Modules)
– DIMMs (Dual In-line Memory Modules)

Figure 10. Organization of a 2M × 32 memory module using 512K × 8 static memory chips. The two high-order bits of the 21-bit address drive a 2-bit decoder that generates the chip-select signals, while the remaining 19 bits form the internal chip address applied to every chip; each group of four chips supplies one byte of the 32-bit word (D31-24, D23-16, D15-8, D7-0) over its 8-bit data input/output lines.
Memory system considerations

• Cost
• Speed
• Power dissipation
• Size of chip

Memory controller
• Between processor and memory
• Refresh Overhead

Figure 11. Use of a memory controller. The processor sends the address, R/W, request, and clock signals to the memory controller, which splits the address into its row and column parts and generates the RAS, CAS, R/W, CS, and clock signals expected by the memory; data travels directly between the processor and the memory.
Read-only memories

• Why?
• Nonvolatile
• Manufacturer-programmed memory

Figure 12. A ROM cell. A transistor T can connect the bit line to ground at point P when the word line is activated: leaving P not connected stores a 1, connecting it stores a 0.
ROM
• PROM
• EPROM
– Erasure by exposure to UV light
• EEPROM
– Programmed and erased electrically

Flash memory
• Greater density
• Higher capacity
• Lower cost per bit
• Low power consumption

• Flash cards
• Flash drives

Figure 5.13. Memory hierarchy: processor registers, primary (L1) cache, secondary (L2) cache, main memory, and magnetic disk secondary memory. Moving down the hierarchy, size increases while speed and cost per bit decrease.
Cache Memories
• The speed of the main memory is much lower than the speed of the processor.

• For good performance, the processor cannot spend much of its time waiting to access instructions and data in main memory.

Cache Memories
• It is important to devise a scheme that reduces the time needed to access the information.

• An efficient solution is to use a fast cache memory.

Use of a cache memory: the cache is placed between the processor and the main memory.
Cache Memories
• When a read request is received from the
processor, the contents of a block of
memory words containing the location
specified are transferred into the cache one
word at a time.
• When the program references any locations
in this block, the desired contents are read
directly from the cache
Cache Memories

• When the cache is full and a memory word that is not in the cache is referenced, the cache control hardware must decide which block should be removed to create space for the new block that contains the referenced word.

Cache Memories

• Read hit
• Write hit
• Read Miss
• Write Miss

Cache Memories
• Read hit – the main memory is not involved
• Write hit – two techniques (see the sketch below):
  1. Write-through protocol
  2. Write-back (copy-back) protocol
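A minimal sketch (not from the slides; the cache_line structure and function names are illustrative) of how the two policies differ on a write hit:

/* Write-policy sketch. The structures and helpers are hypothetical. */
#include <stdbool.h>
#include <stdint.h>

struct cache_line {
    uint32_t tag;
    bool     dirty;            /* used only by the write-back policy */
    uint8_t  data[16];         /* one 16-byte block */
};

/* Write-through: update the cache block and main memory together,
   so main memory always holds the current value. */
void write_through_hit(struct cache_line *line, int offset, uint8_t value,
                       uint8_t *main_memory, uint32_t address) {
    line->data[offset]   = value;
    main_memory[address] = value;
}

/* Write-back: update only the cache block and mark it dirty;
   main memory is updated later, when the block is evicted. */
void write_back_hit(struct cache_line *line, int offset, uint8_t value) {
    line->data[offset] = value;
    line->dirty        = true;   /* must be written back on eviction */
}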

Mapping Function
• Direct Mapping
• Associative Mapping
• Set-Associative Mapping

Direct Mapping
• In this technique, block j of the main memory maps onto block (j modulo 128) of the cache.

• Main memory blocks 0, 128, 256, … therefore map to cache block 0.

• Blocks 1, 129, 257, … map to cache block 1 (see the address-split sketch below).
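A minimal sketch of the address breakdown, using the 5-bit tag / 7-bit block / 4-bit word fields of the direct-mapped cache shown in the following figure (the names are illustrative; 0x7A25 is one of the addresses that appears later in Example 5.1):

/* Direct mapping sketch for a 16-bit address split as
   tag (5 bits) | block (7 bits) | word (4 bits),
   i.e. a cache of 128 blocks with 16 words per block. */
#include <stdio.h>

int main(void) {
    unsigned address = 0x7A25;                /* example address from Figure 5.18 */

    unsigned word  =  address        & 0xF;   /* low 4 bits: word within block    */
    unsigned block = (address >> 4)  & 0x7F;  /* next 7 bits: cache block number  */
    unsigned tag   = (address >> 11) & 0x1F;  /* high 5 bits: tag                 */

    /* Main memory block j always lands in cache block j modulo 128. */
    unsigned mem_block   = address >> 4;      /* 12-bit main memory block number  */
    unsigned cache_block = mem_block % 128;   /* equals the 7-bit block field     */

    printf("tag=%u block=%u word=%u (memory block %u -> cache block %u)\n",
           tag, block, word, mem_block, cache_block);
    return 0;
}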

Figure: Direct-mapped cache. The cache has 128 blocks, each with a tag; main memory has 4096 blocks. Blocks 0, 128, 256, … of main memory map to cache block 0, blocks 1, 129, 257, … to cache block 1, and so on up to block 4095. The main memory address is split into a 5-bit tag, a 7-bit block field, and a 4-bit word field.
Associative Mapping
• More flexible mapping technique

• A main memory block can be placed into any cache block position.

• Space in the cache can be used more efficiently, but all 128 tag patterns must be searched (see the lookup sketch below).
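A minimal sketch of the associative lookup, using the 12-bit tag / 4-bit word split shown in the following figure. In hardware all 128 tag comparisons happen in parallel; the loop below is only for illustration, and the names are assumptions:

/* Associative mapping sketch: 16-bit address = tag (12 bits) | word (4 bits).
   A block may sit in any of the 128 cache blocks, so every tag is examined. */
#include <stdint.h>

#define NUM_BLOCKS 128

struct cache_line {
    int      valid;
    uint16_t tag;            /* 12-bit tag */
};

/* Returns the index of the matching cache block, or -1 on a miss. */
int associative_lookup(const struct cache_line cache[NUM_BLOCKS], uint16_t address) {
    uint16_t tag = address >> 4;              /* drop the 4-bit word field */
    for (int i = 0; i < NUM_BLOCKS; i++) {    /* done in parallel in hardware */
        if (cache[i].valid && cache[i].tag == tag)
            return i;
    }
    return -1;
}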

Figure: Associative-mapped cache. Any of the 4096 main memory blocks (block i) can be placed in any of the 128 cache blocks, each of which stores a tag. The main memory address is split into a 12-bit tag and a 4-bit word field.
Set-Associative Mapping
• Combination of the direct- and associative-mapping techniques
• Blocks of the cache are grouped into sets, and the mapping allows a block of the main memory to reside in any block of a specific set.
Note: With 64 sets, memory blocks 0, 64, 128, …, 4032 map into cache set 0 (see the lookup sketch below).
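A minimal sketch of the set-associative lookup, using the 6-bit tag / 6-bit set / 4-bit word split of the figure that follows (64 sets of two blocks each); the names are illustrative:

/* Set-associative mapping sketch: 16-bit address =
   tag (6 bits) | set (6 bits) | word (4 bits); 64 sets, 2 blocks per set. */
#include <stdint.h>

#define NUM_SETS       64
#define BLOCKS_PER_SET 2

struct cache_line {
    int      valid;
    uint16_t tag;                    /* 6-bit tag */
};

/* Returns the block index within the selected set, or -1 on a miss. */
int set_associative_lookup(const struct cache_line cache[NUM_SETS][BLOCKS_PER_SET],
                           uint16_t address) {
    uint16_t set = (address >> 4) & 0x3F;        /* 6-bit set field selects one set */
    uint16_t tag =  address >> 10;               /* 6-bit tag field                 */
    for (int i = 0; i < BLOCKS_PER_SET; i++) {   /* only this set's tags are compared */
        if (cache[set][i].valid && cache[set][i].tag == tag)
            return i;
    }
    return -1;
}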

Figure: Set-associative-mapped cache with two blocks per set. The 128 cache blocks form 64 sets (Set 0 holds blocks 0–1, Set 1 holds blocks 2–3, …, Set 63 holds blocks 126–127), each block carrying its own tag. Main memory blocks 0, 64, 128, … map to set 0, blocks 1, 65, 129, … to set 1, and so on up to block 4095. The main memory address is split into a 6-bit tag, a 6-bit set field, and a 4-bit word field.
Replacement Algorithm
• Collection of rules for deciding which block to remove from the cache to create space for a new block

LRU Replacement Algorithm
• LRU – replace the Least Recently Used block, i.e. the block in the set that has gone unreferenced the longest (see the sketch below)
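One simple way to realize LRU for a small set is to record the "time" of each block's last reference and evict the block with the oldest timestamp. This is an illustrative sketch with assumed names, not the specific counter scheme of any particular cache:

/* LRU sketch for one cache set. */
#define BLOCKS_PER_SET 4

struct set_entry {
    int           valid;
    unsigned long last_used;      /* value of the access counter at last reference */
};

static unsigned long access_clock = 0;   /* incremented on every cache access */

/* Record a reference to block 'index' of the set. */
void lru_touch(struct set_entry set[BLOCKS_PER_SET], int index) {
    set[index].last_used = ++access_clock;
}

/* Choose the block to replace: an invalid block if any,
   otherwise the least recently used one (smallest last_used). */
int lru_victim(const struct set_entry set[BLOCKS_PER_SET]) {
    int victim = 0;
    for (int i = 0; i < BLOCKS_PER_SET; i++) {
        if (!set[i].valid)
            return i;                              /* free slot available */
        if (set[i].last_used < set[victim].last_used)
            victim = i;
    }
    return victim;
}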

Memory address (hex)   Memory address (binary)   Contents

7A00                   0111 1010 0000 0000       A(0,0)
7A01                   0111 1010 0000 0001       A(1,0)
7A02                   0111 1010 0000 0010       A(2,0)
7A03                   0111 1010 0000 0011       A(3,0)
7A04                   0111 1010 0000 0100       A(0,1)
 …                      …                         …
7A24                   0111 1010 0010 0100       A(0,9)
7A25                   0111 1010 0010 0101       A(1,9)
7A26                   0111 1010 0010 0110       A(2,9)
7A27                   0111 1010 0010 0111       A(3,9)

The high-order 5 bits of the address form the tag for the direct-mapped cache, the high-order 6 bits the tag for the set-associative cache, and the high-order 12 bits the tag for the associative cache.

Figure 5.18. An array stored in the main memory.
SUM := 0
for j := 0 to 9 do
    SUM := SUM + A(0,j)
end
AVE := SUM / 10
for i := 9 downto 0 do
    A(0,i) := A(0,i) / AVE
end

Figure 5.19. Task for example in Section 5.5.3.

Contents of the data cache after pass:

Block
position    j=1     j=3     j=5     j=7     j=9     i=6     i=4     i=2     i=0

   0        A(0,0)  A(0,2)  A(0,4)  A(0,6)  A(0,8)  A(0,6)  A(0,4)  A(0,2)  A(0,0)
   1
   2
   3
   4        A(0,1)  A(0,3)  A(0,5)  A(0,7)  A(0,9)  A(0,7)  A(0,5)  A(0,3)  A(0,1)
   5
   6
   7

Figure 5.20. Contents of a direct-mapped data cache in Example 5.1.
Contents of the data cache after pass:

Block
position    j=7     j=8     j=9     i=1     i=0

   0        A(0,0)  A(0,8)  A(0,8)  A(0,8)  A(0,0)
   1        A(0,1)  A(0,1)  A(0,9)  A(0,1)  A(0,1)
   2        A(0,2)  A(0,2)  A(0,2)  A(0,2)  A(0,2)
   3        A(0,3)  A(0,3)  A(0,3)  A(0,3)  A(0,3)
   4        A(0,4)  A(0,4)  A(0,4)  A(0,4)  A(0,4)
   5        A(0,5)  A(0,5)  A(0,5)  A(0,5)  A(0,5)
   6        A(0,6)  A(0,6)  A(0,6)  A(0,6)  A(0,6)
   7        A(0,7)  A(0,7)  A(0,7)  A(0,7)  A(0,7)

Figure 5.21. Contents of an associative-mapped data cache in Example 5.1.
Contents of the data cache after pass:

            j=3     j=7     j=9     i=4     i=2     i=0

Set 0       A(0,0)  A(0,4)  A(0,8)  A(0,4)  A(0,4)  A(0,0)
            A(0,1)  A(0,5)  A(0,9)  A(0,5)  A(0,5)  A(0,1)
            A(0,2)  A(0,6)  A(0,6)  A(0,6)  A(0,2)  A(0,2)
            A(0,3)  A(0,7)  A(0,7)  A(0,7)  A(0,3)  A(0,3)

Set 1       (remains empty in this example)

Figure 5.22. Contents of a set-associative-mapped data cache in Example 5.1.
Figure 5.24. Caches and external connections in the Pentium III processor. The processing units access separate L1 instruction and L1 data caches; a bus interface unit connects the processor to the L2 cache over a dedicated cache bus, and to the main memory and Input/Output over the system bus.
Figure: Address arrangements for a memory built from multiple modules, each with its own address buffer register (ABR) and data buffer register (DBR).
(a) Consecutive words in a module: the high-order k bits of the memory (MM) address select one of modules 0 … n−1, and the remaining m bits give the address within that module.
(b) Consecutive words in consecutive modules: the low-order k bits select one of modules 0 … 2^k − 1, and the high-order m bits give the address within the module, so successive addresses lie in successive modules (memory interleaving).
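A minimal sketch (parameter values and names are illustrative) of the two address splits described above:

/* Interleaving sketch: splitting an address across 2^k modules. */
#include <stdint.h>

#define K 2                      /* k bits -> 4 modules              */
#define M 8                      /* m bits of address within module  */

/* (a) Consecutive words lie in the same module:
       high-order k bits = module, low-order m bits = address in module. */
void split_high_order(uint32_t addr, uint32_t *module, uint32_t *in_module) {
    *module    = addr >> M;
    *in_module = addr & ((1u << M) - 1);
}

/* (b) Consecutive words lie in consecutive modules:
       low-order k bits = module, high-order m bits = address in module. */
void split_low_order(uint32_t addr, uint32_t *module, uint32_t *in_module) {
    *module    = addr & ((1u << K) - 1);
    *in_module = addr >> K;
}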


Figure 5.26. Virtual memory organization. The processor issues a virtual address, which the MMU translates into a physical address; the physical address is presented to the cache and, on a miss, to the main memory; data moves between the main memory and the disk storage by DMA transfer.
Figure 5.27. Virtual-memory address translation. The virtual address from the processor is split into a virtual page number and an offset; the page table base register supplies the page table address, and adding the virtual page number to it locates the corresponding page table entry; the entry holds control bits and the page frame in memory, and the page frame combined with the offset gives the physical address in main memory.
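A minimal sketch of the translation shown in Figure 5.27; the 4 KB page size, the structure, and the names are illustrative assumptions, not taken from the slides:

/* Virtual-to-physical address translation sketch (single-level page table). */
#include <stdint.h>

#define PAGE_OFFSET_BITS 12                      /* 4 KB pages (assumed) */
#define PAGE_SIZE        (1u << PAGE_OFFSET_BITS)

struct page_table_entry {
    uint32_t frame;           /* page frame number in main memory      */
    int      valid;           /* control bit: page present in memory?  */
};

/* Returns 0 and sets *physical on success; returns -1 on a page fault. */
int translate(const struct page_table_entry *page_table,
              uint32_t virtual_addr, uint32_t *physical) {
    uint32_t vpn    = virtual_addr >> PAGE_OFFSET_BITS;   /* virtual page number */
    uint32_t offset = virtual_addr & (PAGE_SIZE - 1);     /* offset within page  */

    if (!page_table[vpn].valid)
        return -1;                        /* page fault: page not in main memory */

    *physical = (page_table[vpn].frame << PAGE_OFFSET_BITS) | offset;
    return 0;
}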
