
Cache memory is random access memory that can be accessed more quickly than main memory.


The basic purpose of cache memory is to store program instructions that are frequently re-referenced by software during operation. Fast access to these instructions increases the overall speed of the software program.
As the microprocessor processes data, it looks in the cache memory first; if it finds the instructions there (from a previous reading of data), it avoids a more time-consuming read from the larger main memory or from other data storage devices.
SRAM is usually used to implement the cache. However, because of the large amount of chip area it occupies and its higher cost, SRAM is not preferred for main memory.
From the diagram one can see that the CPU accesses the cache first for data or instructions. The cache controller is initially used to copy the program from main memory into cache memory, using a DMA controller for this transfer. If the CPU does not find the data in the cache, it accesses main memory.
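As a rough illustration, the following C sketch models this access order with a tiny direct-mapped cache (the sizes, names, and the modulo address split are illustrative assumptions, not taken from the diagram):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_LINES 4    /* hypothetical: a 4-line cache */
#define MEM_WORDS 1024 /* hypothetical: a 1024-word main memory */

/* One cache line: a valid bit, the stored tag, and the data word. */
struct line { bool valid; uint32_t tag; uint32_t data; };

static struct line cache[NUM_LINES];
static uint32_t main_memory[MEM_WORDS];

/* The CPU looks in the cache first; only on a miss does it make the
   slower access to main memory and then fill the line for next time. */
uint32_t cpu_read(uint32_t addr)
{
    uint32_t index = addr % NUM_LINES; /* which line the address maps to */
    uint32_t tag   = addr / NUM_LINES; /* identifies the block in that line */

    if (cache[index].valid && cache[index].tag == tag)
        return cache[index].data;      /* cache hit: fast path */

    /* cache miss: time-consuming main-memory access, then update the cache */
    uint32_t value = main_memory[addr];
    cache[index] = (struct line){ .valid = true, .tag = tag, .data = value };
    return value;
}

int main(void)
{
    main_memory[42] = 7;
    printf("%u\n", (unsigned)cpu_read(42)); /* miss: read from main memory */
    printf("%u\n", (unsigned)cpu_read(42)); /* hit: served from the cache */
    return 0;
}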
There are two types of cache system organisation: look-aside and look-through.
In look-aside organisation, the cache and main memory are both connected to the CPU through the system bus. Whenever the CPU wants to access data, it puts the physical address on the bus. When the address is available, the cache searches its own tag memory to locate the physical address. If it is found, the cache is accessed; otherwise the CPU accesses main memory to obtain the data.
The only difference in look-through organisation is that the cache is accessed through a separate bus. This allows other devices to use the main system bus during cache operation. This method is slower in the case of a cache miss, because the main-memory access begins only after the cache lookup has failed.
Cache Read Operation:
The memory address is 10 bits wide and each cache line is 4 bytes long.
As one can see from the diagram, the first 8 bits of the address are compared with the cache tag memory using the tag comparator. If a match is found, that row is selected. The last two bits of the address select the byte within the cache data memory, and when the match is found the data is read from that location.
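A minimal sketch of this address split in C, using the widths from the example above (the variable names are mine):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t addr   = 0x2B7 & 0x3FF; /* any 10-bit address */
    uint16_t tag    = addr >> 2;     /* upper 8 bits: compared with tag memory */
    uint16_t offset = addr & 0x3;    /* lower 2 bits: byte within the 4-byte line */

    printf("address=0x%03X tag=0x%02X offset=%u\n",
           (unsigned)addr, (unsigned)tag, (unsigned)offset);
    return 0;
}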
Cache Write Operation:
The addressing method is the same as in the cache read operation. When a cache hit occurs, the old data is replaced by the new data.
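A sketch of the write-hit case, in the same style as the read sketch above (sizes and names again illustrative):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_LINES 4 /* hypothetical 4-line cache */

struct line { bool valid; uint32_t tag; uint32_t data; };
static struct line cache[NUM_LINES];

/* On a write hit, the old data in the line is simply replaced by the
   new data; the addressing is identical to a read. */
bool cpu_write_hit(uint32_t addr, uint32_t value)
{
    uint32_t index = addr % NUM_LINES;
    uint32_t tag   = addr / NUM_LINES;

    if (cache[index].valid && cache[index].tag == tag) {
        cache[index].data = value; /* replace old data with new */
        return true;               /* write hit */
    }
    return false;                  /* write miss: handling is policy-dependent */
}

int main(void)
{
    cache[2] = (struct line){ true, 5, 11 };          /* pre-filled line */
    printf("hit=%d\n", cpu_write_hit(5 * NUM_LINES + 2, 42));
    printf("data=%u\n", (unsigned)cache[2].data);     /* now 42 */
    return 0;
}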
Elements of cache design:
First, cache size. The cache must be small enough that the overall cost per bit stays close to the cost of main memory, yet large enough to give a useful hit rate.
Mapping function: The cache holds a reasonable number of blocks at any given time, but this number is small compared to the number of blocks in main memory. Thus mapping functions are used to relate main memory blocks to cache locations. The mapping functions used are direct mapping and associative mapping.
In direct mapping, each block from main memory can be stored in only one possible location in the cache, whereas in associative mapping a memory block can be placed in any cache block position. The tag address and memory address are used to identify the block.
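The placement difference can be sketched in a few lines of C (a hypothetical 4-line cache; the tag contents are made up for illustration):

#include <stdint.h>
#include <stdio.h>

#define NUM_LINES 4 /* hypothetical tiny cache */

int main(void)
{
    uint32_t block = 13; /* a main-memory block number */

    /* Direct mapping: the block has exactly one possible line. */
    printf("direct-mapped line: %u\n", (unsigned)(block % NUM_LINES));

    /* Associative mapping: the block may sit in any line, so the full
       block number is stored as the tag and every line's tag must be
       compared on a lookup. */
    uint32_t tags[NUM_LINES] = { 7, 13, 2, 9 }; /* hypothetical contents */
    for (uint32_t i = 0; i < NUM_LINES; i++)
        if (tags[i] == block)
            printf("associative hit in line %u\n", (unsigned)i);
    return 0;
}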
Replacement algorithm: When a new block is brought into the cache, an existing block must be replaced if the cache is full. This is done by a FIFO algorithm, least frequently used, or random replacement.
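A minimal FIFO replacement sketch, assuming a hypothetical fully associative cache of 4 lines:

#include <stdint.h>
#include <stdio.h>

#define NUM_LINES 4 /* hypothetical fully associative cache */

static uint32_t tags[NUM_LINES];
static unsigned next_victim = 0; /* FIFO pointer: the oldest line */

/* When a new block is brought in and the cache is full, FIFO replaces
   the line that was filled earliest. */
void fifo_insert(uint32_t block)
{
    printf("evicting line %u (tag %u) for block %u\n",
           next_victim, (unsigned)tags[next_victim], (unsigned)block);
    tags[next_victim] = block;
    next_victim = (next_victim + 1) % NUM_LINES;
}

int main(void)
{
    for (uint32_t b = 0; b < 6; b++) /* 6 insertions into 4 lines */
        fifo_insert(100 + b);
    return 0;
}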
Write policy: This is also known as the cache updating policy. There can be two copies of the same data, one in main memory and another in the cache. If one copy is altered and the other is not, this will lead to inconsistencies. Therefore updating mechanisms are used, such as write through, buffered write through, etc.
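A sketch of write through, the simplest of these mechanisms: every write updates main memory as well as the cache, so the two copies cannot drift apart (sizes and names are illustrative):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_LINES 4
#define MEM_WORDS 256

struct line { bool valid; uint32_t tag; uint32_t data; };
static struct line cache[NUM_LINES];
static uint32_t main_memory[MEM_WORDS];

/* Write-through: every store updates main memory as well as the cache,
   so the two copies can never disagree. */
void cpu_write(uint32_t addr, uint32_t value)
{
    uint32_t index = addr % NUM_LINES;
    uint32_t tag   = addr / NUM_LINES;

    main_memory[addr] = value;         /* always update main memory */
    if (cache[index].valid && cache[index].tag == tag)
        cache[index].data = value;     /* keep the cached copy in step */
}

int main(void)
{
    cpu_write(10, 99);
    printf("memory[10] = %u\n", (unsigned)main_memory[10]);
    return 0;
}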
Number of caches: When the on-chip cache is insufficient, a secondary cache can be used.
