Information and Communication Technology

Submitted by:
Desabelle C. Marticio
Denver Palma
Raffy Nerveza
11-ICT

Submitted to:
Mr. Marco Bautista
Subject Teacher
Memory Architecture

Memory architecture describes the methods used to implement electronic computer data storage in a
manner that combines the fastest, most reliable, most durable, and least expensive ways to store and
retrieve information. Depending on the specific application, one of these requirements may need to be
compromised in order to improve another. Memory architecture also explains how binary digits are
converted into electric signals and then stored in the memory cells, as well as the structure of a memory cell itself.
For example, dynamic memory is commonly used for primary data storage due to its fast access speed.
However, dynamic memory must be repeatedly refreshed with a surge of current millions of times per second, or
the stored data will decay and be lost. Flash memory allows for long-term storage over a period of years, but it is
much slower than dynamic memory, and its storage cells wear out with frequent use.
Similarly, the data bus is often designed to suit specific needs such as serial or parallel data access, and the
memory may be designed to provide for parity error detection or even error correction in expensive business
systems.
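
As a minimal sketch of how parity protection works, the C program below computes an even-parity bit for one
byte and shows how a single flipped bit is detected on read. The function name even_parity and the example
values are illustrative assumptions, not taken from any particular memory controller.

    #include <stdint.h>
    #include <stdio.h>

    /* Compute an even-parity bit for one byte: the bit is chosen so that
       the total number of 1-bits (data plus parity) is even. Parity-checked
       memory stores this extra bit alongside each byte and recomputes it on
       every read to detect single-bit errors. */
    static uint8_t even_parity(uint8_t byte) {
        uint8_t parity = 0;
        while (byte) {
            parity ^= (byte & 1);  /* flip parity for every 1-bit */
            byte >>= 1;
        }
        return parity;
    }

    int main(void) {
        uint8_t stored = 0x5A;             /* 0101 1010: four 1-bits */
        uint8_t p = even_parity(stored);   /* 0, since the count is even */

        /* Simulate a single-bit fault flipping bit 3 in storage. */
        uint8_t corrupted = stored ^ (1u << 3);

        /* On read, the recomputed parity no longer matches. */
        printf("stored parity: %u, parity after fault: %u\n",
               p, even_parity(corrupted));
        return 0;
    }

Note that a single parity bit can only detect an odd number of flipped bits; the error-correcting memory
mentioned above stores additional check bits so that single-bit errors can also be repaired.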

Memory Capacity and Performance

Individuals with high working-memory capacity engage the cognitive control network more than a low-capacity
group does, possibly because people with high span use a more efficient brain network during dual-task or
multitasking situations. These findings support viewing the cognitive control network as an individual trait that
reflects the neural efficiency underlying augmented human cognition, and as a significant predictor of
brain-computer interface performance.

Memory Hierarchy Structure

The memory hierarchy separates computer storage into a hierarchy based on response time. Since
response time, complexity, and capacity are related, the levels may also be distinguished by their performance and
controlling technologies. Memory hierarchy affects performance in computer architectural design, algorithm
predictions, and lower level programming constructs involving locality of reference.
Designing for high performance requires considering the restrictions of the memory hierarchy, i.e., the size and
capabilities of each component. Each of the various components can be viewed as part of a hierarchy of memories
(m1, m2, ..., mn) in which each member mi is typically smaller and faster than the next-higher member mi+1 of
the hierarchy. To limit waiting by higher levels, a lower level responds by filling a buffer and then signaling to
activate the transfer.
There are four major storage levels.
Internal – Processor registers and cache.
Main – the system RAM and controller cards.
On-line mass storage – Secondary storage.
Off-line bulk storage – Tertiary and Off-line storage.

This is a general memory hierarchy structure. Many other structures are useful. For example, a paging
algorithm may be considered as a level for virtual memory when designing a computer architecture, and one can
include a level of near-line storage between online and offline storage.
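
The hierarchy's effect is visible from ordinary software. The C sketch below times two walks over the same large
array, one sequential (cache-friendly) and one with a large stride (cache-hostile); both touch every element exactly
once. The array size, the stride, and the use of the POSIX clock_gettime call are assumptions chosen for
illustration, and the exact ratio will vary by machine.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* The sequential walk uses each fetched cache line fully, while the
       strided walk misses the cache on almost every access, so it normally
       runs several times slower even though the work is identical. */
    #define N (16 * 1024 * 1024)   /* 16 Mi ints (64 MB), larger than any cache */
    #define STRIDE 4096            /* 16 KB between consecutive accesses */

    static double walk(const int *a, size_t step) {
        struct timespec t0, t1;
        volatile long sum = 0;     /* volatile keeps the loop from being optimized away */

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t start = 0; start < step; start++)
            for (size_t i = start; i < N; i += step)
                sum += a[i];
        clock_gettime(CLOCK_MONOTONIC, &t1);

        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void) {
        int *a = calloc(N, sizeof *a);
        if (!a) return 1;
        printf("sequential walk: %.3f s\n", walk(a, 1));
        printf("stride-%d walk:  %.3f s\n", STRIDE, walk(a, STRIDE));
        free(a);
        return 0;
    }

This kind of measurement is why lower-level programming constructs involving locality of reference, mentioned
above, matter so much for performance.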

Memory Capacity
The memory capacity is the amount of data a device can store at any given time in its memory.
For example, computer software may list memory requirements similar to those shown below.
Recommended: 32 MB of memory.
Minimum: 16 MB of memory.
Here, the developer of the program recommends, for optimal performance, that the computer have 32 MB of
memory. The software is capable of running with only 16 MB of memory, although its performance may suffer.
Memory Performance
The performance of a memory system is defined by two different measures, the access time and the cycle
time. Access time, also known as response time or latency, refers to how quickly the memory can respond to a
read or write request; cycle time, the second measure, is the minimum interval between the start of one request
and the start of the next. Several factors contribute to the access time of a memory system. The main factor is
the physical organization of the memory chips used in the system.
This time varies from about 80 ns in the chips used in personal computers to 10 ns or less for chips used
in caches and buffers (small, fast memories used for temporary storage, described in more detail
below). Other factors are harder to measure. They include the overhead involved in selecting the right
chips (a complete memory system will have hundreds of individual chips), the time required to forward
a request from the processor over the bus to the memory system, and the time spent waiting for the
bus to finish a previous transaction before initiating the processor's request. The bottom line is that the
response time for a memory system is usually much longer than the access time of the individual chips.
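
One common way to estimate this overall response time in practice is a pointer-chasing benchmark, a technique
not described in the text above but sketched here in C: each load depends on the previous one, so the CPU cannot
overlap requests, and the measured time per access approximates the latency of whatever level of the hierarchy
the array fits in. The array size, iteration count, and use of the POSIX clock_gettime call are illustrative
assumptions.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Each element stores the index of the next element to visit. Shuffling
       the chain into one random cycle defeats hardware prefetching, so a
       large array approximates main-memory latency while a small one would
       stay in cache. */
    #define N (1 << 24)          /* 16 Mi entries, well beyond cache sizes */
    #define ACCESSES (1 << 24)

    int main(void) {
        size_t *next = malloc(N * sizeof *next);
        if (!next) return 1;

        /* Build a random single-cycle permutation (Sattolo's algorithm). */
        for (size_t i = 0; i < N; i++) next[i] = i;
        srand(12345);
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = rand() % i;          /* j < i keeps it a single cycle */
            size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }

        struct timespec t0, t1;
        volatile size_t p = 0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long k = 0; k < ACCESSES; k++)
            p = next[p];                    /* each load waits for the last */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = ((t1.tv_sec - t0.tv_sec) * 1e9 +
                     (t1.tv_nsec - t0.tv_nsec)) / ACCESSES;
        printf("average latency: %.1f ns per access\n", ns);
        free(next);
        return 0;
    }

On typical hardware such a run reports a few nanoseconds per access when the array fits in cache and tens of
nanoseconds or more when it must go to main memory, matching the chip figures quoted above.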