
Tom Butler

Random Access Memory in Computers


Dynamic RAM (DRAM) is a type of RAM that employs refresh circuitry to maintain the data held in its memory cells. Each memory cell in DRAM consists of a single transistor and a capacitor. The capacitor is responsible for holding the electrical charge that designates a 1 bit; the absence of a charge designates a logical 0. Capacitors lose their charge over time and therefore need to be recharged or refreshed. A more expensive and faster type of RAM, Static RAM (SRAM), uses between 4 and 6 transistors in a special flip-flop circuit that maintains a 1 or 0 while the computer system is operating. SRAM can be read from or written to like DRAM. DRAM, on the other hand, must be refreshed several hundred times a second. To do this, the DRAM controller logic merely reads the contents of each memory cell, and because of the way in which the cells are constructed, the act of reading refreshes the contents of the memory. This behavior puts the "dynamic" into DRAM. However, refreshing takes time and increases the latency (the time from a memory access request to the moment the data is output) of DRAM.
DRAM is used in all computers and associated devices for main system memory. DRAM is used instead of SRAM even though DRAM is slower, owing to the operation of its refresh circuitry. DRAM is preferred because it is much cheaper and takes up less space, typically 25% of the silicon area of SRAM or less. To build a 256 MB system memory from SRAM would be prohibitively expensive. Moreover, technological advances have led to faster and faster forms of DRAM, despite the disadvantages of the refresh circuit. As indicated, DRAM is smaller and less expensive than SRAM because SRAM requires four to six transistors (or more) per bit.
DRAM technology involves very large scale integration (VLSI) using a silicon substrate which is etched with the patterns that form the transistors and capacitors. Each unit of DRAM comes packaged in an integrated circuit (IC). By 2003, DRAM technology had evolved to the point where several competing technologies existed; however, the older of these are slower (i.e. have a higher latency) and offer fewer MB of storage per unit.
Ceteris paribus, adding more memory to a computer system increases its performance. Why is this? Well, if the amount of RAM is insufficient to hold the processes and data required, the operating system has to create a swap file on the hard disk, which is used to provide virtual memory. On average, it takes a CPU about 200 nanoseconds (ns) to access DRAM, compared to about 12,000,000 ns to access the hard drive. More RAM means less swapping and therefore a faster system.
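To put these figures in perspective, here is a minimal sketch of the arithmetic, using the approximate access times quoted above (illustrative figures rather than measured values):

```python
# Rough comparison of DRAM and hard disk access times,
# using the approximate figures quoted in the text above.

dram_access_ns = 200           # typical DRAM access time (ns)
disk_access_ns = 12_000_000    # typical hard disk access time (ns)

ratio = disk_access_ns / dram_access_ns
print(f"A disk access takes roughly {ratio:,.0f} times longer than a DRAM access")
# -> A disk access takes roughly 60,000 times longer than a DRAM access
```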

Synchronous DRAM (SDRAM)


In late 1996, synchronous DRAM began to appear in computer systems. Unlike previous RAM technologies, which used asynchronous memory access techniques, SDRAM is synchronized with the system clock and the CPU. The result is that the CPU spends far less time waiting for memory accesses to complete. Significantly, SDRAM employs interleaving and burst mode functions, which make memory retrieval even faster. SDRAM dual inline memory modules (DIMMs, as opposed to the older single inline memory modules, SIMMs) are available from numerous vendors at several different packing densities (i.e. the amount of MB on each DIMM) and speeds. The speed of SDRAM chips is closely tied to the speed of the front-side bus, so that memory operation stays synchronized with the CPU. For example, PC66 SDRAM runs at 66 MHz, PC100 SDRAM runs at 100 MHz, PC133 SDRAM runs at 133 MHz, and so on. Faster SDRAM speeds such as 200 MHz and 266 MHz are currently in development.


Double Data Rate Synchronous DRAM (DDR SDRAM)


DDR SDRAM is the latest generation of SDRAM memory technology. It is targeted at Intel's 7th Generation Pentium IV; its key innovation is that the memory control logic gates switch on both the leading and trailing edges of a clock pulse, rather than on just the leading edge as with normal gate operation. With typical SDRAM technology, binary signals on the control, data and address portions of the system bus from the Northbridge chip to the memory unit are transferred on the leading edge of the clock pulse that opens the bus interface logic gates. Until the advent of the Pentium IV CPU, bus speed was dictated by the system clock speed, which ran at 100 MHz and 133 MHz on the Pentium III. The Front Side Bus (FSB) to the Northbridge chip and the portion of the system bus from the Northbridge chip to the memory chips ran at 100 MHz and 133 MHz. The Pentium IV Willamette, however, has an FSB speed of 400 MHz (the 100 MHz system clock is quad pumped (x4) to achieve this). With a data bus width of 64 bits (8 bytes), this gives a data bandwidth of 3.2 gigabytes per second (400 M x 8 = 3,200 MB/s, or 3.2 GB/s). Note that data transfer rates within a computer are rated in kilo-, mega- or gigabytes per second because of the parallel method of transfer, while transfer rates between computers (client/server etc.) are measured in bits per second (kilo-, mega- or gigabits per second).
Older SDRAM technologies (PC100 SDRAM and PC133 SDRAM) operate at system bus speeds and therefore constitute a bottleneck for Pentium IV systems. Hence the advent of DDR technology to help alleviate the bottleneck, and Intel's support for Rambus DRAM, which it regards as a better solution.
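As a check on the figures above, here is a minimal sketch of the bandwidth arithmetic (the values are taken directly from the text; nothing here is specific to any real memory controller):

```python
# Peak front-side bus bandwidth for the Pentium IV Willamette, as described above.

system_clock_mhz = 100    # base system clock
pump_factor = 4           # "quad pumped": four transfers per clock cycle
bus_width_bytes = 8       # 64-bit data bus = 8 bytes per transfer

effective_rate = system_clock_mhz * pump_factor      # 400 million transfers/s
bandwidth_mb_s = effective_rate * bus_width_bytes    # 3,200 MB/s
print(f"{effective_rate} MHz effective x {bus_width_bytes} bytes = "
      f"{bandwidth_mb_s:,} MB/s (about {bandwidth_mb_s / 1000:.1f} GB/s)")
```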
With DDR technology, special logic circuitry enables data to be transferred on both the leading and trailing edges of the clock pulse (remember that each clock cycle consists of a '1' followed by a '0'). Taking the data bus as an example, the clock transition from the '0' of the preceding cycle to the '1' of the next opens logic gates to allow 64 bits of data onto the data bus, while the transition from '1' back to '0' results in another 64 bits being switched onto the bus. Of course, the gates on the Northbridge chip that serve the memory segment of the system bus open and close in unison. This effectively doubles the speed of operation of SDRAM, hence the term "Double Data Rate". With DDR SDRAM, a 100 or 133 MHz system clock yields an effective data rate of 200 MHz or 266 MHz when double clocked. Newer designs (PC2700 etc.) are based on DDR SDRAM running at 166 MHz, which is double clocked to give an effective rate of 333 MHz.
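The doubling can be summarized numerically; a small illustrative sketch using the clock speeds mentioned above:

```python
# DDR transfers data on both clock edges, so the effective data rate
# is twice the base clock frequency.

for base_clock_mhz in (100, 133, 166):
    effective_mhz = base_clock_mhz * 2
    print(f"{base_clock_mhz} MHz clock -> {effective_mhz} MHz effective data rate")
# 100 -> 200, 133 -> 266, 166 -> 332 (marketed as 333)
```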

Speed Ratings for DDR SDRAM


As indicated, in the past the speeds at which SDRAM chips operated were dictated by bus speeds, so PC100 and PC133 SDRAM DIMMs operated on 100 and 133 MHz FSBs. DDR SDRAM ratings are not based on clock speed, but on maximum data bandwidth or throughput. Hence, with a 100 MHz system bus on a Pentium IV Willamette system, the maximum data bandwidth is 1,600 megabytes per second (100 x 2 x 8), or 1.6 GB/s. The industry designation for DDR SDRAM DIMMs on 100 MHz systems is therefore PC1600. Likewise, with a 133 MHz system clock, the designation for DDR SDRAM that operates at 133 MHz is PC2100 (133 x 2 x 8 ≈ 2,133 megabytes per second). The reason for this rating system lies with manufacturers' marketing strategies. For example, Rambus DRAM RIMMs are designated PC800 because of the rate in MHz at which the memory chips operate. This is the internal and external rate of operation to the Northbridge chip; however, the data bus between memory and the Northbridge chip is a mere 16 bits wide. This gives a bandwidth of 1,600 MB/second, the same as DDR (although the actual throughput of data is higher in Rambus DRAM RIMMs). The manufacturers of DDR SDRAM were reluctant to badge their chips with smaller designations (e.g. PC200 or PC266), as potential customers might not buy them even though the difference in performance was negligible. Further advances in DDR SDRAM technologies saw DDR SDRAM-based Intel and VIA chipsets that accommodated PC2400 and PC2700 DDR SDRAM running at 150 MHz and 166 MHz respectively, which are double clocked to 300 and 333 MHz (so-called DDR300 and DDR333). Subsequently, the evolution of DDR366 and chipset design led to PC3000 DDR SDRAM being released, with even higher bandwidth.
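The PCxxxx designations follow from the same arithmetic; the sketch below reproduces the calculations in the text (the marketing labels round the exact figures):

```python
# DDR SDRAM module designations are based on peak bandwidth in MB/s:
# base clock x 2 (double data rate) x 8 bytes (64-bit data bus).

for base_clock_mhz in (100, 133, 150, 166):
    bandwidth_mb_s = base_clock_mhz * 2 * 8
    print(f"{base_clock_mhz} MHz -> {bandwidth_mb_s} MB/s peak bandwidth")
# 100 -> 1600 (PC1600), 133 -> 2128 (sold as PC2100),
# 150 -> 2400 (PC2400), 166 -> 2656 (sold as PC2700)
```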

Rambus DRAM
While DDR SDRAM is relatively new, its major competitor, Rambus DRAM (RDRAM), has been around for some time. Intel's support for Rambus DRAM, as indicated by its collaboration with Rambus Inc., its efforts to develop chipset support for the standard, and its promotion of this combination as the de facto standard to PC manufacturers, suggests that the future of DRAM is RDRAM. RDRAM is a proprietary technology jointly developed by Intel and Rambus Inc. However, Intel had problems with its initial chipset designs, and the emergence of support for DDR among Intel's major competitors in the CPU and chipset markets caused Intel to develop a chipset (the i845) to support DDR. Then there was the relative cost of DDR SDRAM and RDRAM, with the latter being prohibitively more expensive until 2002/2003, when the costs of DDR SDRAM DIMMs and Rambus DRAM RIMMs became more or less equal, though the same could not be said of the PC systems of which they were a part.
There are three types of Rambus DRAM: Base Rambus, Concurrent Rambus and Direct Rambus. Direct Rambus is the newest RDRAM architecture and interface standard, and it challenges traditional main memory designs. Direct Rambus transfers data at effective speeds of up to and over 800 MHz over a 16-bit bus called a Direct Rambus Channel (later versions use an 18-bit bus). Accordingly, PC600 RDRAM delivers a peak bandwidth of 1,200 MB/second, while PC800 delivers 1,600 MB/second. Advances in Rambus technology have seen new designations such as PC1033 and PC1066, with concomitant increases in bandwidth.
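The Rambus figures reflect the narrower channel: the quoted effective rate in MHz multiplied by 2 bytes (16 bits). A minimal sketch of that arithmetic, using the designations above:

```python
# Peak Direct Rambus bandwidth: effective data rate (MHz) x 2 bytes (16-bit channel).

for effective_mhz in (600, 800):
    bandwidth_mb_s = effective_mhz * 2
    print(f"PC{effective_mhz} RDRAM -> {bandwidth_mb_s:,} MB/s peak bandwidth")
# PC600 -> 1,200 MB/s, PC800 -> 1,600 MB/s
```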
However, there is a complication when it comes to Rambus DRAM. While PC800 Rambus DRAM delivers the same peak throughput as PC1600 DDR SDRAM, the efficiency of RDRAM is of the order of 80% or more; DDR SDRAM, on the other hand, delivers between 40% and 70%, depending on the system and applications. This might seem to give Rambus the advantage; however, the latency of Rambus DRAM is higher, and it increases with every RIMM installed. Remember that latency refers to the time that elapses between the CPU requesting (addressing) instructions or data from RAM and the moment it receives them. So, what is the net effect of higher efficiency and throughput combined with higher latency? In 2003, the jury is still out, but DDR SDRAM remains the best option for many PC systems, except in high-end solutions where performance rather than cost is the overriding goal. The reason for this is the evolution of even higher DRAM-to-Northbridge bandwidths with PC2700 and PC3000 DDR SDRAM.
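To see why the comparison is not clear-cut, the peak bandwidths can be combined with the rough efficiency figures quoted above. The sketch below is purely illustrative; the efficiency ranges are the approximations from the text, not benchmark results:

```python
# Illustrative effective throughput = peak bandwidth x bus efficiency,
# using the approximate efficiency figures quoted in the text (not benchmarks).

pc800_rdram_peak = 1600   # MB/s
pc1600_ddr_peak = 1600    # MB/s
pc2700_ddr_peak = 2700    # MB/s (marketing figure)

print(f"PC800 RDRAM : ~{pc800_rdram_peak * 0.8:.0f} MB/s effective (80% efficiency)")
print(f"PC1600 DDR  : ~{pc1600_ddr_peak * 0.4:.0f}-{pc1600_ddr_peak * 0.7:.0f} MB/s effective")
print(f"PC2700 DDR  : ~{pc2700_ddr_peak * 0.4:.0f}-{pc2700_ddr_peak * 0.7:.0f} MB/s effective")
# The higher peak bandwidth of newer DDR modules offsets their lower bus
# efficiency, which is why DDR remains competitive despite RDRAM's higher efficiency.
```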
