
INTERNATIONAL DIPLOMA IN HARDWARE & COMPUTER NETWORK TECHNOLOGY

STUDENT NAME : K.G.D.I. Hashan Thilakarathna
IDM REGISTRATION NO. : 1101214649
PROGRAMME : International Diploma in Hardware & Computer Network Technology
ASSIGNMENT TITLE : Assembling and Maintenance of Personal Computers

Task 01
1.1 Find top five UPS & PSU Companies around the world.

1.2 What are the Power Supply form factors? Find images.

1.3 Search for Online Voltage Calculators and use any two of them (give screenshots).

1.4 Find details about any Thousand V Power Supply.

1.5 Give examples for Line Interactive & Online UPS with technical details.

Task 02
2.1 Search Motherboard Form Factors.
2.2 Explain any motherboard which has a North Bridge & South Bridge (search images).
2.3 Explain any motherboard which supports Intel Core i Series CPUs (e.g. Core i3, Core i5, Core i7).
2.4 Compare the differences between Nvidia SLI and ATI CrossFire motherboards using images.

Task 03
3.1 Search images for different back side connections of CD/DVD and Blu-ray drives.

3.2 Create a table for CD, DVD and Blu-ray data speeds.

3.3 Explain what a Dual Layer Disk is.

3.4 Find a chart for troubleshooting optical drive problems.

1|P a g e
International Diploma in Hardware & Computer Network Technology
Assembling and maintenance of Personal Computers /combined assessment
Page 1 of 138
3.5 What are the hard disk types for desktop computers?

3.6 Compare laptop and desktop hard drives.

3.7 Find a table which includes all hard disk space needs for Windows Client and Server
Operating Systems.
3.8 Explain basic disk, dynamic disk and volume.

3.9 What are the differences between desktop and server hard disks?

Task 04

4.1 What are the desktop RAM types?

4.2 Find difference between RAM types using Images.

4.3 What are the Laptop RAM types?

4.4 Compare Intel Core 2 Duo CPU & Core 2 Quad CPU. Find a suitable technical table.

4.5 Compare Intel Core i3 first generation CPU & Core i7 first generation CPU. Include
technical table.
4.6 Compare Intel Core i5 seventh generation CPU & Core i7 seventh generation CPU.
Include technical table.
4.7 What are the Laptop CPU types? Give a detail table.

4.8 Give a table of Desktop & Laptop CPU socket types.

Task 05

5.1 Find three IPv4 classes with their default subnet masks.

5.2 Explain what a protocol is.

5.3 What are the OSI seven layers?

5.4 Explain difference between LAN & WAN.

5.5 What are the benefits of Star topology?

Acknowledgement

I take this opportunity to express my profound gratitude and great regards to my guide lecturer, Mr. Lahiru Mihidum, for his exemplary guidance, monitoring and constant encouragement throughout the course of this work, and for the blessing, help and guidance given by him.

I also take this opportunity to express a deep sense of gratitude to the manager of the IDM Gampaha Branch for his cordial support, valuable information and guidance, which helped me in completing this task through its various stages.

I am obliged to the staff members of my institute for the valuable information provided by them and their cooperation during the period of my assignment.

Finally, I thank my parents and friends for their constant encouragement. Without them this assignment would not have been possible!

Task 01
1.1 Find top five UPS & PSU Companies around the world.

Top 10 Best Power Supply Manufacturers in the world:

Seasonic
Corsair Components
FSP Group
Antec
Thermaltake
Cooler Master
Delta Electronics
EVGA Corporation
SilverStone
Enermax

Top 10 Best UPS Brands in the world:

APC UPS
Teknitron
Deutsche Power
REMCO (Renewable Energy Manufacturing Company)
Power Tech International Group
Kemapower System & Equipment Company (PVT) Ltd
Numeric UPS
Microtek UPS
Intex UPS

1.2 What are the Power Supply form factors? Find images.

Power Supply Form Factors

The shape and general physical layout of a component is called the form factor. Items that share a form factor are generally interchangeable, at least as far as their sizes and fits are concerned. When designing a PC, the engineers can choose to use one of the popular standard PSU (power supply unit) form factors, or they can elect to build their own. Choosing the former means that a virtually inexhaustible supply of inexpensive replacement parts will be available in a variety of quality and power output levels. Going the custom route means additional time and expense for development. In addition, the power supply is unique to the system and available only from the original manufacturer.

If you can't tell already, I am a fan of the industry-standard form factors! Having standards and then following them allows us to upgrade and repair our systems by easily replacing physically (and electrically) interchangeable components. Having interchangeable parts means that we have a better range of choices for replacement items, and the competition makes for better pricing, too.

In the PC market, IBM originally defined the standards, and everybody else copied them. This included power supplies. All the popular PC power supply form factors up through 1995 were based on one of three IBM models: the PC/XT, AT, and PS/2 Model 30. The interesting thing is that these three power supply definitions all had the same motherboard connectors and pinouts; where they differed was mainly in shape, maximum power output, the number of peripheral power connectors, and switch mounting. PC systems using knock-offs of one of those three designs were popular through 1996 and beyond, and some systems still use them today.

Intel gave the power supply a new definition in 1995 with the introduction of the ATX form factor. ATX became popular in 1996 and started a shift away from the previous IBM-based standards. ATX and the related standards that followed have different connectors with additional voltages and signals that allow systems with greater power consumption and additional features that would otherwise not be possible with AT-style supplies.

Technically, the power supply in your PC is described as a constant voltage half-bridge forward converting switching power supply:

Constant voltage means that the power supply puts out the same voltage to the computer's internal components, no matter what the voltage of the AC current running it or the capacity (wattage) of the power supply.

Half-bridge forward converting switching refers to the design and power regulation technique used by most suppliers. This design is commonly referred to as a switching supply. Compared to other types of power supplies, this design provides an efficient and inexpensive power source and generates a minimum amount of heat. It also maintains a small size and a low price.

Seven main power supply physical form factors have existed that can be called industry
standard. Five of these are based on IBM designs, whereas two are based on Intel designs. Of
these, only three are used in most modern systems; the others are pretty much obsolete.

Note that although the names of the power supply form factors seem to be the same as
those of motherboard form factors, the power supply form factor is more related to the system
chassis (case) than the motherboard. That is because all the form factors use one of only two
types of connector designs, either AT or ATX.

For example, all PC/XT, AT, and LPX form factor supplies use the same pair of six-pin
connectors to plug into the motherboard and will therefore power any board having the same
type of power connections. Plugging into the motherboard is one thing, but for the power
supply to physically work in the system, it must fit the case. The bottom line is that it is up to
you to ensure the power supply you purchase not only plugs into your motherboard but also
fits into the chassis or case you plan to use.

Modern Power Supply Connector Types and Form Factors

Modern PS Form Factor | Originated From | Connector Type | Associated MB Form Factors
LPX style* | IBM PS/2 Model 30 (1987) | AT | Baby-AT, Mini-AT, LPX
ATX style | Intel ATX, ATX12V (1995/2000) | ATX | ATX, NLX, Micro-ATX
SFX style | Intel SFX (1997) | ATX | Flex-ATX, Micro-ATX

* LPX is also sometimes called Slimline or PS/2.

Each of these power supply form factors is available in numerous configurations and power output levels. The LPX form factor supply had been the standard used on most systems from the late 1980s to mid-1996, when the ATX form factor started to gain in popularity. Since then, ATX has become by far the dominant form factor for power supplies, with the newer SFX style being added as an ATX derivative for use in very compact systems that mainly use Flex-ATX-sized boards. The earlier IBM-derived form factors have been largely obsolete for some time now.
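The connector-compatibility relationship in the table above can be sketched as a small lookup. This is my own illustration (the dictionary layout and function name are not from the source); the data is transcribed from the table:

```python
# Mapping of each modern PSU form factor to its origin, motherboard
# connector type, and associated board form factors (from the table above).
PSU_FORM_FACTORS = {
    "LPX": {"origin": "IBM PS/2 Model 30 (1987)", "connector": "AT",
            "boards": ["Baby-AT", "Mini-AT", "LPX"]},
    "ATX": {"origin": "Intel ATX, ATX12V (1995/2000)", "connector": "ATX",
            "boards": ["ATX", "NLX", "Micro-ATX"]},
    "SFX": {"origin": "Intel SFX (1997)", "connector": "ATX",
            "boards": ["Flex-ATX", "Micro-ATX"]},
}

def compatible_psus(board):
    """Return the PSU form factors associated with a given board form factor."""
    return [ff for ff, spec in PSU_FORM_FACTORS.items() if board in spec["boards"]]

# A Micro-ATX board is served electrically by both ATX- and SFX-style supplies
print(compatible_psus("Micro-ATX"))
```

Note that, as the text stresses, electrical compatibility is only half the story: the supply must also physically fit the chassis.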

LPX Style

The next power supply form factor to gain popularity was the LPX style, also called the PS/2 type, Slimline, or slim style. The LPX-style power supply has the exact same motherboard and disk drive connectors as the previous standard power supply form factors; it differs mainly in shape. LPX systems were designed to have a smaller footprint and lower height than AT-sized systems. These computers used a different motherboard configuration that mounts the expansion bus slots on a "riser" card that plugs into the motherboard. The expansion cards plug into this riser and are mounted sideways in the system, parallel to the motherboard. Because of its smaller case, an LPX system needed a smaller power supply. The power supply designed for LPX systems is smaller than the Baby-AT style in every dimension and takes up less than half the space of its predecessor.

As with the Baby-AT design in its time, the LPX power supply does the same job as its predecessor but comes in a smaller package. The LPX power supply quickly found its way into many manufacturers' systems, soon becoming a de facto standard. This style of power supply became the staple of the industry for many years, coming in everything from low-profile systems using actual LPX motherboards to full-size towers using Baby-AT or even full-size AT motherboards. It still is used in some PCs produced today; however, since 1996 the popularity of LPX has been overshadowed by the increasing popularity of the ATX design.
LPX form factor power supply.

ATX Style

One of the newer standards in the industry today is the ATX form factor. The ATX
specification, now in version 2.03, defines a new motherboard shape, as well as a new case
and power supply form factor.

ATX form factor power supply, used with both ATX and NLX systems.

The shape of the ATX power supply is based on the LPX design, but some important differences are worth noting. One difference is that the ATX specification originally called for the fan to be mounted along the inner side of the supply, where it could draw air in from the rear of the chassis and blow it inside across the motherboard. This kind of airflow runs in the opposite direction from most standard supplies, which exhaust air out the back of the supply through a hole in the case where the fan protrudes. The idea was that the reverse-flow design could cool the system more efficiently with only a single fan, eliminating the need for a fan (active) heatsink on the CPU.

Another benefit of the reverse-flow cooling is that the system would run cleaner, more free from dust and dirt. The case would be pressurized, so air would be continuously forced out of the cracks in the case, the opposite of what happens with a negative-pressure design. For this reason, the reverse-flow cooling design is often referred to as a positive-pressure-ventilation design. On an ATX system with reverse-flow cooling, the air would be blown out away from the drive because the only air intake would be the single fan vent on the power supply at the rear. For systems that operate in extremely harsh environments, you can add a filter to the fan intake vent to further ensure that all the air entering the system is clean and free of dust.

Although this sounds like a good way to ventilate a system, the positive-pressure design needs to use a more powerful fan to pull the required amount of air through a filter and to pressurize the case. Also, if a filter is used, it must be serviced on a periodic basis; depending on operating conditions, it can need changing or cleaning as often as every week. In addition, the heat load from the power supply on a fully loaded system heats up the air being ingested, blowing warm air over the CPU and reducing overall cooling capability. As newer CPUs create more and more heat, the cooling capability of the system becomes more critical. In common practice, it was found that using a standard negative-pressure system with an exhaust fan on the power supply and an additional high-quality cooling fan blowing cool air right on the CPU is the best solution. For this reason, the ATX power supply specification has been amended to allow for either positive- or negative-pressure ventilation.

Because a standard negative-pressure system offers the most cooling capacity for a given fan airspeed and flow, most of the newer ATX-style power supplies use the negative-pressure cooling system.

The ATX specification was first released by Intel in 1995. In 1996, it became increasingly popular in Pentium and Pentium Pro-based PCs, capturing 18% of the motherboard market. Since 1996, ATX has become the dominant motherboard form factor, displacing the previously popular Baby-AT. ATX and its derivatives are likely to remain the most popular form factor for several years to come.

The ATX form factor addressed several problems with the power supplies used with Baby-AT and mini-AT form factors. One is that the power supplies used with Baby-AT boards have two connectors that plug into the motherboard. If you insert these connectors backward or out of their normal sequence, you will fry the motherboard! Most responsible system manufacturers "key" the motherboard and power supply connectors so that you cannot install them backward or out of sequence. However, some vendors of cheaper systems do not feature this keying on the boards or supplies they use. The ATX form factor includes different power plugs for the motherboard to prevent users from plugging in their power supplies incorrectly. The ATX design features up to three motherboard power connectors that are definitively keyed, making plugging them in backward virtually impossible. The new ATX connectors also supply +3.3V, reducing the need for voltage regulators on the motherboard to power the chipset, DIMMs, and other +3.3V circuits.

Besides the new +3.3V outputs, another set of outputs is furnished by an ATX power supply that is not normally seen on standard power supplies. The set consists of the Power_On (PS_ON) and 5V_Standby (5VSB) outputs mentioned earlier, known collectively as Soft Power. This enables features to be implemented, such as Wake on Ring or Wake on LAN, in which a signal from a modem or network adapter can actually cause a PC to wake up and power on. Many such systems also have the option of setting a wake-up time, at which the PC can automatically turn itself on to perform scheduled tasks. These signals also can enable the optional use of the keyboard to power the system on, exactly like Apple systems. Users can enable these features because the 5V Standby power is always active, giving the motherboard a limited source of power even when off. Check your BIOS Setup for control over these features.

NLX Style

The NLX specification, also developed by Intel, defines a low-profile case and motherboard design with many of the same attributes as the ATX. In fact, for interchangeability, NLX systems were designed to use ATX power supplies, even though the case and motherboard dimensions are different.

As in previous LPX systems, the NLX motherboard uses a riser board for the expansion bus slots. Where NLX differs is that it is a true (and not proprietary) standard. See Chapter 4, "Motherboards and Buses," for more information on the NLX form factor.

For the purposes of this discussion, NLX systems use ATX power supplies. The only real difference is that the supply plugs into the riser card and not the motherboard, enabling NLX motherboards to be more quickly and easily removed from their chassis for service.

SFX Style

SFX style power supply (with 90mm top-mounted cooling fan).

Intel released the smaller Micro-ATX motherboard form factor in December of 1997, and at the same time also released a new, smaller SFX (Small Form Factor) power supply design to go with it. Even so, most Micro-ATX chassis used the standard ATX power supply instead. Then in March 1999, Intel released the Flex-ATX addendum to the Micro-ATX specification, which was a very small board designed for low-end PCs or PC-based appliances. At this point, the SFX supply has found use in many new compact system designs.

The SFX power supply is specifically designed for use in small systems containing a limited amount of hardware and limited upgradability. Most SFX supplies can provide 90 watts of continuous power (135 watts at peak) in four voltages (+5V, +12V, -12V, and +3.3V). This amount of power has proved to be sufficient for a small system with a processor, an AGP interface, up to four expansion slots, and three peripheral devices, such as hard drives and CD-ROMs.

Although Intel designed the SFX power supply specification with the Micro-ATX and Flex-ATX motherboard form factors in mind, SFX is a wholly separate standard that is compliant with other motherboards as well. SFX power supplies use the same 20-pin connector defined in the ATX standard and include both the Power_On and 5V_Standby outputs. Whether you will use an ATX or SFX power supply in a given system depends more on the case or chassis than the motherboard. Each has the same basic electrical connectors; the main difference is which type of power supply the case is physically designed to accept.

One limiting factor on the SFX design is that it lacks the -5V output and so shouldn't be used with motherboards that have ISA slots (most Micro-ATX and Flex-ATX boards do NOT have ISA slots). SFX power supplies also won't have the Auxiliary (3.3V and 5V) or ATX12V power connectors, and therefore shouldn't be used with full-size ATX boards that require those connections.
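The limitations just described reduce to a simple rule: an SFX supply is unsuitable if the board needs the -5V rail (ISA slots), the Auxiliary connector, or the ATX12V connector. A hedged sketch of that check (the function and requirement labels are my own, not from the source):

```python
# Requirements an SFX supply cannot meet, per the limitations above:
# the -5V rail, the Auxiliary (3.3V/5V) connector, and ATX12V.
SFX_MISSING = {"-5V", "AUX", "ATX12V"}

def sfx_suitable(board_needs):
    """True if none of the board's power requirements fall outside SFX's rails.

    board_needs: a set of requirement labels, e.g. {"-5V"} for a board
    with ISA slots, or {"ATX12V"} for a full-size ATX board.
    """
    return not (SFX_MISSING & set(board_needs))

print(sfx_suitable({"+5V", "+12V", "+3.3V"}))  # typical Micro-ATX board
print(sfx_suitable({"-5V"}))                   # board with ISA slots
```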

SFX form factor power supply dimensions with a standard internal 60mm fan.

On a standard model SFX power supply, a 60mm diameter cooling fan is located on
the surface of the housing, facing the inside of the computer's case. The fan draws the air into
the power supply housing from the system cavity and expels it through a port at the rear of
the system. Internalizing the fan in this way reduces system noise and results in a standard
negative-pressure design. In many cases, additional fans might be needed in the system to
cool the processor.

SFX form factor power supply dimensions with an internal 90mm top-mounted fan.
For systems that require more cooling capability, a version that allows for a larger 90mm top-mounted cooling fan is also available. The larger fan provides more cooling capability and airflow for systems that need it.

Summary for Power Supply Form Factors

1.3 Search for Online Voltage Calculators and use any two of them (give screenshots).

Ohms calculations

The resistance R in ohms (Ω) is equal to the voltage V in volts (V) divided by the current I in amps (A): R = V / I

The resistance R in ohms (Ω) is equal to the squared voltage V in volts (V) divided by the power P in watts (W): R = V² / P

The resistance R in ohms (Ω) is equal to the power P in watts (W) divided by the squared current I in amps (A): R = P / I²

Amps calculations

The current I in amps (A) is equal to the voltage V in volts (V) divided by the resistance R in ohms (Ω): I = V / R

The current I in amps (A) is equal to the power P in watts (W) divided by the voltage V in volts (V): I = P / V

The current I in amps (A) is equal to the square root of the power P in watts (W) divided by the resistance R in ohms (Ω): I = √(P / R)

Volts calculations

The voltage V in volts (V) is equal to the current I in amps (A) times the resistance R in ohms (Ω): V = I × R

The voltage V in volts (V) is equal to the power P in watts (W) divided by the current I in amps (A): V = P / I

The voltage V in volts (V) is equal to the square root of the power P in watts (W) times the resistance R in ohms (Ω): V = √(P × R)

Watts calculations

The power P in watts (W) is equal to the voltage V in volts (V) times the current I in amps (A): P = V × I

The power P in watts (W) is equal to the squared voltage V in volts (V) divided by the resistance R in ohms (Ω): P = V² / R

The power P in watts (W) is equal to the squared current I in amps (A) times the resistance R in ohms (Ω): P = I² × R
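The four groups of relations above all follow from Ohm's law and the power law, so they collapse into one small solver. This is a sketch of my own (the ohms_law name and dict return are illustrative, not taken from either online calculator): given any two of V, I, R and P, it derives the other two.

```python
import math

def ohms_law(V=None, I=None, R=None, P=None):
    """Given any two of V (volts), I (amps), R (ohms), P (watts),
    compute the remaining two and return all four in a dict."""
    if V is not None and I is not None:
        R, P = V / I, V * I                      # R = V/I, P = V*I
    elif V is not None and R is not None:
        I, P = V / R, V ** 2 / R                 # I = V/R, P = V^2/R
    elif V is not None and P is not None:
        I, R = P / V, V ** 2 / P                 # I = P/V, R = V^2/P
    elif I is not None and R is not None:
        V, P = I * R, I ** 2 * R                 # V = I*R, P = I^2*R
    elif I is not None and P is not None:
        V, R = P / I, P / I ** 2                 # V = P/I, R = P/I^2
    elif R is not None and P is not None:
        V, I = math.sqrt(P * R), math.sqrt(P / R)  # V = sqrt(P*R), I = sqrt(P/R)
    else:
        raise ValueError("supply exactly two of V, I, R, P")
    return {"V": V, "I": I, "R": R, "P": P}

# 12 V across 6 Ω gives 2 A and 24 W
print(ohms_law(V=12, R=6))
```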

Online Voltage Calculators

1. Online Conversion Calculator

2. Rapid Tables Calculator

1.4 Find details about any Thousand V Power Supply.

Rack Mounting Regulated Power Supplies

Specifications:

Input Voltage: 105-125 VAC, 50-400 Hz, single phase.

Input Current: 0.6 A for 30-watt output ratings; 1.2 A for 60-watt output ratings.

Output Polarity: Positive output is standard. For negative output, change the first letter of the model number from P to N.

Regulation (constant voltage operation): Load: 0.05%, Line: 0.05%

Regulation (constant current operation): Load: 0.1% plus 50 µA, Line: 0.1%

Ripple: 0.05%, peak-to-peak.

Output Controls: Voltage and current may be controlled by means of two 10-turn front panel adjustments with locking vernier dials. Control linearity is 1% of full rated output. Calibration accuracy is 1% of rated output plus 1% of setting. (Remotely located 1000 ohm potentiometers may alternately be used for output control.)

Output Programming: Output voltage and current may be programmed from 0 to full rating by means of control voltage inputs of 0 to +5.1 VDC, ±2%.

Metering: Voltmeter and ammeter are standard. Accuracy is ±2% of full scale.

Voltage Monitor Terminal: Permits remote monitoring of output voltage, stepped down by the ratio shown. Accuracy is ±2% of maximum rated output voltage.

18 | P a g e
International Diploma in Hardware & Computer Network Technology
Assembling and maintenance of Personal Computers /combined assessment
Page 18 of 138
Current Monitor Terminal: Permits remote monitoring of output current, at the mV/mA ratio shown. Accuracy is ±2% of maximum rated output current.

Inhibit Terminal: Grounding inhibits output.

Input Protection: "Soft start" circuit minimizes start-up power stresses.

Output Protection: Current regulation circuit protects power supply from short circuits,
overload and arcing.

Response Time: Less than 5 ms for a 100 µA load step change.

Stability: 0.05% over eight hours, after a 30-minute warmup.

Temperature Coefficient: 200 ppm/°C = 0.02%/°C (typical).

Ambient Operating Temperature: −10 to +60 °C. No derating required.

Storage Temperature: −20 to +85 °C.

Humidity: Maximum of 90% relative, non-condensing.

Connections: Input, control and monitoring: screw terminals. Output: high-voltage connector (type varies with model number).
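The calibration spec above (1% of rated output plus 1% of setting) lends itself to a quick worst-case arithmetic check. A sketch of my own: the 1000 V rating follows the section title, while the 600 V setting is an arbitrary example, not from the datasheet.

```python
def worst_case_error(rated_v, setting_v, cal_rated=0.01, cal_setting=0.01):
    """Worst-case calibration error in volts:
    (1% of rated output) + (1% of the dialled setting)."""
    return cal_rated * rated_v + cal_setting * setting_v

# A 1000 V supply dialled to 600 V: 10 V + 6 V = 16 V worst-case error
print(worst_case_error(1000, 600))
```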

19 | P a g e
International Diploma in Hardware & Computer Network Technology
Assembling and maintenance of Personal Computers /combined assessment
Page 19 of 138
1.5 Give examples for Line Interactive & Online UPS with technical details.

UBT 800VA–4000VA: a Line-Interactive UPS

Features:

Pure sine wave output.
Microprocessor-based design with true Line-Interactive structure.
Adjustable voltage sensitivity, charging voltage, & voltage-transfer points.
Remaining Estimated Backup Time indication.
Smart battery management with intelligent double stages of charging control.
Real-time auto-detection of battery condition.
Automatic restart of load after UPS shutdown.
Smart AVR function (two buck / boost modes).
Zero transference.
Generator compatible.
"Green Power" design with auto on/off function & adjustable level.
Hot-swappable batteries (optional for 3K & 4K models).
Network manageable (SNMP optional).
RS-232 interface for communication, compatible with all major operating systems, including Windows, Linux, SCO UNIX, & DOS.
Protection against overload, short circuit, & overheating.

UTP-33 Series (10–400 kVA): an Online UPS

Features:

Online double conversion.
IGBT inverter and output isolation transformer.
3-phase UPS allows 100% unbalanced load.
Full DSP control, high reliability and performance.
Wide input voltage range.
DC cold start function.
Advanced battery charging management.
Intelligent fan speed control.
ECO mode and EPO function.
Intelligent RS-232/RS-485 communication port.
SNMP adapter (optional).
Advanced no-master-slave parallel technology (optional).
Intelligent battery monitor system - MMBM (optional).
12-pulse rectifier (optional).
Bypass isolation transformer (optional).

Specification is subject to change without prior notice. 208 V and 220 V systems are available, with some different parameters.

Task 02
2.1 Search Motherboard Form Factors.
In computing, the form factor is the specification of a motherboard - the dimensions, power supply type, location of mounting holes, number of ports on the back panel, etc. Specifically, in the IBM PC compatible industry, standard form factors ensure that parts are interchangeable across competing vendors and generations of technology, while in enterprise computing, form factors ensure that server modules fit into existing rack mount systems. Traditionally, the most significant specification is that of the motherboard, which generally dictates the overall size of the case. Smaller form factors have also been developed and implemented.
Overview of Form Factors

Comparison of some common motherboard form factors

A Personal Computer motherboard is the main circuit board within a typical desktop computer, laptop or server. Its main functions are as follows:

To serve as a central backbone to which all other modular parts such as the CPU, RAM, and hard drives can be attached as required to create a computer.
To be interchangeable (in most cases) with different components (in particular the CPU and expansion cards) for the purposes of customization and upgrading.
To distribute power to other circuit boards.
To electronically co-ordinate and interface the operation of the components.

As new generations of components have been developed, the standards of motherboards have changed too. For example, the introduction of AGP and, more recently, PCI Express have influenced motherboard design. However, the standardized size and layout of motherboards have changed much more slowly and are controlled by their own standards. The list of components required on a motherboard changes far more slowly than the components themselves. For example, north bridge microchips have changed many times since their introduction, with many manufacturers bringing out their own versions, but in terms of form factor standards, provisions for north bridges have remained fairly static for many years.

22 | P a g e
International Diploma in Hardware & Computer Network Technology
Assembling and maintenance of Personal Computers /combined assessment
Page 22 of 138
Although it is a slower process, form factors do evolve regularly in response to changing demands. IBM's long-standing standard, AT (Advanced Technology), was superseded in 1995 by the current industry standard, ATX (Advanced Technology Extended), which still governs the size and design of the motherboard in most modern PCs. The latest update to the ATX standard was released in 2007. A divergent standard by chipset manufacturer VIA, called EPIA (also known as ITX, and not to be confused with EPIC), is based upon smaller form factors and its own standards.

Differences between form factors are most apparent in terms of their intended market sector, and involve variations in size, design compromises and typical features. Most modern computers have very similar requirements, so form factor differences tend to be based upon subsets and supersets of these. For example, a desktop computer may require more sockets for maximum flexibility and many optional connectors and other features on board, whereas a computer to be used in a multimedia system may need to be optimized for heat and size, with additional plug-in cards being less common. The smallest motherboards may sacrifice CPU flexibility in favor of a fixed manufacturer's choice.
Comparisons

Form factor | Originated | Max. size | Notes (typical usage, market adoption, etc.)

XT | IBM 1983 | 8.5 × 11 in (216 × 279 mm) | Obsolete; see Industry Standard Architecture. The IBM Personal Computer XT was the successor to the original IBM PC, its first home computer. As the specifications were open, many clone motherboards were produced and it became a de facto standard.

AT (Advanced Technology) | IBM 1984 | 12 × 11–13 in (305 × 279–330 mm) | Obsolete; see Industry Standard Architecture. Created by IBM for the IBM Personal Computer/AT, an Intel 80286 machine. Also known as Full AT, it was popular during the era of the Intel 80386 microprocessor. Superseded by ATX.

Baby-AT | IBM 1985 | 8.5 × 10–13 in (216 × 254–330 mm) | IBM's 1985 successor to the AT motherboard. Functionally equivalent to the AT, it became popular due to its significantly smaller size.

ATX | Intel 1996 | 12 × 9.6 in (305 × 244 mm) | Created by Intel in 1995. As of 2007, it is the most popular form factor for commodity motherboards. Typical size is 9.6 × 12 in, although some companies extend that to 10 × 12 in.

SSI CEB | SSI | 12 × 10.5 in (305 × 267 mm) | Created by the Server System Infrastructure (SSI) forum. Derived from the EEB and ATX specifications. This means that SSI CEB motherboards have the same mounting holes and the same IO connector area as ATX motherboards.

SSI EEB | SSI | 12 × 13 in (305 × 330 mm) | Created by the Server System Infrastructure (SSI) forum. Derived from the EEB and ATX specifications. SSI CEB motherboards have the same mounting holes and the same IO connector area as ATX motherboards, but SSI EEB motherboards do not.

SSI MEB | SSI | 16.2 × 13 in (411 × 330 mm) | Created by the Server System Infrastructure (SSI) forum. Derived from the EEB and ATX specifications.

microATX | 1996 | 9.6 × 9.6 in (244 × 244 mm) | A smaller variant of the ATX form factor (about 25% shorter). Compatible with most ATX cases, but has fewer slots than ATX, for a smaller power supply unit. Very popular for desktop and small form factor computers as of 2007.

Mini-ATX | AOpen 2005 | 5.9 × 5.9 in (150 × 150 mm) | Considerably smaller than microATX. Mini-ATX motherboards were designed with MoDT (Mobile on Desktop Technology), which adapts mobile CPUs for lower power requirements, less heat generation and better application capability.

24 | P a g e
International Diploma in Hardware & Computer Network Technology
Assembling and maintenance of Personal Computers /combined assessment
Page 24 of 138
Form fac- Notes
Originated Max. size
tor (typical usage, Market adoption, etc.)

A subset of microATX developed by Intel in 1999.


9.0 7.5 in
Allows more flexible motherboard design, compo-
FlexATX Intel 1999 228.6 190.5 mm
nent positioning and shape. Can be smaller than reg-
max.
ular microATX.

6.7 6.7 in A small, highly integrated form factor, designed for


Mini-ITX VIA 2001
170 170 mm max. small devices such as thin clients and set-top boxes.

Targeted at smart digital entertainment devices such


4.7 4.7 in
Nano-ITX VIA 2003 as PVRs, set-top boxes, media centers and Car PCs,
120 120 mm
and thin devices.

3.9 2.8 in
Pico-ITX VIA 2007
100 72 mm max.

2.953 1.772 in
Mobile-ITX VIA 2007
75 45 mm

Neo-ITX VIA 2012 170 85 35 mm Used in the VIA Android PC

A standard proposed by Intel as a successor to ATX


in the early 2000s, according to Intel the layout has
better cooling. BTX Boards are flipped in compari-
BTX (Balance son to ATX Boards, so a BTX or MicroBTX Board
12.8 10.5 in needs a BTX case, while an ATX style board fits in
d Technology Intel 2004
325 267 mm max. an ATX case. The RAM slots and the PCI slots are
Extended)
parallel to each other.

Processor is placed closest to the fan. May contain a


CNR board.

Mi-
10.4 10.5 in
croBTX (or u Intel 2004
264 267 mm max.
BTX)
8.0 10.5 in
PicoBTX Intel 2004
203 267 mm max.
25 | P a g e
International Diploma in Hardware & Computer Network Technology
Assembling and maintenance of Personal Computers /combined assessment
Page 25 of 138
Form fac- Notes
Originated Max. size
tor (typical usage, Market adoption, etc.)

DTX AMD 2007 200 244 mm max.


Mini-DTX AMD 2007 200 170 mm max.
Used in embedded systems and single board com-
smartModule Digital-Logic 66 85 mm
puters. Requires a baseboard.
Used in embedded systems and single board com-
ETX Kontron 95 114 mm
puters. Requires a baseboard.
Used in embedded systems and single board com-
COM Ex-
PICMG 95 125 mm puters. Requires a carrier board. Formerly referred to
press Basic
as ETXexpress by Kontron.
Used in embedded systems and single board com-
COM Ex-
PICMG 95 95 mm puters. Requires a carrier board. Formerly referred to
press Compact
as microETXexpress by Kontron.
A general-purpose "eco-conscious" mass-volume
Luke Ken- standard based around re-use of legacy PCMCIA.
EOMA68 neth Casson 85.6 54 mm Has two variants: Type I (3.3mm high) and Type II
Leighton (5.0mm high). Does not require a carrier board if the
user-facing end provides power.
Used in embedded systems and single board com-
puters. Requires a carrier board. Formerly referred to
COM Ex-
PICMG 55 84 mm as nanoETXexpress by Kontron. Also known as
press Mini
COM Express Ultra and adheres to pin-outs Type 1
or Type 10
Used in embedded systems and single board com-
CoreExpress SFF-SIG 58 65 mm
puters. Requires a carrier board.
Used in rackmount server systems. Typically used
for server-class type motherboards with dual proces-
Extended 12 13 in
Unknown sors and too much circuitry for a standard ATX
ATX (EATX) 305 330 mm
motherboard. The mounting hole pattern for the up-
per portion of the board matches ATX.
Enhanced Ex- Used in rackmount server systems. Typically used
tended 13.68 13 in for server-class type motherboards with dual proces-
Supermicro
ATX (EEATX 347 330 mm sors and too much circuitry for a standard E.ATX
) motherboard.
Based on a design by Western Digital, it allowed
smaller cases than the AT standard, by putting the
9 1113 in
LPX Unknown expansion card slots on a Riser card. Used in slim-
229 279330 mm
line retail PCs. LPX was never standardized and
generally only used by large OEMs.
Mini-LPX Unknown 89 1011 in Used in slimline retail PCs.
26 | P a g e
International Diploma in Hardware & Computer Network Technology
Assembling and maintenance of Personal Computers /combined assessment
Page 26 of 138
Form fac- Notes
Originated Max. size
tor (typical usage, Market adoption, etc.)

203229 254
279 mm
PC/104 Con- Used in embedded systems. AT Bus (ISA) architec-
PC/104 3.8 3.6 in
sortium 1992 ture adapted to vibration-tolerant header connectors.
PC/104 Con- Used in embedded systems. PCI Bus architecture
PC/104-Plus 3.8 3.6 in
sortium 1997 adapted to vibration-tolerant header connectors.
Used in embedded systems.
PCI/104- PC/104 Con-
3.8 3.6 in PCI Express architecture adapted to vibration-
Express sortium 2008
tolerant header connectors.
PC/104 Con- Used in embedded systems.
PCIe/104 3.8 3.6 in
sortium 2008 PCI/104-Express without the legacy PCI bus.
89 1013.6 in A low-profile design released in 1997. It also incor-
NLX Intel 1999 203229 254 porated a riser for expansion cards, and never be-
345 mm came popular.
TQ-
Used in embedded systems and IPCs. Requires a
UTX Components 88 108 mm
baseboard.
2001
14 16.75 in A large design for servers and high-end workstations
WTX Intel 1998
355.6 425.4 mm featuring multiple CPUs and hard drives.
16.48 13 in A proprietary design for servers and high-end work-
SWTX Unknown
418 330 mm stations featuring multiple CPUs.
A large design by EVGA currently featured on two
13.6 15 in motherboards; the eVGA SR2 and SRX. Intended
HPTX EVGA 2008
345.44 381 mm for use with multiple CPUs. Cases require 9 expan-
sion slots to contain this form-factor.
XTX 2005 95 114 mm Used in embedded systems. Requires a baseboard.
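The metric sizes in the table are direct conversions of the inch dimensions (1 in = 25.4 mm). A small sketch to spot-check a few rows (the rows chosen here are just examples from the table):

```python
# Spot-check the inch-to-millimetre conversions used in the form factor table.

def in_to_mm(inches: float) -> int:
    """Convert inches to millimetres, rounded to the nearest mm as the table does."""
    return round(inches * 25.4)

# (form factor, size in inches, size in mm as listed in the table)
rows = [
    ("ATX",      (12, 9.6),  (305, 244)),
    ("microATX", (9.6, 9.6), (244, 244)),
    ("Mini-ITX", (6.7, 6.7), (170, 170)),
]

for name, inches, mm in rows:
    converted = tuple(in_to_mm(v) for v in inches)
    print(f"{name}: {inches} in -> {converted} mm (table lists {mm})")
```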

Graphical comparison of physical sizes

Maximum number of expansion card slots

Specification | Number
HPTX | 9
ATX/EATX/SSI EEB/SSI CEB | 7
MicroATX | 4
FlexATX | 3
DTX/Mini-DTX | 2
Mini-ITX | 1
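The slot counts above can be captured in a small lookup table, for example to see which specifications could host a build that needs a given number of expansion cards. The names below simply mirror the table; the helper function is illustrative:

```python
# Maximum expansion card slots per specification, as listed in the table above.
MAX_SLOTS = {
    "HPTX": 9,
    "ATX/EATX/SSI EEB/SSI CEB": 7,
    "MicroATX": 4,
    "FlexATX": 3,
    "DTX/Mini-DTX": 2,
    "Mini-ITX": 1,
}

def specs_with_at_least(n: int) -> list:
    """Return the specifications that provide at least n expansion slots."""
    return [spec for spec, slots in MAX_SLOTS.items() if slots >= n]

print(specs_with_at_least(4))  # HPTX, ATX/EATX/SSI EEB/SSI CEB, MicroATX
```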

Visual examples of different form factors

ATX (Abit KT7)
Mini-ITX (VIA EPIA 5000AG)
Pico-ITX (VIA EPIA PX10000G)

PC/104 and EBX
PC/104 is an embedded computer standard which defines both a form factor and
computer bus. PC/104 is intended for embedded computing environments. Single board com-
puters built to this form factor are often sold by COTS vendors, which benefits users who
want a customized rugged system without months of design and paperwork.
The PC/104 form factor was standardized by the PC/104 Consortium in 1992. An
IEEE standard corresponding to PC/104 was drafted as IEEE P996.1, but never ratified.
The 5.75 × 8.0 in Embedded Board eXpandable (EBX) specification, which was de-
rived from Ampro's proprietary Little Board form factor, resulted from a collaboration be-
tween Ampro and Motorola Computer Group.
As compared with PC/104 modules, these larger (but still reasonably embeddable)
SBCs tend to have everything of a full PC on them, including application oriented interfaces
like audio, analog, or digital I/O in many cases. Also it's much easier to fit Pentium CPUs,
whereas it's a tight squeeze (or expensive) to do so on a PC/104 SBC. Typically, EBX SBCs
contain: the CPU; upgradeable RAM subassemblies (e.g., DIMM); Flash memory for solid
state disk; multiple USB, serial, and parallel ports; onboard expansion via a PC/104 module
stack; off-board expansion via ISA and/or PCI buses (from the PC/104 connectors); network-
ing interface (typically Ethernet); and video (typically CRT, LCD, and TV).
Mini PC
Mini PC is a small form factor PC very close in size to an external CD or DVD drive.
Mini PCs have proven popular for use as HTPCs.
Examples:

AOpen XC mini
Apple Mac mini
Intel NUC
Gigabyte Brix
Zotac ZBOX
Asus Vivopc

2.2 Explain any motherboard which has a North Bridge & South Bridge
(Search Images).

North Bridge
A northbridge or host bridge is one of the two chips in the core log-
ic chipset architecture on a PC motherboard, the other being the southbridge. Unlike the
southbridge, the northbridge is connected directly to the CPU via the front-side bus (FSB) and
is thus responsible for tasks that require the highest performance. The northbridge is usually
paired with a southbridge, also known as I/O controller hub. In systems where they are in-
cluded, these two chips manage communications between the CPU and other parts of the
motherboard, and constitute the core logic chipset of the PC motherboard.
On older Intel-based PCs, the northbridge was also named the memory controller
hub (MCH) or graphics and memory controller hub (GMCH) if equipped with integrated
graphics. Increasingly these functions became integrated into the CPU chip itself, beginning
with memory and graphics controllers. For Intel Sandy Bridge and AMD Accelerated Pro-
cessing Unit processors introduced in 2011, all of the functions of the northbridge reside on
the CPU, while some high-performance CPUs still require northbridge and southbridge chips
externally.
Separating the different functions into the CPU, northbridge, and southbridge chips
was due to the difficulty of integrating all components onto a single chip. In some instances,
the northbridge and southbridge functions have been combined onto one die when design
complexity and fabrication processes permitted it; for example, the Nvidia GeForce 320M in
the 2010 MacBook Air is a northbridge/southbridge/GPU combo chip.
As CPU speeds increased over time, a bottleneck eventually emerged between the
processor and the motherboard, due to limitations caused by data transmission between the
CPU and its support chipset.[7] Accordingly, starting with the AMD Athlon64 series CPUs
(based on the Opteron), a new architecture was used where some functions of the north- and
southbridge chips were moved to the CPU. Modern Intel Core processors have the north-
bridge integrated on the CPU die, where it is known as the uncore or system agent.
Overview
The northbridge typically handles communications among the CPU, in some cas-
es RAM, and PCI Express (or AGP) video cards, and the southbridge. Some northbridges al-
so contain integrated video controllers, also known as a Graphics and Memory Controller
Hub (GMCH) in Intel systems. Because different processors and RAM require different sig-
naling, a given northbridge will typically work with only one or two classes of CPUs and
generally only one type of RAM.
There are a few chipsets that support two types of RAM (generally these are available
when there is a shift to a new standard). For example, the northbridge from
the Nvidia nForce2 chipset will only work with Socket A processors combined with DDR
SDRAM; the Intel i875 chipset will only work with systems using Pentium 4 processors
or Celeron processors that have a clock speed greater than 1.3 GHz and utilize DDR
SDRAM, and the Intel i915g chipset only works with the Intel Pentium 4 and the Celeron,
but it can use DDR or DDR2 memory.
Etymology
The name is derived from drawing the architecture in the fashion of a map. The CPU
would be at the top of the map comparable to due north on most general purpose geograph-
ical maps. The CPU would be connected to the chipset via a fast bridge (the northbridge) lo-
cated north of other system devices as drawn. The northbridge would then be connected to
the rest of the chipset via a slow bridge (the southbridge) located south of other system devic-
es as drawn.

Intel i815EP northbridge

Overclocking
The northbridge plays an important part in how far a computer can be overclocked, as
its frequency is commonly used as a baseline for the CPU to establish its own operating fre-
quency. This chip typically gets hotter as processor speed becomes faster, requiring more
cooling. There is a limit to CPU overclocking, as digital circuits are limited by physical fac-
tors such as slew rate of operational amplifiers and propagation delay, which increases with
(among other factors) operating temperature; consequently most overclocking applications
have software-imposed limits on the multiplier and external clock setting.
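The relationship described above is multiplicative: the CPU derives its operating frequency from the external (base) clock and a multiplier, so raising either raises the CPU clock. A minimal sketch of that arithmetic (the figures below are illustrative only, not taken from any specific board):

```python
# CPU operating frequency = external (base) clock x multiplier.
# Example figures are hypothetical, for illustration only.

def cpu_frequency_mhz(base_clock_mhz: float, multiplier: float) -> float:
    """Return the resulting CPU clock in MHz."""
    return base_clock_mhz * multiplier

stock = cpu_frequency_mhz(200, 11)        # e.g. 200 MHz reference clock x11
overclocked = cpu_frequency_mhz(220, 11)  # raising the external clock by 10%
print(stock, overclocked)                 # raises the CPU clock by 10% as well
```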

Evolution

A part of an IBM T42 laptop motherboard.

The overall trend in processor design has been to integrate more functions onto fewer
components, which decreases overall motherboard cost and improves performance.
The memory controller, which handles communication between the CPU and RAM, was
moved onto the processor die by AMD beginning with their AMD64 processors and by Intel
with their Nehalem processors. One of the advantages of having the memory controller inte-
grated on the CPU die is to reduce latency from the CPU to memory.
Another example of this kind of change is Nvidia's nForce3 for AMD64 systems. It
combines all of the features of a normal southbridge with an Accelerated Graphics
Port (AGP) port and connects directly to the CPU. On nForce4 boards it was marketed as a
media communications processor (MCP).
AMD Accelerated Processing Unit processors feature full integration of northbridge
functions onto the CPU chip, along with processor cores, memory controller and graphics
processing unit (GPU). This was an evolution of the AMD64, since the memory controller
was integrated on the CPU die in the AMD64.
The northbridge was replaced by the system agent introduced by the Sandy
Bridge microarchitecture in 2011, which essentially handles all previous Northbridge func-
tions.[10] Intel's Sandy Bridge processors feature full integration of northbridge functions
onto the CPU chip, along with processor cores, memory controller and graphics processing
unit (GPU). This was a further evolution of the Westmere architecture, which also featured a
CPU and GPU in the same package.
South Bridge
The southbridge is one of the two chips in the core logic chipset on a personal com-
puter (PC) motherboard, the other being the northbridge. The southbridge typically imple-
ments the slower capabilities of the motherboard in a northbridge / southbridge chipset
computer architecture. In systems with Intel chipsets, the southbridge is named I/O Control-
ler Hub (ICH), while AMD has named its southbridge Fusion Controller Hub (FCH) since the
introduction of its Fusion APUs.
The southbridge can usually be distinguished from the northbridge by not being di-
rectly connected to the CPU. Rather, the northbridge ties the southbridge to the CPU.

Through the use of controller integrated channel circuitry, the northbridge can directly link
signals from the I/O units to the CPU for data control and access.
Current status
Due to the push for system-on-a-chip (SoC) processors, modern devices increasingly
have the northbridge integrated into the CPU die itself; examples are Intel's Sandy
Bridge[1] and AMD's Fusion processors, both released in 2011. The southbridge became re-
dundant and it was replaced by the Platform Controller Hub (PCH) architecture introduced
with the Intel 5 Series chipset in 2008. All southbridge features and remaining I/O functions
are managed by the PCH which is directly connected to the CPU via the Direct Media Inter-
face (DMI).
Overview
A southbridge chipset handles all of a computer's I/O functions, such as USB, audio,
serial, the system BIOS, the ISA bus, the interrupt controller and the IDE chan-
nels.[4] Different combinations of Southbridge and Northbridge chips are possible,[5] but these
two kinds of chips must be designed to work together; there is no industry-wide standard for
interoperability between different core logic chipset designs. Traditionally, the interface be-
tween a northbridge and southbridge was the PCI bus. The main bridging interfaces used now
are DMI (Intel) and UMI (AMD).
Etymology
The name is derived from representing the architecture in the fashion of a map and
was first described as such with the introduction of the PCI Local Bus Architecture in 1991.
At Intel, the authors of the PCI specification viewed the PCI local bus as being at the very
centre of the PC platform architecture (i.e., at the Equator).
The northbridge extends to the north of the PCI bus backbone in support of CPU,
memory/cache, and other performance-critical capabilities. Likewise the southbridge extends
to the south of the PCI bus backbone and bridges to less performance-critical I/O capabilities
such as the disk interface, audio, etc.
The CPU is located at the top of the map at due north. The CPU is connected to the
chipset via a fast bridge (the northbridge) located north of other system devices as drawn. The
northbridge is connected to the rest of the chipset via a slow bridge (the southbridge) locat-
ed south of other system devices as drawn.
Although the current PC platform architecture has replaced the PCI bus backbone
with faster I/O backbones, the bridge naming convention remains.

Functionality

Diagram of an old motherboard, which supports many on-board peripheral functions
as well as several expansion slots.

The functionality found in a contemporary southbridge includes:

PCI bus. The PCI bus support includes the traditional PCI specification, but may also in-
clude support for PCI-X and PCI Express (PCIe).
ISA bus or LPC bridge. ISA support remains an integrated part of the modern south-
bridge, though ISA slots are no longer provided on more recent motherboards. The LPC
bridge provides a data and control path to the super I/O (the normal attachment for the
keyboard, mouse, parallel port, serial port, IR port, and floppy controller) and FWH
(firmware hub which provides access to BIOS flash storage).
SPI bus. The SPI bus is a simple serial bus mostly used for firmware (e.g., BIOS) flash
storage access.
SMBus. The SMBus is used to communicate with other devices on the motherboard (e.g.,
system temperature sensors, fan controllers, SPDs).
DMA controller. The DMA controller allows ISA or LPC devices direct access to main
memory without needing help from the CPU.
Interrupt controllers such as 8259A and/or I/O APIC. The interrupt controller provides a
mechanism for attached devices to get attention from the CPU.
Mass storage controllers such as PATA and/or SATA. This typically allows direct at-
tachment of system hard drives.
Real-time clock. The real time clock provides a persistent time account.
Power management (APM and ACPI). The APM or ACPI functions provide methods and
signaling to allow the computer to sleep or shut down to save power.
Nonvolatile BIOS memory. The system CMOS (BIOS configuration memory), assisted
by battery supplemental power, creates a limited non-volatile storage area for system
configuration data.
AC'97 or Intel High Definition Audio sound interface.
Out-of-band management controller such as a BMC or HECI.
Optionally, a southbridge also includes support for Ethernet, RAID, USB, audio co-
dec, and FireWire. Where support is provided for non-USB keyboard, mouse, and serial
ports, a machine normally does so through a device referred to as a Super I/O; still more rare-
ly, a southbridge may directly support the keyboard, mouse, and serial ports.
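On a Linux PC, `lspci` labels chipset devices by function: the northbridge shows up as a "Host bridge", while southbridge functions appear as ISA bridge, IDE/SATA, SMBus and similar entries. A toy sketch that sorts such output lines by chip; the sample text is hard-coded and hypothetical (run `lspci` on a real machine for live data):

```python
# Classify lspci-style output lines as northbridge- or southbridge-related.
# SAMPLE is a hypothetical transcript, not output from a real machine.
SAMPLE = """\
00:00.0 Host bridge: Intel Corporation 82815 Chipset Host Bridge
00:1f.0 ISA bridge: Intel Corporation 82801BA ISA Bridge (LPC)
00:1f.1 IDE interface: Intel Corporation 82801BA IDE Controller
00:1f.3 SMBus: Intel Corporation 82801BA/BAM SMBus Controller"""

NORTH_TAGS = ("Host bridge",)
SOUTH_TAGS = ("ISA bridge", "IDE interface", "SATA controller", "SMBus", "USB", "Audio")

def classify(line: str) -> str:
    """Label a single lspci-style line by the chip it traditionally belongs to."""
    if any(tag in line for tag in NORTH_TAGS):
        return "northbridge"
    if any(tag in line for tag in SOUTH_TAGS):
        return "southbridge"
    return "other"

for line in SAMPLE.splitlines():
    print(classify(line), "->", line)
```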

The difference between Northbridge and Southbridge in computer hardware:

"A northbridge or host bridge is a microchip on some PC motherboards and is con-


nected directly to the CPU (unlike the southbridge) and thus responsible for tasks that require
the highest performance.[1] The northbridge is usually paired with a southbridge, also known
as I/O controller hub. In systems where they are included, these two chips manage communi-
cations between the CPU and other parts of the motherboard, and constitute the core log-
ic chipset of the PC motherboard." - Direct quote from -- Northbridge (computing)
"The southbridge is one of the two chips in the core logic chipset on a personal comput-
er (PC) motherboard, the other being the northbridge. The southbridge typically implements
the slower capabilities of the motherboard in a northbridge/southbridge chipset computer ar-
chitecture. In Intel chipset systems, the southbridge is named Input/Output Controller Hub
(ICH). AMD, beginning with its Fusion APUs, has given the label FCH, or Fusion Controller
Hub, to its southbridge. The southbridge can usually be distinguished from the northbridge by
not being directly connected to the CPU. Rather, the northbridge ties the southbridge to the
CPU. Through the use of controller integrated channel circuitry, the northbridge can directly
link signals from the I/O units to the CPU for data control and access."

2.3 Explain any motherboard which supports an Intel Core i Series CPU
(e.g., Core i3, Core i5, Core i7).

Intel Desktop Board DX58SO2: an LGA1366 board built on the Intel X58 Express
chipset, supporting first-generation Intel Core i7 processors.
2.4 Compare the differences between ATI CrossFire and Nvidia SLI mother-
boards using images.

ATI Crossfire
AMD CrossFire (also known as CrossFireX) is a brand name for the multi-
GPU technology by Advanced Micro Devices, originally developed by ATI Technologies.
The technology allows up to four GPUs to be used in a single computer to improve graphics
performance.
Associated technology used in mobile computers with external graphics cards, such as
in laptops or notebooks, is called AMD Hybrid Graphics.
Configurations

AMD CrossFireX and some R700 GPUs, in the Radeon HD 4000 series
First-generation
CrossFire was first made available to the public on September 27, 2005.[3] The system
required a CrossFire-compliant motherboard with a pair of ATI Radeon PCI Express (PCIe)
graphics cards. Radeon x800s, x850s, x1800s and x1900s came in a regular edition, and a
"CrossFire Edition" which has "master" capability built into the hardware. "Master" capabil-
ity is a term used for 5 extra image compositing chips, which combine the output of both
cards. One had to buy a Master card, and pair it with a regular card from the same series. The
Master card shipped with a proprietary DVI Y-dongle, which plugged into the primary DVI
ports on both cards, and into the monitor cable. This dongle serves as the main link between
both cards, sending incomplete images between them, and complete images to the monitor.
Low-end Radeon x1300 and x1600 cards have no "CrossFire Edition" but are enabled via
software, with communication forwarded via the standard PCI Express slots on the mother-
board. ATI currently has not created the infrastructure to allow FireGL cards to be set up in a
CrossFire configuration. The "slave" graphics card needed to be from the same family as the
"master".
An example of a limitation in regard to a Master-card configuration would be the
first-generation CrossFire implementation in the Radeon X850 XT Master Card. Because it
used a compositing chip from Silicon Image (SiI 163B TMDS), the maximum resolution on
an X850 CrossFire setup was limited to 1600 × 1200 at 60 Hz, or 1920 × 1440 at 52 Hz. This
was considered a problem for CRT owners wishing to use CrossFire to play games at high
resolutions, or owners of widescreen LCD monitors. As many people found a 60 Hz refresh
rate with a CRT to strain one's eyes, the practical resolution limit became 1280 × 1024, which
did not push CrossFire enough to justify the cost. The next generation of CrossFire, as em-
ployed by the X1800 Master cards, used two sets of compositing chips and a custom double-
density dual-link DVI Y-dongle to double the bandwidth between cards, raising the maxi-
mum resolution and refresh rate to far higher levels.
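The resolution ceiling described above follows from the pixel rate the compositing chip's single TMDS link can pass. A rough sketch of the arithmetic for the modes mentioned, ignoring blanking intervals (which add further overhead in practice):

```python
# Approximate pixel rate, in megapixels per second, for the display modes above.

def pixel_rate_mpx(width: int, height: int, refresh_hz: int) -> float:
    """Raw pixel throughput of a mode, excluding blanking overhead."""
    return width * height * refresh_hz / 1e6

print(pixel_rate_mpx(1600, 1200, 60))  # ~115 Mpx/s
print(pixel_rate_mpx(1920, 1440, 52))  # ~144 Mpx/s
print(pixel_rate_mpx(1280, 1024, 60))  # ~79 Mpx/s, comfortably lower
# Doubling the link (dual-link DVI, as on the X1800 dongle) roughly doubles
# the usable pixel rate, which is why the resolution/refresh ceiling rose.
```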
Second-generation (Software CrossFire)
When used with ATI's "CrossFire Xpress 3200" motherboard chipset, the "master"
card is no longer required for every "CrossFire Ready" card (with the exception of the Rade-
on X1900 series). With the CrossFire Xpress 3200, two normal cards can be run in a Cross-
fire setup, using the PCI Express bus for communications. This is similar to X1300 Cross-
Fire, which also uses PCI Express, except that the Xpress 3200 had been built for low-latency
and high-speed communication between graphics cards.[6] While performance was impacted,
this move was viewed as an overall improvement in market strategy, because Crossfire Mas-
ter cards were expensive, in high demand, and largely unavailable at the retail level.
Although the CrossFire Xpress 3200 chipset is indeed capable of CrossFire through
the PCI Express bus for every Radeon series below the X1900s, the driver accommodations
for this CrossFire method have not yet materialized for the X1800 series. ATI has said that
future revisions of the Catalyst driver suite will contain what is required for
X1800 dongleless CrossFire, but has not yet mentioned a specific date.

Third-generation (CrossFireX)

A CrossFireX connection on
a graphics card

Top view and bottom view of a CrossFireX bridge connection

An example of CrossFire usage, with two Radeon HD 4850 cards (RV770 GPU)

With the release of the Radeon X1950 Pro (RV570 GPU), ATI has completely revised
CrossFire's connection infrastructure to further eliminate the need for past Y-dongle/Master
card and slave card configurations for CrossFire to operate. ATI's CrossFire connector is now
a ribbon-like connector attached to the top of each graphics adapter, similar to nVidi-
a's SLI bridges, but different in physical and logical natures. As such, Master Cards no longer
exist, and are not required for maximum performance. Two dongles can be used per card;
these were put to full use with the release of CrossFireX. Radeon HD 2900 and HD 3000 se-
ries cards use the same ribbon connectors, but the HD 3800 series of cards only require one
ribbon connector, to facilitate CrossFireX.[9] Unlike older series of Radeon cards, different
HD 3800 series cards can be combined in CrossFire, each with separate clock control.
Since the release of the codenamed Spider desktop platform from AMD on November
19, 2007, the CrossFire setup has been updated with support for a maximum of four video
cards with the 790FX chipset; the CrossFire branding was then changed to "ATI Cross-
FireX". The setup, which, according to internal testing by AMD, will bring at least 3.2x per-

formance increase in several games and applications which required massive graphics capa-
bilities of the computer system, is targeted to the enthusiast market.
A later development to the CrossFire infrastructure was dual-GPU cards with an on-
board PCI Express bridge, released from early 2008 in the Radeon HD 3870 X2 and later
the Radeon HD 4870 X2 graphics cards, featuring only one CrossFire connector for dual-
card, four-GPU scalability. When using two such GPUs in the same system, the HDMI ports
on the GPUs cannot both work at the same time.
An earlier CrossFireX and chipset compatibility chart is shown here. The latest com-
patibility chart, as of March 2012, shows AMD 890, 990 and A75 chipsets, as well as many
Intel chipsets including the Z68 and X79, as being compatible with CrossFireX; it also
shows which GPU cards (in the HD 5750 / 6750 / 7750, HD 5770 / 6770 / 7770, and HD 58 /
59 / 68 / 69 / 78 / 79 series) may be paired with an external bridge (the new HD 7750 and HD
7770 cards may be paired without an external bridge).
Current generation (XDMA)

XDMA may be similar to the AMD DirectGMA (Direct Graphics Memory Access)
found on the AMD FirePro-branded product line.

The Radeon R9 290 and R9 290X graphics cards (based on Graphics Core Next 1.1
"Volcanic Islands") as well as GPUs using newer versions of GCN no longer have bridging
ports. Instead, they use XDMA to open a direct channel of communication between the mul-
tiple GPUs in a system, operating over the same PCI Express bus which is used by AMD
Radeon graphics cards.
PCI Express 3.0 lanes provide up to 17.5 times higher bandwidth (15.754 GB/s for a x16 slot) when compared to the current external bridges (900 MB/s), rendering the use of a CrossFire bridge unnecessary. Thus, XDMA was selected to meet the greater GPU interconnection bandwidth demands generated by AMD Eyefinity and, more recently, by 4K resolution monitors. The bandwidth of the data channel opened by XDMA is fully dynamic, scaling itself with the demands of the game being played, as well as adapting to advanced user settings such as vertical synchronization (vsync).
Additionally, some newer cards are capable of pairing with 7000-series cards based
on the Graphics Core Next 1.0 "Southern Islands" architecture. For example, an R9-280X
card can be used in a CrossFireX setup together with a HD 7970 card, largely due to them
being the same product at different clock rates.
Hybrid CrossFireX (dual graphics)
There is also a hybrid mode of CrossFireX that combines on-board graphics using the AMD northbridge architecture with select graphics cards for increased performance. The current generation, called Hybrid CrossFireX, is available for motherboards with integrated AMD chipsets in the 7 and 8 series GPUs.
This combination yields power savings when simple or 2D graphics are used, and performance increases of 25% to over 200% in 3D graphics compared with a non-CrossFire option. As of March 2012, this appears to be called "AMD Radeon Dual Graphics" and means using A-series Fusion APUs together with video cards.
Comparisons to Nvidia SLI
Similarities:
In some cases CrossFire doesn't improve 3D performance; in some extreme cases, it can even lower the framerate due to the particulars of an application's coding. This is also true for Nvidia's SLI, as the problem is inherent in multi-GPU systems. It is often witnessed when running an application at low resolutions.
When using CrossFire with AFR, the subjective framerate can often be lower than the
framerate reported by benchmarking applications, and may even be poorer than the frame rate
of its single-GPU equivalent. This phenomenon is known as micro stuttering and also applies
to SLI since it is inherent to multi-GPU configurations.
Advantages:
CrossFire can be implemented with varying-GPU cards of the same generation (this is
in contrast to Nvidia's SLI, which generally only works if all cards have the same GPU). This
allows buyers who have varying budgets over time to purchase different cards and still get
the benefits of increased performance. With the latest generation of cards, they will only CrossFire with other cards in their sub-series: GPUs in the same hundred series can be CrossFired with each other. So a 5800 series GPU (e.g. a 5830) can run together with another 5800 series GPU (e.g. a 5870). However, GPUs not in the same hundred series cannot be CrossFired successfully (e.g. a 5770 cannot run with a 5870). The only exception is that HD 7870 XT cards can be used with an HD 7900 series GPU (e.g. a 7950) in a CrossFire configuration because they feature the same GPU.
ATI CrossFire configurations can run many monitors of varying size and resolution,
while SLI only allows three monitors (the exception being Nvidia Surround, which enables
connection of up to four 2D displays and three 3D displays, although all displays must be the
same resolution for this to work).
Disadvantages:
The first-generation CrossFire implementations (the Radeon X800 to X1900 series) required an external Y-cable/dongle to operate in CrossFire mode, because the PCI Express bus could not provide enough bandwidth to run CrossFire without losing a significant amount of performance. CrossFire works only in fullscreen mode, not in windowed mode.
Nvidia SLI
Scalable Link Interface (SLI) is a brand name for a multi-GPU technology developed
by NVIDIA for linking two or more video cards together to produce a single output. SLI is an
algorithm of parallel processing for computer graphics, meant to increase the processing
power available for graphics.[1] The initialism SLI was first used by 3dfx for Scan-Line Inter-
leave, which was introduced to the consumer market in 1998 and used in the Voodoo2 line of
video cards. After buying out 3dfx, NVIDIA acquired the technology but did not use it.
NVIDIA later reintroduced the SLI name in 2004, intending it to be used in modern computer systems based on the PCI Express (PCIe) bus; however, the technology behind the name has changed dramatically.
Example of 3-way SLI using a rigid bridging connector.
Simple 2-way SLI bridge.
A dual-GPU graphics card for a high-performance gaming laptop, utilizing on-board SLI.
Computer with 2-way SLI graphics cards installed.
Implementation
SLI allows two, three, or four graphics processing units (GPUs) to share the workload when rendering real-time 3D computer graphics. Ideally, identical GPUs are installed on a motherboard that contains enough PCI Express slots, set up in a master-slave configuration. All graphics cards are given an equal workload to render, but the final output of each card is sent to the master card via a connector called the SLI bridge. For example, in a two-card setup the master works on the top half of the scene and the slave on the bottom half. Once the slave is done, it sends its render to the master, which combines the two halves into one image before sending it to the monitor.
The SLI bridge is used to reduce bandwidth constraints and send data between both
graphics cards directly. It is possible to run SLI without using the bridge connector on a pair
of low-end to mid-range graphics cards (e.g. 7100GS or 6600GT) with NVIDIA's Forceware
drivers 80.XX or later. Since these graphics cards do not use as much bandwidth, data can be
relayed through just the chipsets on the motherboard. However, if there are two high-
end graphics cards installed and the SLI bridge is omitted, the performance will suffer severe-
ly as the chipset does not have enough bandwidth.
Configurations currently include:
- 2-way, 3-way, and 4-way SLI, using two, three, or four individual graphics cards respectively.
- Two GPUs on one graphics card. Examples include the GeForce GTX 590, GeForce GTX 690 and the GeForce GTX Titan Z. This configuration has the advantage of implementing two-way SLI while only occupying one PCI Express slot and (usually) two expansion I/O slots. It also allows four-way SLI using only two cards (referred to as Quad SLI).
NVIDIA has created a set of custom video game profiles in cooperation with video
game publishers that will automatically enable SLI in the mode that gives the largest perfor-
mance boost.
NVIDIA has three types of SLI bridges:
I. Standard Bridge (400 MHz pixel clock and 1 GB/s bandwidth)
II. LED Bridge (540 MHz pixel clock)
III. High-Bandwidth Bridge (650 MHz pixel clock)
The Standard Bridge is traditionally included with motherboards that support SLI and is recommended for monitors up to 1920x1080 and 2560x1440@60 Hz. The LED Bridge is sold by NVIDIA, EVGA, and others, and is recommended for monitors up to 2560x1440@120 Hz+ and 4K; LED Bridges can only function at the increased pixel clock if the GPU supports that clock. The High-Bandwidth Bridge is only sold by NVIDIA and is recommended for monitors up to 5K and Surround.
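As a rough illustration of why faster monitors need the faster bridges, a display mode's raw pixel rate can be compared against the bridge pixel clocks quoted above. This sketch deliberately ignores blanking intervals and other protocol overhead, so the numbers are approximations, not a definitive compatibility rule:

```python
# Compare a display mode's raw pixel rate with the bridge pixel clocks
# quoted in the text (400/540/650 MHz). Blanking overhead is ignored.
def pixel_rate_mhz(width, height, refresh_hz):
    """Raw pixels per second for a display mode, in MHz."""
    return width * height * refresh_hz / 1e6

STANDARD_BRIDGE_MHZ = 400
LED_BRIDGE_MHZ = 540

for name, (w, h, hz) in {
    "1920x1080@60": (1920, 1080, 60),
    "2560x1440@60": (2560, 1440, 60),
    "2560x1440@120": (2560, 1440, 120),
    "3840x2160@60": (3840, 2160, 60),
}.items():
    rate = pixel_rate_mhz(w, h, hz)
    bridge = ("Standard" if rate <= STANDARD_BRIDGE_MHZ
              else "LED" if rate <= LED_BRIDGE_MHZ
              else "High-Bandwidth")
    print(f"{name}: ~{rate:.0f} MHz -> {bridge} bridge or better")
```

Consistent with the recommendations above, 2560x1440@60 (~221 MHz) fits the Standard Bridge, while 2560x1440@120 (~442 MHz) and 4K@60 (~498 MHz) push past it toward the LED Bridge.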
SLI Modes
Split Frame Rendering (SFR):
This analyzes the rendered image in order to split the workload equally between the
two GPUs. To do this, the frame is split horizontally in varying ratios depending on geome-
try. For example, in a scene where the top half of the frame is mostly empty sky, the dividing
line will lower, balancing geometry workload between the two GPUs. This method does not
scale geometry or work as well as AFR, however.
Alternate Frame Rendering (AFR):
Each GPU renders entire frames in sequence. For example, in a two-way setup, one GPU renders the odd frames and the other the even frames, one after the other. Finished outputs are sent to the master for display. Ideally, this would result in the rendering time being cut by the number of GPUs available. In their advertising, NVIDIA claims up to 1.9x the performance of one card with the two-way setup. While AFR may produce higher overall framerates than SFR, it also exhibits the temporal artifact known as micro-stuttering, which may affect frame rate perception. Note that while the frequency at which frames arrive may be doubled, the time to produce each frame is not reduced, which means that AFR is not a viable method of reducing input lag.
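The odd/even frame assignment described above amounts to simple round-robin dispatch. A minimal sketch (the function name is hypothetical; real drivers schedule this internally):

```python
# Minimal sketch of Alternate Frame Rendering (AFR) dispatch:
# each GPU renders whole frames in turn, round-robin.
def afr_assign(num_frames, num_gpus):
    """Return which GPU renders each frame under round-robin AFR."""
    return [frame % num_gpus for frame in range(num_frames)]

# Two-way SLI: GPU 0 takes the even frames, GPU 1 the odd frames.
print(afr_assign(6, 2))  # [0, 1, 0, 1, 0, 1]
```

Each frame still takes one GPU's full render time, which is why AFR can raise throughput (frames per second) without reducing the latency of any individual frame.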
SLI Antialiasing
This is a standalone rendering mode that offers up to double the antialiasing perfor-
mance by splitting the antialiasing workload between the two graphics cards, offering superi-
or image quality. One GPU performs an antialiasing pattern which is slightly offset to the
usual pattern (for example, slightly up and to the right), and the second GPU uses a pattern
offset by an equal amount in the opposite direction (down and to the left). Compositing both
the results gives higher image quality than is normally possible. This mode is not intended for
higher frame rates, and can actually lower performance, but is instead intended for games
which are not GPU-bound, offering a clearer image in place of better performance. When en-
abled, SLI Antialiasing offers advanced antialiasing options: SLI 8X, SLI 16X, and SLI 32x
(for Quad SLI systems only).
Hybrid SLI
Hybrid SLI is the generic name for two technologies, GeForce Boost and HybridPower. GeForce Boost allows the rendering power of an IGP and a discrete GPU to be combined in order to increase performance.
HybridPower, on the other hand, is a mode aimed not at performance enhancement but at power saving. The setup consists of an IGP as well as a GPU on an MXM module. The IGP assists the GPU to boost performance while the laptop is plugged into a power socket, while the MXM module is shut down when the laptop is unplugged, lowering overall graphics power consumption. Hybrid SLI is also available on desktop motherboards and PCs with PCI-E discrete video cards. NVIDIA claims that twice the performance can be achieved with a Hybrid SLI capable IGP motherboard and a GeForce 8400 GS video card. HybridPower was later renamed Nvidia Optimus.
SLI HB
In May 2016 Nvidia announced that the GeForce 10 series would feature a new SLI
HB (High Bandwidth) bridge; this bridge uses 2 SLI fingers on the PCB of each card and es-
sentially doubles the available bandwidth between them. Currently, only GeForce 10 series
cards support SLI HB and only 2-way SLI is supported over this bridge for single-GPU cards.
The SLI HB interface runs at 650 MHz, while the legacy SLI interface runs at a slower 400 MHz.
Electrically there is little difference between the regular SLI bridge and the SLI HB
bridge. It is similar to two regular bridges combined in one PCB. The signal quality of the
bridge improved, however, as the SLI HB bridge has an adjusted trace-length to make sure all
traces on the bridge have exactly the same length.

Caveats
- Not all motherboards with multiple PCI Express x16 slots support SLI. Recent motherboards as of May 2016 that support it are Intel's Z- and X-series chipsets (Z68, Z77, Z87, Z97, Z170, X79 and X99) and AMD's 990FX chipset.[16] Aside from a few exceptions, older motherboards needed certain models of nForce chipsets to support SLI.
- In an SLI configuration, cards can be of mixed manufacturers, card model names, BIOS revisions or clock speeds. However, they must be of the same GPU series (e.g. 8600, 8800) and GPU model name (e.g. GT, GTS, GTX). There are rare exceptions for "mixed SLI" configurations on some cards that only have a matching core codename (e.g. G70, G73, G80, etc.), but this is otherwise not possible, and only happens when two matched cards differ only very slightly, for example in the amount of video memory, stream processors, or clock speed. In this case, the slower/lesser card becomes dominant,
and the other card matches it. Another exception is the GTS 250, which can be paired with the 9800 GTX+, as the GTS 250 GPU is a rebadged 9800 GTX+ GPU.
- In cases where two cards are not identical, the faster card, or the card with more memory, will run at the speed of the slower card or disable its additional memory. (Note that while the FAQ still claims different memory size support, the support has been removed since revision 100.xx of NVIDIA's Forceware driver suite.)
- SLI does not always give a performance benefit; in some extreme cases, it can lower the frame rate due to the particulars of an application's coding. This is also true for AMD's CrossFire, as the problem is inherent in multi-GPU systems. It is often witnessed when running an application at low resolutions.
- Vsync + triple buffering is not supported in some cases in SLI AFR mode.
- Users with a Hybrid SLI setup must manually change modes between HybridPower and GeForce Boost; automatic mode switching will not be available until future updates. Hybrid SLI currently supports only single-link DVI at 1920x1200 screen resolution.
- When using SLI with AFR, the subjective framerate can often be lower than the framerate reported by benchmarking applications, and may even be poorer than the framerate of its single-GPU equivalent. This phenomenon is known as micro-stuttering and also applies to CrossFire since it is inherent to multi-GPU configurations.
Task 03
3.1 Search images for different back side connections of CD/DVD and Blue
Ray.
In computing, an optical disc drive (ODD) is a disc drive that uses laser light or electromagnetic waves within or near the visible light spectrum as part of the process of reading or writing data to or from optical discs. Some drives can only read from certain discs, but recent drives can both read and record; these are also called burners or writers. Compact discs, DVDs, and Blu-ray discs are common types of optical media which can be read and recorded by such drives. Optical disc drives that are no longer in production include the CD-ROM drive, CD writer drive, combo (CD-RW/DVD-ROM) drive, and DVD writer drives supporting only certain recordable and rewritable DVD formats (such as DVD-R(W) only, DVD+R(W) only, DVD-RAM only, or all DVD formats except DVD-R DL). As of 2015, a DVD writer drive supporting all existing recordable and rewritable DVD formats is the most common for desktop PCs and laptops. There are also the DVD-ROM drive, BD-ROM drive, Blu-ray Disc combo (BD-ROM/DVD±RW/CD-RW) drive, and Blu-ray Disc writer drive.
Optical disc drives are an integral part of standalone appliances such as CD players,
VCD players, DVD players, Blu-ray disc players, DVD recorders, certain desktop video
game consoles, such as Sony PlayStation 4, Microsoft Xbox One, Nintendo Wii U, and Sony
PlayStation 3, and certain portable video game consoles, such as Sony PlayStation Portable.
They are also very commonly used in computers to read software and consumer media distributed on disc, and to record discs for archival and data exchange purposes. Floppy disk drives, with a capacity of 1.44 MB, have been made obsolete: optical media are cheap and have vastly higher capacity to handle the large files used since the days of floppy discs, and the vast majority of computers and much consumer entertainment hardware have optical writers. USB flash drives, being high-capacity, small, and inexpensive, are suitable where read/write capability is required.
Disc recording is restricted to storing files playable on consumer appliances (films,
music, etc.), relatively small volumes of data (e.g. a standard DVD holds 4.7 gigabytes) for
local use, and data for distribution, but only on a small scale; mass-producing large numbers
of identical discs is cheaper and faster than individual recording.
Optical discs are used to backup relatively small volumes of data, but backing up of
entire hard drives, which as of 2015 typically contain many hundreds of gigabytes or even
multiple terabytes, is less practical. Large backups are often instead made on external hard
drives, as their price has dropped to a level making this viable; in professional environments
magnetic tape drives are also used.
History
The first laser disc, demonstrated in 1972, was the LaserVision 12-inch video disc. The video signal was stored in an analog format, like a video cassette. The first digitally recorded optical disc was a 5-inch audio compact disc (CD) in a read-only format created by Sony and Philips in 1975.[1]
The CD-ROM format was developed by Sony and Denon and introduced in 1984 as an extension of Compact Disc Digital Audio, adapted to hold any form of digital data. The CD-ROM has a storage capacity of 650 MB. Also in 1984, Sony introduced a LaserDisc data storage format with a larger data capacity of 3.28 GB. In 1987, Sony demonstrated the erasable and rewritable 5.25-inch optical drive.
The first Blu-Ray prototype was unveiled by Sony in October 2000, and the first
commercial recording device was released to market on April 10, 2003. In January
2005, TDK announced that they had now developed an ultra-hard yet very thin polymer coat-
ing ("Durabis") for Blu-ray discs; this was a significant technical advance because a far
tougher protection was desired in the consumer market to protect bare discs against scratch-
ing and damage compared to DVD, while technically Blu-ray Disc required a
much thinner layer for the denser and higher frequency blue laser. The first BD-ROM players
(Samsung BD-P1000) were shipped in mid-June 2006. The first Blu-ray Disc titles were re-
leased by Sony and MGM on June 20, 2006. The first mass-market Blu-ray Disc rewritable
drive for the PC was the BWU-100A, released by Sony on July 18, 2006.
Key components
Laser and optics
The most important part of an optical disc drive is the optical path, placed in a pickup head (PUH),[9] usually consisting of a semiconductor laser, a lens for guiding the laser beam, and photodiodes detecting the light reflected from the disc's surface. Initially, CD lasers with a wavelength of 780 nm were used, within the infrared range. For DVDs, the wavelength was reduced to 650 nm (red), and the wavelength for Blu-ray Disc was reduced to 405 nm (violet). Two main servomechanisms are used: the first maintains the correct distance between lens and disc and ensures the laser beam is focused on a small laser spot on the disc; the second moves the head along the disc's radius, keeping the beam on the groove, a continuous spiral data path.
The optical sensor out of a CD/DVD drive. The two larger rectangles are the photodiodes for pits, the inner one for land; this one also includes amplification and minor processing.
On read only media (ROM), during the manufacturing process the groove, made
of pits, is pressed on a flat surface, called land. Because the depth of the pits is approximately
one-quarter to one-sixth of the laser's wavelength, the reflected beam's phase is shifted in re-
lation to the incoming reading beam, causing mutual destructive interference and reducing
the reflected beam's intensity. This is detected by photodiodes that output electrical signals.
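The quarter-wavelength figure is what produces the destructive interference: light reflected from the bottom of a pit travels roughly an extra half wavelength (down and back up), arriving about half a cycle out of phase with light reflected from the surrounding land. A short sketch of that arithmetic:

```python
import math

# A pit of depth ~lambda/4 adds a round-trip path difference of ~lambda/2,
# i.e. a phase shift of ~pi radians, so reflections from pit and land
# largely cancel and the detected intensity drops.
def phase_shift_rad(pit_depth_nm, wavelength_nm):
    """Phase shift between light reflected from a pit vs the land."""
    round_trip_nm = 2 * pit_depth_nm
    return 2 * math.pi * round_trip_nm / wavelength_nm

CD_WAVELENGTH_NM = 780  # infrared CD laser, as given in the text
quarter_wave_pit = CD_WAVELENGTH_NM / 4
print(phase_shift_rad(quarter_wave_pit, CD_WAVELENGTH_NM) / math.pi)  # 1.0
```

A pit one quarter wavelength deep thus shifts the reflection by exactly pi radians; the one-sixth figure in the text gives a smaller, but still detectable, intensity reduction.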
A recorder encodes (or burns) data onto a recordable CD-R, DVD-R, DVD+R, or BD-
R disc (called a blank) by selectively heating parts of an organic dye layer with a laser. This
changes the reflectivity of the dye, thereby creating marks that can be read like the pits and
lands on pressed discs. For recordable discs, the process is permanent and the media can be
written to only once. While the reading laser is usually not stronger than 5 mW, the writing laser is considerably more powerful. The higher the writing speed, the less time the laser has to heat a point on the media, so its power has to increase proportionally. DVD burners' lasers often peak at about 200 mW, either in continuous wave or in pulses, although some have been driven up to 400 mW before the diode fails.
For rewritable CD-RW, DVD-RW, DVD+RW, DVD-RAM, or BD-RE media, the la-
ser is used to melt a crystalline metal alloy in the recording layer of the disc. Depending on
the amount of power applied, the substance may be allowed to melt back (change
the phase back) into crystalline form or left in an amorphous form, enabling marks of varying
reflectivity to be created.
Double-sided media may be used, but they are not easily accessed with a standard
drive, as they must be physically turned over to access the data on the other side.
Double layer (DL) media have two independent data layers separated by a semi-
reflective layer. Both layers are accessible from the same side, but require the optics to
change the laser's focus. Traditional single-layer (SL) writable media are produced with a spiral groove molded into the protective polycarbonate layer (not in the data recording layer) to guide and synchronize the speed of the recording head. Double-layered writable media have: a first polycarbonate layer with a (shallow) groove, a first data layer, a semi-reflective layer, a second (spacer) polycarbonate layer with another (deep) groove, and a second data layer. The first groove spiral usually starts on the inner edge and extends outwards, while the second groove starts on the outer edge and extends inwards.[11][12]
Some drives support Hewlett-Packard's LightScribe photothermal printing technology
for labeling specially coated discs.
Rotational mechanism
A CD-ROM drive (without case).
Comparison of several forms of disk storage showing tracks (not to scale); green denotes start and red denotes end. Some CD-R(W) and DVD-R(W)/DVD+R(W) recorders operate in ZCLV, CAA or CAV modes.

The rotational mechanism in an optical drive differs considerably from that of a hard disk drive, in that the latter keeps a constant angular velocity (CAV), in other words a constant number of revolutions per minute (RPM). With CAV, a higher throughput is generally achievable at the outer disc compared to the inner.
On the other hand, optical drives were developed with the goal of achieving a constant throughput, in CD drives initially equal to 150 KiB/s. This was an important feature for streaming audio data, which always tends to require a constant bit rate. But to ensure no disc capacity was wasted, the head also had to transfer data at the maximum linear rate at all times, without slowing on the outer rim of the disc. This led to optical drives, until recently, operating with a constant linear velocity (CLV): the spiral groove of the disc passes under the head at a constant speed. The implication of CLV, as opposed to CAV, is that the disc's angular velocity is no longer constant, and the spindle motor has to be designed to vary its speed between about 200 RPM on the outer rim and 500 RPM on the inner.
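The 200-500 RPM range can be reproduced from typical CD geometry. This sketch assumes a 1x linear velocity of about 1.3 m/s and a data area spanning roughly 25 mm to 58 mm radius; these are typical values, not figures taken from the text:

```python
import math

# Under CLV the groove passes the head at a fixed linear velocity v,
# so the spindle speed is RPM = v / (2 * pi * r) * 60 at radius r.
def clv_rpm(linear_velocity_m_s, radius_m):
    """Spindle speed (RPM) needed to keep a constant linear velocity."""
    return linear_velocity_m_s / (2 * math.pi * radius_m) * 60

V_1X = 1.3  # m/s, typical 1x CD linear velocity (assumed)
print(round(clv_rpm(V_1X, 0.025)))  # inner edge (~25 mm): ~497 RPM
print(round(clv_rpm(V_1X, 0.058)))  # outer edge (~58 mm): ~214 RPM
```

The inner-edge and outer-edge results land close to the 500 RPM and 200 RPM endpoints quoted above.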
Later CD drives kept the CLV paradigm but evolved to achieve higher rotational speeds, popularly described in multiples of a base speed. As a result, a 4x drive, for instance, would rotate at 800-2000 RPM while transferring data steadily at 600 KiB/s, which is equal to 4 x 150 KiB/s.
For DVDs, the base or 1x speed is 1.385 MB/s, equal to 1.32 MiB/s, approximately nine times faster than the CD base speed. For Blu-ray drives, the base speed is 6.74 MB/s, equal to 6.43 MiB/s.
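These base rates and speed multiples tie together with a little arithmetic; this is a sketch using the figures quoted above (1 KiB = 1024 bytes, 1 MB = 10^6 bytes):

```python
# Base ("1x") transfer rates quoted in the text.
CD_BASE_KIB_S = 150    # CD: 150 KiB/s at 1x
DVD_BASE_MB_S = 1.385  # DVD: 1.385 MB/s at 1x
BD_BASE_MB_S = 6.74    # Blu-ray: 6.74 MB/s at 1x

def cd_rate_kib_s(multiple):
    """Sustained CD transfer rate at a given speed multiple."""
    return multiple * CD_BASE_KIB_S

cd_1x_mb_s = CD_BASE_KIB_S * 1024 / 1e6      # 150 KiB/s ~= 0.154 MB/s
print(cd_rate_kib_s(4))                      # 600 KiB/s for a 4x drive
print(round(DVD_BASE_MB_S / cd_1x_mb_s, 1))  # 9.0: DVD 1x vs CD 1x
```

The ratio confirms the "approximately nine times faster" figure for DVD over CD.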
The Z-CLV recording pattern is easily visible after burning a DVD-R.
Because keeping a constant transfer rate for the whole disc is not so important in most contemporary CD uses, a pure CLV approach had to be abandoned to keep the rotational speed of the disc safely low while maximizing the data rate. Some drives work in a partial CLV (PCLV) scheme, switching from CLV to CAV only when a rotational limit is reached. But switching to CAV requires considerable changes in hardware design, so instead most drives use the zoned constant linear velocity (Z-CLV) scheme. This divides the disc into several zones, each having its own constant linear velocity. A Z-CLV recorder rated at "52x", for example, would write at 20x on the innermost zone and then progressively increase the speed in several discrete steps up to 52x at the outer rim. Without higher rotational speeds, increased read performance may be attainable by simultaneously reading more than one point of a data groove, but drives with such mechanisms are more expensive, less compatible, and very uncommon.
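The zone-stepping behaviour described above can be sketched as a lookup from radial position to write speed. The zone boundaries below are purely illustrative, not taken from any real drive's specification:

```python
# Sketch of zoned CLV (Z-CLV): the disc is divided into zones, each with
# its own constant write speed, stepping up toward the outer rim.
def zclv_speed(radius_mm, zones):
    """Return the write-speed multiple for a given radial position."""
    for outer_edge_mm, speed in zones:
        if radius_mm <= outer_edge_mm:
            return speed
    raise ValueError("radius outside disc data area")

# A hypothetical "52x" CD recorder: 20x innermost, stepping up to 52x.
# Each entry is (zone outer edge in mm, speed multiple) - illustrative only.
ZONES = [(30, 20), (38, 32), (46, 40), (58, 52)]

print(zclv_speed(25, ZONES))  # innermost zone: 20
print(zclv_speed(55, ZONES))  # outer rim: 52
```

A real drive switches zones at fixed sector addresses rather than physical radii, but the effect is the same: several discrete CLV plateaus instead of one.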
Limit
An exploded disc.
Both DVDs and CDs have been known to explode[14] when damaged or spun at excessive speed. This imposes a constraint on the maximum speed (56x for CDs, or around 18x in the case of DVDs) at which drives can operate.
Loading mechanisms
Current optical drives use either a tray-loading mechanism, where the disc is loaded
onto a motorized or manually operated tray, or a slot-loading mechanism, where the disc is
slid into a slot and drawn in by motorized rollers. With both types of mechanism, if a CD or
DVD is left in the drive after the computer is turned off, the disc cannot be ejected using the
normal eject mechanism of the drive. However, tray-loading drives account for this situation
by providing a small hole where one can insert a straightened paperclip to manually open the
drive tray to retrieve the disc. Slot-loading optical disc drives have the disadvantages that
they cannot usually accept the smaller 80 mm discs (unless an 80 mm optical disc adapter is used) or any non-standard sizes, usually have no emergency eject hole or eject button, and therefore have to be disassembled if the optical disc cannot be ejected normally. However, the Nintendo Wii, because of backward compatibility with Nintendo GameCube games,[15] and PlayStation 3[16] video game consoles are able to load standard-size DVDs and 80 mm discs in the same slot-loading drive.
A small number of drive models, mostly compact portable units, have a top-
loading mechanism where the drive lid is opened upwards and the disc is placed directly onto
the spindle[17] (for example, all PlayStation One consoles, portable CD players, and some
standalone CD recorders all feature top-loading drives). These sometimes have the advantage
of using spring-loaded ball bearings to hold the disc in place, minimizing damage to the disc
if the drive is moved while it is spun up.
Some early CD-ROM drives used a mechanism where CDs had to be inserted into special cartridges or caddies, somewhat similar in appearance to a 3.5" floppy diskette. This was intended to protect the disc from accidental damage by enclosing it in a tougher plastic casing, but did not gain wide acceptance due to the additional cost and compatibility concerns; such drives would also inconveniently require "bare" discs to be manually inserted into an openable caddy before use. Ultra Density Optical and Universal Media Disc use optical disc cartridges. There were also some early CD-ROM drives for desktop PCs whose tray-loading mechanism would eject slightly, requiring the user to pull out the tray manually to load a CD, similar to the tray-ejecting method used in the internal optical disc drives of modern laptops and modern external slim portable optical disc drives. Like the top-loading mechanism, they have spring-loaded ball bearings on the spindle.
Computer interfaces
Most internal drives for personal computers, servers and workstations are designed to fit in a
standard 5.25" drive bay and connect to their host via an ATA or SATA interface. Addition-
ally, there may be digital and analog outputs for audio. The outputs may be connected via a
header cable to the sound card or the motherboard. At one time, computer software resembling CD players controlled playback of the CD. Today the information is extracted from the disc as data, to be played back or converted to other file formats.
External drives usually have USB or FireWire interfaces. Some portable versions for laptops
power themselves from batteries or directly from their interface bus.
Digital audio output, analog audio output, and parallel ATA interface.
Drives with SCSI interface were made, but they are less common and tend to be more
expensive, because of the cost of their interface chipsets, more complex SCSI connectors, and
small volume of sales.
When the optical disc drive was first developed, it was not easy to add to computer
systems. Some computers such as the IBM PS/2 were standardizing on the 3.5" floppy and
3.5" hard disk, and did not include a place for a large internal device. Also IBM PCs and
clones at first only included a single (parallel) ATA drive interface, which, by the time the CD-ROM was introduced, was already being used to support two hard drives. Early laptops simply had no built-in high-speed interface for supporting an external storage device.
This was solved through several techniques:
- Early sound cards could include a CD-ROM drive interface. Initially, such interfaces were proprietary to each CD-ROM manufacturer; a sound card could often have two or three different interfaces able to communicate with CD-ROM drives.
- A parallel-port external drive was developed that connected between a printer and the computer. This was slow, but an option for laptops.
- A PCMCIA optical drive interface was also developed for laptops.
- A SCSI card could be installed in desktop PCs for an external SCSI drive enclosure, though SCSI was typically much more expensive than other options.
Internal mechanism of a drive
The optical drives in the photos are shown right side up; the disc would sit on top of them. The laser and optical system scans the underside of the disc. With reference to the top photo, just to the right of image center is the disc motor, a metal cylinder with a gray centering hub and black rubber drive ring on top. There is a disc-shaped round clamp, loosely held
inside the cover and free to rotate; it is not in the photo. After the disc tray stops moving inward, as the motor and its attached parts rise, a magnet near the top of the rotating assembly contacts and strongly attracts the clamp to hold and center the disc. This motor is an "outrunner"-style brushless DC motor which has an external rotor: every visible part of it spins.

Internal mechanism of a DVD-ROM
Two parallel guide rods that run between upper left and lower right in the photo carry
the "sled", the moving optical read-write head. As shown, this "sled" is close to, or at the po-
sition where it reads or writes at the edge of the disc. To move the "sled" during continuous
read or write operations, a stepper motor rotates a leadscrew to move the "sled" throughout its
total travel range. The motor itself is the short gray cylinder just to the left of the most-distant shock mount; its shaft is parallel to the support rods. The leadscrew is the rod with evenly spaced darker details; these are the helical grooves that engage a pin on the "sled".
In contrast, the mechanism shown in the second photo, which comes from a cheaply
made DVD player, uses less accurate and less efficient brushed DC motors to both move the
sled and spin the disc. Some older drives use a DC motor to move the sled, but also have a
magnetic rotary encoder to keep track of the position. Most drives in computers use stepper
motors.
The gray metal chassis is shock-mounted at its four corners to reduce sensitivity to ex-
ternal shocks, and to reduce drive noise from residual imbalance when running fast. The soft
shock mount grommets are just below the brass-colored screws at the four corners (the left
one is obscured).
In the third photo, the components under the cover of the lens mechanism are visible.
The two permanent magnets on either side of the lens holder as well as the coils that move
the lens can be seen. This allows the lens to be moved up, down, forwards, and backwards to
stabilize the focus of the beam.
In the fourth photo, the inside of the optics package can be seen. Note that since this is
a CD-ROM drive, there is only one laser, which is the black component mounted to the bot-
tom left of the assembly. Just above the laser are the first focusing lens and prism that direct
the beam at the disc. The tall, thin object in the center is a half-silvered mirror that splits the
laser beam in multiple directions. To the bottom right of the mirror is the
main photodiode that senses the beam reflected off the disc. Above the main photodiode is a
second photodiode that is used to sense and regulate the power of the laser.

The irregular orange material is flexible etched copper foil supported by thin sheet
plastic; these are "flexible printed circuits" that connect everything to the electronics (which
is not shown).
Compatibility
Most optical drives are backward compatible with their ancestors up to CD, although
this is not required by standards.
Compared to a CD's 1.2 mm layer of polycarbonate, a DVD's laser beam only has to
penetrate 0.6 mm in order to reach the recording surface. This allows a DVD drive to focus
the beam on a smaller spot size and to read smaller pits. The DVD lens supports a different focus depth for CD and DVD media using the same laser. With the newer Blu-ray disc drives, the laser only has
to penetrate 0.1 mm of material. Thus the optical assembly would normally have to have an
even greater focus range. In practice, the Blu-ray optical system is separate from the
DVD/CD system.

Optical drive and disc/media compatibility
(The CD, DVD and BD columns denote pressed, read-only discs; bracketed numbers refer to the notes below. In the DVD±RW/+R DL row, the DVD+R DL cell is implied by the drive's name.)

Drive \ Media           CD        CD-R       CD-RW      DVD   DVD-R      DVD+R      DVD-RW    DVD+RW   DVD+R DL  BD    BD-R   BD-RE  BD-R DL  BD-RE DL
Audio CD player         Read      Read [1]   Read [2]   None  None       None       None      None     None      None  None   None   None     None
CD-ROM drive            Read      Read [1]   Read [2]   None  None       None       None      None     None      None  None   None   None     None
CD-R recorder           Read      Write      Read       None  None       None       None      None     None      None  None   None   None     None
CD-RW recorder          Read      Write      Write      None  None       None       None      None     None      None  None   None   None     None
DVD-ROM drive           Read      Read [3]   Read [3]   Read  Read [4]   Read [4]   Read [4]  Read [4] Read [5]  None  None   None   None     None
DVD-R recorder          Read      Write      Write      Read  Write      Read [6]   Read      Read [6] Read [5]  None  None   None   None     None
DVD-RW recorder         Read      Write      Write      Read  Write      Read [7]   Write [8] Read [6] Read [5]  None  None   None   None     None
DVD+RW recorder         Read      Write      Write      Read  Read [6]   Read [9]   Read [6]  Write    Read [5]  None  None   None   None     None
DVD+R recorder          Read      Write      Write      Read  Read [6]   Write      Read [6]  Write    Read [5]  None  None   None   None     None
DVD±RW recorder         Read      Write      Write      Read  Write      Write      Write     Write    Read [5]  None  None   None   None     None
DVD±RW/+R DL recorder   Read      Write      Write      Read  Write [10] Write [10] Write     Write    Write     None  None   None   None     None
BD-ROM drive            Read      Read       Read       Read  Read       Read       Read      Read     Read      Read  Read   Read   Read     Read
BD-R recorder           Read [11] Write [11] Write [11] Read  Write      Write      Write     Write    Write     Read  Write  Read   Read     Read
BD-RE recorder          Read [11] Write [11] Write [11] Read  Write      Write      Write     Write    Write     Read  Write  Write  Read     Read
BD-R DL recorder        Read [11] Write [11] Write [11] Read  Write      Write      Write     Write    Write     Read  Write  Read   Write    Read
BD-RE DL recorder       Read [11] Write [11] Write [11] Read  Write      Write      Write     Write    Write     Read  Write  Write  Write    Write

[1] Some types of CD-R media with less-reflective dyes may cause problems.
[2] May not work in non-MultiRead-compliant drives.
[3] May not work in some early-model DVD-ROM drives. CD-R would not work in any drive that did not have a 780 nm laser; CD-RW compatibility varied.
[4] DVD+RW discs did not work in early video players that played DVD-RW discs. This was not due to any incompatibility with the format but was a deliberate feature built into the firmware by one drive manufacturer.
[5] Read compatibility with existing DVD drives may vary greatly with the brand of DVD+R DL media used. Also, drives that predated the media did not have the book code for DVD+R DL media in their firmware (this was not an issue for DVD-R DL, though some drives could only read the first layer).

[6] Early DVD+RW and DVD+R recorders could not write to DVD-R(W) media (and vice versa).
[7] Will work in all drives that read DVD-R, as the compatibility ID byte is the same.
[8] Recorder firmware may blacklist or otherwise refuse to record to some brands of DVD-RW media.
[9] The DVD+RW format was released before DVD+R. All DVD+RW-only drives could be upgraded to write DVD+R discs by a firmware upgrade.
[10] As of April 2005, all DVD+R DL recorders on the market are Super Multi-capable.
[11] As of October 2006, recently released BD drives are able to read and write CD media.
Recording performance
CD writer drives are often marked with three different speed ratings. In these cases, the first speed is for write-once (R) operations, the second for re-write (RW) operations, and the last for read-only (ROM) operations. For example, a 40×/16×/48× CD writer drive is capable of writing to CD-R media at 40× speed (6,000 KB/s), writing to CD-RW media at 16× speed (2,400 KB/s), and reading from CD-ROM media at 48× speed (7,200 KB/s).
For combo (CD-RW/DVD-ROM) drives, an additional speed rating (e.g. the 16× in 52×/32×/52×/16×) is designated for DVD-ROM media reading operations.
For DVD writer drives, Blu-ray disc combo drives, and Blu-ray disc writer drives, the writing
and reading speed of their respective optical media are specified in its retail box, user's manu-
al, or bundled brochures or pamphlets.
In the late 1990s, buffer underruns became a very common problem as high-speed CD recorders began to appear in home and office computers, which, for a variety of reasons, often could not muster the I/O performance to keep the data stream to the recorder steadily fed. The recorder, should it run short, would be forced to halt the recording process, leaving a truncated track that usually renders the disc useless.
In response, manufacturers of CD recorders began shipping drives with "buffer underrun protection" (under various trade names, such as Sanyo's "BURN-Proof", Ricoh's "JustLink" and Yamaha's "Lossless Link"). These can suspend and resume the recording process in such a way that the gap the stoppage produces can be dealt with by the error-correcting logic built into CD players and CD-ROM drives. The first of these drives were rated at 12× and 16×.
When burning DVD+R, DVD+RW and all Blu-ray formats, drives do not require any such error-correcting recovery, as the recorder is able to place the new data exactly on the end of the suspended write, effectively producing a continuous track (this is what the DVD+ technology achieved). Although later interfaces were able to stream data at the required speed, many drives now write in a "zoned constant linear velocity" mode. This means that the
drive has to temporarily suspend the write operation while it changes speed and then recommence it once the new speed is attained. This is handled in the same manner as a buffer underrun.
The internal buffer of optical disc writer drives is 8 MiB or 4 MiB when recording BD-R/BD-R DL/BD-RE/BD-RE DL media, and 2 MiB when recording DVD-R/DVD-RW/DVD-R DL/DVD+R/DVD+RW/DVD+RW DL/DVD-RAM/CD-R/CD-RW media.
Recording schemes
CD recording on personal computers was originally a batch-oriented task in that it re-
quired specialised authoring software to create an "image" of the data to record, and to record
it to disc in a single session. This was acceptable for archival purposes, but limited the general
convenience of CD-R and CD-RW discs as a removable storage medium.
Packet writing is a scheme in which the recorder writes incrementally to disc in short
bursts, or packets. Sequential packet writing fills the disc with packets from bottom up. To
make it readable in CD-ROM and DVD-ROM drives, the disc can be closed at any time by
writing a final table-of-contents to the start of the disc; thereafter, the disc cannot be packet-
written any further. Packet writing, together with support from the operating system and a file
system like UDF, can be used to mimic random write-access as in media like flash memory
and magnetic disks.
Fixed-length packet writing (on CD-RW and DVD-RW media) divides up the disc in-
to padded, fixed-size packets. The padding reduces the capacity of the disc, but allows the
recorder to start and stop recording on an individual packet without affecting its neighbours.
These resemble the block-writable access offered by magnetic media closely enough that
many conventional file systems will work as-is. Such discs, however, are not readable in
most CD-ROM and DVD-ROM drives or on most operating systems without additional third-
party drivers. The division into packets is not as reliable as it may seem as CD-R(W) and
DVD-R(W) drives can only locate data to within a data block. Although generous gaps (the
padding referred to above) are left between blocks, the drive nevertheless can occasionally
miss and either destroy some existing data or even render the disc unreadable.
The DVD+RW disc format eliminates this unreliability by embedding more accurate timing
hints in the data groove of the disc and allowing individual data blocks (or even bytes) to be
replaced without affecting backward compatibility (a feature dubbed "lossless linking"). The
format itself was designed to deal with discontinuous recording because it was expected to be
widely used in digital video recorders. Many such DVRs use variable-rate video compression
schemes which require them to record in short bursts; some allow simultaneous playback and
recording by alternating quickly between recording to the tail of the disc whilst reading from
elsewhere. The Blu-ray disc system also encompasses this technology.
Mount Rainier aims to make packet-written CD-RW and DVD+RW discs as convenient to use as removable magnetic media by having the firmware format new discs in
the background and manage media defects (by automatically mapping parts of the disc which
have been worn out by erase cycles to reserve space elsewhere on the disc). As of February

2007, Mount Rainier is natively supported in Windows Vista. All previous versions of Windows require a third-party solution, as does Mac OS X.
Recorder Unique Identifier
Owing to pressure from the music industry, as represented by
the IFPI and RIAA, Philips developed the Recorder Identification Code (RID) to allow media
to be uniquely associated with the recorder that has written it. This standard is contained in
the Rainbow Books. The RID-Code consists of a supplier code (e.g. "PHI" for Philips), a
model number and the unique ID of the recorder. Quoting Philips, the RID "enables a trace for each disc back to the exact machine on which it was made using coded information in the recording itself." The use of the RID code is mandatory.
Although the RID was introduced for music and video industry purposes, the RID is
included on every disc written by every drive, including data and backup discs. The value of
the RID is questionable as it is (currently) impossible to locate any individual recorder due to
there being no database.
Source IDentification Code
The Source IDentification Code (SID) is an eight character supplier code that is
placed on optical discs by the manufacturer. The SID identifies not only manufacturer, but
also the individual factory and machine that produced the disc. According to Philips, the administrator of the SID codes, the SID code provides an optical disc production facility with
the means to identify all discs mastered or replicated in its plant, including the specific Laser
Beam Recorder (LBR) signal processor or mould that produced a particular stamper or disc.
Use of RID and SID together in forensics
The standard use of RID and SID means that each written disc contains a record of the machine that produced it (the SID) and of the drive that wrote it (the RID). This combined
knowledge may be very useful to law enforcement, to investigative agencies, and to private
or corporate investigators.

3.2 Create a table for CD, DVD and Blueray data speeds.
CD & DVD writing speeds
Original CD-ROM drives could read data at 150 kibibytes (150 × 1,024 bytes) per second. As faster drives were released, the write and read speeds for optical discs were multiplied by manufacturers, far exceeding the drive speeds originally released onto the market. To represent this growth in drive speeds, manufacturers used the symbol nX, where n is the multiple of the original speed. For example, writing to a CD at 8X is twice as fast as writing at 4X.
CD, DVD and Blu-ray writing speeds

Media          1X speed                                 Capacity               Full read
               Mbit/s    kB/s      KiB/s     MiB/s     (decimal)  (binary)    time (min)
CD             1.229     153.6     150.0     0.15      734 MB     700 MiB     80
DVD            11.080    1,385.0   1,352.5   1.32      4.7 GB     4.38 GiB    120
Blu-ray Disc   36.000    4,500.0   4,394.5   4.29      25.0 GB    23.25 GiB   180

Modern compact discs support a writing speed of 52X and higher, with some mod-
ern DVDs supporting speeds of 16X and higher. It is important to note that the speed of writ-
ing a DVD at 1X (1,385,000 bytes per second) is approximately 9 times faster compared to
writing a CD at 1X (153,600 bytes per second). However, the actual speeds depend on the
type of data being written to the disc. For Blu-ray discs, 1x speed is defined as 36 megabits
per second (Mbit/s), which is equal to 4.5 megabytes per second (MB/s). However, as the
minimum required data transfer rate for Blu-ray movie discs is 54 Mbit/s, the minimum speed
for a Blu-ray drive intended for commercial movie playback should be 2X.
Historically, the 1X writing speed is equivalent to the 1X reading speed, which in turn
represents the speed at which a piece of media can be read in its entirety - 74 minutes. Those
74 minutes come from the maximum playtime that the Red Book (audio CD stand-
ard) specifies for a digital audio CD (CD-DA); although now, most recordable CDs can hold
80 minutes worth of data. The DVD and Blu-ray discs hold a higher capacity of data, so read-
ing or writing those discs in the same 74-minute time-frame requires a higher data transfer
rate.
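The nX ratings above reduce to simple arithmetic. The following sketch (base 1X rates taken from the table above; the function names are my own, not from any standard) converts a rating to throughput and a full-disc read time:

```python
import math

# Base 1X transfer rates in bytes per second (from the table above).
BASE_RATE = {
    "CD": 150 * 1024,    # 150 KiB/s
    "DVD": 1_385_000,    # 1,385 kB/s
    "BD": 4_500_000,     # 4.5 MB/s (36 Mbit/s)
}

def throughput(media: str, x: int) -> int:
    """Transfer rate in bytes per second for an nX drive."""
    return BASE_RATE[media] * x

def full_read_minutes(media: str, capacity_bytes: int, x: int) -> float:
    """Time to read a whole disc at a constant nX rate."""
    return capacity_bytes / throughput(media, x) / 60

# A 52X CD drive moves 52 * 150 KiB/s of data.
print(round(throughput("CD", 52) / 2**20, 2))        # 7.62 (MiB/s)

# A 734 MB CD at 1X takes roughly the 80 minutes listed above.
print(round(full_read_minutes("CD", 734_000_000, 1)))  # 80

# Blu-ray movies need 54 Mbit/s; 1X is only 36 Mbit/s, so the
# smallest integer rating that suffices is 2X.
print(math.ceil(54 / 36))                             # 2
```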
Theoretical versus practical writing speed
Almost all modern CD/DVD burning software supports a selection of speeds at which
the writeable disc can be written. However, the option a user chooses only defines the theoretical maximum of the disc-burning process. There are other factors that influence the time taken for a disc to be written to:

Resources available to the program: reading or writing data on a disc consumes a moderate to high level of system resources (including memory and CPU), and running other programs at the same time may force the CD/DVD drive to choose a lower speed automatically, to accommodate the available resources.
Disc quality: optical disc recorders detect the available speed options based on data encoded on the disc itself. However, some low-quality discs advertise a high-speed option to the software even though the burning process can never reach that speed in practice.
The reading and writing process may not happen at a steady speed. CD drives and many
early DVD drives stored data with constant linear velocity, so that the data rate remained
the same regardless of the position of the optical head. Modern DVD drives use Zoned
Constant Linear Velocity with different data rates in different zones.
Optimal writing speed
A higher writing speed results in a faster disc burn, but the optical quality may be
lower (i.e. the disc is less reflective). If the reflectivity is too low for the disc to be read accu-
rately, some parts may be skipped or it may result in unwanted audio artifacts such as squeak-
ing and clicking sounds. For optimal results, it is suggested that a disc be burnt at its rated
speed.
Other media
Removable flash-based storage is often rated relative to standard CD speed. For example, a 100X flash card claims to sustain 100 × 153.6 kB/s ≈ 15.4 MB/s (100 × 150 KiB/s ≈ 14.6 MiB/s). Read and write speeds will usually have different X ratings.

Summary of CD, DVD and Blu-ray writing speeds
3.3 Explain what is a Dual Layer Disk is?
DVD+R DL (DL stands for Double Layer) also called DVD+R9, is a derivative of
the DVD+R format created by the DVD+RW Alliance. Its use was first demonstrated in Oc-
tober 2003. DVD+R DL discs employ two recordable dye layers, each capable of storing
nearly the 4.7 GB capacity of a single-layer disc, almost doubling the total disc capacity to
8.5 GB. Discs can be read in many DVD devices (older units are less compatible) and can
only be created using DVD+R DL and Super Multi drives. DL drives started appearing on the
market during mid-2004, at prices comparable to those of existing single-layer drives. As of
March 2011, DL media is up to twice as expensive as single-layer media. The latest DL drives write double-layer discs at a slower rate (up to 12×) than current single-layer discs (up to 24×).
Dual-layer recording
Dual-layer recording allows DVD-R and DVD+R discs to store significantly more da-
ta, up to 8.5 gigabytes per disc, compared with 4.7 gigabytes for single-layer discs. DVD-R
DL was developed for the DVD Forum by Pioneer Corporation, while DVD+R DL was de-
veloped for the DVD+RW Alliance by Philips and Mitsubishi Kagaku Media (MKM).[1]
A dual-layer disc differs from its usual DVD counterpart by employing a second
physical layer within the disc itself. The drive with dual-layer capability accesses the second
layer by shining the laser through the first semi-transparent layer. The layer change can ex-
hibit a noticeable pause in some DVD players, up to several seconds.[2] This caused more
than just a few viewers to worry that their dual-layer discs were damaged or defective, with
the end result that studios began listing a standard message explaining the dual-layer pausing
effect on all dual-layer disc packaging.
DVD recordable discs supporting this technology are backward compatible with some
existing DVD players and DVD-ROM drives.[1] Many current DVD recorders support dual-
layer technology, and the price is now comparable to that of single-layer drives, though the
blank media remain more expensive. The recording speeds reached by dual-layer media are
still well below those of single-layer media.
There are two modes for dual-layer orientation, parallel track path (PTP) and opposite
track path (OTP). In PTP mode, used for DVD-ROM, both layers start recording at the inside
diameter (ID) with the lead-in and end at the outside diameter (OD) with the lead-out. Sectors
are sequenced from the beginning of the first layer to the end of the first layer, then the be-
ginning of the second layer to the end of the second layer. In OTP mode, the second layer is
read from the outside of the disk.
For DVD-Video, a variation of the technique is employed. DVD-Video is always recorded in OTP mode, but the video data is read from the beginning of the first layer towards its end; when this ends (not necessarily at the end of the track), reading is transferred to the second layer. The second layer's video data commences from the same physical location at which the first layer ends and runs back towards the beginning of the disc. This means that
the 'start' of the second layer may not have any recorded material present. This is in order to
minimise the time that the video player takes to locate and focus on the second layer and thus
provide the shortest possible pause in the content as the layer changes.
A common misconception is that the disc spins first in one direction, and then anoth-
er, either for PTP or OTP recording, when in fact DVD-Writers always spin a disc in the
clockwise direction. A simpler way to understand what's written above is to think of the little
hole in the center of the DVD as the "inside" and the rim of the DVD as the "outside". Since
dual-layer DVDs have two data layers, placed one on top of the other: Layer 0 (L0) and Layer 1 (L1), there are two ways in which these two layers may be written to: L0 inside to
outside and then L1 inside to outside again (PTP), or L0 inside to outside and then L1 outside
to inside (OTP). OTP is usually used for DVD-Video, to prevent the inherent delay that PTP
involves: in PTP, the laser head moves from the outside edge of the DVD to the inside to start
reading L1 when it reaches the end of L0. This results in the video skipping or freezing up for
some time as the laser head repositions itself and the system waits to start receiving data
again.
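The practical difference between PTP and OTP at the layer change can be illustrated with a toy model of head position, where the radius is normalised so that 0 is the inner edge and 1 the outer edge. This is purely illustrative; the names and units are not taken from any DVD specification:

```python
def layer_change_travel(mode: str) -> float:
    """Distance the head must travel (in normalised radius units)
    when playback crosses from layer 0 to layer 1.

    PTP: both layers run inside -> outside, so at the end of L0 the
    head sits at the outer edge and must seek back to the inner edge.
    OTP: L1 runs outside -> inside, so the head stays where it is.
    """
    end_of_layer0 = 1.0          # head is at the outer edge after L0
    if mode == "PTP":
        start_of_layer1 = 0.0    # L1 starts at the inner edge
    elif mode == "OTP":
        start_of_layer1 = 1.0    # L1 starts at the outer edge
    else:
        raise ValueError(mode)
    return abs(start_of_layer1 - end_of_layer0)

print(layer_change_travel("PTP"))  # 1.0 -> long seek, visible pause
print(layer_change_travel("OTP"))  # 0.0 -> only a refocus is needed
```

This is why DVD-Video uses OTP: the layer change costs only a refocus, not a full-stroke seek.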
Recordable DVD capacity comparison
For comparison, the table below shows storage capacities of the four most common DVD re-
cordable media, excluding DVD-RAM. (SL) stands for standard single-layer discs, while DL
denotes the dual-layer variants. See articles on the formats in question for information on
compatibility issues.

Disk Type Number of sectors for data (2,048 B each) Capacity in bytes Nominal capacity in GB

DVD-R (SL) 2,298,496 4,707,319,808 4.7

DVD+R (SL) 2,295,104 4,700,372,992 4.7

DVD-R DL 4,171,712 8,543,666,176 8.5

DVD+R DL 4,173,824 8,547,991,552 8.5
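The byte capacities in the table follow directly from the sector counts, since each data sector holds 2,048 bytes, and the "nominal GB" column uses decimal gigabytes. A quick check (sector counts copied from the table):

```python
SECTOR_BYTES = 2048

# User-data sector counts from the table above.
SECTORS = {
    "DVD-R (SL)": 2_298_496,
    "DVD+R (SL)": 2_295_104,
    "DVD-R DL":   4_171_712,
    "DVD+R DL":   4_173_824,
}

for disc, sectors in SECTORS.items():
    capacity = sectors * SECTOR_BYTES  # bytes
    print(f"{disc}: {capacity:,} B = {capacity / 1e9:.2f} GB")

# e.g. DVD-R (SL): 2,298,496 * 2,048 = 4,707,319,808 B = 4.71 GB
```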

3.4 Find a chart for trouble shoot optical drive problems.

3.5 What are the hard disk types for desktop computers?
A hard disk drive (HDD), hard disk, hard drive or fixed disk is a data storage de-
vice that uses magnetic storage to store and retrieve digital information using one or more
rigid rapidly rotating disks (platters) coated with magnetic material. The platters are paired
with magnetic heads, usually arranged on a moving actuator arm, which read and write data
to the platter surfaces. Data is accessed in a random-access manner, meaning that individu-
al blocks of data can be stored or retrieved in any order and not only sequentially. HDDs are
a type of non-volatile memory, retaining stored data even when powered off.
Introduced by IBM in 1956, HDDs became the dominant secondary storage device
for general-purpose computers by the early 1960s. Continuously improved, HDDs have
maintained this position into the modern era of servers and personal computers. More than
200 companies have produced HDDs historically, though after extensive industry consolida-
tion most current units are manufactured by Seagate, Toshiba, and Western Digital. As of
2016, HDD production (in bytes per year) is growing, although unit shipments and sales rev-
enues are declining. The primary competing technology for secondary storage is flash
memory in the form of solid-state drives (SSDs), which have higher data-transfer rates, high-
er areal storage density, better reliability,[4] and much lower latency and access times. While
SSDs have higher cost per bit, SSDs are replacing HDDs where speed, power consumption,
small size, and durability are important.
The primary characteristics of an HDD are its capacity and performance. Capacity is
specified in unit prefixes corresponding to powers of 1000: a 1-terabyte (TB) drive has a ca-
pacity of 1,000 gigabytes (GB; where 1 gigabyte = 1 billion bytes). Typically, some of an
HDD's capacity is unavailable to the user because it is used by the file system and the com-
puter operating system, and possibly inbuilt redundancy for error correction and recovery.
Performance is specified by the time required to move the heads to a track or cylinder (aver-
age access time) plus the time it takes for the desired sector to move under the head (aver-
age latency, which is a function of the physical rotational speed in revolutions per minute),
and finally the speed at which the data is transmitted (data rate).
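Both points above, the decimal capacity units and the rotation-dependent latency, reduce to small formulas. A sketch (the 1 TB and 7,200 rpm figures are illustrative examples, not from the text):

```python
def reported_gib(capacity_gb_decimal: float) -> float:
    """What an OS using binary units shows for a drive sold as
    N decimal gigabytes (1 GB = 10^9 B, 1 GiB = 2^30 B)."""
    return capacity_gb_decimal * 1e9 / 2**30

def avg_rotational_latency_ms(rpm: int) -> float:
    """Average rotational latency = time for half a revolution."""
    return (60 / rpm) / 2 * 1000

# A "1 TB" (1,000 decimal GB) drive appears as roughly 931 GiB.
print(round(reported_gib(1000), 1))                 # 931.3

# A 7,200 rpm drive has about 4.17 ms average rotational latency.
print(round(avg_rotational_latency_ms(7200), 2))    # 4.17
```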
The two most common form factors for modern HDDs are 3.5-inch, for desktop com-
puters, and 2.5-inch, primarily for laptops. HDDs are connected to systems by stand-
ard interface cables such as PATA (Parallel ATA), SATA (Serial ATA), USB or SAS (Serial
attached SCSI) cables.

Internal hard disk of a desktop computer

Improvement of HDD characteristics over time

Parameter              Started with (1956)            Developed to (2016)                    Improvement
Capacity (formatted)   3.75 megabytes                 10 terabytes                           2.7-million-to-one
Physical volume        68 cubic feet (1.9 m³)         2.1 cubic inches (34 cm³)              56,000-to-one
Weight                 2,000 pounds (910 kg)          2.2 ounces (62 g)                      15,000-to-one
Average access time    about 600 milliseconds         2.5 ms to 10 ms; RW RAM dependent      about 200-to-one
Price                  US$9,200 per megabyte (1961)   $0.032 per gigabyte by 2015            300-million-to-one
Areal density          2,000 bits per square inch     1.3 terabits per square inch in 2015   650-million-to-one
Average lifespan       ~2,000 hrs MTBF                ~22,500 hrs MTBF                       11-to-one
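The improvement factors in the table are just ratios of the 1956 and 2016 figures. A quick sanity check on two rows (values copied from the table):

```python
# Capacity: 3.75 MB (1956) -> 10 TB (2016), in decimal units.
capacity_ratio = (10 * 10**12) / (3.75 * 10**6)
print(f"{capacity_ratio:,.0f}")   # 2,666,667 -> "2.7-million-to-one"

# Areal density: 2,000 bit/in^2 -> 1.3 Tbit/in^2.
density_ratio = 1.3e12 / 2000
print(f"{density_ratio:,.0f}")    # 650,000,000 -> "650-million-to-one"
```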

Hard disk drives were introduced in 1956, as data storage for an IBM real-
time transaction processing computer and were developed for use with general-
purpose mainframe and minicomputers. The first IBM drive, the 350 RAMAC in 1956, was
approximately the size of two medium-sized refrigerators and stored five million six-bit char-
acters (3.75 megabytes) on a stack of 50 disks.
In 1962 the IBM 350 RAMAC disk storage unit was superseded by the IBM 1301
disk storage unit, which consisted of 50 platters, each about 1/8-inch thick and 24 inches in
diameter. Whereas the IBM 350 used only two read/write heads which were pneumatically
actuated and moved in two dimensions, the 1301 was one of the first disk storage units to use
an array of heads, one per platter, moving as a single unit. Cylinder-mode read/write operations were supported, and the heads flew about 250 micro-inches (about 6 µm) above the
platter surface. Motion of the head array depended upon a binary adder system of hydraulic
actuators which assured repeatable positioning. The 1301 cabinet was about the size of three
home refrigerators placed side by side, storing the equivalent of about 21 million eight-bit
bytes. Access time was about a quarter of a second.
Also in 1962, IBM introduced the model 1311 disk drive, which was about the size of
a washing machine and stored two million characters on a removable disk pack. Users could
buy additional packs and interchange them as needed, much like reels of magnetic tape. Later
models of removable pack drives, from IBM and others, became the norm in most computer
installations and reached capacities of 300 megabytes by the early 1980s. Non-removable
HDDs were called "fixed disk" drives.
Some high-performance HDDs were manufactured with one head per track
(e.g. IBM 2305 in 1970) so that no time was lost physically moving the heads to a
track. Known as fixed-head or head-per-track disk drives, they were very expensive and are
no longer in production.
In 1973, IBM introduced a new type of HDD code-named "Winchester". Its primary
distinguishing feature was that the disk heads were not withdrawn completely from the stack
of disk platters when the drive was powered down. Instead, the heads were allowed to "land"
on a special area of the disk surface upon spin-down, "taking off" again when the disk was
later powered on. This greatly reduced the cost of the head actuator mechanism, but preclud-
ed removing just the disks from the drive as was done with the disk packs of the day. Instead,
the first models of "Winchester technology" drives featured a removable disk module, which
included both the disk pack and the head assembly, leaving the actuator motor in the drive
upon removal. Later "Winchester" drives abandoned the removable media concept and re-
turned to non-removable platters.
Like the first removable pack drive, the first "Winchester" drives used platters 14
inches (360 mm) in diameter. A few years later, designers were exploring the possibility that
physically smaller platters might offer advantages. Drives with non-removable eight-inch
platters appeared, and then drives that used a 5¼ in (130 mm) form factor (a mounting width
equivalent to that used by contemporary floppy disk drives). The latter were primarily intend-
ed for the then-fledgling personal computer (PC) market.
As the 1980s began, HDDs were a rare and very expensive additional feature in PCs,
but by the late 1980s their cost had been reduced to the point where they were standard on all
but the cheapest computers.
Most HDDs in the early 1980s were sold to PC end users as an external, add-on sub-
system. The subsystem was not sold under the drive manufacturer's name but under the sub-
system manufacturer's name such as Corvus Systems and Tallgrass Technologies, or under
the PC system manufacturer's name such as the Apple ProFile. The IBM PC/XT in 1983 in-
cluded an internal 10 MB HDD, and soon thereafter internal HDDs proliferated on personal
computers.
External HDDs remained popular for much longer on the Apple Macintosh. Many
Macintosh computers made between 1986 and 1998 featured a SCSI port on the back, mak-
ing external expansion simple. Older compact Macintosh computers did not have user-
accessible hard drive bays (indeed, the Macintosh 128K, Macintosh 512K, and Macintosh
Plus did not feature a hard drive bay at all), so on those models external SCSI disks were the
only reasonable option for expanding upon any internal storage. The 2011 Thailand floods damaged the manufacturing plants and adversely affected hard disk drive cost between 2011 and 2013. Driven by ever-increasing areal density since their invention, HDDs
have continuously improved their characteristics; a few highlights are listed in the table
above. At the same time, market application expanded from mainframe computers of the late
1950s to most mass storage applications including computers and consumer applications such
as storage of entertainment content.
Technology
[Figure: magnetic cross section & frequency modulation encoded binary data]
Magnetic recording
A modern HDD records data by magnetizing a thin film of ferromagnetic material on a disk. Sequential changes in the direction of magnetization
represent binary data bits. The data is read from the disk by detecting the transitions in mag-
netization. User data is encoded using an encoding scheme, such as run-length lim-
ited encoding, which determines how the data is represented by the magnetic transitions.
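As a toy illustration of reading data back from magnetic transitions, the sketch below decodes a sequence of magnetization samples by treating each polarity change as a 1 and its absence as a 0. This is an NRZI-style simplification of my own, not the actual scheme of any drive; real drives layer run-length limited codes on top of the raw transitions.

```python
# Toy illustration: recovering bits from magnetization samples.
# A change in polarity between adjacent bit windows reads as 1,
# no change reads as 0 (an NRZI-style simplification).

def decode_transitions(polarities):
    """polarities: list of +1/-1 magnetization samples, one per bit window."""
    bits = []
    for prev, cur in zip(polarities, polarities[1:]):
        bits.append(1 if cur != prev else 0)
    return bits

samples = [+1, +1, -1, -1, -1, +1, -1, +1]
print(decode_transitions(samples))  # [0, 1, 0, 0, 1, 1, 1]
```

The RLL codes mentioned above constrain how many 0s may appear between 1s, so the read channel never goes too long without a transition to stay synchronized.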
A typical HDD design consists of a spindle that holds flat circular disks, also
called platters, which hold the recorded data. The platters are made from a non-magnetic material, usually aluminum alloy, glass, or ceramic, and are coated with a shallow layer of magnetic material, typically 10–20 nm in depth, with an outer layer of carbon for protection. For reference, a standard piece of copy paper is 0.07–0.18 millimeters (70,000–180,000 nm) thick.
[Figure: recording of single magnetisations of bits on a 200 MB HDD platter]
[Figure: longitudinal recording (standard) & perpendicular recording diagram]
[Figure: diagram labeling the major components of a computer HDD]
The platters in contemporary HDDs are spun at speeds varying from 4,200 rpm in en-
ergy-efficient portable devices, to 15,000 rpm for high-performance servers.[28] The first
HDDs spun at 1,200 rpm[3] and, for many years, 3,600 rpm was the norm.[29] As of December
2013, the platters in most consumer-grade HDDs spin at either 5,400 rpm or 7,200 rpm.[30]
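Spindle speed directly determines average rotational latency, the mean time for the wanted sector to rotate under the head. On average the sector is half a revolution away, so latency in milliseconds is (60,000 / rpm) / 2. A quick sketch of the arithmetic:

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency: the wanted sector is, on average, half
    a revolution away, so latency = (60,000 ms / rpm) / 2."""
    return 60_000 / rpm / 2

for rpm in (4200, 5400, 7200, 15000):
    print(f"{rpm:>6} rpm: {avg_rotational_latency_ms(rpm):.2f} ms")
```

This is why a 15,000 rpm server drive (2.00 ms) responds noticeably faster than a 5,400 rpm laptop drive (5.56 ms) even before seek time is considered.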
Information is written to and read from a platter as it rotates past devices called read-
and-write heads that are positioned to operate very close to the magnetic surface, with
their flying height often in the range of tens of nanometers. The read-and-write head is used
to detect and modify the magnetization of the material passing immediately under it.
In modern drives, there is one head for each magnetic platter surface on the spindle,
mounted on a common arm. An actuator arm (or access arm) moves the heads on an arc
(roughly radially) across the platters as they spin, allowing each head to access almost the
entire surface of the platter as it spins. The arm is moved using a voice coil actuator or in
some older designs a stepper motor. Early hard disk drives wrote data at a constant number of bits per second, resulting in all tracks having the same amount of data, but modern drives (since the 1990s) use zone bit recording, increasing the write speed from the inner to the outer zone and thereby storing more data per track in the outer zones.
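The effect of zone bit recording can be sketched by holding the linear bit density constant along a track: the longer outer tracks then hold proportionally more sectors. The radii and density below are illustrative assumptions of mine, not figures from any real drive:

```python
import math

def sectors_per_track(radius_mm, bits_per_mm=20_000, sector_bits=512 * 8):
    """With constant linear bit density (assumed), longer outer tracks
    hold more sectors than shorter inner tracks."""
    circumference_mm = 2 * math.pi * radius_mm
    return int(circumference_mm * bits_per_mm // sector_bits)

inner, outer = sectors_per_track(15), sectors_per_track(45)
print(inner, outer)  # the outer track holds about three times as many sectors
```

In practice, drives group tracks into a handful of zones with a fixed sector count per zone rather than varying it track by track.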
In modern drives, the small size of the magnetic regions creates the danger that their magnetic state might be lost because of thermally induced magnetic instability, commonly known as the "superparamagnetic limit". To counter this, the platters
are coated with two parallel magnetic layers, separated by a 3-atom layer of the non-magnetic
element ruthenium, and the two layers are magnetized in opposite orientation, thus reinforc-
ing each other.[31] Another technology used to overcome thermal effects to allow greater re-
cording densities is perpendicular recording, first shipped in 2005,[32] and as of 2007 the
technology was used in many HDDs.[33][34][35]
In 2004, a new concept was introduced to allow further increase of the data density in
magnetic recording, using recording media consisting of coupled soft and hard magnetic lay-
ers. That so-called exchange spring media, also known as exchange coupled composite me-
dia, allows good writability due to the write-assist nature of the soft layer. However, the
thermal stability is determined only by the hardest layer and not influenced by the soft layer.
Components
[Figure: HDD with disks and motor hub removed, exposing the copper-colored stator coils surrounding a bearing in the center of the spindle motor. The orange stripe along the side of the arm is the thin printed-circuit cable; the spindle bearing is in the center and the actuator is in the upper left.]
A typical HDD has two electric motors; a spindle motor that spins the disks and an ac-
tuator (motor) that positions the read/write head assembly across the spinning disks. The disk
motor has an external rotor attached to the disks; the stator windings are fixed in place. Op-
posite the actuator at the end of the head support arm is the read-write head; thin printed-
circuit cables connect the read-write heads to amplifier electronics mounted at the pivot of the
actuator. The head support arm is very light, but also stiff; in modern drives, acceleration at
the head reaches 550 g.
[Figure: head stack with an actuator coil on the left and read/write heads on the right]
[Figure: close-up of a single read-write head, showing the side facing the platter]
The actuator is a permanent magnet and moving coil motor that swings the heads to
the desired position. A metal plate supports a squat neodymium-iron-boron (NIB) high-
flux magnet. Beneath this plate is the moving coil, often referred to as the voice coil by anal-
ogy to the coil in loudspeakers, which is attached to the actuator hub, and beneath that is a
second NIB magnet, mounted on the bottom plate of the motor (some drives have only one
magnet).
The voice coil itself is shaped rather like an arrowhead, and made of doubly coated
copper magnet wire. The inner layer is insulation, and the outer is thermoplastic, which bonds
the coil together after it is wound on a form, making it self-supporting. The portions of the
coil along the two sides of the arrowhead (which point to the actuator bearing center) then
interact with the magnetic field of the fixed magnet. Current flowing radially outward along
one side of the arrowhead and radially inward on the other produces the tangential force. If
the magnetic field were uniform, each side would generate opposing forces that would cancel
each other out. Therefore, the surface of the magnet is half north pole and half south pole,
with the radial dividing line in the middle, causing the two sides of the coil to see opposite
magnetic fields and produce forces that add instead of canceling. Currents along the top and
bottom of the coil produce radial forces that do not rotate the head.
The HDD's electronics control the movement of the actuator and the rotation of the
disk, and perform reads and writes on demand from the disk controller. Feedback of the drive
electronics is accomplished by means of special segments of the disk dedicated
to servo feedback. These are either complete concentric circles (in the case of dedicated servo
technology), or segments interspersed with real data (in the case of embedded servo technol-
ogy). The servo feedback optimizes the signal to noise ratio of the GMR sensors by adjusting
the voice-coil of the actuated arm. The spinning of the disk also uses a servo motor. Modern
disk firmware is capable of scheduling reads and writes efficiently on the platter surfaces and
remapping sectors of the media which have failed.
Error rates and handling
Modern drives make extensive use of error correction codes (ECCs), particularly Reed–Solomon error correction. These techniques store extra bits, determined by mathematical formulas, for each block of data; the extra bits allow many errors to be corrected invisibly. The extra bits themselves take up space on the HDD, but allow higher recording densities to be employed without causing uncorrectable errors, resulting in much larger storage
capacity.[38] For example, a typical 1 TB hard disk with 512-byte sectors provides additional
capacity of about 93 GB for the ECC data.[39]
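The 93 GB figure is consistent with roughly 48 ECC bytes accompanying each 512-byte sector. That per-sector value is an illustrative assumption of mine (real ECC sizes are vendor-specific), but the arithmetic shows the order of magnitude:

```python
def ecc_overhead_bytes(user_bytes, sector_bytes=512, ecc_per_sector=48):
    """ECC bytes stored alongside user data, assuming ~48 ECC bytes per
    512-byte sector (illustrative; actual sizes are vendor-specific)."""
    return (user_bytes // sector_bytes) * ecc_per_sector

# ECC alongside 1 TB (10^12 bytes) of user data, in GB:
print(ecc_overhead_bytes(10**12) / 10**9)  # 93.75
```

One motivation for the 4,096-byte Advanced Format sectors discussed later is that a single larger ECC block per 4 KB is more efficient than eight separate ECC blocks per eight 512-byte sectors.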
In the newest drives, as of 2009, low-density parity-check codes (LDPC) were sup-
planting Reed-Solomon; LDPC codes enable performance close to the Shannon Limit and
thus provide the highest storage density available.[40]
Typical hard disk drives attempt to "remap" the data in a physical sector that is failing
to a spare physical sector provided by the drive's "spare sector pool" (also called "reserve
pool"), while relying on the ECC to recover stored data while the amount of errors in a bad
sector is still low enough. The S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) feature counts the total number of errors in the entire HDD fixed by ECC (although
not on all hard drives as the related S.M.A.R.T attributes "Hardware ECC Recovered" and
"Soft ECC Correction" are not consistently supported), and the total number of performed
sector remappings, as the occurrence of many such errors may predict an HDD failure.
The "No-ID Format", developed by IBM in the mid-1990s, contains information
about which sectors are bad and where remapped sectors have been located.[42]
Only a tiny fraction of the detected errors ends up as not correctable. For example,
specification for an enterprise SAS disk (a model from 2013) estimates this fraction to be one uncorrected error in every 10^16 bits,[43] and another SAS enterprise disk from 2013 specifies similar error rates.[44] Another modern (as of 2013) enterprise SATA disk specifies an error rate of less than 10 non-recoverable read errors in every 10^16 bits.[45] An enterprise
disk with a Fibre Channel interface, which uses 520 byte sectors to support the Data Integrity
Field standard to combat data corruption, specifies similar error rates in 2005.[46]
The worst type of errors are those that go unnoticed, and are not even detected by the
disk firmware or the host operating system. These errors are known as silent data corruption,
some of which may be caused by hard disk drive malfunctions.[47]
Future development
[Figure: leading-edge hard disk drive areal densities from 1956 through 2009 compared to Moore's law]
The rate of areal density advancement was similar to Moore's law (doubling every two years) through 2010: 60% per year during 1988–1996, 100% during 1996–2003 and 30% during 2003–2010. Gordon Moore (1997) called the increase "flabbergasting," while observing later that growth cannot continue forever. Areal density advancement slowed to 10% per
year during 2011–2014, due to difficulty in migrating from perpendicular recording to newer technologies.
Areal density is the inverse of bit cell size, so an increase in areal density corresponds
to a decrease in bit cell size. In 2013, a production desktop 3 TB HDD (with four platters)
would have had an areal density of about 500 Gbit/in², which would have amounted to a bit cell comprising about 18 magnetic grains (11 by 1.6 grains).[52] Since the mid-2000s areal
density progress has increasingly been challenged by a superparamagnetic trilemma involv-
ing grain size, grain magnetic strength and ability of the head to write.[53] In order to maintain
acceptable signal to noise smaller grains are required; smaller grains may self-reverse
(electrothermal instability) unless their magnetic strength is increased, but known write head
materials are unable to generate a magnetic field sufficient to write the medium. Several new
magnetic storage technologies are being developed to overcome or at least abate this trilem-
ma and thereby maintain the competitiveness of HDDs with respect to products such as flash
memory-based solid-state drives (SSDs).
In 2013, Seagate introduced one such technology, shingled magnetic recording (SMR). However, SMR comes with design complexities that may cause reduced write performance. Other new recording technologies that, as of 2016, still remained under development include heat-assisted magnetic recording (HAMR), microwave-assisted magnetic recording (MAMR), two-dimensional magnetic recording (TDMR), bit-patterned recording (BPR), and "current perpendicular to plane" giant magnetoresistance (CPP/GMR) heads.
The rate of areal density growth has dropped below the historical Moore's law rate of 40% per year, and the deceleration is expected to persist through at least 2020. Depending upon assumptions on the feasibility and timing of these technologies, the median forecast by industry observers and analysts for 2020 and beyond is areal density growth of 20% per year, with a range of 10–30%. The achievable limit for HAMR technology in combination with BPR and SMR may be 10 Tbit/in², which would be 20 times higher than the 500 Gbit/in² of 2013 production desktop HDDs. As of 2015, HAMR HDDs had been delayed several years and were expected in 2018; they require a different architecture, with redesigned media and read/write heads, new lasers, and new near-field optical transducers.
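These density figures can be sanity-checked with two small calculations: the average bit-cell area implied by a given areal density, and a compound-growth projection at the 20% per year median forecast. Both are plain arithmetic, not data from any specification:

```python
NM_PER_INCH = 25.4e6  # nanometers per inch

def bit_cell_area_nm2(density_gbit_per_in2):
    """Average area of one bit cell, in nm^2, at a given areal density."""
    return NM_PER_INCH**2 / (density_gbit_per_in2 * 1e9)

def projected_density(start_gbit_per_in2, annual_growth, years):
    """Compound areal-density growth projection."""
    return start_gbit_per_in2 * (1 + annual_growth) ** years

print(round(bit_cell_area_nm2(500)))           # ~1290 nm^2 per bit at 500 Gbit/in^2
print(round(projected_density(500, 0.20, 7)))  # ~1792 Gbit/in^2 by 2020 at 20%/yr
```

Even seven years of 20% annual growth falls far short of the 10 Tbit/in² HAMR/BPR/SMR limit quoted above, which illustrates why those technologies were seen as long-term headroom rather than near-term products.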
Capacity
The capacity of a hard disk drive, as reported by an operating system to the end user, is smaller than the amount stated by the manufacturer, for several reasons: the operating system uses some space, some space is used for data redundancy, and some for file system structures. The difference between capacity reported in true SI-based units and in binary prefixes can also lead to a false impression of missing capacity.
Calculation
Modern hard disk drives appear to their host controller as a contiguous set of logical
blocks, and the gross drive capacity is calculated by multiplying the number of blocks by the
block size. This information is available from the manufacturer's product specification, and
from the drive itself through use of operating system functions that invoke low-level drive
commands.
The gross capacity of older HDDs is calculated as the product of the number
of cylinders per recording zone, the number of bytes per sector (most commonly 512), and
the count of zones of the drive. Some modern SATA drives also report cylinder-head-
sector (CHS) capacities, but these are not physical parameters because the reported values are
constrained by historic operating system interfaces. The C/H/S scheme has been replaced
by logical block addressing (LBA), a simple linear addressing scheme that locates blocks by
an integer index, which starts at LBA 0 for the first block and increments thereafter.[74] When
using the C/H/S method to describe modern large drives, the number of heads is often set to
64, although a typical hard disk drive, as of 2013, has between one and four platters.
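The conventional C/H/S-to-LBA mapping is a simple linear formula; the default geometry parameters below (16 heads, 63 sectors per track) are common reported values rather than physical ones:

```python
def chs_to_lba(cylinder, head, sector, heads_per_cyl=16, sectors_per_track=63):
    """Map a cylinder/head/sector triple to a logical block address.
    Sectors are 1-based in the C/H/S scheme; LBA numbering starts at 0."""
    return (cylinder * heads_per_cyl + head) * sectors_per_track + (sector - 1)

print(chs_to_lba(0, 0, 1))  # 0: the first block on the drive
print(chs_to_lba(1, 0, 1))  # 1008 = 16 heads x 63 sectors per cylinder
```

Since the reported geometry is fictitious on modern drives, the formula matters only for compatibility with old partitioning tools; new software addresses blocks by LBA directly.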
In modern HDDs, spare capacity for defect management is not included in the pub-
lished capacity; however, in many early HDDs a certain number of sectors were reserved as
spares, thereby reducing the capacity available to the operating system.
For RAID subsystems, data integrity and fault-tolerance requirements also reduce the
realized capacity. For example, a RAID 1 array has about half the total capacity as a result of
data mirroring, while a RAID 5 array with x drives loses 1/x of its capacity (which equals the capacity of a single drive) due to storing parity information. RAID subsystems are multiple
drives that appear to be one drive or more drives to the user, but provide fault tolerance. Most
RAID vendors use checksums to improve data integrity at the block level. Some vendors de-
sign systems using HDDs with sectors of 520 bytes to contain 512 bytes of user data and
eight checksum bytes, or by using separate 512-byte sectors for the checksum data. Some
systems may use hidden partitions for system recovery, reducing the capacity available to the
end user.
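The RAID capacity arithmetic above can be sketched as a small function. This is an idealized model: it ignores controller metadata, hot spares and the per-block checksum overhead just described:

```python
def usable_capacity_tb(drives, drive_tb, level):
    """Idealized usable capacity of a RAID array (ignores metadata,
    hot spares and per-block checksum overhead)."""
    if level == 0:
        return drives * drive_tb           # striping: no redundancy
    if level == 1:
        return drives * drive_tb / 2       # mirroring: half the raw capacity
    if level == 5:
        return (drives - 1) * drive_tb     # parity costs one drive's capacity
    raise ValueError("unsupported RAID level")

print(usable_capacity_tb(4, 2, 5))  # four 2 TB drives in RAID 5 -> 6 TB usable
```

Note how RAID 5's overhead shrinks as the array grows: 1/x of four drives is 25%, of eight drives only 12.5%.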
System use
The presentation of a hard disk drive to its host is determined by the disk controller.
The actual presentation may differ substantially from the drive's native interface, particularly
in mainframes or servers. Modern HDDs, such as SAS and SATA drives, appear at their in-
terfaces as a contiguous set of logical blocks that are typically 512 bytes long, though the in-
dustry is in the process of changing to the 4,096-byte logical blocks layout, known as
the Advanced Format (AF).
The process of initializing these logical blocks on the physical disk platters is
called low-level formatting, which is usually performed at the factory and is not normally
changed in the field. As a next step in preparing an HDD for use, high-level format-
ting writes partition and file system structures into selected logical blocks to make the re-
maining logical blocks available to the host's operating system and its applications.[78] The
file system uses some of the disk space to structure the HDD and organize files, recording
their file names and the sequence of disk areas that represent the file. Examples of data struc-
tures stored on disk to retrieve files include the File Allocation Table (FAT) in the DOS file
system and inodes in many UNIX file systems, as well as other operating system data struc-
tures (also known as metadata). As a consequence, not all the space on an HDD is available
for user files, but this system overhead is usually negligible.
Units
Decimal and binary unit prefixes interpretation

  Advertised capacity | Advertised bytes  | Bytes expected by some consumers | Diff. | Reported by Windows, Linux | Reported by macOS 10.6+
  100 GB              | 100,000,000,000   | 107,374,182,400                  | 7.37% | 93.1 GB (95,367 MB)        | 100 GB
  1 TB                | 1,000,000,000,000 | 1,099,511,627,776                | 9.95% | 931 GB (953,674 MB)        | 1,000 GB (1,000,000 MB)
The total capacity of HDDs is given by manufacturers in SI-based units such as gigabytes and terabytes. The practice of using SI-based prefixes (denoting powers of
1,000) in the hard disk drive and computer industries dates back to the early days of compu-
ting;[86] by the 1970s, "million", "mega" and "M" were consistently used in the decimal sense
for drive capacity.[87][88][89] However, capacities of memory (RAM, ROM) and CDs are traditionally quoted using a binary interpretation of the prefixes, i.e. using powers of 1024 instead of 1000.
Internally, computers do not represent either hard disk drive or memory capacity in
powers of 1,024, but reporting it in this manner is a convention.[90] The Microsoft Win-
dows family of operating systems uses the binary convention when reporting storage capaci-
ty, so an HDD offered by its manufacturer as a 1 TB drive is reported by these operating sys-
tems as a 931 GB HDD. Mac OS X 10.6 ("Snow Leopard") uses decimal convention when
reporting HDD capacity.[90] The default behavior of the df command-line utility on Linux is
to report the HDD capacity as a number of 1024-byte units.[91]
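The 1 TB vs. 931 GB discrepancy is pure unit arithmetic, as a quick sketch shows:

```python
def reported_gib(advertised_bytes):
    """Capacity as reported by an OS that uses binary (1024-based) units."""
    return advertised_bytes / 1024**3

# A drive sold as "1 TB" (10^12 bytes) shows up as roughly 931 "GB"
# under Windows' binary convention:
print(round(reported_gib(10**12), 1))  # 931.3
```

No capacity is actually missing; the same number of bytes is simply divided by 1,073,741,824 instead of 1,000,000,000 before being labeled "GB".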
The difference between the decimal and binary prefix interpretation caused some con-
sumer confusion and led to class action suits against HDD manufacturers. The plaintiffs ar-
gued that the use of decimal prefixes effectively misled consumers while the defendants de-
nied any wrongdoing or liability, asserting that their marketing and advertising complied in
all respects with the law and that no class member sustained any damages or injuries.
Price evolution
HDD price per byte improved at the rate of 40% per year during 1988–1996, 51% per year during 1996–2003, and 34% per year during 2003–2010.[13][48] The price improvement decelerated to 13% per year during 2011–2014, as areal density increase slowed and the 2011
Thailand floods damaged manufacturing facilities.
Form factors
Past and present HDD form factors

  Form factor                | Status   | Length  | Width    | Height                       | Largest capacity         | Platters (max.) | Capacity per platter
  3.5-inch                   | Current  | 146 mm  | 101.6 mm | 19 or 25.4 mm                | 10 TB (October 2015)[97] | 5 or 7[96]      | 1,149 GB
  2.5-inch                   | Current  | 100 mm  | 69.85 mm | 5, 7, 9.5, 12.5, 15 or 19 mm | 4 TB (2015)[100]         | 5[101]          | 800 GB[101]
  1.8-inch                   | Obsolete | 78.5 mm | 54 mm    | 5 or 8 mm                    | 320 GB (2009)[11]        | 2               | 220 GB[102]
  8-inch                     | Obsolete | 362 mm  | 241.3 mm | 117.5 mm                     | ?                        | ?               | ?
  5.25-inch FH               | Obsolete | 203 mm  | 146 mm   | 82.6 mm                      | 47 GB (1998)             | 14              | 3.36 GB
  5.25-inch HH               | Obsolete | 203 mm  | 146 mm   | 41.4 mm                      | 19.3 GB (1998)           | 4               | 4.83 GB
  1.3-inch                   | Obsolete | ?       | 43 mm    | ?                            | 40 GB (2007)             | 1               | 40 GB
  1-inch (CFII/ZIF/IDE-Flex) | Obsolete | ?       | 42 mm    | ?                            | 20 GB (2006)             | 1               | 20 GB
  0.85-inch                  | Obsolete | 32 mm   | 24 mm    | 5 mm                         | 8 GB (2004)[106][107]    | 1               | 8 GB
[Figure: 8-, 5.25-, 3.5-, 2.5-, 1.8- and 1-inch HDDs, together with a ruler to show the length of platters and read-write heads]
[Figure: a newer 2.5-inch (63.5 mm) 6,495 MB HDD compared to an older 5.25-inch full-height 110 MB HDD]
IBM's first hard drive, the IBM 350, used a stack of fifty 24-inch platters and was of a
size comparable to two large refrigerators. In 1962, IBM introduced its model 1311 disk,
which used six 14-inch (nominal size) platters in a removable pack and was roughly the size
of a washing machine. This became a standard platter size and drive form-factor for many
years, used also by other manufacturers.[108] The IBM 2314 used platters of the same size in
an eleven-high pack and introduced the "drive in a drawer" layout, although the "drawer" was
not the complete drive.
Later drives were designed to fit entirely into a chassis that would mount in a 19-inch
rack. Digital's RK05 and RL01 were early examples using single 14-inch platters in remova-
ble packs, the entire drive fitting in a 10.5-inch-high rack space (six rack units). In the mid-
to-late 1980s the similarly sized Fujitsu Eagle, which used (coincidentally) 10.5-inch platters,
was a popular product.
Such large platters were never used with microprocessor-based systems. With increasing
sales of microcomputers having built in floppy-disk drives (FDDs), HDDs that would fit to
the FDD mountings became desirable. Thus HDD form factors initially followed those of 8-inch, 5.25-inch, and 3.5-inch floppy disk drives. Because there were no smaller floppy disk drives, smaller HDD form factors developed from product offerings or industry standards.
8-inch
9.5 in × 4.624 in × 14.25 in (241.3 mm × 117.5 mm × 362 mm). In 1979, Shugart Associates' SA1000 was the first form-factor-compatible HDD, having the same dimensions and a compatible interface to the 8″ FDD.
5.25-inch
5.75 in × 3.25 in × 8 in (146.1 mm × 82.55 mm × 203 mm). This smaller form factor, first used in an HDD by Seagate in 1980,[109] was the same size as the full-height 5¼-inch-diameter (130 mm) FDD, 3.25 inches high. This is twice as high as "half height"; i.e., 1.63 in
(41.4 mm). Most desktop models of drives for optical 120 mm disks (DVD, CD) use the half-height 5¼″ dimension, but it fell out of fashion for HDDs. The format was standardized as EIA-741 and co-published as SFF-8501 for disk drives, with other SFF-85xx series standards covering related 5.25-inch devices (optical drives, etc.).[110] The Quantum Bigfoot HDD was the last to use it in the late 1990s, with "low-profile" (25 mm) and "ultra-low-profile" (20 mm) high versions.
3.5-inch
4 in × 1 in × 5.75 in (101.6 mm × 25.4 mm × 146 mm) = 376.77344 cm³. This smaller form factor is similar to that used in an HDD by Rodime in 1983,[111] which was the same size as the "half height" 3½″ FDD, i.e., 1.63 inches high. Today, the 1-inch-high ("slimline" or "low-profile") version of this form factor is the most popular form used in most desktops.
The format was standardized in terms of dimensions and positions of mounting holes
as EIA/ECA-740, co-published as SFF-8301.[112]
2.5-inch
2.75 in × 0.275–0.75 in × 3.945 in (69.85 mm × 7–19 mm × 100 mm) = 48.895–132.715 cm³. This smaller form factor was introduced by PrairieTek in 1988;[113] there is no
corresponding FDD. The 2.5-inch drive format is standardized in the EIA/ECA-720 co-
published as SFF-8201; when used with specific connectors, more detailed specifications are
SFF-8212 for the 50-pin (ATA laptop) connector, SFF-8223 with the SATA,
or SAS connector and SFF-8222 with the SCA-2 connector.[114]
It came to be widely used for HDDs in mobile devices (laptops, music players, etc.)
and for solid-state drives (SSDs), by 2008 replacing some 3.5 inch enterprise-class
drives.[115] It is also used in the PlayStation 3[116] and Xbox 360[117] video game consoles.
Drives 9.5 mm high became an unofficial standard for all except the largest-capacity
laptop drives (usually having two platters inside); 12.5 mm-high drives, typically with three
platters, are used for maximum capacity, but will not fit most laptop computers. Enterprise-
class drives can have a height up to 15 mm.[118] Seagate released a 7 mm drive aimed at entry
level laptops and high end netbooks in December 2009.[119] Western Digital released on April
23, 2013 a hard drive 5 mm in height specifically aimed at UltraBooks.[120]
1.8-inch
54 mm × 8 mm × 78.5 mm = 33.912 cm³. This form factor, originally introduced by
Integral Peripherals in 1993, evolved into the ATA-7 LIF with dimensions as stated. For a
time it was increasingly used in digital audio players and subnotebooks, but its popularity de-
creased to the point where this form factor is increasingly rare and only a small percentage of
the overall market.[121] There was an attempt to standardize this format as SFF-8123, but it
was cancelled in 2005.[122] SATA revision 2.6 standardized the internal Micro SATA con-
nector and device dimensions.
1-inch
42.8 mm × 5 mm × 36.4 mm. This form factor was introduced in 1999,
as IBM's Microdrive to fit inside a CF Type II slot. Samsung calls the same form fac-
tor "1.3 inch" drive in its product literature.[123]
0.85-inch
24 mm × 5 mm × 32 mm. Toshiba announced this form factor in January 2004[124] for
use in mobile phones and similar applications, including SD/MMC slot compatible HDDs
optimized for video storage on 4G handsets. Toshiba manufactured a 4 GB (MK4001MTD)
and an 8 GB (MK8003MTD) version and holds the Guinness World Record for the smallest
HDD.[125][126] As of 2012, 2.5-inch and 3.5-inch hard disks were the most popular sizes.
By 2009, all manufacturers had discontinued the development of new products for the
1.3-inch, 1-inch and 0.85-inch form factors due to falling prices of flash
memory,[127][128] which has no moving parts. While these sizes are customarily described by an approximately correct figure in inches, actual sizes have long been specified in millimeters.
Desktop HDDs
Desktop HDDs typically store between 60 GB and 4 TB, rotate at 5,400 to 10,000 rpm, and have a media transfer rate of 0.5 Gbit/s or higher (1 GB = 10^9 bytes; 1 Gbit/s = 10^9 bit/s). In
August 2014, the highest-capacity desktop HDDs stored 8 TB,[146][147] which increased to
10 TB by June 2016.[148] As of 2016, the typical speed of a hard drive in an average desktop
computer is 7200 RPM, whereas low-cost desktop computers may use 5900 RPM or 5400
RPM drives. For some time in the 2000s and early 2010s some desktop users would also use
10k RPM drives such as Western Digital Raptor but such drives have become much rarer as
of 2016 and are not commonly used now, having been replaced by NAND flash-based SSDs.
Enterprise HDDs
Enterprise HDDs are typically used with multiple-user computers running enterprise software. Examples are:
transaction processing databases, internet infrastructure (email, webserver, e-commerce), sci-
entific computing software, and nearline storage management software. Enterprise drives
commonly operate continuously ("24/7") in demanding environments while delivering the
highest possible performance without sacrificing reliability. Maximum capacity is not the
primary goal, and as a result the drives are often offered in capacities that are relatively low
in relation to their cost.[149]
The fastest enterprise HDDs spin at 10,000 or 15,000 rpm, and can achieve sequential media transfer speeds above 1.6 Gbit/s[150] and a sustained transfer rate up to 1 Gbit/s.[150] Drives
running at 10,000 or 15,000 rpm use smaller platters to mitigate increased power require-
ments (as they have less air drag) and therefore generally have lower capacity than the high-
est capacity desktop drives. Enterprise HDDs are commonly connected through Serial At-
tached SCSI (SAS) or Fibre Channel (FC). Some support multiple ports, so they can be con-
nected to a redundant host bus adapter.
Enterprise HDDs can have sector sizes larger than 512 bytes (often 520, 524, 528 or 536
bytes). The additional per-sector space can be used by hardware RAID controllers or applica-
tions for storing Data Integrity Field (DIF) or Data Integrity Extensions (DIX) data, resulting
in higher reliability and prevention of silent data corruption.[151]
External hard disk drives
[Figure: Toshiba 1 TB 2.5″ external USB 2.0 hard disk drive]
[Figure: 3.0 TB 3.5″ Seagate FreeAgent GoFlex plug-and-play external USB 3.0-compatible drive (left), 750 GB 3.5″ Seagate Technology push-button external USB 2.0 drive (right), and a 500 GB 2.5″ generic-brand plug-and-play external USB 2.0 drive (front)]
External hard disk drives typically connect via USB; variants using the USB 2.0 interface generally have slower data transfer rates than internally mounted hard
drives connected through SATA. Plug and play drive functionality offers system compatibil-
ity and features large storage options and portable design. As of March 2015, available capac-
ities for external hard disk drives ranged from 500 GB to 8 TB.[155]
External hard disk drives are usually available as pre-assembled integrated products,
but may be also assembled by combining an external enclosure (with USB or other interface)
with a separately purchased drive. They are available in 2.5-inch and 3.5-inch sizes; 2.5-inch
variants are typically called portable external drives, while 3.5-inch variants are referred to
as desktop external drives. "Portable" drives are packaged in smaller and lighter enclosures
than the "desktop" drives; additionally, "portable" drives use power provided by the USB
connection, while "desktop" drives require external power bricks.
Features such as biometric security or multiple interfaces (for example, Firewire) are
available at a higher cost.[156] There are pre-assembled external hard disk drives that, when
taken out from their enclosures, cannot be used internally in a laptop or desktop computer due
to the embedded USB interface on their printed circuit boards and the lack of SATA (or
Parallel ATA) interfaces.
3.6 Compare laptop and desktop hard drives.

Whether installed in a laptop or desktop computer, all hard drives use similar technol-
ogy: spinning magnetic platters that record and store data. In earlier generations of comput-
ers, there were multiple types of connection interfaces which could vary between desktops
and laptops, but the Serial ATA (SATA) standard now dominates both markets. The primary
remaining differences between the two types of drives are physical size, storage space and
speed.
Physical Size

Historically, desktop computers used 5.25-inch hard drives, but today's desktops pri-
marily rely on 3.5-inch drives. This size allows for several internal platters, increasing the
drive's storage space. These drives, however, are too heavy and thick to use in portable com-
puters. Instead, most laptops use 2.5-inch drives. These hard disks are lighter, slimmer and
use less energy -- a premium when running on batteries. Though commonly installed in de-
vices such as iPods, 1.8-inch drives are also used in some ultra-thin laptops and netbooks.
Storage Space

The physical thinness of 2.5-inch laptop drives directly affects their storage capacity.
With less room internally, 2.5-inch drives contain fewer, smaller platters. While some high-
end 3.5-inch drives boast 4TB of storage -- and continue to grow -- laptop drives cannot
match this claim. Instead, even the most expensive laptop drives often range in size from
750GB to 1TB. While laptop drives also get larger year after year, their physical size pre-
vents them from ever catching up to the space provided by desktop drives.
Speed

In order to read and write data, the platters in hard drives spin rapidly. While some en-
thusiast drives run at 10,000 rotations per minute, the majority of desktop hard drives run at
7,200 RPM. In order to keep down heat, power use and noise levels, many laptop drives run
slower, at only 5,400 RPM. This decreased rotation speed directly affects the speed of the
computer, as the drive has to spin for longer to reach each piece of data. However, some lap-
top models do include 7,200 RPM drives, negating the speed difference.
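The effect of spindle speed on access time can be made concrete: on average the head must wait half a revolution for the target sector to come around. A quick illustrative helper (not taken from any drive datasheet):

```python
def avg_rotational_latency_ms(rpm: int) -> float:
    """Average rotational latency: half a revolution, in milliseconds."""
    seconds_per_rev = 60.0 / rpm
    return seconds_per_rev / 2.0 * 1000.0

# A 5,400 RPM laptop drive waits ~5.56 ms on average; 7,200 RPM cuts that
# to ~4.17 ms, and a 15,000 RPM enterprise drive to 2 ms.
for rpm in (5400, 7200, 10000, 15000):
    print(f"{rpm:>6} RPM -> {avg_rotational_latency_ms(rpm):.2f} ms")
```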
Solid State Drives

In the last several years, many laptop and desktop computers have begun including
solid state drives instead of hard drives. These SSDs use data chips instead of platters, remov-
ing the noise, heat and vibration caused by hard drives. While more expensive than hard
drives, SSDs also run significantly faster than even 10,000 RPM disks. Due to the price dif-
ference, some desktop computers include both a small SSD for frequently-used data and a
large hard drive for greater storage capacity. Because most laptops can't fit two drives, some
models offer hybrid drives, which have a large storage area in addition to a small SSD used
for caching.
Compatibility

While 3.5-inch drives can't fit into most laptops internally, you can still connect them
using an external enclosure, which you plug into your computer via a USB or external SATA
(eSATA) cable. Similarly, you can use 2.5-inch drives on a desktop with an enclosure, or use
a mounting bracket to install them internally. Some desktop computers also include 2.5-inch
drive bays, since most SSDs are of this size.

3.7 Find a table which includes all hard disk space needs for Windows Client and Server
Operating Systems.
Hardware requirements
Installing Windows requires an internal or external optical drive. For operating systems prior
to Vista, the drive must be capable of reading CD media, while from Windows Vista onwards
the drive must be DVD-compatible. The drive may be detached after installing Windows.
A keyboard and mouse are the recommended input devices, though some versions support
a touchscreen.
Windows 9x

Version                           CPU               RAM     Free disk space
Windows 95                        386               4 MB    120 MB
Windows 98                        486DX2, 66 MHz    16 MB   300 MB
Windows Me (Millennium Edition)   Pentium, 150 MHz  32 MB   400 MB

Windows NT

Version                      CPU          RAM (minimum)  RAM (recommended)  Free disk space  Video adapter and monitor
Windows NT 3.51 Workstation  386, 25 MHz  8 MB           ?                  90 MB            ?
Windows NT 4.0 Workstation   486, 33 MHz  12 MB          ?                  110 MB           ?

Version                               CPU                                RAM (minimum)                      RAM (recommended)  Free disk space           Video adapter and monitor
Windows 2000 Professional             133 MHz                            32 MB                              128 MB             650 MB                    VGA (640×480)
Windows XP                            233 MHz                            64 MB                              128 MB             1.5 GB
Windows Fundamentals for Legacy PCs   233 MHz                            64 MB                              128 MB             500 MB
Windows XP Professional x64           700 MHz Itanium[6]                 1 GB[6]                            ?                  6 GB[6]                   SVGA (800×600)
Windows Server 2003                   1 GHz (x86) or 1.4 GHz (x64)       128 MB                             256 MB             ?
Windows Vista                         800 MHz                            384 MB (Starter), 512 MB (others)  ?                  15 GB (~6.5 GB for OS)    For Aero (if applicable): 128 MB VRAM
Windows Server 2008                   1 GHz (x86) or 1.4 GHz (x64)[7]    ?                                  2 GB               10 GB
Windows 7                             1 GHz                              1 GB (x86), 2 GB (x64)             2 GB               16 GB (~6.5 GB for OS)
Windows Server 2012                   1.4 GHz (x86-64)                   512 MB                             1 GB               10 GB
Windows 8                             1 GHz                              1 GB (x86), 2 GB (x64)             4 GB               20 GB (~6.5 GB for OS)    1024×768 for Windows Store apps; 1366×768 to snap apps
Windows 10                            1 GHz or faster processor or SoC   1 GB (x86), 2 GB (x64)             1 GB               16 GB (x86), 20 GB (x64)  800×600
Windows Server 2016                   ?                                  ?                                  ?                  ?

3.8 Explain basic disk, dynamic disk and volume.

Before partitioning a drive or getting information about the partition layout of a drive,
you must first understand the features and limitations of basic and dynamic disk storage
types. For the purposes of this topic, the term volume is used to refer to the concept of a disk
partition formatted with a valid file system, most commonly NTFS, that is used by the Win-
dows operating system to store files. A volume has a Win32 path name, can be enumerated
by the FindFirstVolume and FindNextVolume functions, and usually has a drive letter as-
signed to it, such as Local Disk C. There are two types of disks when referring to storage
types in this context: basic disks and dynamic disks. Note that the storage types discussed
here are not the same as physical disks or partition styles, which are related but separate
concepts. For example, referring to a basic disk does not imply a particular partition style;
the partition style used for the disk under discussion would also need to be specified. For a
simplified description of how a basic disk storage type relates to a physical hard disk, see
Disk Devices and Partitions.

Basic Disks

Basic disks are the storage types most often used with Windows. The term basic
disk refers to a disk that contains partitions, such as primary partitions and logical drives, and
these in turn are usually formatted with a file system to become a volume for file storage.
Basic disks provide a simple storage solution that can accommodate a useful array of chang-
ing storage requirement scenarios. Basic disks also support clustered disks, Institute of Elec-
trical and Electronics Engineers (IEEE) 1394 disks, and universal serial bus (USB) remova-
ble drives. For backward compatibility, basic disks usually use the same Master Boot Record
(MBR) partition style as the disks used by the Microsoft MS-DOS operating system and all
versions of Windows but can also support GUID Partition Table (GPT) partitions on systems
that support it. For more information about MBR and GPT partition styles, see the Partition
Styles section.
You can add more space to existing primary partitions and logical drives by extending
them into adjacent, contiguous unallocated space on the same disk. To extend a basic volume,
it must be formatted with the NTFS file system. You can extend a logical drive within con-
tiguous free space in the extended partition that contains it. If you extend a logical drive be-
yond the free space available in the extended partition, the extended partition grows to con-
tain the logical drive as long as the extended partition is followed by contiguous unallocated
space. For more information, see How Basic Disks and Volumes Work.
The following operations can be performed only on basic disks:

Create and delete primary and extended partitions.
Create and delete logical drives within an extended partition.
Format a partition and mark it as active.

Dynamic Disks

Dynamic disks provide features that basic disks do not, such as the ability to create
volumes that span multiple disks (spanned and striped volumes) and the ability to create
fault-tolerant volumes (mirrored and RAID-5 volumes). Like basic disks, dynamic disks can
use the MBR or GPT partition styles on systems that support both. All volumes on dynamic
disks are known as dynamic volumes. Dynamic disks offer greater flexibility for volume
management because they use a database to track information about dynamic volumes on the
disk and about other dynamic disks in the computer. Because each dynamic disk in a
computer stores a replica of the dynamic disk database, a corrupted database on one dynamic
disk can be repaired by using the database on another dynamic disk. The location of the
database is determined by the partition style of the disk. On MBR partitions, the
database is contained in the last 1 megabyte (MB) of the disk. On GPT partitions, the data-
base is contained in a 1-MB reserved (hidden) partition.
Dynamic disks are a separate form of volume management that allows volumes to
have noncontiguous extents on one or more physical disks. Dynamic disks and volumes rely
on the Logical Disk Manager (LDM) and Virtual Disk Service (VDS) and their associated
features. These features enable you to perform tasks such as converting basic disks into dy-
namic disks, and creating fault-tolerant volumes. To encourage the use of dynamic disks,
multi-partition volume support was removed from basic disks, and is now exclusively sup-
ported on dynamic disks.
The following operations can be performed only on dynamic disks:

Create and delete simple, spanned, striped, mirrored, and RAID-5 volumes.
Extend a simple or spanned volume.
Remove a mirror from a mirrored volume or break the mirrored volume into two volumes.
Repair mirrored or RAID-5 volumes.
Reactivate a missing or offline disk.

Another difference between basic and dynamic disks is that dynamic disk volumes
can be composed of a set of noncontiguous extents on one or multiple physical disks. By con-
trast, a volume on a basic disk consists of one set of contiguous extents on a single disk. Be-
cause of the location and size of the disk space needed by the LDM database, Windows can-
not convert a basic disk to a dynamic disk unless there is at least 1 MB of unused space on
the disk.
Regardless of whether the dynamic disks on a system use the MBR or GPT partition
style, you can create up to 2,000 dynamic volumes on a system, although the recommended
number of dynamic volumes is 32 or less. For details and other considerations about using
dynamic disks and volumes, see Dynamic disks and volumes.
For more features of and usage scenarios for dynamic disks, see What Are Dynamic Disks
and Volumes.

The operations common to basic and dynamic disks are the following:

Support both MBR and GPT partition styles.
Check disk properties, such as capacity, available free space, and current status.
View partition properties, such as offset, length, type, and if the partition can be used as the
system volume at boot.
View volume properties, such as size, drive-letter assignment, label, type, Win32 path name,
partition type, and file system.
Establish drive-letter assignments for disk volumes or partitions, and for CD-ROM devices.
Convert a basic disk to a dynamic disk, or a dynamic disk to a basic disk.

Unless specified otherwise, Windows initially partitions a drive as a basic disk by de-
fault. You must explicitly convert a basic disk to a dynamic disk. However, there are disk
space considerations that must be accounted for before you attempt to do this. For more in-
formation, see How To Convert to Basic and Dynamic Disks in Windows XP Professional.
Partition Styles

Partition style, also sometimes called partition scheme, is a term that refers to the particular
underlying structure of the disk layout: how the partitioning is actually arranged, what the
capabilities are, and what the limitations are. To boot Windows, the
BIOS implementations in x86-based and x64-based computers require a basic disk that must
contain at least one master boot record (MBR) partition marked as active where information
about the Windows operating system (but not necessarily the entire operating system installa-
tion) and where information about the partitions on the disk are stored. This information is
placed in separate places, and these two places may be located in separate partitions or in a
single partition. All other physical disk storage can be set up as various combinations of the
two available partition styles, described in the following sections. For more information about
other system types, see the TechNet topic on partition styles.
Dynamic disks follow slightly different usage scenarios, as previously outlined, and
the way they utilize the two partition styles is affected by that usage. Because dynamic disks
are not generally used to contain system boot volumes, this discussion is simplified to ex-
clude special-case scenarios. For more detailed information about partition data block lay-
outs, and basic or dynamic disk usage scenarios related to partition styles, see How Basic
Disks and Volumes Work and How Dynamic Disks and Volumes Work.

Master Boot Record

All x86-based and x64-based computers running Windows can use the partition style
known as master boot record (MBR). The MBR partition style contains a partition table that
describes where the partitions are located on the disk. Because MBR is the only partition
style available on x86-based computers prior to Windows Server 2003 with Service Pack 1
(SP1), you do not need to choose this style. It is used automatically.
You can create up to four partitions on a basic disk using the MBR partition scheme: either
four primary partitions, or three primary and one extended. The extended partition can con-
tain one or more logical drives. The following figure illustrates an example layout of three
primary partitions and one extended partition on a basic disk using MBR. The extended parti-
tion contains four extended logical drives within it. The extended partition may or may not be
located at the end of the disk, but it is always a single contiguous space for logical drives 1-n.
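The four-entry layout described above can be illustrated by parsing the 16-byte partition slots that sit at byte offset 446 of the first sector, just before the 0x55AA boot signature. A minimal sketch; the `parse_mbr_partitions` helper and the synthetic single-partition image are illustrative:

```python
import struct

def parse_mbr_partitions(sector0: bytes):
    """Return the non-empty entries of the MBR partition table
    (four 16-byte slots at offset 446; 0x55AA signature at offset 510)."""
    assert len(sector0) >= 512 and sector0[510:512] == b"\x55\xaa"
    parts = []
    for i in range(4):
        raw = sector0[446 + 16 * i : 446 + 16 * (i + 1)]
        # byte 0: boot flag, byte 4: partition type,
        # bytes 8-11: starting LBA, bytes 12-15: sector count
        boot, ptype, start_lba, sectors = struct.unpack("<B3xB3xII", raw)
        if ptype != 0:  # type 0x00 marks an unused slot
            parts.append({"active": boot == 0x80, "type": ptype,
                          "start_lba": start_lba, "sectors": sectors})
    return parts

# Synthetic MBR with one active NTFS-type (0x07) partition at LBA 2048.
mbr = bytearray(512)
mbr[510:512] = b"\x55\xaa"
mbr[446:462] = struct.pack("<B3xB3xII", 0x80, 0x07, 2048, 204800)
print(parse_mbr_partitions(bytes(mbr)))
```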

Each partition, whether primary or extended, can be formatted to be a Windows volume,
with a one-to-one correlation of volume-to-partition. In other words, a single partition
cannot contain more than a single volume. In this example, there would be a total of seven
volumes available to Windows for file storage. An unformatted partition is not available for
file storage in Windows.
The dynamic disk MBR layout looks very similar to the basic disk MBR layout, except that
only one primary partition is allowed (referred to as the LDM partition), no extended parti-
tioning is allowed, and there is a hidden partition at the end of the disk for the LDM database.
For more information on the LDM, see the Dynamic Disks section.

GUID Partition Table

Systems running Windows Server 2003 with SP1 and later can use a partition style
known as the globally unique identifier (GUID) partition table (GPT) in addition to the MBR
partition style. A basic disk using the GPT partition style can have up to 128 primary parti-
tions, while dynamic disks will have a single LDM partition as with MBR partitioning. Be-
cause basic disks using GPT partitioning do not limit you to four partitions, you do not need
to create extended partitions or logical drives.
The GPT partition style also has the following properties:

Allows partitions larger than 2 terabytes.
Added reliability from replication and cyclic redundancy check (CRC) protection of the
partition table.
Support for additional partition type GUIDs defined by original equipment manufacturers
(OEMs), independent software vendors (ISVs), and other operating systems.

The GPT partitioning layout for a basic disk is illustrated in the following figure.

The protective MBR area exists on a GPT partition layout for backward compatibility
with disk management utilities that operate on MBR. The GPT header defines the range of
logical block addresses that are usable by partition entries. The GPT header also defines its
location on the disk, its GUID, and a 32-bit cyclic redundancy check (CRC32) checksum that
is used to verify the integrity of the GPT header. Each GUID partition entry begins with a
partition type GUID. The 16-byte partition type GUID, which is similar to a System ID in the
partition table of an MBR disk, identifies the type of data that the partition contains and iden-
tifies how the partition is used, for example if it is a basic disk or a dynamic disk. Note that
each GUID partition entry has a backup copy.
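The CRC32 check described above can be sketched in a few lines: the checksum stored at byte offset 16 of the header is computed over the header bytes with that field zeroed. The helper below and the minimal synthetic 92-byte header are illustrative, not a full GPT implementation:

```python
import struct
import zlib

def gpt_header_crc_ok(header: bytes) -> bool:
    """Verify the CRC32 stored at offset 16 of a GPT header; the checksum
    is computed over the header with the CRC field itself zeroed."""
    header_size = struct.unpack_from("<I", header, 12)[0]
    stored = struct.unpack_from("<I", header, 16)[0]
    zeroed = header[:16] + b"\x00" * 4 + header[20:header_size]
    return (zlib.crc32(zeroed) & 0xFFFFFFFF) == stored

# Minimal synthetic header: signature, revision 1.0, header size, then
# fill in its own checksum (the CRC field is still zero when computed).
hdr = bytearray(92)
hdr[0:8] = b"EFI PART"
hdr[8:12] = struct.pack("<I", 0x00010000)
hdr[12:16] = struct.pack("<I", 92)
hdr[16:20] = struct.pack("<I", zlib.crc32(bytes(hdr)) & 0xFFFFFFFF)
print(gpt_header_crc_ok(bytes(hdr)))  # True
```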
The dynamic disk GPT partition layout looks similar to this basic disk example but, as
stated previously, has only one LDM partition entry rather than the 1-n primary partitions
allowed on basic disks. There is also a hidden LDM database partition with a corresponding
GUID partition entry for it. For more information on the LDM, see the Dynamic Disks
section.

3.9 What are the differences between desktop and server hard disks?
The obvious differences I can see are durability (server hardware is generally built to higher
quality standards and carries a longer warranty) and power consumption (server hardware is
focused more on performance than on power economy). Also, server disks are usually a little
faster, but it seems that is not always the case. Maybe there are other reasons that make you
choose server-oriented series (Seagate ES drives, for example) over desktop-oriented ones
(the Seagate Barracuda series)? What are they?

Time-Limited Error Recovery (TLER):

Time-Limited Error Recovery (TLER) is a name used by Western Digital for a hard
drive feature that allows improved error handling in a RAID environment. In some cases,
there is a conflict as to whether error handling should be undertaken by the hard drive or by
the RAID controller, which leads to drives being marked as unusable and significant perfor-
mance degradation, when this could otherwise have been avoided. Similar technologies are
called Error Recovery Control (ERC), used by competitor Seagate, and Command Comple-
tion Time Limit (CCTL), used by Samsung and Hitachi. This is very important in RAID ar-
rays where one drive can lock up or degrade the array.

According to Intel's Enterprise-class versus Desktop-class Hard Drives, enterprise-class
drives are often faster and more reliable due to better hardware and different firmware.

Better hardware specs:
better mechanics for faster and more reliable data access (stronger actuator magnets,
more servos)
more cache memory
more components for error detection and correction
vibration compensation or reduction to reduce the likelihood of data corruption induced by
moving parts in the server, e.g. rotating fans and spinning disks

Behavior:
Time-limited error recovery (TLER) for more reliable and lower-latency error recovery
and fail-over
End-to-end error detection
Usually fixed rotation speed

Testing:
It's just a guess, but it may be that manufacturers test their enterprise-class disks more
thoroughly than their desktop disks; however, independent testing by external parties is
not entirely conclusive and might even show that desktop drives may be as reliable as
server-grade drives, at least in the initial years of operation (Backblaze study).
The point here is though that there are more and older desktop drives out in the wild and
in active use than enterprise disks, so one could say that we have more and better data
about desktop drive reliability than we have for server disks (Google).

Warranty:
Enterprise-class disks often come with longer warranty than desktop disks. However,
this will probably not matter very much to you if you have sensitive data stored unen-
crypted on your failing disk and do not want to send in your disk to claim the warranty.
Nonetheless, the issue about TLER might be the main reason for choosing server-grade stor-
age solutions over consumer products, especially when operating RAID systems and servers
with time-critical workloads like web or database servers.

Other than that, yes, there is probably some FUD by the manufacturers to make you
feel uneasy about using desktop products for a server, so using enterprise products will also
give you some peace of mind.

Task 04
4.1 What are the desktop RAM types?

SDRAM (Synchronous DRAM)

Almost all systems used to ship with 3.3 volt, 168-pin SDRAM DIMMs. SDRAM is
not an extension of older EDO DRAM but a new type of DRAM altogether. SDRAM started
out running at 66 MHz, while older fast page mode DRAM and EDO max out at 50 MHz.
SDRAM is able to scale to 133 MHz (PC133) officially, and unofficially up to 180 MHz or
higher. As processors get faster, new generations of memory such as DDR and RDRAM are
required to get proper performance.

DDR (Double Data Rate SDRAM)

DDR basically doubles the rate of data transfer of standard SDRAM by transferring
data on the up and down tick of a clock cycle. DDR memory operating at 333MHz actually
operates at 166MHz * 2 (aka PC333 / PC2700) or 133MHz*2 (PC266 / PC2100). DDR is a
2.5 volt technology that uses 184 pins in its DIMMs. It is incompatible with SDRAM physi-
cally, but uses a similar parallel bus, making it easier to implement than RDRAM, which is a
different technology.

Rambus DRAM (RDRAM)

Despite its higher price, Intel has given RDRAM its blessing for the consumer mar-
ket, and it will be the sole choice of memory for Intel's Pentium 4. RDRAM is a serial
memory technology that arrived in three flavors, PC600, PC700, and PC800. PC800
RDRAM has double the maximum throughput of old PC100 SDRAM, but a higher latency.
RDRAM designs with multiple channels, such as those in Pentium 4 motherboards, are cur-
rently at the top of the heap in memory throughput, especially when paired with PC1066
RDRAM memory.

DIMMs vs. RIMMs

DRAM comes in two major form factors: DIMMs and RIMMs. DIMMs are 64-bit
components, but if used in a motherboard with a dual-channel configuration (like with an
Nvidia nForce chipset) you must pair them to get maximum performance. So far there aren't
many DDR chipsets that use dual channels. Typically, if you want to add 512 MB of DIMM
memory to your machine, you just pop in a 512 MB DIMM if you've got an available slot.
DIMMs for SDRAM and DDR are different, and not physically compatible. SDRAM

DIMMs have 168-pins and run at 3.3 volts, while DDR DIMMs have 184-pins and run at 2.5
volts.

RIMMs use only a 16-bit interface but run at higher speeds than DDR. To get maxi-
mum performance, Intel RDRAM chipsets require the use of RIMMs in pairs over a dual-
channel 32-bit interface. You have to plan more when upgrading and purchasing RDRAM.

From the top: SIMM, DIMM and SODIMM memory modules

More about DDR SDRAM

Double data rate synchronous dynamic random-access memory (DDR SDRAM) is a class
of memory integrated circuits used in computers. DDR SDRAM, also called DDR1
SDRAM, has been superseded by DDR2 SDRAM, DDR3 SDRAM and DDR4 SDRAM.
None of its successors are forward or backward compatible with DDR1 SDRAM, meaning
DDR2, DDR3, and DDR4 memory modules will not work in DDR1-equipped motherboards,
and vice versa.

Compared to single data rate (SDR) SDRAM, the DDR SDRAM interface makes
higher transfer rates possible by more strict control of the timing of the electrical data and
clock signals. Implementations often have to use schemes such as phase-locked loops and
self-calibration to reach the required timing accuracy. The interface uses double pumping
(transferring data on both the rising and falling edges of the clock signal) to double data bus
bandwidth without a corresponding increase in clock frequency. One advantage of keeping
the clock frequency down is that it reduces the signal integrity requirements on the circuit
board connecting the memory to the controller. The name "double data rate" refers to the fact

that a DDR SDRAM with a certain clock frequency achieves nearly twice the bandwidth of a
SDR SDRAM running at the same clock frequency, due to this double pumping.

With data being transferred 64 bits at a time, DDR SDRAM gives a transfer rate of
(memory bus clock rate) × 2 (for dual rate) × 64 (number of bits transferred) / 8 (number of
bits/byte). Thus, with a bus frequency of 100 MHz, DDR SDRAM gives a maximum transfer
rate of 1600 MB/s.
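The formula above is easy to turn into a helper (an illustrative sketch, assuming the standard 64-bit module data bus):

```python
def ddr_peak_mb_per_s(bus_clock_mhz: float, bus_width_bits: int = 64) -> float:
    # clock (MHz) x 2 transfers per cycle x bits per transfer / 8 bits per byte
    return bus_clock_mhz * 2 * bus_width_bits / 8

print(ddr_peak_mb_per_s(100))     # 1600.0 -> matches PC-1600
print(ddr_peak_mb_per_s(166.67))  # ~2666.7 -> matches PC-2700
```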

"Beginning in 1996 and concluding in June 2000, JEDEC developed the DDR (Dou-
ble Data Rate) SDRAM specification (JESD79)." JEDEC has set standards for data rates of
DDR SDRAM, divided into two parts. The first specification is for memory chips, and the
second is for memory modules.

Comparison of memory modules for desktop PCs (DIMM).
Comparison of memory modules for portable/mobile PCs (SO-DIMM).

Physical DDR layout


Chips and modules

Standard   Memory clock  Cycle time[4]  I/O bus clock  Data rate  VDDQ (V)   Module   Peak transfer  Timings
name       (MHz)         (ns)           (MHz)          (MT/s)                name     rate (MB/s)    (CL-tRCD-tRP)
DDR-200    100           10             100            200        2.5 ± 0.2  PC-1600  1600
DDR-266    133.33        7.5            133.33         266.67     2.5 ± 0.2  PC-2100  2133.33        2.5-3-3
DDR-333    166.67        6              166.67         333.33     2.5 ± 0.2  PC-2700  2666.67
DDR-400A   200           5              200            400        2.6 ± 0.1  PC-3200  3200           2.5-3-3
DDR-400B   200           5              200            400        2.6 ± 0.1  PC-3200  3200           3-3-3
DDR-400C   200           5              200            400        2.6 ± 0.1  PC-3200  3200           3-4-4

Note: All above listed are specified by JEDEC as JESD79F. All RAM data rates in-between
or above these listed specifications are not standardized by JEDEC; often they are simply
manufacturer optimizations using tighter-tolerance or overvolted chips.

The package sizes in which DDR SDRAM is manufactured are also standardized by JEDEC.

There is no architectural difference between DDR SDRAM designed for different clock
frequencies, for example, PC-1600, designed to run at 100 MHz, and PC-2100, designed
to run at 133 MHz. The number simply designates the data rate at which the chip is
guaranteed to perform, hence DDR SDRAM is guaranteed to run at lower and can possibly
run at higher (overclocking) clock rates than those for which it was made.[6]

DDR SDRAM modules for desktop computers, commonly called DIMMs, have 184
pins (as opposed to 168 pins on SDRAM, or 240 pins on DDR2 SDRAM), and can be differ-
entiated from SDRAM DIMMs by the number of notches (DDR SDRAM has one, SDRAM
has two). DDR SDRAM for notebook computers, SO-DIMMs, have 200 pins, which is the
same number of pins as DDR2 SO-DIMMs. These two specifications are notched very simi-
larly and care must be taken during insertion if unsure of a correct match. Most DDR
SDRAM operates at a voltage of 2.5 V, compared to 3.3 V for SDRAM. This can significant-
ly reduce power consumption. Chips and modules with DDR-400/PC-3200 standard have a
nominal voltage of 2.6 V.

JEDEC Standard No. 21C defines three possible operating voltages for 184 pin
DDR, as identified by the key notch position relative to its centreline. Page 4.5.10-7 defines
2.5V (left), 1.8V (centre), TBD (right), while page 4.20.540 nominates 3.3V for the right
notch position. The orientation of the module for determining the key notch position is with
52 contact positions to the left and 40 contact positions to the right.

Increasing operating voltage slightly can increase maximum speed, at the cost of
higher power dissipation and heating, and at the risk of malfunctioning or damage. Many new
chipsets use these memory types in multi-channel configurations.
1.Chip characteristics;

DRAM density:
Size of the chip is measured in megabits. Most motherboards recognize only 1 GB
modules if they contain 64M×8 chips (low density). If 128M×4 (high density) 1 GB modules
are used, they most likely will not work. The JEDEC standard allows 128M×4 only for
slower buffered/registered modules designed specifically for some servers, but some generic
manufacturers do not comply.

Organization:
The notation like 64M×4 means that the memory matrix has 64 million (the product
of banks × rows × columns) 4-bit storage locations. There are ×4, ×8, and ×16 DDR chips.
The ×4 chips allow the use of advanced error correction features like Chipkill, memory
scrubbing and Intel SDDC in server environments, while the ×8 and ×16 chips are somewhat
less expensive. ×8 chips are mainly used in desktops/notebooks but are making entry into the
server market. There are normally 4 banks and only one row can be active in each bank.
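The organization notation can be turned into a chip density with one multiplication. A minimal Python sketch of this arithmetic (the helper name is ours, not from any library):

```python
# Sketch: a "64Mx4" part has 64 Mi addressable locations of 4 bits each,
# so its density is the product of the two figures.

def chip_density_mbit(locations_mi: int, width_bits: int) -> int:
    """Density in Mibit of a <locations>M x <width> DRAM chip."""
    return locations_mi * width_bits

assert chip_density_mbit(64, 4) == 256   # 64Mx4 -> 256 Mibit
assert chip_density_mbit(64, 8) == 512   # 64Mx8 -> 512 Mibit
assert chip_density_mbit(128, 4) == 512  # 128Mx4 ("high density") -> 512 Mibit
```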

2. Module characteristics:

Ranks:
To increase memory capacity and bandwidth, chips are combined on a module. For
instance, the 64-bit data bus for DIMM requires eight 8-bit chips, addressed in parallel. Mul-
tiple chips with the common address lines are called a memory rank. The term was intro-
duced to avoid confusion with chip internal rows and banks. A memory module may bear
more than one rank. The term sides would also be confusing because it incorrectly suggests
the physical placement of chips on the module. All ranks are connected to the same memory
bus (address+data). The Chip Select signal is used to issue commands to a specific rank.

Adding modules to the single memory bus creates additional electrical load on its
drivers. To mitigate the resulting bus signaling rate drop and overcome the memory bottle-
neck, new chipsets employ the multi-channel architecture.

Capacity:

Number of DRAM Devices - The number of chips is a multiple of 8 for non-ECC modules
and a multiple of 9 for ECC modules. Chips can occupy one side (single sided) or both sides
(dual sided) of the module. The maximum number of chips per DDR module is 36 (9×4) for
ECC and 32 (8×4) for non-ECC.

ECC vs non-ECC - Modules that have error correcting code are labeled as ECC. Modules
without error correcting code are labeled non-ECC.
Timings - CAS latency (CL), clock cycle time (tCK), row cycle time (tRC), refresh row cycle
time (tRFC), row active time (tRAS).
Power consumption:
A test with DDR and DDR2 RAM in 2005 found that average power consumption
appeared to be of the order of 1-3W per 512MB module; this increases with clock rate, and
when in use rather than idling.[8] A manufacturer has produced calculators to estimate the
power used by various types of RAM.[9] Module and chip characteristics are inherently
linked.

Total module capacity is a product of one chip's capacity by the number of chips.
ECC modules multiply it by 8/9 because they use one bit per byte for error correction. A
module of any particular size can therefore be assembled either from 32 small chips (36 for
ECC memory), or 16 (18) or 8 (9) bigger ones.

DDR memory bus width per channel is 64 bits (72 for ECC memory). Total module
bit width is a product of bits per chip by number of chips. It also equals number of ranks
(rows) multiplied by the DDR memory bus width. Consequently, a module with a greater number
of chips, or one using ×8 chips instead of ×4, will have more ranks.
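The rank arithmetic described here is a single division. A minimal sketch, using the chip counts quoted in this section (the function name is illustrative):

```python
# Sketch: ranks = total module data width / bus width per channel
# (64 bits, or 72 bits with ECC).

def ranks(num_chips: int, bits_per_chip: int, ecc: bool = False) -> int:
    bus_width = 72 if ecc else 64
    total_width = num_chips * bits_per_chip
    assert total_width % bus_width == 0, "chips do not fill whole ranks"
    return total_width // bus_width

# The three 1 GB registered ECC variations tabulated below all check out:
assert ranks(36, 4, ecc=True) == 2  # 36 x (64Mx4)  -> 2 ranks
assert ranks(18, 8, ecc=True) == 2  # 18 x (64Mx8)  -> 2 ranks
assert ranks(18, 4, ecc=True) == 1  # 18 x (128Mx4) -> 1 rank
```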

Example: Variations of 1 GB PC2100 Registered DDR SDRAM module with ECC


Module size (GB) | Number of chips | Chip size (Mbit) | Chip organization | Number of ranks
1 | 36 | 256 | 64M×4 | 2
1 | 18 | 512 | 64M×8 | 2
1 | 18 | 512 | 128M×4 | 1

This example compares different real-world server memory modules with a common
size of 1 GB. One should be careful when buying 1 GB memory modules, because all
these variations can be sold at the same price without stating whether they are ×4 or
×8, single or dual ranked.

There is a common belief that number of module ranks equals number of sides. As
above data shows, this is not true. One can also find 2-side/1-rank modules. One can even
imagine a 1-side/2-rank memory module having 16 (18) chips on a single side, ×8 each, but it's
unlikely such a module was ever produced.

Organization

PC3200 is DDR SDRAM designed to operate at 200 MHz using DDR-400 chips with
a bandwidth of 3,200 MB/s. Because PC3200 memory transfers data on both the rising and
falling clock edges, its effective clock rate is 400 MHz. 1 GB PC3200 non-ECC modules are
usually made with sixteen 512 Mbit chips, eight on each side: (512 Mbit × 16 chips) / (8 bits
per byte) = 1,024 MB. The individual chips making up a 1 GB memory module are usually
organized as 2^26 eight-bit words, commonly expressed as 64M×8. Memory manufactured in
this way is low-density RAM and will usually be compatible with any motherboard specifying
PC3200 DDR-400 memory.
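The PC3200 figures quoted above can be checked with simple arithmetic; a small sketch:

```python
# Sketch: the 1 GB PC3200 figures from the paragraph above.

# Capacity: sixteen 512 Mibit chips, 8 bits per byte.
capacity_mib = 512 * 16 // 8
assert capacity_mib == 1024  # 1 GiB module

# Bandwidth: 200 MHz clock x 2 transfers/clock x 8 bytes per transfer.
bandwidth_mb_s = 200 * 2 * 8
assert bandwidth_mb_s == 3200  # "PC3200" = 3,200 MB/s
```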
High density RAM

In the context of the 1 GB non-ECC PC3200 SDRAM module, there is very little vis-
ually to differentiate low density from high density RAM. High density DDR RAM modules
will, like their low density counterparts, usually be double-sided with eight 512 Mbit chips
per side. The difference is that each chip, instead of being organized as 64M×8, is organized
as 2^27 four-bit words, or 128M×4.

High density memory modules are assembled using chips from multiple manufacturers.
These chips come in both the familiar 22 × 10 mm (approx.) TSOP2 and the smaller, squarer
12 × 9 mm (approx.) FBGA package sizes. High density chips can be identified by the
numbers on each chip.

High density RAM devices were designed to be used in registered memory modules
for servers. JEDEC standards do not apply to high-density DDR RAM in desktop
implementations. JEDEC's technical documentation, however, supports 128M×4
semiconductors, which contradicts 128M×4 being classified as high density. As such, high density
is a relative term, which can be used to describe memory which is not supported by a particular
motherboard's memory controller.

Variations

DDR SDRAM Standard | Bus clock (MHz) | Internal rate (MHz) | Prefetch (min burst) | Transfer rate (MT/s) | Voltage | DIMM pins | SO-DIMM pins | MicroDIMM pins
DDR | 100–200 | 100–200 | 2n | 200–400 | 2.5/2.6 | 184 | 200 | 172
DDR2 | 200–533.33 | 100–266.67 | 4n | 400–1066.67 | 1.8 | 240 | 200 | 214
DDR3 | 400–1066.67 | 100–266.67 | 8n | 800–2133.33 | 1.5/1.35 | 240 | 204 | 214
DDR4 | 1066.67–2133.33 | 133.33–266.67 | 8n | 2133.33–4266.67 | 1.05/1.2 | 288 | 256 | —
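The columns of the table above are related: each generation doubles the prefetch depth, the I/O bus clock equals the internal clock times half the prefetch, and the transfer rate is twice the bus clock. A sketch of that relationship (the helper name is ours):

```python
# Sketch: transfer rate from internal clock and prefetch depth.
# bus clock = internal clock * prefetch / 2; rate (MT/s) = 2 * bus clock.

def transfer_rate(internal_mhz: float, prefetch: int) -> float:
    bus_clock = internal_mhz * prefetch / 2
    return bus_clock * 2  # double data rate: two transfers per bus clock

assert transfer_rate(200, 2) == 400                   # DDR-400
assert abs(transfer_rate(266.67, 4) - 1066.67) < 0.1  # fastest DDR2
assert abs(transfer_rate(266.67, 8) - 2133.33) < 0.1  # fastest DDR3
```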

DDR (DDR1) was superseded by DDR2 SDRAM, which had modifications for high-
er clock frequency and again doubled throughput, but operates on the same principle as DDR.
Competing with DDR2 was Rambus XDR DRAM. DDR2 dominated due to cost and support
factors. DDR2 was in turn superseded by DDR3 SDRAM which offered higher performance
for increased bus speeds and new features. DDR3 has been superseded by DDR4 SDRAM,
which was first produced in 2011 and whose standards were still in flux (2012) with signifi-
cant architectural changes.

DDR's prefetch buffer depth is 2 (bits), while DDR2 uses 4. Although the effective
clock rates of DDR2 are higher than DDR, the overall performance was no greater in the early
implementations, primarily due to the high latencies of the first DDR2 modules. DDR2
started to be effective by the end of 2004, as modules with lower latencies became available.[11]

Memory manufacturers stated that it was impractical to mass-produce DDR1 memory


with effective transfer rates in excess of 400 MHz (i.e. 400 MT/s and 200 MHz external
clock) due to internal speed limitations. DDR2 picks up where DDR1 leaves off, utilizing in-
ternal clock rates similar to DDR1, but is available at effective transfer rates of 400 MHz and
higher. DDR3 advances extended the ability to preserve internal clock rates while providing
higher effective transfer rates by again doubling the prefetch depth.

RDRAM was a particularly expensive alternative to DDR SDRAM, and most manufacturers
dropped its support from their chipsets. DDR1 memory's prices substantially increased
since Q2 2008, while DDR2 prices declined. In January 2009, 1 GB DDR1 was 2–3
times more expensive than 1 GB DDR2. High density DDR RAM will suit about 10% of PC
motherboards on the market while low density will suit almost all motherboards on the PC
Desktop market.

MDDR

MDDR is an acronym that some enterprises use for Mobile DDR SDRAM, a type of
memory used in some portable electronic devices, like mobile phones, handhelds, and digital
audio players. Through techniques including reduced voltage supply and advanced refresh
options, Mobile DDR can achieve greater power efficiency.

4.2 Find difference between RAM types using Images.

There are many different types of RAM which have appeared over the years and it is
often difficult knowing the difference between them both performance wise and visually
identifying them. This article tells a little about each RAM type, what it looks like and how it
performs.

FPM RAM

FPM RAM, which stands for Fast Page Mode RAM is a type of Dynamic RAM
(DRAM). The term "Fast Page Mode" comes from the capability of memory being able to
access data that is on the same page and can be done with less latency. Most 486 and Pentium
based systems from 1995 and earlier use FPM Memory.

FPM RAM

EDO RAM

EDO RAM, which stands for "Extended Data Out RAM" came out in 1995 as a new
type of memory available for Pentium based systems. EDO is a modified form of FPM RAM
which is commonly referred to as "Hyper Page Mode". Extended Data Out refers to the fact that
the data output drivers on the memory module are not switched off when the memory controller
removes the column address to begin the next cycle, unlike FPM RAM. Most early
Pentium based systems use EDO.

EDO RAM

SDRAM

SDRAM, which is short for Synchronous DRAM, is a type of DRAM that runs in synchronization
with the memory bus. Beginning in 1996, most Intel based chipsets began to
support SDRAM, which made it a popular choice for new systems by 2001.
SDRAM is capable of running at 133 MHz, which is about three times faster than FPM RAM
and twice as fast as EDO RAM. Most Pentium or Celeron systems purchased in 1999 have
SDRAM.

SD RAM

DDR RAM

DDR RAM, which stands for "Double Data Rate", is a type of SDRAM that first appeared
on the market around 2000 but didn't catch on until about 2001, when mainstream
motherboards started supporting it. The difference between SDRAM and DDR RAM
is that instead of doubling the clock rate, DDR transfers data twice per clock cycle, which
effectively doubles the data rate. DDR RAM has become mainstream in the graphics card market
and has become the memory standard.

DDR RAM

DDR2 RAM

DDR2 RAM, which stands for "Double Data Rate 2", is a newer version of DDR
which is twice as fast as the original DDR RAM. DDR2 RAM came out in mid-2003, and the
first chipsets that supported DDR2 came out in mid-2004. DDR2 is still double data rate, just
like the original DDR; however, DDR2 RAM has modified signaling, which enables higher
speeds to be achieved with more immunity to signal noise and cross-talk between signals.

DDR2 RAM

RAMBUS (RIMM) RAM

RAMBUS RDRAM is a type of RAM of its own; it came out in 1999 and was developed from
traditional DRAM, but its architecture is totally new. The RAMBUS design gives smarter access
to the RAM, meaning that units can prefetch data and free up some CPU work. The idea behind
RAMBUS RAM is to get small packets of data from the RAM, but at very high clock
speeds. For example, SDRAM can get 64 bits of information at 100 MHz, where RAMBUS
RAM would get 16 bits of data at 800 MHz. RIMM RAM was generally unsuccessful, as Intel
had a lot of problems with the RAM timing or signal noise. RDRAM did make an appearance
in the Sony PlayStation 2 and the Nintendo 64 game consoles.

RD RAM

Memory Module and Bus Standards/Bandwidth

Module Standard | Module Format | Chip Type | Clock Speed (MHz) | Cycles per Clock | Bus Speed (MT/s) | Bus Width (Bytes) | Transfer Rate (MBps)
FPM | SIMM | 60ns | 22 | 1 | 22 | 8 | 177
EDO | SIMM | 60ns | 33 | 1 | 33 | 8 | 266
PC66 | SDR DIMM | 10ns | 66 | 1 | 66 | 8 | 533
PC100 | SDR DIMM | 8ns | 100 | 1 | 100 | 8 | 800
PC133 | SDR DIMM | 7/7.5ns | 133 | 1 | 133 | 8 | 1,066
PC1600 | DDR DIMM | DDR200 | 100 | 2 | 200 | 8 | 1,600
PC2100 | DDR DIMM | DDR266 | 133 | 2 | 266 | 8 | 2,133
PC2400 | DDR DIMM | DDR300 | 150 | 2 | 300 | 8 | 2,400
PC2700 | DDR DIMM | DDR333 | 166 | 2 | 333 | 8 | 2,667
PC3000 | DDR DIMM | DDR366 | 183 | 2 | 366 | 8 | 2,933
PC3200 | DDR DIMM | DDR400 | 200 | 2 | 400 | 8 | 3,200
PC3500 | DDR DIMM | DDR433 | 216 | 2 | 433 | 8 | 3,466
PC3700 | DDR DIMM | DDR466 | 233 | 2 | 466 | 8 | 3,733
PC4000 | DDR DIMM | DDR500 | 250 | 2 | 500 | 8 | 4,000
PC4200 | DDR DIMM | DDR533 | 266 | 2 | 533 | 8 | 4,266
PC2-3200 | DDR2 DIMM | DDR2-400 | 200 | 2 | 400 | 8 | 3,200
PC2-4200 | DDR2 DIMM | DDR2-533 | 266 | 2 | 533 | 8 | 4,266
PC2-5300 | DDR2 DIMM | DDR2-667 | 333 | 2 | 667 | 8 | 5,333
PC2-6000 | DDR2 DIMM | DDR2-750 | 375 | 2 | 750 | 8 | 6,000
PC2-6400 | DDR2 DIMM | DDR2-800 | 400 | 2 | 800 | 8 | 6,400
PC2-7200 | DDR2 DIMM | DDR2-900 | 450 | 2 | 900 | 8 | 7,200
PC2-8000 | DDR2 DIMM | DDR2-1000 | 500 | 2 | 1000 | 8 | 8,000
RIMM1200 | RIMM-16 | PC600 | 300 | 2 | 600 | 2 | 1,200
RIMM1400 | RIMM-16 | PC700 | 350 | 2 | 700 | 2 | 1,400
RIMM1600 | RIMM-16 | PC800 | 400 | 2 | 800 | 2 | 1,600
RIMM2100 | RIMM-16 | PC1066 | 533 | 2 | 1066 | 2 | 2,133
RIMM2400 | RIMM-16 | PC1200 | 600 | 2 | 1200 | 2 | 2,400
RIMM3200 | RIMM-32 | PC800 | 400 | 2 | 800 | 4 | 3,200
RIMM4200 | RIMM-32 | PC1066 | 533 | 2 | 1066 | 4 | 4,266
RIMM4800 | RIMM-32 | PC1200 | 600 | 2 | 1200 | 4 | 4,800
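The Transfer Rate column in the table above is simply clock speed × cycles per clock × bus width. A sketch checking a few rows where the listed clock is exact (rows quoting rounded clocks such as 22, 33, 133 or 166 MHz differ by a small rounding error and are omitted):

```python
# Sketch: recompute the Transfer Rate column from the other columns.
# rate (MBps) = clock (MHz) * cycles per clock * bus width (bytes).

rows = [
    # (standard, clock MHz, cycles/clock, bus width bytes, quoted MBps)
    ("PC100",    100, 1, 8,  800),
    ("PC1600",   100, 2, 8, 1600),
    ("PC2-6400", 400, 2, 8, 6400),
    ("RIMM1600", 400, 2, 2, 1600),
    ("RIMM3200", 400, 2, 4, 3200),
]
for name, clock, cycles, width, quoted in rows:
    assert clock * cycles * width == quoted, name
```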

4.3 What are the Laptop RAM types? Find different Laptop RAM images.

Laptop memory comes in several shapes and sizes. For sorting purposes, memory
can first be grouped by physical size, which identifies the physical characteristics of the
RAM. It can then be further separated by the location of the rejection (keying) slot on the module,
which differs depending on the operating voltage and other characteristics of the RAM.
If the RAM is not labelled with details about its speed, the slot position and voltage
can be used as an alternative means of identification. Once grouped, the remaining task
is to sort by megabyte or gigabyte capacity. As with most laptop components, a close fit is not enough:
modifying components to fit will create problems or may permanently damage the laptop, and
RAM is no exception. The different types of laptop RAM are identified below.

Laptops rarely use the standard SIMM and DIMM memory modules used by non-
portable machines. Instead, you may find:

MODULE | CONFIGURATION | DEFINITION | SIZE | IMAGE
72 PIN SODIMM (2.35" wide) | 32-bit transfer, 3.3V | Small Outline Dual In-line Memory Module | 128–512 MB | —
144 PIN SODIMM (2.66" wide) | 32-bit transfer, 3.3V; SO-DIMMs have a single notch near (but not at) the center | 144-pin Small Outline Dual In-line Memory Module | 128–512 MB | 144-pin SO-DIMM, 4 chip
PC100 SDRAM | 100 MHz, 800 MB/s transfer; modules operate at 3.3V or 5V | 168-pin Synchronous Dynamic RAM | 128–512 MB | —
PC133 SDRAM | 133 MHz, 1.1 GB/s transfer; modules operate at 3.3V or 5V | 168-pin Synchronous Dynamic RAM | 128–512 MB | —
PC2100 DDR1 SDRAM | DDR 266 MHz, 2.5V | Dual Data Rate Synchronous RAM | 256 MB – 1 GB | SDRAM 2100, 8 chip
PC2700 DDR1 SDRAM | DDR 333 MHz, 2.5V | Dual Data Rate Synchronous RAM | 256–512 MB | —
PC3200 DDR1 SDRAM | 400 MHz, 2.5V; the rejection slot position changes because the voltage difference between DDR1 and DDR2 prevents interchangeability | Dual Data Rate Synchronous RAM | 512 MB – 1 GB | DDR1 SDRAM 3200, 16 chip
PC3200* DDR2 SDRAM | DDR2-400; memory clock 100 MHz, bus clock 200 MHz, 400,000,000 data transfers/sec; 200-pin, 1.8V | Dual Data Rate Synchronous RAM | 512 MB – 1 GB | —
PC4200* DDR2 SDRAM | DDR2-533; memory clock 133 MHz, bus clock 266 MHz, 533,000,000 data transfers/sec; 200-pin SODIMM, 1.8V | Dual Data Rate Synchronous RAM | 512 MB – 2 GB | DDR2 SDRAM 4200, 8 chip
PC5300* DDR2 SDRAM | DDR2-667; memory clock 166 MHz, bus clock 333 MHz, 667,000,000 data transfers/sec; 200-pin SODIMM, 1.8V | Dual Data Rate Synchronous RAM | 512 MB – 4 GB | DDR2 SDRAM 5300, 8 chip
PC6400* DDR2 SDRAM | DDR2-800; memory clock 200 MHz, bus clock 400 MHz, 800,000,000 data transfers/sec; 200-pin SODIMM, 1.8V | Dual Data Rate Synchronous RAM | 1–4 GB | DDR2 SDRAM 6400, 16 chip
PC8500* DDR3 SDRAM | DDR3-1066; 204-pin SODIMM, 1.5V | Dual Data Rate Synchronous RAM | 1–4 GB | DDR3 SDRAM 8500, 8 chip
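The DDR2 figures in the table above follow a fixed pattern: the bus clock is twice the memory (core) clock, and data transfers per second are twice the bus clock again. A minimal sketch (the function name is ours):

```python
# Sketch: DDR2 clocking. The I/O bus runs at twice the memory (core)
# clock, and data moves on both edges of the bus clock.

def ddr2_transfers_per_sec(memory_clock_mhz: int) -> int:
    bus_clock_mhz = memory_clock_mhz * 2
    return bus_clock_mhz * 2 * 1_000_000

assert ddr2_transfers_per_sec(100) == 400_000_000  # PC2-3200 / DDR2-400
assert ddr2_transfers_per_sec(200) == 800_000_000  # PC2-6400 / DDR2-800
```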

4.4 Compare Intel Core 2 Duo CPU & Core 2 Quad CPU. Find suitable
technical table.
Intel Core2 Quad Intel Core2 Duo
Processor Name Processor Processor
Q9550S (12M Cache, 2.83 E7600 (3M Cache, 3.06
GHz, 1333 MHz FSB) GHz, 1066 MHz FSB)
Code Name Yorkfield Wolfdale
Essentials
Processor Number Q9550S E7600
Status End of Interactive Support End of Interactive Support
Launch Date Q1'09 Q2'09
Lithography 45 nm 45 nm
Recommended Customer Price $340.00 $147.00
Performance
Number of Cores 4 2
Processor Base Frequency 2.83 GHz 3.06 GHz
Cache 12 MB L2 3 MB L2
Bus Speed 1333 MHz FSB 1066 MHz FSB
FSB Parity Yes No
TDP 65 W 65 W
VID Voltage Range 0.8500V-1.3625V 0.8500V-1.3625V
Supplemental Information
Embedded Options Available No No
Datasheet Link Link
Package Specifications
Sockets Supported LGA775 LGA775
TCASE 76.3°C 74.1°C
Package Size 37.5mm x 37.5mm 37.5mm x 37.5mm
Processing Die Size 214 mm² 82 mm²
Number of Processing Die Transistors 820 million 228 million
Low Halogen Options Available See MDDS See MDDS
Advanced Technologies
Intel Turbo Boost Technology No No
Intel Hyper-Threading Technolo- No No
gy
Intel Virtualization Technology Yes Yes
(VT-x)

Intel Virtualization Technology Yes
for Directed I/O (VT-d)
Intel 64 Yes Yes
Instruction Set 64-bit 64-bit
Idle States Yes Yes
Enhanced Intel SpeedStep Tech- Yes Yes
nology
Intel Demand Based Switching No No
Thermal Monitoring Technologies Yes Yes
Intel Data Protection Technology
Intel AES New Instructions No No
Intel Platform Protection Technology
Trusted Execution Technology Yes No
Execute Disable Bit Yes Yes

4.5 Compare Intel Core i3 first generation CPU & Core i7 first generation
CPU. Include technical table.
Intel Core i7-980 First Intel Core i3-540 First
Processor Name Generation Processor Generation Processor
(12M Cache, 3.33 GHz,) (4M Cache, 3.06 GHz)
Code Name Gulftown Clarkdale
Essentials;
Processor Number i7-980 i3-540
Status End of Interactive Support End of Interactive Support
Launch Date Q2'11 Q1'10
Lithography 32 nm 32 nm
Recommended Customer Price $594.00 $109.00 - $117.00
Performance;
Number of Cores 6 2
Number of Threads 12 4
Processor Base Frequency 3.33 GHz 3.06 GHz
Max Turbo Frequency 3.60 GHz
Cache 12 MB SmartCache 4 MB SmartCache
Bus Speed 4.8 GT/s QPI 2.5 GT/s DMI
Number of QPI Links 1
TDP 130 W 73 W
VID Voltage Range 0.800V-1.300V 0.6500V-1.4000V
Supplemental Information;
Embedded Options Available No Yes
Datasheet Link Link
Conflict Free Yes
Memory Specifications;
Max Memory Size (dependent on 24 GB 16.38 GB
memory type)
Memory Types DDR3 1066 DDR3 1066/1333
Max number of Memory Channels 3 2
Max Memory Bandwidth 21 GB/s
Physical Address Extensions 36-bit
Package Specifications;
Sockets Supported FCLGA1366 FCLGA1156
Max CPU Configuration 1 1
TCASE 68.8°C 72.6°C
Package Size 45mm x 42.5mm 37.5mm x 37.5mm
Processing Die Size 239 mm² 81 mm²

Low Halogen Options Available See MDDS See MDDS
Number of Processing Die Transis- 382 million
tors
Advanced Technologies;
Intel Turbo Boost Technology 1.0 No
Intel Hyper-Threading Technolo- Yes Yes
gy
Intel Virtualization Technology Yes Yes
(VT-x)
Intel VT-x with Extended Page Yes Yes
Tables (EPT)
Intel 64 Yes Yes
Instruction Set 64-bit 64-bit
Instruction Set Extensions SSE4.2 SSE4.2
Idle States Yes Yes
Enhanced Intel SpeedStep Tech- Yes Yes
nology
Intel Demand Based Switching No No
Thermal Monitoring Technologies No No
Intel vPro Technology No
Intel Data Protection Technology;
Intel AES New Instructions Yes No
Intel Platform Protection Technology;
Trusted Execution Technology No No
Execute Disable Bit Yes Yes
Graphics Specifications;
Processor Graphics Intel HD Graphics
Graphics Base Frequency 733.00 MHz
Intel Flexible Display Interface Yes
(Intel FDI)
Intel Clear Video HD Technology Yes
Number of Displays Supported 2
Expansion Options;
PCI Express Revision 2.0
PCI Express Configurations 1x16, 2x8
Max number of PCI Express Lanes 16

4.6 Compare Intel Core i5 seventh generation CPU & Core i7 seventh gen-
eration CPU. Include technical table.
Intel Core i7-7600U Intel Core i5-7Y57
Processor Name Seventh Generation Seventh Generation
Processor Processor
(4M Cache, up to 3.90GHz) (4M Cache, up to 3.30 GHz)
Code Name Kaby Lake Kaby Lake
Essentials;
Processor Number i7-7600U i5-7Y57
Status Launched Launched
Launch Date Q1'17 Q1'17
Lithography 14 nm 14 nm
Recommended Customer Price $393.00 $281.00
Performance;
Number of Cores 2 2
Number of Threads 4 4
Processor Base Frequency 2.80 GHz 1.20 GHz
Max Turbo Frequency 3.90 GHz 3.30 GHz
Cache 4 MB SmartCache 4 MB SmartCache
Bus Speed 4 GT/s OPI 4 GT/s OPI
TDP 15 W 4.5 W
Configurable TDP-up Frequency 2.90 GHz 1.60 GHz
Configurable TDP-up 25 W 7W
Configurable TDP-down Frequency 800.00 MHz 600.00 MHz
Configurable TDP-down 7.5 W 3.5 W
Supplemental Information;
Embedded Options Available Yes No
Datasheet Link Link
Conflict Free Yes Yes
Memory Specifications;
Max Memory Size (dependent on 32 GB 16 GB
memory type)
Memory Types DDR4-2133, LPDDR3- LPDDR3-1866, DDR3L-
1866, DDR3L-1600 1600
Max number of Memory Channels 2 2
Max Memory Bandwidth 34.1 GB/s 29.8 GB/s
ECC Memory Supported No No
Graphics Specifications;
Processor Graphics Intel HD Graphics 620 Intel HD Graphics 615
Graphics Base Frequency 300.00 MHz 300.00 MHz
Graphics Max Dynamic Frequency 1.15 GHz 950.00 MHz
Graphics Video Max Memory 32 GB 16 GB
Graphics Output eDP/DP/HDMI/DVI eDP/DP/HDMI/DVI
4K Support Yes, at 60Hz Yes, at 60Hz
Max Resolution (HDMI 1.4) 4096x2304@24Hz 4096x2304@24Hz
Max Resolution (DP) 4096x2304@60Hz 3840x2160@60Hz
Max Resolution (eDP - Integrated 4096x2304@60Hz 3840x2160@60Hz
Flat Panel)
DirectX* Support 12 12
OpenGL* Support 4.4 4.4
Intel Quick Sync Video Yes Yes
Intel Clear Video HD Technology Yes Yes
Intel Clear Video Technology Yes Yes
Number of Displays Supported 3 3
Device ID 0x5916 0x591E
Expansion Options;
PCI Express Revision 3.0 3.0
PCI Express Configurations 1x4, 2x2, 1x2+2x1 and 4x1 1x4, 2x2, 1x2+2x1 and 4x1
Max number of PCI Express Lanes 12 10
Package Specifications;
Sockets Supported FCBGA1356 FCBGA1515
Max CPU Configuration 1 1
TJUNCTION 100°C 100°C
Package Size 42mm X 24mm 20mm X 16.5mm
Low Halogen Options Available See MDDS See MDDS
Advanced Technologies;
Intel Speed Shift Technology Yes Yes
Intel Turbo Boost Technology 2.0 2.0
Intel vPro Technology Yes Yes
Intel Hyper-Threading Technology Yes Yes
Intel Virtualization Technology Yes Yes
(VT-x)
Intel Virtualization Technology for Yes Yes
Directed I/O (VT-d)
Intel VT-x with Extended Page Yes Yes
Tables (EPT)
Intel TSX-NI Yes Yes
Intel 64 Yes Yes
Instruction Set 64-bit 64-bit
Instruction Set Extensions SSE4.1/4.2, AVX 2.0 SSE4.1/4.2, AVX 2.0

Idle States Yes Yes
Enhanced Intel SpeedStep Tech- Yes Yes
nology
Thermal Monitoring Technologies Yes Yes
Intel Flex Memory Access Yes Yes
Intel Stable Image Platform Pro- Yes Yes
gram (SIPP)
Intel Smart Response Technology Yes Yes
Intel My WiFi Technology Yes Yes
Intel Data Protection Technology;
Intel AES New Instructions Yes Yes
Secure Key Yes Yes
Intel Software Guard Extensions Yes Yes
(Intel SGX)
Intel Memory Protection Exten- Yes Yes
sions (Intel MPX)
Intel Platform Protection Technology;
Trusted Execution Technology Yes Yes
Execute Disable Bit Yes Yes
OS Guard Yes Yes
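The "Max Memory Bandwidth" rows in the table above follow directly from the memory specifications: transfer rate (MT/s) × 8 bytes per 64-bit transfer × number of channels. A sketch (the helper name is ours):

```python
# Sketch: peak memory bandwidth from transfer rate and channel count.
# Each transfer moves 8 bytes over a 64-bit channel.

def max_bandwidth_gb_s(transfers_mt_s: float, channels: int) -> float:
    return transfers_mt_s * 8 * channels / 1000  # decimal GB/s

# i7-7600U: dual-channel DDR4-2133 -> 34.1 GB/s
assert round(max_bandwidth_gb_s(2133.33, 2), 1) == 34.1
# i5-7Y57: dual-channel LPDDR3-1866 -> approximately 29.8 GB/s
assert abs(max_bandwidth_gb_s(1866, 2) - 29.8) < 0.1
```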

4.7 What are the Laptop CPU types? Give a detailed table.

Core i7
Notable Features: (1) Hyper-Threading (2) Turbo Boost (3) QuickPath InterConnect (4) Tri-Gate (3D) Transistors (5) Intel HD Graphics (6) 64-bit
Commentary: The Intel Core i7 represents the company's most feature-robust processor offering. They are Intel's flagship series of processor, achieving the greatest levels of relative performance. As an excellent all-around processor, the i7 is ideal for enthusiasts, gamers, power users and content creators alike. They are available for both desktop and notebook platforms. The current generation of i7 (as well as i3 and i5) processors is Ivy Bridge as of mid-2012.

Core i5
Notable Features: (1) Hyper-Threading (on i5 Mobile Dual-Core only, not available on the Quad-Core desktop version) (2) Turbo Boost (3) QuickPath InterConnect (4) Tri-Gate (3D) Transistors (5) Intel HD Graphics (6) 64-bit
Commentary: The Intel Core i5 is a class of high-performance processor just a notch beneath the i7. Though they generally possess the same features as the i7 with some exceptions (see Features), they have less cache (L3) memory, which amounts to similar but lesser all-around performance. Like the i7 and i3, the i5 features Intel's high-performance integrated graphics in the HD 3000/4000. Most users will find the general level of performance offered by the i5 to be an attractive option compared to a more expensive i7-equipped system.

Core i3
Notable Features: (1) Hyper-Threading (2) QuickPath InterConnect (3) Tri-Gate (3D) Transistors (4) Intel HD Graphics 3000 (5) 64-bit
Commentary: The Intel Core i3 processor is the closest successor to the now out-of-production Core2Duo processor. The most significant differences between the i3 and the i5/i7 are the lack of Turbo Boost and less cache (L3) memory. The i3 offers moderate all-around performance and is often found in budget-oriented systems.

Pentium (Post-2009)
Notable Features: Hyper-Threading (however, most currently do not support this feature)
Commentary: The Intel Pentium as a product line built a strong reputation with consumers from the 90's through the early 2000s with the Pentium I/II/III/4 series. Formerly a flagship line of processor, the Pentium is currently in production as a budget-oriented option just above the Celeron in terms of relative performance. The most recent iteration of the Pentium takes some architectural cues from the Core i series, with the 2011 Pentium based on Sandy Bridge, offering performance suitable for most basic tasks.

Celeron (Post-2010)
Notable Features: 64-bit
Commentary: Throughout its many iterations, the Intel Celeron has occupied the lower end of the processor market in terms of both price and performance. Updates to the Celeron based on current-generation architecture have been made to keep the processor relevant. The improvements are enough that they allow for running current productivity packages and web applications. They are best considered for an entry-level system.

Atom
Notable Features: (1) Hyper-Threading (2) 64-bit
Commentary: The Intel Atom belongs almost exclusively to a class of personal computers known as netbooks (nettops and tablets are the less common instances). The Atom is focused not so much on performance as on reducing power consumption. As a result, many netbooks offer excellent battery life at the cost of being unable to run more sophisticated applications beyond web browsing and word processing. Generally speaking, netbook processors such as the Atom do not see substantial performance gains with subsequent generations.

Notable Product Commentary


Features

FX (1) HyperTransport Available exclusively on desktop platforms, AMD FX


(2) Integrated DRAM Con- targets custom builders and enthusiasts. This is a pro-
troller with AMD Memory cessor that far surpasses the needs of the average user.
Optimizer However, given the amount of performance it provides
(2) AMD Turbo CORE combined with the relative low cost, it becomes an at-
(3) AMD Virtualization tractive option for budget custom PC builds. The FX
(4) AMD PowerNow! along with the A-Series, represent AMD's current flag-
(Cool'n'Quiet) ship products and later releases within these product
lines are planned.
A-Series (Fusion) DirectX 11 Capable Graphics The AMD A-Series (AMD Fusion) are a type of chip
that merges the CPU with a high-performance GPU
(graphics processing unit) resulting in a versatile sys-
tem that is very power efficient. They are available in
desktops, laptops and most recently, ultrabooks. Where
the A4 APU is found in less expensive, entry level sys-
tems, the A6 and A8 are more suited for all-around use
w/advanced graphics applications (such as gaming or
3D modeling). In May 2012, AMD released the next
generation of Fusion A-Series processors known as
"Trinity"; these processors promise much greater
graphical and general purpose performance. AMD has
aligned Trinity as an answer to Intel's Ivy Bridge.
121 | P a g e
International Diploma in Hardware & Computer Network Technology
Assembling and maintenance of Personal Computers /combined assessment
Page 121 of 138
Phenom II (1) HyperTransport The AMD Phenom II is primarily a class of high-
(2) Integrated DRAM Con- performance desktop processor.In 2010, AMD claimed
troller with AMD Memory to be the first in the industry to offer a consumer class
Optimizer six-core processor though the X6. Mobile variants of
(3) AMD Turbo CORE the Phenom II were introduced as well, but not in the
(4) AMD PowerNow! six-core flavor. Though new generations of this prod-
(Cool'n'Quiet) uct line are no longer in the works, this line of proces-
(5) AMD CoolCore! sor is still sold as a low-cost, budget-oriented option
for custom system builds. The performance of this pro-
cessor is more than enough for everyday usage and
productivity.

Athlon II
Features: (1) AMD Virtualization; (2) AMD PowerNow! (Cool'n'Quiet); (3) AMD CoolCore
The Athlon II is a relatively recent processor that takes design cues from the Phenom II. Unlike the Athlon Classic, it is still in production and is far better suited to current productivity applications such as Microsoft Office, as well as multitasking and multimedia applications. It is found in both laptops and desktops as a reasonably powered, cost-effective option.
Turion II
Features: (1) HyperTransport; (2) 64-bit
The Turion II is a processor based on the same architecture as the Phenom II and Athlon II. It was introduced as a competitor to Intel's Core 2 Duo; as a result, its performance should be very suitable for productivity software. It is designed with power efficiency in mind and is found primarily in notebook configurations.
Sempron
Features: (1) HyperTransport; (2) 64-bit
The Sempron is the AMD analogue of the Intel Celeron. It offers very basic levels of performance and is updated every so often so as to offer an inexpensive option capable of running recent versions of productivity software, such as Office 2010, as well as web applications.
4.8 Give a table of Desktop & Laptop CPU socket types.
In computer hardware, a CPU socket or CPU slot comprises one or more mechanical
components providing mechanical and electrical connections between a microprocessor and a
printed circuit board (PCB). This allows for placing and replacing the CPU without soldering.

Common sockets have retention clips that apply a constant force, which must be over-
come when a device is inserted. For chips with a large number of pins, either zero insertion
force (ZIF) sockets or land grid array (LGA) sockets are used instead. These designs apply a
compression force once either a handle (for ZIF type) or a surface plate (LGA type) is put
into place. This provides superior mechanical retention while avoiding the risk of bending
pins when inserting the chip into the socket.

CPU sockets are used in desktop and server computers. As they allow easy swapping
of components, they are also used for prototyping new circuits. Laptops typically use surface-
mount CPUs, which take up less space than a socketed part.

History

In the past, dual in-line package (DIP) sockets were used for processors such as the Motorola 68000. Other types used include PLCC and CLCC sockets.

Socket A (Socket 462), a pin grid array socket.

LGA 775, a land grid array socket.

Function

A CPU socket is made of plastic, and comes with a lever or latch, and with metal con-
tacts for each of the pins or lands on the CPU. Many packages are keyed to ensure the proper
insertion of the CPU. CPUs with a PGA (pin grid array) package are inserted into the socket
and the latch is closed. CPUs with an LGA package are inserted into the socket, the latch plate is flipped into position atop the CPU, and the lever is lowered and locked into place, pressing the CPU's lands firmly against the socket's contacts and ensuring a good connection, as well as increased mechanical stability.

List of 80x86 sockets and slots:
Socket name | CPU families supported | Computer type | Package | Pin count | Bus speed
DIP | Intel 8086, Intel 8088 | | DIP | 40 | 5/10 MHz
PLCC | Intel 80186, Intel 80286, Intel 80386 | | PLCC | 68 to 132 | 6-40 MHz
Socket 1 | Intel 80486 | | PGA | 169 | 16-50 MHz
Socket 2 | Intel 80486 | | PGA | 238 | 16-50 MHz
Socket 3 | Intel 80486 | | PGA | 237 | 16-50 MHz
Socket 4 | Intel Pentium | | PGA | 273 | 60-66 MHz
Socket 5 | Intel Pentium, AMD K5, Cyrix 6x86, IDT WinChip C6, IDT WinChip 2 | | PGA | 320 | 50-66 MHz
Socket 6 | Intel 80486 | | PGA | 235 |
Socket 7 | Intel Pentium, Intel Pentium MMX, AMD K6 | | PGA | 321 | 50-66 MHz
Super Socket 7 | AMD K6-2, AMD K6-2+, AMD K6-III, AMD K6-III+, Rise mP6, Cyrix MII | | PGA | 321 | 66-100 MHz
Socket 8 | Intel Pentium Pro | | PGA | 387 | 60-66 MHz

Slot 1 | Intel Pentium II, Intel Pentium III | | Slot | 242 | 66-133 MHz
Slot 2 | Intel Pentium II Xeon | | Slot | 330 | 100-133 MHz
Socket 463 / Socket NexGen | NexGen Nx586 | | PGA | 463 | 37.5-66 MHz
Socket 587 | Alpha 21164A | | Slot | 587 |
Slot A | AMD Athlon | | Slot | 242 | 100 MHz
Slot B | Alpha 21264 | | Slot | 587 |
Socket 370 | Intel Pentium III, Intel Celeron, VIA Cyrix III, VIA C3 | | PGA | 370 | 66-133 MHz
Socket 462 / Socket A | AMD Athlon, AMD Duron, AMD Athlon XP, AMD Athlon XP-M, AMD Athlon MP, AMD Sempron | Desktop | PGA | 462 | 100-200 MHz (a double-data-rate bus, giving a 400 MT/s (mega transfers/second) FSB in the later models)
Socket 423 | Intel Pentium 4 | Desktop | PGA | 423 | 400 MT/s (100 MHz)
Socket 478 / Socket N | Intel Pentium 4, Intel Celeron, Intel Pentium 4 EE, Intel Pentium 4 M | Desktop | PGA | 478 | 400-800 MT/s (100-200 MHz)
Socket 495 | Intel Celeron, Intel Pentium III | Notebook | PGA | 495 | 66-133 MHz
PAC418 | Intel Itanium | | PGA | 418 | 133 MHz
Socket 603 | Intel Xeon | Server | PGA | 603 | 400-533 MT/s (100-133 MHz)
Socket 563 | AMD Athlon XP-M | Notebook | PGA | 563 |
PAC611 | Intel Itanium 2, HP PA-8800, HP PA-8900 | | PGA | 611 |
Socket 604 | Intel Xeon | Server | PGA | 604 | 400-1066 MT/s (100-266 MHz)
Socket 754 | AMD Athlon 64, AMD Sempron, AMD Turion 64 | Desktop | PGA | 754 | 200-800 MHz
Socket 940 | AMD Opteron, AMD Athlon 64 FX | Server, Desktop | PGA | 940 | 200-1000 MHz

Socket 479 | Intel Pentium M, Intel Celeron M | Notebook | PGA | 479 | 400-533 MT/s (100-133 MHz)
Socket 939 | AMD Athlon 64, AMD Athlon 64 FX, AMD Athlon 64 X2, AMD Opteron | Desktop | PGA | 939 | 200-1000 MHz
LGA 775 / Socket T | Intel Pentium 4, Intel Pentium D, Intel Celeron, Intel Celeron D, Intel Pentium XE, Intel Core 2 Duo, Intel Core 2 Quad, Intel Xeon | Desktop | LGA | 775 | 1600 MHz
Socket M | Intel Core Solo, Intel Core Duo, Intel Dual-Core Xeon, Intel Core 2 Duo | Notebook | PGA | 478 | 533-667 MT/s (133-166 MHz)
LGA 771 / Socket J | Intel Xeon | Server | LGA | 771 | 1600 MHz
Socket S1 | AMD Turion 64 X2 | Notebook | PGA | 638 | 200-800 MHz
Socket AM2 | AMD Athlon 64, AMD Athlon 64 X2 | Desktop | PGA | 940 | 200-1000 MHz
Socket F / Socket L (Socket 1207FX) | AMD Athlon 64 FX, AMD Opteron (Socket L only supports the Athlon 64 FX) | Server, Desktop | LGA | 1207 | Socket L: 1000 MHz in single-CPU mode, 2000 MHz in dual-CPU mode
Socket AM2+ | AMD Athlon 64, AMD Athlon X2, AMD Phenom, AMD Phenom II | Desktop | PGA | 940 | 200-2600 MHz
Socket P | Intel Core 2 | Notebook | PGA | 478 | 533-1066 MT/s (133-266 MHz)
Socket 441 | Intel Atom | Subnotebook | PGA | 441 | 400-667 MHz
LGA 1366 / Socket B | Intel Core i7 (900 series), Intel Xeon (35xx, 36xx, 55xx, 56xx series) | Server | LGA | 1366 | 4.8-6.4 GT/s

rPGA 988A / Socket G1 | Intel Core i7 (600, 700, 800, 900 series), Intel Core i5 (400, 500 series), Intel Core i3 (300 series), Intel Pentium (P6000 series), Intel Celeron (P4000 series) | Notebook | rPGA | 988 | 2.5 GT/s, 4.8 GT/s
Socket AM3 | AMD Phenom II, AMD Athlon II, AMD Sempron, AMD Opteron (1300 series) | Desktop | PGA | 941 or 940 | 200-3200 MHz
LGA 1156 / Socket H | Intel Core i7 (800 series), Intel Core i5 (700, 600 series), Intel Core i3 (500 series), Intel Xeon (X3400, L3400 series), Intel Pentium (G6000 series), Intel Celeron (G1000 series) | Desktop | LGA | 1156 | 2.5 GT/s
Socket G34 | AMD Opteron (6000 series) | Server | LGA | 1974 | 200-3200 MHz
Socket C32 | AMD Opteron (4000 series) | Server | LGA | 1207 | 200-3200 MHz
LGA 1248 | Intel Itanium 9300 series | Server | LGA | 1248 | 4.8 GT/s
LGA 1567 | Intel Xeon 6500/7500 series | Server | LGA | 1567 | 4.8-6.4 GT/s
LGA 1155 / Socket H2 | Intel Sandy Bridge, Intel Ivy Bridge, Intel Xeon E3 12xx [Sandy Bridge 12xx], [Ivy Bridge 12xx V2] | Desktop, Server | LGA | 1155 | 5 GT/s
LGA 2011 / Socket R | Intel Core i7 3xxx [Sandy Bridge-E], Intel Core i7 4xxx [Ivy Bridge-E], Intel Xeon E5 2xxx/4xxx [Sandy Bridge EP] (2/4S), Intel Xeon E5-2xxx/4xxx v2 [Ivy Bridge EP] (2/4S) | Desktop, Server | LGA | 2011 | 4.8-6.4 GT/s
rPGA 988B / Socket G2 | Intel Core i7 (2000, 3000 series), Intel Core i5 (2000, 3000 series), Intel Core i3 (2000, 3000 series) | Notebook | rPGA | 988 | 2.5 GT/s, 4.8 GT/s
Socket FM1 | AMD Llano processors | Desktop | PGA | 905 |
Socket FS1 | AMD Llano processors | Notebook | PGA | 722 |
Socket AM3+ | AMD FX (Vishera), AMD FX (Zambezi), AMD Phenom II, AMD Athlon II, AMD Sempron | Desktop | PGA | 942 |
Socket FM2 | AMD Trinity processors | Desktop | PGA | 904 |
LGA 1150 / Socket H3 | Intel Haswell, Intel Haswell Refresh, Intel Broadwell | Desktop | LGA | 1150 |
Socket G3 | Intel Haswell, Intel Broadwell | Notebook | rPGA | 946 |
Socket FM2+ | AMD Kaveri and Godavari processors | Desktop | PGA | 906 |
Socket AM1 | AMD Athlon, AMD Sempron | Desktop | PGA | 721 |
LGA 1151 | Intel Skylake, Intel Kaby Lake | Desktop, Server | LGA | 1151 |
Socket AM4 | AMD Zen processors | Desktop, Notebook | PGA | 1331 |

Task 05

5.1 Find three IPv4 classes with their default subnet masks.

Class | Range (first octet) | Default subnet mask
A | 1 - 126 | 255.0.0.0
B | 128 - 191 | 255.255.0.0
C | 192 - 223 | 255.255.255.0
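The classful ranges above can be checked programmatically. Below is a minimal Python sketch; the helper name is made up for illustration, and note that modern networks use classless CIDR addressing rather than these historical classes:

```python
def ipv4_class(addr: str):
    """Return the classful network class and its default subnet mask
    for a dotted-quad IPv4 address (classes A-C only)."""
    first = int(addr.split(".")[0])   # classify by the first octet
    if 1 <= first <= 126:
        return "A", "255.0.0.0"
    if 128 <= first <= 191:
        return "B", "255.255.0.0"
    if 192 <= first <= 223:
        return "C", "255.255.255.0"
    return None, None                 # loopback, class D/E, etc.

print(ipv4_class("10.1.2.3"))      # ('A', '255.0.0.0')
print(ipv4_class("172.16.0.1"))    # ('B', '255.255.0.0')
print(ipv4_class("192.168.1.1"))   # ('C', '255.255.255.0')
```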

5.2 Explain what is a protocol?


In information technology, a protocol is the special set of rules that end points in a
telecommunication connection use when they communicate. Protocols specify interactions
between the communicating entities.

Protocols exist at several levels in a telecommunication connection. For example, there are protocols for data interchange at the hardware device level and protocols for data interchange at the application program level. In the standard model known as Open Systems Interconnection (OSI), there are one or more protocols at each layer in the telecommunication exchange that both ends of the exchange must recognize and observe. Protocols are often described in an industry or international standard.

The TCP/IP Internet protocols, a common example, consist of:

Transmission Control Protocol (TCP), which uses a set of rules to exchange messages
with other Internet points at the information packet level

Internet Protocol (IP), which uses a set of rules to send and receive messages at the Inter-
net address level

Additional protocols that include the Hypertext Transfer Protocol (HTTP) and File Trans-
fer Protocol (FTP), each with defined sets of rules to use with corresponding programs
elsewhere on the Internet

There are many other Internet protocols, such as the Border Gateway Protocol (BGP) and the
Dynamic Host Configuration Protocol (DHCP).
5.3 What are the OSI seven layers?

For network communications to happen without any trouble, many problems must be solved, and coordinating all of these problems is complex and not easy to manage. To make these tasks manageable, in 1977 the International Organization for Standardization (ISO) proposed the Open Systems Interconnection (OSI) network model. The OSI model breaks down the problems involved in moving data from one computer to another, categorizing these hundreds of problems into seven layers. A layer in the OSI model is a portion that is used to group related problems.

The Open Systems Interconnection (OSI) seven-layer model is only a reference model; the problems related to communication are actually solved by specific protocols operating at the different layers. The seven layers of the OSI model are described below.

Seven Layers of Open Systems Interconnection (OSI) Model

1. Physical Layer

The first layer of the seven layers of Open Systems Interconnection (OSI) network
model is called the Physical layer. Physical circuits are created on the physical layer of Open
Systems Interconnection (OSI) model. Physical layers describe the electrical or optical sig-
nals used for communication. Physical layer of the Open Systems Interconnection (OSI)
model is only concerned with the physical characteristics of electrical or optical signaling
techniques, including the voltage of the electrical current used to transport the signal, the media type (twisted pair, coaxial cable, optical fiber, etc.), impedance characteristics, the physical shape of the connector, synchronization, and so on. The Physical Layer is limited to the processes needed to place the communication signals on the media and to receive signals coming from that media. The lower boundary of the physical layer is the physical connector attached to the transmission media. The physical layer does not include the transmission media itself; transmission media stay outside the scope of the Physical Layer and are sometimes referred to as "Layer 0" of the OSI model.

2. Datalink Layer

The second layer of the seven layers of Open Systems Interconnection (OSI) network
model is called the Data Link layer. The Data Link layer resides above the Physical layer and below the Network layer. The Data Link layer is responsible for reliable node-to-node delivery of the data being transmitted. The Data Link Layer is logically divided into two sublayers: the Media Access Control (MAC) sublayer and the Logical Link Control (LLC) sublayer.

Media Access Control (MAC) Sublayer determines the physical addressing of the
hosts. The MAC sub-layer maintains MAC addresses (physical device addresses) for com-
municating with other devices on the network. MAC addresses are burned into the network
cards and constitute the low-level address used to determine the source and destination of
network traffic. MAC addresses are also known as physical addresses, Layer 2 addresses, or hardware addresses. The Logical Link Control sublayer is responsible for synchronizing frames, error checking, and flow control.

3. Network Layer

The third layer of the seven layers of Open Systems Interconnection (OSI) network
model is the Network layer. The Network layer of the OSI model is responsible for manag-
ing logical addressing information in the packets and the delivery of those packets to the cor-
rect destination. Routers, which are special computers used to build the network, direct the
data packet generated by Network Layer using information stored in a table known as routing
table. The routing table is a list of available destinations that are stored in memory on the
routers. The network layer is responsible for working with logical addresses. The logical ad-
dresses are used to uniquely identify a computer on the network, but at the same time identify
the network that system resides on. The logical address is used by network layer protocols to
deliver the packets to the correct network. The logical addressing system used at the Network Layer is known as the IP address. IP addresses are also known as logical addresses or Layer 3 addresses.

4. Transport Layer

The fourth layer of the seven layers of Open Systems Interconnection (OSI) network
mode is the Transport layer. The Transport layer handles transport functions such as reliable
or unreliable delivery of the data to the destination. On the sending computer, the transport
layer is responsible for breaking the data into smaller packets, so that if any packet is lost dur-
ing transmission, the missing packets will be sent again. Missing packets are determined by
acknowledgments (ACKs) from the remote device, when the remote device receives the
packets. At the receiving system, the transport layer will be responsible for opening all of the
packets and reconstructing the original message.

Another function of the transport layer is TCP segment sequencing. Sequencing is a connection-oriented service that takes TCP segments that are received out of order and places them in the right order.

The transport layer also enables the option of specifying a "service address" for the
services or application on the source and the destination computer to specify what application
the request came from and what application the request is going to.

Many network applications can run on a computer simultaneously and there should be
some mechanism to identify which application should receive the incoming data. To make
this work correctly, incoming data from different applications are multiplexed at the
Transport layer and sent to the bottom layers. On the other side of the communication, the
data received from the bottom layers are de-multiplexed at the Transport layer and delivered
to the correct application. This is achieved by using "Port Numbers".

The protocols operating at the Transport Layer, TCP (Transmission Control Protocol) and UDP (User Datagram Protocol), use a mechanism known as "port numbers" to enable multiplexing and de-multiplexing. Port numbers identify the originating network application on the source computer and the destination network application on the receiving computer.
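Port-based de-multiplexing can be demonstrated with two UDP sockets on one host: the operating system delivers each datagram to the socket bound to the matching port number. The payloads below are arbitrary examples; the `socket` calls are standard library:

```python
import socket

# Two receiving sockets, each bound to its own ephemeral port.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
b.bind(("127.0.0.1", 0))
port_a = a.getsockname()[1]
port_b = b.getsockname()[1]

# One sender addresses each datagram by (IP address, port number);
# the port number selects which application (socket) receives it.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"for A", ("127.0.0.1", port_a))
sender.sendto(b"for B", ("127.0.0.1", port_b))

msg_a, _ = a.recvfrom(64)
msg_b, _ = b.recvfrom(64)
print(msg_a, msg_b)        # b'for A' b'for B'
sender.close(); a.close(); b.close()
```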

5. Session Layer

The position of Session Layer of the Seven Layered Open Systems Interconnection
(OSI) model is between Transport Layer and the Presentation Layer. Session layer is the fifth
layer of seven layered Open Systems Interconnection (OSI) Model. The session layer is re-
sponsible for establishing, managing, and terminating connections between applications at
each end of the communication.

In the connection establishment phase, the service and the rules (who transmits and
when, how much data can be sent at a time etc.) for communication between the two devices
are proposed. The participating devices must agree on the rules. Once the rules are estab-
lished, the data transfer phase begins. Connection termination occurs when the session is
complete, and communication ends gracefully. In practice, Session Layer is often combined
with the Transport Layer.

6. Presentation Layer

The position of the Presentation Layer in the seven-layered Open Systems Interconnection (OSI) model is just below the Application Layer. When the presentation layer receives data
from the application layer, to be sent over the network, it makes sure that the data is in the
proper format. If it is not, the presentation layer converts the data to the proper format. On the
other side of communication, when the presentation layer receives network data from the ses-
sion layer, it makes sure that the data is in the proper format and once again converts it if it is
not.

Formatting functions at the presentation layer may include compression, encryption, and ensuring that the character code set (ASCII, Unicode, EBCDIC (Extended Binary Coded Decimal Interchange Code, which is used in IBM servers), etc.) can be interpreted on the other side.

For example, if we select to compress the data from a network application that we are
using, the Application Layer will pass that request to the Presentation Layer, but it will be the
Presentation Layer that does the compression.

7. Application Layer

The Application Layer is the seventh layer in the OSI network model and the top-most layer of the seven-layered Open Systems Interconnection (OSI) model. Real traffic data is most often generated at the Application Layer. This may be a web request generated by the HTTP protocol, a command from the telnet protocol, a file download request from the FTP protocol, and so on.

In summary, these are the seven layers of the Open Systems Interconnection (OSI) model and their functions: the top-most layer is the Application Layer, and the bottom-most is the Physical Layer.
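The layering described above can be sketched in code. This is an illustrative toy, not a real protocol stack: each layer simply prepends a labelled header on the way down and strips it on the way up, mirroring OSI encapsulation and de-encapsulation:

```python
# Top-to-bottom order of the seven OSI layers.
layers = ["application", "presentation", "session",
          "transport", "network", "datalink", "physical"]

def encapsulate(data: str) -> str:
    """Sender side: each layer wraps the payload from the layer above,
    so the physical layer's header ends up outermost."""
    for layer in layers:
        data = f"[{layer}]{data}"
    return data

def decapsulate(frame: str) -> str:
    """Receiver side: unwrap from the bottom layer upward."""
    for layer in reversed(layers):
        prefix = f"[{layer}]"
        assert frame.startswith(prefix), f"expected {prefix}"
        frame = frame[len(prefix):]
    return frame

wire = encapsulate("hello")
print(wire)                 # '[physical][datalink]...[application]hello'
print(decapsulate(wire))    # 'hello'
```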

5.4 Explain difference between LAN & WAN.

The terms LAN and WAN are often confusing for people who aren't very tech savvy. Both are connections that allow users to connect their computers to a network, including the internet. LAN is short for Local Area Network, while WAN is short for Wide Area Network. These two differ from each other in distinct ways.

LAN is a computer network that connects computers in small areas such as home, of-
fice, school, corporation, etc. using a network media. It is useful for sharing resources such as
printers, files, games, etc. A LAN network includes a couple of computer systems connected
to each other, with one system connected to a router, modem or an outlet for internet access.
The LAN network is built using inexpensive technologies such as Ethernet cables, network
adapters and hubs. However, other wireless technologies are also available to connect the
computers through a wireless access point. In order to configure a LAN, a person may also require specialized operating system software; the most popular includes Microsoft Windows Internet Connection Sharing (ICS), which allows users to create a LAN.

The first successful LAN was created by Cambridge University in 1974, known as the Cambridge Ring; however, the technology was not commercialized until 1976, by Datapoint Corporation. Datapoint's ARCNET was installed at Chase Manhattan Bank in New York in 1977. The main purpose of creating a LAN was to share storage and other technologies such
as printers, scanners, etc. The smallest LAN can include two computers, while the largest
can, in theory, support 16 million devices according to About.com. Wikipedia states that the
larger LANs are characterized by their use of redundant links with switches using the span-
ning tree protocol to prevent loops, their ability to manage differing traffic types via quality
of service (QoS), and to segregate traffic with VLANs. The larger LANs also employ other
devices such as switches, firewalls, routers, load balancers, and sensors.

WAN is a network that covers a broad area using private or public network transports.
The best example of WAN would be the Internet, which can help connect anyone from any
area of the world. Many businesses and government use WAN in order to conduct business
from anywhere in the world. WANs are also responsible largely for businesses that happen
across the world (i.e. a company in UK does business with a company in China). The basic
definition of WAN includes a network that can span regions, countries, or even the world.
However, in practicality, WAN can be viewed as a network that is used to transmit data over
long distances between different LANs, WANs and other networking architectures.

WANs allow the computer users to connect and communicate with each other regard-
less of location. WAN uses technologies such as SONET, Frame Relay, and ATM. WANS
allow different LANs to connect to other LANs through technology such as routers, hubs and
modems. There are four main options for connecting WANs: leased lines, circuit switching, packet switching and cell relay. Leased lines are point-to-point connections between two systems. Circuit switching provides a dedicated circuit path between two points. Packet switching involves devices transporting packets via a shared single point-to-point or point-to-multipoint link across a carrier internetwork. Cell relay is similar to packet switching but uses fixed-length cells instead of variable-length packets.

LANs are becoming more and more common in many places such as offices, corporations, homes, etc. A main reason for their growing popularity is that they are cheaper to install and offer higher transfer speeds. LANs offer speeds up to 80 or 90 Mbps due to the proximity of the computer systems to each other and the lack of congestion in the network. In comparison, WANs typically provide speeds of 10 to 20 Mbps. LANs also offer better security compared to WANs, which are more easily accessible to people who know how to hack systems. WANs and LANs can be secured using firewalls, anti-virus and anti-spyware software.
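The speed figures above translate directly into transfer times. The sketch below uses the document's illustrative 90 Mbit/s LAN and 20 Mbit/s WAN figures and ignores protocol overhead, so the numbers are rough estimates only:

```python
def transfer_time_seconds(size_mb: float, link_mbps: float) -> float:
    """Rough transfer time: convert megabytes to megabits (x8),
    then divide by the link rate in Mbit/s. Ignores overhead."""
    return size_mb * 8 / link_mbps

# A 100 MB file over the illustrative link speeds:
print(round(transfer_time_seconds(100, 90), 1))  # 8.9  seconds on the LAN
print(round(transfer_time_seconds(100, 20), 1))  # 40.0 seconds on the WAN
```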

A detailed description is available below:

Aspect | LAN | WAN
Definition | A computer network that connects computers in small areas. | A network that covers a broad area using private or public network transports.
Data transfer rates | Offers high data transfer rates. | Has lower data transfer rates due to congestion.
Speed | 80-90 Mbps | 10-20 Mbps
Technology | Uses technologies such as Ethernet and Token Ring to connect to other networks. | Uses technologies such as MPLS, ATM, Frame Relay and X.25 for data connection over greater distances.
Bandwidth | High bandwidth is available for transmission. | Low bandwidth is available for transmission.
Connection | One LAN can be connected to other LANs over any distance via telephone lines and radio waves. | Computers connected to a wide-area network are often connected through public networks, such as the telephone system; they can also be connected through leased lines or satellites.
Components | Layer 2 devices like switches and bridges; Layer 1 devices like hubs and repeaters. | Layer 3 devices such as routers and multilayer switches, plus technology-specific devices like ATM or Frame Relay switches.
Problems | Tend to have fewer problems associated with them. | Have more problems due to the large amount of systems and data present.
Ownership | Can be owned by private companies or by people who set them up at home. | Not owned by any one organization; exist under collective or distributed ownership.
Data transmission errors | Experiences fewer data transmission errors. | Experiences more data transmission errors.
Cost | Set-up costs are low, as the devices required to set up the network are cheap. | Set-up costs are high, especially in remote locations where infrastructure is not already in place; however, WANs using public networks are cheap.
Spread | The network is spread over a very small location. | The network can be spread worldwide.
Maintenance costs | Low, as the area of coverage is small. | High, as the area of coverage is worldwide.
Congestion | Less congestion. | More congestion.

5.5 What are the benefits of Star topology?

What is Star topology?

In Star topology, all the components of the network are connected to a central device, which may be a hub, a router or a switch. Unlike Bus topology (discussed earlier), where nodes were connected to a central cable, here all the workstations are connected to the central device with a point-to-point connection. So it can be said that every computer is indirectly connected to every other node with the help of the hub.

All the data on the star topology passes through the central device before reaching the intended destination. The hub acts as a junction connecting the different nodes present in the star network, and at the same time it manages and controls the whole network. Depending on which central device is used, it can act as a repeater or signal booster. The central device can also communicate with the hubs of other networks. Unshielded Twisted Pair (UTP) Ethernet cable is used to connect workstations to the central node.

Star Topology Diagram

Advantages of Star Topology;

1) Compared to Bus topology it gives far better performance: signals don't necessarily get transmitted to all the workstations. A sent signal reaches the intended destination after passing through no more than 3-4 devices and 2-3 links. Performance of the network is dependent on the capacity of the central hub.
2) It is easy to connect new nodes or devices. In star topology new nodes can be added easily without affecting the rest of the network. Similarly, components can also be removed easily.
3) Centralized management helps in monitoring the network.
4) Failure of one node or link doesn't affect the rest of the network. At the same time, it's easy to detect the failure and troubleshoot it.
