
6 Computer systems
To err is human, but to really foul up requires a computer.
Dan Rather, US TV newscaster
Overview
In unit 1 we looked at what made a system and, in particular, what constituted an information
system. In this unit we are going to get more specific and consider what is involved with a
computer system in action.
To keep our investigation of computer systems relevant we must always be aware that they are
operated by people. Throughout our study we will look at both the social and ethical use of the
systems and the human-computer interaction that takes place.
To find out about computer systems we will investigate:
the parts of a computer system
hardware
software
networking
cabling and protocols
case study of a computer system
the changing workplace.
Introduction
Designing, implementing, and organising a computer system can vary from the very simple to
the very complex: from a student buying a single laptop, to a large organisation setting up a
network involving a wide range of technology.
We will investigate what is involved in this process, but before we do we will focus our
thinking by looking at a scenario of a typical computer system in development.
Dr Jill Foote has been in general practice in her doctor's surgery for nearly ten years now and is
growing increasingly annoyed at the inefficiency of the current system of running her office.
Dr Foote has about 900 patients, but many visit only rarely. On average she will see 25-30 patients a
day and has an income of around $38 000 per month. While this may seem a lot, out of this she
must pay for rental on the surgery, equipment, supplies, insurance and the wages of Kym, the
receptionist/nurse, and Darryl, a cleaner who comes in each evening. Although Kym controls most of
the appointments and clerical work, much of the financial management has fallen to Dr Foote and
she is finding she has little free time to be with her family.
She is also concerned about the growing complexity of the finances. Most of Dr Foote's income is
paid through Medicare or by private health insurers; only a small proportion is paid in cash by
patients. Claiming this money involves detailed record keeping and much
form filling. The occurrence of even small errors can delay payment by
months, and errors affecting patients harm the practice.
The recording of GST is also a problem. While most medical services are
free of the tax, there are some aspects of her business that involve the
GST. Keeping track of input tax credits and which services and items are
taxed is very demanding, as is preparing the quarterly BAS (business
activity statement).
Wages and the payment of accounts owing also fall to Dr Foote. Patient
medical histories, reminder notices for follow-up visits, and a stock record
of medicines, etc., on hand are other aspects of the business that Dr Foote
is interested in computerising.
At home Dr Foote has an Internet connection and is in regular email
contact with colleagues. She would like to extend this connectivity to the office. It would also be
useful to network the computers in the office. The system to be set up will need to be reliable as she
cannot afford the inconvenience of a breakdown; secure so that patient confidentiality is maintained;
and simple to use as she has little computer background.
Activity 6.1 The doctor's office
1. a Make a list of as many different activities as you can that take place in the doctor's
office and that are described above (e.g. maintain patient records).
b Tick those activities in your list that you think could best be managed by a
computerised information system.
c Are there any activities that could be computerised, but which perhaps might best not
be? Give reasons for these.
2. What advantages will Dr Foote gain by introducing a computerised information system to
her business? In your answer consider time, money, space, security, efficiency,
effectiveness, and patient response.
3. a Identify the individuals who will be the main users of the new system.
b What tasks will each of these users carry out?
c What are some of the human factors Dr Foote will need to consider in establishing
this system? (Consider training, security, management, and other similar factors.)
4. Part of the difficulty Dr Foote will have with installing her new system will be the
problems involved in changing over from the old to the new, i.e. paper based to computer
based.
a Describe some of the problems that might arise.
b Do you think it would be better to make a sudden change (over a weekend say), or
phase in the new system over a period of time?
Give reasons to support your answer.
System parts
A complete computer system consists of three areas: the human, the processing environment,
and interconnectivity. Each of these is important in its own right, each interacts with the other
two parts, and each must be considered when putting together a computer system. There should
not be a focus on one of these areas at the expense of the other two; each area must be
considered independently to properly ensure the usability, reliability and effectiveness of the
whole system.
We will start by looking at the human side of a computer system.
Human considerations
The users of a computer system must be directly addressed during its development. While it is
tempting to focus on hardware and software, the role of the user is crucial: without the user
there is no need for the system. People collect the data for the system, they interact with it, and
the system's output is used for their purposes.
The main areas of human interaction involved in a computer system to be considered are:
the work procedures of the day-to-day operation of the computer; these include
measures for data collection, methods of entering the data into the computer, and
procedures for handling system output; they also include the maintaining of effective
interaction between colleagues for shared tasks
technical management which involves installation, customisation and maintenance of
hardware, software and networking features; it also often includes user maintenance
and user support
training in the use of the computer system; this will include both initial training and
on-going re-skilling as changes occur in the technology; training can be either on the
job, in seminars off site, or it may be required as a basic qualification prior to
obtaining the job
personnel management to ensure the health, safety and wellbeing of workers, the
reduction of stress factors, and the use of ergonomic principles in the design of the
working environment
security to protect the data in the system from accidental or deliberate errors; this will
include the use of company policies, firewalls, virus protection and a regular backup
protocol.
In developing any new computer system each of these factors must be considered.
Going back to the doctor's office scenario we can be more specific as to what each of these
involves. In Dr Foote's office the human systems will involve each of the features identified
above and decisions will have to be made as part of the changeover:
work procedures: who will do which jobs? e.g. enter confidential patient data, handle financial
matters, manage day-to-day updating of patient records and bookings, and so on; what output
is to be obtained, what recording process is to be followed, how is the file system to be set up;
how will each of the users work together for best efficiency and without interfering with the
others?
technical management: who will decide what hardware, software and networking system is to
be purchased, at an affordable price, while ensuring it is suitable for this particular situation; who
is responsible for the system set up, for repairs or contacting repairers, for routine maintenance
and purchasing of supplies?
training: who will have to be given instruction in how to use the system and software, who will
provide the training, and how will it be delivered so as not to interfere with normal operations?
Dr Foote herself will need skilling in the use of the new system and how to use any of the
programs she is not familiar with, such as a new accounting package; Kym the
receptionist/nurse will also need training; time must be found in a busy schedule for both of
these to occur
personnel management: what safeguards and procedures will have to be put in place to
ensure the changeover and operation of the new system occurs smoothly and with as little
stress as possible, and that the new system conforms to health and safety standards?
security: what precautions and practices are to be put in place to ensure the confidentiality and
safeguarding of patient records and other sensitive or critical data in the system?
The overall effectiveness of a computer system relies on how well the users are accommodated
during the design and implementation stages of system development. By addressing each of
these areas the new system has the best chance of success. The goal is to allow users to do their
job as efficiently as possible by removing any obstacles that might be introduced by changes to
the computer and network systems or by human factors.
Processing environment
Once the human aspects have been determined we can look at the processing involved.
The computer system itself consists of three main sub-systems: the hardware setup, the
software layer and the interface system. Together these three provide a complete system for
processing. They must be planned for during the design phase, and together take up the
majority of the implementation phase.
Hardware is the physical components that make up a computer system. It includes such items
as monitors, processors, disc drives, printers, modems, routers, cabling, wireless access points,
and so on. The hardware is physical in that it is real
and solid, and can be touched, moved and connected.
These components can be selected from off-the-shelf
parts, manufactured to fit a required task, or a mix of
these options.
Many organisations source all of their hardware from
a single supplier such as Dell. Systems that
exclusively use a prefabricated option such as this
have the advantage of reduced cost, faster
implementation, and demonstrated reliability. This
approach however may not address the hardware requirements of some organisations. In this
case a more hybrid combination of devices may better fit the particular needs of the enterprise.
The software is the programs that run on the computer. These are not physical and exist only as
electrical or magnetic pulses stored on disc or in memory. Software consists of applications to
run on the computer, operating systems to run the computer itself, and programming languages
used to write specific applications.
Similar to hardware these programs can be purchased as completed packages or can be
developed by teams of programmers. It is even common for programs to be extended to add
extra features, or to be directed how to interact with data that the users input. Choosing the best
software for the task is the most important part of planning the processing environment.
The final system is the interface. The interface is the means by which a user communicates
with the computer system: the link between person and machine. In the case of packaged
software such as Microsoft Office, the interface is often limited to a default style; however, this
is not always the case and custom interfaces can be employed. Software specially written for an
enterprise will need a clean and simple interface developed. (User needs and interface are
explored in more detail in the unit on HCI.)
Interconnectivity
The final element important to the development of a computer system is the networking in
place. For the purposes of a practical system these networks can be grouped into two types, the
abstract and the physical.
The abstract networks are those which connect information but do not directly deal with
moving information between computers. Some examples of an abstract network include
humans working together to collect and input data into the
system, the hardware of a computer communicating between its
components to process data, and peripheral devices such as
printers and routers assisting in both of these tasks.
Physical networks are used to transfer data that exists on one
device on the system to another. This data can be anything input
into the system, information derived from the original input, or
information obtained from outside sources. These physical
networks allow the rapid movement of data locally and around
the globe.
In all computer systems abstract networks must exist even if physical networks are not
required. Identifying these networks allows the organisation of the system to be planned during
the design phase and tested during implementation.
System solution
In the next sections we will look at hardware, software and networking in more detail, but for
now we will propose one possible solution for Dr Jill Foote's situation in each of these areas:
Hardware: There is probably the need for at least two computers, one for the receptionist and one on
the doctors desk. Of these one might be a laptop so that it can be taken home in the evening, while
the other can be a mid-level desktop computer. The computers will use the normal peripherals such
as keyboard, mouse and printer, but direct connection to a fax line would help.
Software: The office will employ a range of software to run the computers as well as take care of all
of the tasks Dr Foote would like to computerise. A simple practical solution might encompass a
Windows 7 based operating system, the Microsoft Office suite of programs, a networking client, and
an accounting package such as MYOB or Quicken. A dedicated communications package such as
Outlook would also help.
Networking: These computers will need to be linked so that Dr Foote and Kym can share data and
resources. The office should also have an Internet connection for browsing, ordering, email and
other communication. A network modem/router, possibly wireless, would help achieve this.
Activity 6.2 Sub-systems
1. a Identify the three areas of importance to be considered in developing a computer
system.
b One of these can in turn be divided into three further sub-systems. Which is it?
Identify these sub-systems.
2. a What sort of processes are involved in the human system side of computing?
b Choose one of these and explain its role and importance in the effective running of
the computer system as a whole.
3. a What is the difference between hardware and software?
b Find out what is meant by the term firmware.
c Identify three different forms of software.
d What is the interface in a computer system? Why is an effective interface so
important?
4. What advantages are gained by networking computers together?
5. A possible solution for Dr Foote is given above, but it lacks specific detail.
Prepare your own solution by doing the following:
a Make a list of the hardware you think Dr Foote might need.
b To this list add the software you think would assist her in running the office.
c Suggest a suitable operating system that could be used in the situation described.
d Use computer magazines, adverts or web sites to prepare a cost effective information
system for Dr Foote.
In doing this keep in mind:
Dr Foote does not have a lot of money to spend so the system purchased will
need to be cost effective (i.e. pay for itself over time)
the computer system must be simple to use and easy to maintain
the system will need to be flexible so that it can be upgraded as time goes by.
e In a paragraph justify the decisions you have made in developing this solution.
6. Make a diagram showing an appropriate layout design for the office indicating the waiting
area, receptionist, consulting room, etc.
On the diagram position all of the equipment you identified in Q5, as well as desks, filing
cabinets, desk diary, phone, fax, photocopier, intercom, etc.
Design the layout so that the most frequently used items are within easy reach, and so that
the space is effectively and efficiently used. Ensure that no clutter exists between the
receptionist and patients at the counter.
For confidentiality the receptionist's screen should not be visible to patients.
Inside the computer
Computer power has risen dramatically since the first electronic computers were built in the
1940s. Current computers are many times more powerful than those early models that filled
whole laboratories.
Gordon Moore, one of the founders of the processor manufacturer Intel, coined a guideline
which now bears his name. The so-called Moore's Law observes that the number of transistors
that can be fitted on a chip, and hence roughly the available computing power, doubles about
every 18 months to two years. This suggests that, on the whole, if you compare current computers
with those of a year and a half ago you will see they have about twice the processing power.
Moore made this observation over 40 years ago, and so far his hypothesis has been borne out!
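As a rough illustration (not part of the original text), the doubling rule can be turned into a simple calculation: if power doubles every 18 months, then over y years it grows by a factor of about 2^(y/1.5). A short Python sketch, with the doubling period as an assumed parameter:

```python
# A rough illustration of Moore's Law treated as a growth calculation,
# assuming the popular "doubling every 18 months" form quoted above.

def moores_law_factor(years, doubling_period_years=1.5):
    """Return the factor by which computing power grows over `years`."""
    return 2 ** (years / doubling_period_years)

# Example: growth over the roughly 40 years since Moore made his observation.
print(round(moores_law_factor(40)))   # about 100 million times
```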
Despite Moore's Law, the rising expectations of users are placing greater pressure on hardware
resources. Computers are now handling sound, animation and video for multimedia and internet
streaming, which place increasing demands on processing power and memory.
In this section we will investigate the basic hardware used in a computer system. Before we
look at the parts of hardware however it is necessary to gain some idea of what underlies the
operation of the computer.
Analogue and digital
To understand what a digital computer is we first have to appreciate the distinction between
digital and analogue.
The computers we know are devices that work with digits (numbers), and so are called digital
computers. Less common but still around are analogue computers such as the neural networks
we will see in unit 10. An analogue device is one that handles a continuous range of values
rather than distinct number values.
Most of the world is analogue. Our ears can hear a range of tones from under 100 Hz up to over
15 000 Hz, including all of the frequencies in between. Cheetahs travel at speeds from a standstill
up to 80 km/h or more, again including all of the speeds in between. The second hand on an analogue
watch sweeps out all of the seconds in a minute, including all of the fractions of a second as it
travels over the watch face. Each of these is an example of an analogue measure: a continuous
range of values, including all quantities in between any two measures.

A digital device on the other hand deals only in distinct values. A light switch must be off or
on. A digital watch records only the specific seconds of a minute (not the in-between parts of a
second that an analogue watch does). A digital camera image is made up of separate dots. All
of these digital values are discrete: it is easy to see where one ends and the next begins; there is
no in-between part.
A digital watch only shows separate, whole seconds
The sweep hand of an analogue watch covers all fractions of a second
Perhaps the best example of the distinction between analogue and digital is with a light that is
controlled by both an on-off switch and a dimmer. The on-off is digital (two distinct values)
while the dimmer is analogue (the light can be set to any of a full range of levels of brightness).
Analogue signals are usually represented as a wave form, while digital signals are represented by a
square wave.
Analogue signal and digital signal waveforms
As computers are digital, and all forms of input and output humans use are analogue, we must
constantly be converting from one form to the other in order to pass values that are useful.
One example of this is where we want to send digital computer values over an analogue
telephone line. Using a modem the digital signal is converted into sounds that are transmitted
over the connection. At the other end the analogue sounds are demodulated back into digital
values by another modem for input back into a computer.
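The following is a minimal Python sketch, not from the text, of the idea behind analogue-to-digital conversion: an analogue signal (a sine wave standing in for a sound) is sampled at regular intervals and each sample is quantised to one of a fixed number of digital levels. The function name and the figures used are illustrative only.

```python
import math

# A minimal sketch of analogue-to-digital conversion: sample the analogue
# signal at regular intervals, then quantise each sample to one of a fixed
# number of digital levels.

def sample_and_quantise(frequency_hz, sample_rate_hz, duration_s, bits=8):
    levels = 2 ** bits                    # 8 bits gives 256 possible values
    samples = []
    for n in range(int(sample_rate_hz * duration_s)):
        t = n / sample_rate_hz            # the time of this sample
        analogue = math.sin(2 * math.pi * frequency_hz * t)   # value in -1..1
        digital = round((analogue + 1) / 2 * (levels - 1))    # value in 0..255
        samples.append(digital)
    return samples

# A 440 Hz tone sampled 8 000 times a second for one hundredth of a second.
print(sample_and_quantise(440, 8000, 0.01)[:10])
```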





Analogue to digital conversion
While analogue is more common in the real world it is not as convenient to handle as digital.
Digital values are simple to transmit, maintain, compress, and to check for errors. For these
reasons it is now becoming common to make recordings digital. A music CD uses digital
coding, mp3 is a digital format, images are stored as bitmaps (bmp or jpg) and video is digitised
for DVD. HDTV is a digital form of television transmission. It seems the world is going digital.

An enlargement of a bitmap showing how analogue curves are represented by digital pixels.
Bits and bytes
Computers are electronic and, because of the limitations of electricity, digital computers are
restricted to using only the states of on and off. For ease of handling these states are represented
as the two digits 0 and 1, and are referred to as bits (short for binary digits). All computer data
is stored or transmitted as bits:
0 = off, uncharged, unpolarised, low (or negative) voltage
1 = on, charged, polarised, high (or positive) voltage
For transmission or storage it is usual to group bits together. A byte is a group of (usually) 8 bits,
e.g. 1111 0000.
By using bits and bytes we can electronically store and transmit text (writing) and numbers.
A byte can hold one character such as a 'K' or a '%' or an '8'. In the ASCII code most commonly
used to represent text, the character 'K' for example is represented by the byte 0100 1011.
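A small Python sketch (an illustration, not from the text) shows this character-to-byte mapping; Python's built-in ord() and chr() functions give the numeric code of a character and convert a code back to a character.

```python
# A small sketch of the character-to-byte mapping described above.

for ch in "K%8":
    code = ord(ch)                       # e.g. 'K' -> 75
    bits = format(code, "08b")           # 75 -> '01001011'
    print(ch, code, bits[:4], bits[4:])  # shown as two groups of 4 bits

print(chr(int("01001011", 2)))           # from the byte back to 'K'
```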
Binary numbers
0s and 1s are also used to represent numbers in binary (base 2) arithmetic.
All numbers possible in our decimal (base 10) number system can be represented in binary. The
two bytes 1011 0101 1001 0011 hold the decimal value 46 483. Each 1 or 0 in a binary number
has a place value that is a power of 2,
e.g. 1001 1101₂ = 1×2⁷ + 0×2⁶ + 0×2⁵ + 1×2⁴ + 1×2³ + 1×2² + 0×2¹ + 1×2⁰ = 157₁₀

In addition to the simple representation of a value, binary can also represent large numbers. By
structuring multiple bytes with meta information they can store floating point numbers. These
are the binary equivalent of what is called scientific or exponential notation in the decimal
system; for example 3.14 × 10²³.
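The place-value working above can be checked with a short Python sketch (illustrative only); binary_to_decimal is a helper written just for this example, and Python's built-in int(..., 2) does the same job.

```python
# A sketch of the place-value conversion shown above: each bit is worth a
# power of 2, counting from the right-hand end of the number.

def binary_to_decimal(bits):
    total = 0
    for position, bit in enumerate(reversed(bits.replace(" ", ""))):
        total += int(bit) * 2 ** position
    return total

print(binary_to_decimal("1001 1101"))            # 157
print(binary_to_decimal("1011 0101 1001 0011"))  # 46483, the value quoted earlier
print(int("10011101", 2))                        # the built-in way, also 157
```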
Binary memory
Early computer memory consisted of magnetic cores woven onto strands of wires in a 32 x 32
grid. Each of these core memory grids could hold 32 × 32 = 1 024 bits of data. As this was
close to one thousand they were said to hold one kilobit of data. The unit 1 024 is still used in
memory values.
Common units of memory measurement are:
kilobytes (kB) = 1 024 bytes
megabytes (MB) = 1 024 kB
gigabytes (GB) = 1 024 MB
and terabytes (TB) = 1 024 GB.
A kilobyte will hold about 20 words of text, a megabyte about 45 pages of writing, a gigabyte
over 150 books, and a terabyte a school library of books. In terms of media content, 1TB can
store over 250 high definition movies.
Note: These measurements are not to be confused with Mb, Gb, etc., which refer to megabits
and gigabits (not bytes). Units such as these are usually used to measure download speeds. To
further confuse the issue the International Electrotechnical Commission (IEC) released a new
set of units such as MiB and GiB, which refer unambiguously to multiples of 1 024, leaving
MB, GB, etc. to mean multiples of 1 000.
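A short Python sketch (an illustration, not from the text) of why the two conventions give different figures for the same drive: a "500 GB" drive sold in powers of 1 000 appears as roughly 465 units when measured in powers of 1 024.

```python
# A sketch contrasting the two conventions. A "500 GB" drive advertised in
# powers of 1 000 shows up as a smaller number of binary (1 024-based) units
# when reported by the operating system.

GB  = 1000 ** 3    # decimal gigabyte: 1 000 000 000 bytes
GIB = 1024 ** 3    # binary gibibyte:  1 073 741 824 bytes

drive_bytes = 500 * GB
print(round(drive_bytes / GIB, 1))   # about 465.7 binary units
print(1024 ** 2, "bytes in a megabyte (binary sense)")
```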
Von Neumann architecture
Shown below is the traditional, or von Neumann, model of a computer.

Von Neumann model of a computer showing flow of data
Dr John von Neumann in 1946 was the first to suggest that the computer could be used as a
general purpose device by reading into memory a different program for each task. Previously
each computer had been hard wired to do one specific task. This development was so effective
that this model has been the basis of most past and current computers, and is known by his
name. We will look at the parts of the von Neumann computer in the next section.
For the last decade the processor industry has been reaching the limits of miniaturisation and
speed that can be achieved with the von Neumann model.
One of the limits to speed is the so called von Neumann bottleneck. (A
bottleneck is a narrowing in a channel that restricts flow along it, here
data.) The von Neumann bottleneck occurs because, in this way of
structuring computers, instructions and data from memory must travel
along the same bus (data pathway). This limits pipelining, i.e. reading
the next instruction into the processor while the results of the last are
being passed out. In the von Neumann model instructions can only be
executed sequentially, one at a time.
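To make the fetch-decode-execute idea concrete, here is a toy Python sketch of a von Neumann style machine. The three-instruction "machine language" and the memory layout are invented for this example; a real processor works with binary opcodes and registers, but the principle of fetching one instruction at a time from a single shared memory is the same.

```python
# A toy sketch of the von Neumann idea: program and data share one memory,
# and instructions are fetched, decoded and executed one at a time.

memory = [
    ("LOAD", 6),    # copy the value at address 6 into the accumulator
    ("ADD", 7),     # add the value at address 7 to the accumulator
    ("STORE", 8),   # write the accumulator back to address 8
    ("HALT", None),
    None, None,     # unused locations
    40, 2, 0,       # data held at addresses 6, 7 and 8
]

accumulator, pc = 0, 0               # pc is the program counter
while True:
    opcode, address = memory[pc]     # fetch and decode the next instruction
    pc += 1
    if opcode == "LOAD":
        accumulator = memory[address]
    elif opcode == "ADD":
        accumulator += memory[address]
    elif opcode == "STORE":
        memory[address] = accumulator
    elif opcode == "HALT":
        break

print(memory[8])    # 42: the sum of the two data values
```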
There are also limits to miniaturisation with this model. Each
instruction in the processor relies on a pulse of electricity and, as there
are billions of operations per second, heat builds up very quickly. The
accumulation of heat disrupts the operation of the processor and can overheat the whole
system. There is also a limit to the maximum speed information can travel, especially along
pathways as complex as in microchips. This can become a problem when the data simply
cannot cover the distances required in the time available, thus slowing down the whole process.
Dr John von Neumann
In addition the extreme miniaturisation in integrated circuits has led to the problem of electrons
jumping from one path to another, causing computational errors.
To overcome the limitations in speed and miniaturisation of the von Neumann model,
alternative architectures have been suggested. The alternatives range from conceptual, to
practical, and even to the style of programming they facilitate. Many of these alternatives can
only perform very limited tasks and have been specialised to the point where they cannot be
used for more general tasks. In some cases von Neumann systems can emulate or be part of the
alternative architecture.
Some examples of alternative architectures include:
distributed computing: work is split up into units or tasks and allocated by a master
control to any number of distinct computers
parallel processing: more than one processor in the one computer
clustering: a group of independent computers is managed as a single system
reduced instruction set computer (RISC) processors: often combined with a Harvard
architecture in which data and instructions travel on separate buses (lines) simultaneously
artificial neural networks (ANNs): mimic the operations of neurons in the human
brain
field programmable gate array (FPGA): a processor that can be configured by the
designer or user after manufacturing; can be mixed with von Neumann, RISC, or
other processors.
One other area of alternative architecture is the quantum computer. This theoretical computer
has the ability to solve problems that are not suited to traditional architecture, with much
greater efficiency. Practical examples of problem solving using quantum computing methods
have been successful and research is ongoing. While quantum computing is not suitable for
general purpose processing it could be used in conjunction with current computers.
Activity 6.3 Fundamentals
1. a What is Moore's Law?
b The personal computer (PC) was introduced in 1980. Using Moore's Law, by what
factor would we expect today's PCs to be better than the original IBM PC?
c Use the adverts in a computer magazine to compare an entry-level computer from 18
months ago with a similarly priced one today. Would the improvements validate
Moore's Law?
2. a Explain the difference between analogue and digital.
b What are four advantages of using digital storage of data over analogue?
c Is optical fibre an analogue or a digital form of communication? What advantage does
this form of communication have for the National Broadband Network (NBN) which
is based on fibre to the home?
3. The world is going digital. What does this mean, and why might it be happening?
4. a What is the abbreviation bit short for?
b Why must computer values be stored as bits?
c What decimal value is stored by the binary number 1101 0101?
d Why is a kilobyte 1 024 bytes and not 1 000 bytes?
e How many bytes are there in a gigabyte?
5. a Make a diagram of the von Neumann model of a computer.
b What is the von Neumann bottleneck?
c Choose one of the alternative architectures listed and find out more about it. Write
your findings in a paragraph and report them to the rest of the class.
6. What is a quantum computer? In your answer refer to qubits, basic operation, and the
potential applications of quantum computing.
Processor
The central processing unit (CPU), or processor, in a computing device is contained in an
integrated circuit on a silicon chip. It is made up of many millions of transistors in an area not
much larger than your finger nail.
The chip consists of microscopic layers of silicon, the circuits of
which are made by etching or depositing different materials such as
aluminium or chemically impure silicon. The lines of the circuits on
the chip are so small that one hundred lines are less than 1 mm
wide. Single chips are now being produced with over 2.3 billion
transistors on each.
The processor is the device that follows the steps of the program
the computer is running. It is made up of a variety of sections, each
with a specialised task:
the control unit (CU) directs and synchronises the operations of the processor; it does
this by reading instructions from memory, decoding them and then activating circuits
to execute each instruction
the arithmetic and logic unit (ALU) carries out the instructions from the CU; it
includes components that can add and compare numbers and store partial results;
current processors have a floating point unit (FPU a maths co-processor) as part of
the ALU to assist with complex mathematical operations
a level 1 cache which is a small, very fast memory area for immediate use by the
processor; many processors have a secondary cache known as the level 2 cache which
is bigger and greatly assists media intensive tasks
an I/O unit which consists of registers to control the flow of information to and from
the processor and the rest of the computer, and
a variety of accumulators, registers and buffers that hold data as it is switched
between the parts of the processor.
To make an analogy with a person, the CU could be seen to be the brain that directs the body,
the ALU the liver and stomach that process input, while the I/O system is the senses and voice.
A silicon chip
Taking the analogy further the cache would be a scrap pad for writing quick notes, and the FPU
a calculator.
Because of the complexity and level of miniaturisation these processors are extremely
expensive to design and develop. However, once production is underway they can be
manufactured in batches of thousands. When
development costs have been met, the cost of
each processor soon drops. The current main
producers of processors for desktop computers are Intel and
AMD, while ARM designs power most small mobile devices.
The processor is mounted on a motherboard, a
rectangular circuit board that links all of the
components inside the computer. The
connections are made through pathways called
the system bus. The speed of the system bus (e.g.
100 or 400 MHz) affects the rate at which data
can pass between the components and hence the
speed at which operations can occur.
A personal computer motherboard
Graphics processors
Increasing demands are being placed on processors and memory nowadays. Multimedia
computing for example calls for the manipulation of blobs (binary large objects, e.g. video
images or sound files) which require very fast and complex processing. This demand seems to
be following a spiral pattern. As processors become more powerful developers find applications
that push them to the limit so that more powerful processors need to be developed. The
expectations of users and the more sophisticated applications being produced lead to a rapid
obsolescence in processors.
To counter this trend many computers now employ a graphics processing unit
(GPU). The GPU supports the main processor by handling the complex maths involved in
rendering images for output to the monitor, especially in gaming.
Modern computer games require intensive processing to model the complex movement of 2-
and 3-dimensional representations, and to handle the realistic display of light, shadow and
texture. As the gaming industry has developed and expanded over the past decade, the quality
of graphics expected by game players has increased. In an example of consumer expectations
and demand driving development, this has led to fierce competition to create more powerful
graphics cards.
While processor manufacturers were improving the power of single-core processors, the
graphics industry chose to focus its efforts in a different direction. Graphics cards are massively
multi-core, relying on hundreds of comparatively simple cores working together. These cores
are designed to do very specialised maths, very quickly.
Recently the power of these GPU cards has drawn attention from academics. The inclusion of
dedicated physics processors has allowed real world representation of movement in not only
games, but in scientific modelling. The idea that these specialised functions could be used more
widely caught on quickly, and GPGPU (general-purpose computing on graphics processing
units) has become a standard feature on many cards. This general-purpose programming ability
of GPUs has attracted the attention of universities who see them as a cheap and powerful
processor that can be used to solve complex maths problems. Some universities are already
building large computer clusters of GPUs.

An Nvidia GPU card
It is possible that in future the graphics card will become as important to personal computers as
the main processor. The CPU would deal with general programs while diverting real-world-
problem processing to the graphics card.
Main memory
All programs and data in computer systems are recorded in some type of memory. This can be
on silicon chips linked directly to the processor or on secondary devices such as hard drives,
optical discs like DVDs, or even remotely on the internet.
The main memory is the section of the computer where programs and data are stored ready for
rapid access when needed. The processor has direct access to main memory and so it must be
very fast to avoid slowing down the operations of the processor. Access speeds as quick as 6 or
7 ns (nanoseconds, billionths of a second) are common now. To show how brief this is, 6 ns
compared to one second is roughly the same as 1 second compared to 5 years!
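This comparison can be checked with a quick calculation (a sketch added for illustration):

```python
# A quick check of the comparison: the ratio of 1 second to 6 ns is roughly
# the same as the ratio of 5 years to 1 second.

one_second_over_6ns = 1 / 6e-9                      # about 167 million
five_years_in_seconds = 5 * 365.25 * 24 * 60 * 60   # about 158 million seconds

print(round(one_second_over_6ns / 1e6), "million")
print(round(five_years_in_seconds / 1e6), "million")
```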
This type of memory is made up of multiple lattices of very small wires or printed paths. At
each intersection in this grid is a tiny transistor and capacitor which can store a single on or off
state for access while the system is running. The capacity of these grids is a power of two,
typically 1 024 bits or one kilobit. This is done because it allows binary (base 2) addresses to be
linked directly to physical locations in memory. Other digital storage, such as disc, does not
have this quirk, which has led to a debate over how units of memory should be described. (As
mentioned earlier with the reference to the IEC's MiB and GiB units.)
All types of memory can be divided into ROM (read only memory) and RAM (random access
memory).
ROM holds static instructions needed to run systems such as the BIOS (basic input output
system) and its replacement EFI (extensible firmware interface) on all consumer computers.
ROM is permanent and persists even when the computer is turned off.
RAM on the other hand holds current programs and data. These have only been read into the
memory since the computer was last switched on. Data in RAM is lost when the power is
turned off.
Nowadays this type of memory is principally held by DDR SDRAM (double data rate
synchronous dynamic RAM) modules that consist of millions of transistors that can switch and
hold a 1 or 0. A typical desktop nowadays will come with at least 4 GB of DDR3 RAM as
standard. This can be increased by the addition of more RAM modules that plug into memory
sockets on the motherboard. The more memory available the less disc access is required, and
the quicker and more smoothly the computer operates. These memory modules are currently
240-pin DIMMs (dual in-line memory modules) and are incompatible with earlier generations of motherboards.

Larger computers used in corporations and universities use other more expensive forms of
memory such as magnetic bubble memory or CCDs (charge-coupled devices). Many
enterprise servers will use error-correcting (ECC) RAM that can detect and correct corrupted
bits in the data it holds.
Most computer systems now use some very fast memory as a cache. The cache holds data
recently obtained from disc on the assumption that the same data may be needed again shortly.
The processor will check the cache each time it needs data and if it is available (a hit) can get at
it very quickly without having to wait for the data to come from the disc. Level 1 cache is on
the processor, and while level 2 used to be separate, it is now also on modern processors.
When the processor calls for data the search order is, look in level 1 cache, then level 2 cache,
then main memory, then hard disc, and finally any other (e.g. flash drive, internet). If the data is
found at any point the search stops. Cache memory is kept small to limit search time (and
because it is very expensive), but its use can significantly speed up operation.
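A toy Python sketch (illustrative only) of this search order; the dictionaries simply stand in for the real hardware levels, and the search stops at the first hit:

```python
# A toy sketch of the memory hierarchy search order described above.

level1_cache = {"a": 1}
level2_cache = {"a": 1, "b": 2}
main_memory  = {"a": 1, "b": 2, "c": 3}
hard_disc    = {"a": 1, "b": 2, "c": 3, "d": 4}

def fetch(key):
    levels = [("level 1 cache", level1_cache), ("level 2 cache", level2_cache),
              ("main memory", main_memory), ("hard disc", hard_disc)]
    for name, store in levels:
        if key in store:
            return store[key], name   # a hit: stop searching
    return None, "not found anywhere"

print(fetch("a"))   # found immediately in the level 1 cache
print(fetch("d"))   # only found on the (much slower) hard disc
```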
Secondary memory
Secondary memory includes all of the devices used to store bulk data that is not immediately
needed by the processor. Common forms include hard discs, flash drives, Blu-Ray discs, and
tape cassettes.
While secondary memory devices are much slower than main memory (taking milliseconds,
ms, rather than nanoseconds, ns) the relative cost is very much less. As a result slow but cheap
secondary memory devices are used for storing bulky data such as programs, files, and data that
are not immediately required by the processor. When needed the data is loaded into main
memory so that the processor has swift access to it.
Main memory: a DDR SDRAM DIMM
Disc storage
So-called volume storage in modern computers takes the form of a hard disc drive (HDD), a
solid state drive (SSD), or a mixture of the two. In mobile phones and tablet computers with a
large storage need, a less specialised version of SSD is used, usually a single microchip instead
of a large array of chips. All of these forms of mass storage tend to be referred to (confusingly)
as hard disks or HDDs.
The true HDD consists of one to five aluminium platters
(discs) encased in a sealed housing to avoid contamination
with dust. Each platter has a thin layer of magnetic
material coating it and, as the platter is spun, a read/write
head moves over it. To write to the disc a pulse of
electricity through the head polarises (magnetises) sections
of the disc. The same head can later pass over the disc to
read this pattern of 1s and 0s. In consumer computers
capacity is measured in terabytes, with 2TB or larger being
typical in desktop computers.
SSDs on the other hand are similar to several sticks of
standard computer memory joined together. They employ a
technique that allows the information stored to survive even when the mains or battery power is
not available. SSDs are in the same form factors as standard PATA or SATA HDDs, i.e. 3.5" or
2.5". They can be mounted directly on the motherboard of the device or computer, but are
usually in the same removable form factor as HDDs.
HDDs usually stay permanently in the computer, although portable or external HDDs are
becoming common for holding or transferring large data files such as backup or media content.
These can be moved from computer to computer and connected via USB or ESATA. An
alternative portable option is the use of USB Flash Drives that can store gigabytes of data on a
small chip. Flash drives use the same technology as SSDs.
Optical storage
CDs, DVDs and Blu-Ray are all forms of optical disc used for mass storage of data.
A computer CD (compact disc) works similarly to an audio CD. The data is encoded as a series
of depressions or pits cut into the surface of the disc to represent 1s and 0s. These are read by a
laser that bounces light off the pits. A CD can store 700 MB of programs, video, music, data,
games, etc., in a compact (12cm) form. CDs are rated at speeds such as 52x which are multiples
of a nominal rate. (1x is the speed of a standard audio CD.)
DVD (digital versatile disc) works similarly to CD but holds more data. Depending on the
format a DVD can hold 4.7 GB, 8.5 GB, 9.4 GB or more. Some are even double-sided with two
layers of pits on each side.
The Blu-Ray (BR) disc is the next evolution of the optical storage format.
The name refers to a blue laser that is used to read the data. This format
is capable of storage of up to 128 GB depending on the standard and discs
used. High definition movies are most commonly distributed in BR format.

Inside an HDD showing the top
platter and the read/write head
All three standards of optical disc allow several options for users to create writeable discs.
These are denoted by a postfix to the name:
ROM (read-only memory) discs cannot be changed
R for discs which can be written to once
RW for re-writable discs that allow their content to be changed at a later time.
For recordable (R) discs the surface has a photo-reactive dye layer that can be marked by the
laser to create the pattern of pits that stores the data; re-writable (RW) discs use a phase-change
layer that can be erased and rewritten. This process of recording data is usually referred
to as burning.
Network storage
In addition to sharing data and resources, networks are also important in providing access to
secondary memory for the storage of data.
Networks that require large quantities of data to be available to their users will often use
storage clusters known as network attached storage (NAS) or, for specialised networks,
storage area networks (SANs). NAS devices are dedicated storage computers that do nothing but
connect to a network and provide a large storage space for data. While this style of storage was
once used mainly by enterprises it is becoming more common in everyday households as rich
media such as HD video requires an increasing amount of space.
Secure storage is of primary importance, and so most networks will also have some form of
data backup in case files are lost for one reason or another. The simplest form of backup
employs storage tape using QIC (quarter inch cartridge) or DAT (digital audio tape). Some
systems use discs as backup, and those with a more critical data set may use mirrored drives.
This is a second set of disc drives that constantly copy everything from the master set of drives.
If the original drives go down the mirrored drives can take over immediately.
A RAID (redundant array of inexpensive discs) is
also used in networking situations. In this setup a
bank of HDDs stores data not on one disc but across
a series of the discs. (The data is said to be striped
across the discs.) The RAID acts like a single disc
but has better fault tolerance, data capacity and
reliability. There is also increased system
performance through better I/O of data. If any of the
discs fails it can be replaced with a hot swap
(without stopping the server) and its data rebuilt from the parity or mirrored information held
on the remaining discs of the RAID.
A RAID-5 array
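The parity idea behind RAID-5 can be sketched in a few lines of Python (an illustration, not from the text): the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors.

```python
# A sketch of the RAID-5 parity idea: data blocks are striped across discs
# and a parity block (the XOR of the data blocks) is stored as well. Any one
# lost block can then be rebuilt from the blocks that survive.

disc1 = 0b10110101               # data block on disc 1
disc2 = 0b10010011               # data block on disc 2
disc3 = 0b01100110               # data block on disc 3
parity = disc1 ^ disc2 ^ disc3   # parity block stored on a fourth disc

# Suppose disc 2 fails: XOR the surviving blocks to rebuild its contents.
rebuilt = disc1 ^ disc3 ^ parity
print(rebuilt == disc2)          # True
print(format(rebuilt, "08b"))    # the recovered bit pattern
```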
Types of computer
There is a wide range of computers available, from those that will fit into a pocket to those that
fill rooms. Below are listed some of the main categories in order of size and power:
palmtop computer or PDA (personal digital assistant): pocket sized, used for
portability, convenience and on-line connection by engineers, accountants, real estate
agents, etc.

laptop or notebook computer: used away from the main place of activity by sales people,
students, journalists, lawyers, etc.
desktop computer: the standard computer used for general computing activities
work station: a more powerful desktop computer used for intensive applications such
as CAD or DTP
server: a powerful computer with rapid memory access that acts as a data source for
a network; servers may provide files, print services, web access, email, etc.
mini computer: a desk-sized computer used by agencies such as credit bureaus,
travel companies, educational institutions, etc.
mainframe computer: a large computer system; these are used by insurance
companies, banks, government bodies, universities, etc.
super computer: the largest computer setup available; these are used by the military,
intelligence organisations, or for weather forecasting.
In addition to the above, computers can be used in clusters, where several computers together
handle tasks. The advantages of using computers in clusters are:
availability and redundancy: if one computer goes down the others can take up the
load, giving a high level of fault tolerance
performance: tasks can be shared across the cluster to balance work load and
improve the level of service
management: the cluster appears as one device, which makes for simpler control.
The disadvantages of using a cluster are the initial cost of setup and the complexity of the
controlling software.
Activity 6.4 Inside information
1. Expand the following abbreviations:
USB CPU ALU CU
I/O FPU DDR SDRAM CD-ROM
DVD TB Gb GPU
BIOS RAID ESATA CD-RW
2. What is meant by the term obsolescence in regard to computer processors?
3. a Explain the difference between ROM and RAM.
b What is each used for?
c What is the cache and how does its use help speed up the operations of the processor?
4. a What is secondary memory?
b Give five examples of secondary memory.
c It is possible to hold all of the data and applications a computer could use in main
memory, but this is not done.
Why is secondary memory used even though it is much slower?
5. Find out about one of the following experimental forms of data storage. Briefly explain
how the one you chose works, its potential storage capacity, and the likelihood of its wide
scale use in the near future.
magneto-optical (MO)
magnetic random access memory (MRAM)
scanning tunnelling microscopy (STM)
atomic force microscopy (AFM)
solid immersion lenses (SIL)
atomic resolution storage (ARS)
holographic storage
6. What type or category of computer could each of the following be classified as:
a A Dell Optiplex desktop computer used by an office worker.
b A Cray SX-6 used by the NSA spy agency to decrypt strongly encoded intercepted
messages from China?
c A BlackBerry used by an architect while she was at a building site.
d A DEC computer used by an insurance company to process 50 000 accounts.
e An iPad that a university student takes with him to lectures.
7. a List the type(s) of computer used in your computer room.
b What processors are used in them?
c How much RAM is in these computers?
d What sort of secondary storage is available? Identify the storage capacity of each.
e What peripherals (printers, mice, etc.) are used?
f What form of network connection is used?
8. Using a computer designated by your teacher carry out the following:
a Switch off and unplug the power cord from the wall outlet.
b Disconnect every lead from the back of the computer. Some may need to be
unscrewed using a small screwdriver.
c Carefully examine the ends of the leads and the sockets they go into and note how
each connection is different from the others and how each lead and socket connect
together.
d Connectors are described as male or female depending on which fits into which.
Identify these.
e Each socket or port has a series of pins, each of which is connected to one wire or
cable in the lead. Which connector has the most pins? How many is this?
f If permitted by your teacher open the case of the computer and find each of the
following components:
motherboard
processor
processor fan or cooling fins
power supply
HDD
DIMM slots (memory)
GPU (if present)
CMOS battery
peripheral connectors (USB ports, etc.)
g Identify how many cards are in the back of the computer, and attempt to identify what
each is used for.
h Look at the slots the cards fit into. Is there more than one type of slot (and if so can
you name them)?
i Are there any spare slots for additional memory modules? If so what is the maximum
amount of RAM this computer could have?
j Replace the case and reconnect the leads ensuring each goes on the correct port and is
not forced into position.
9. Go to an on-line computer vendor such as Dell (dell.com.au) and choose a suitable
computer for each of the following individuals:
a A newspaper reporter who travels most of the time; needs a good reliable computer
that is easy to transport and can be used to send stories back to the head office.
b A university student who needs to prepare assignments but does not have much
money.
c A copywriter who is working with colleagues overseas to prepare television
advertisements.
d A school computer coordinator who needs to fill a classroom with networked
computers at a reasonable cost.
Software
Software is the set of programs used to operate a computer. There are three main forms of
software: applications to run on the computer, an operating system (OS) to run the computer
itself, and programming languages used to write the other two forms of software.
Applications
Applications are the type of computer program we are most familiar with. Word processors,
computer games and web browsers are all applications that run on the computer.
Some applications, such as databases and spreadsheets, are general purpose in that they give
the user a basic format for doing a wide variety of tasks. Once the application is started then the
user is free to use the program to develop different types of activities such as invoices, letters,
diagrams or web pages.
In addition to general purpose, there are special purpose or dedicated applications that are
designed to do one specific task. Examples of these are a company payroll program, a
production line control program, or a computer game.
Sometimes applications are grouped together to form a program suite. The best example is
Microsoft Office which includes Word, Excel, Access, PowerPoint, etc. Another example is the
Adobe Creative Suite package which includes Flash, Illustrator, Photoshop, Dreamweaver,
Acrobat, and so on.
Most modern programs give the user the option to customise the interface. The menu structure,
form and position of toolbars, and screen appearance can all be altered to suit the working style
of the user. In turn this gives the user more control over their working environment. Current
application software is also very user friendly. The use of on-screen, context sensitive help (the
message refers to what the user has highlighted) means users do not have to refer to manuals to
find out what to do next. Most applications also include tutorials or wizards that lead the user
through an operation they wish to carry out.
One aspect of current applications that is a result of this customisation and user friendliness is
that they are becoming increasingly large and complex. The increased elaborateness of
programs is in turn making greater demands on hardware in processor requirements, memory
capacity, and data storage devices. This trend has become known as feature creep.
An application should be linked to the task the user is required to carry out, and the user should
be aware of which application is best suited to the task at hand. For example, while the
distinction between word processing and desktop publishing applications has blurred, in certain
situations one is better to use than the other. When it comes to web publishing certain programs
will only go so far, and it is up to the user to know when to move on to a specialist web
publishing package to achieve what they want in the best way. It is the user's responsibility
to choose the correct application for the job.
Remote applications
A recent trend has been for an outside company to provide the software for an organisation. An
application service provider (ASP) will own and host on its own server each of the applications
an enterprise uses. As a user needs a program he or she will access it from their desktop
as normal; however, instead of coming from the local HDD, the parts of the program needed
(and only those parts) will come from the ASP's server. This might be over the internet or a
dedicated link. If more parts of the application are needed they are downloaded as required.
At first sight this on-demand software might seem an inefficient method, but it can be very
effective. Instead of buying dozens of copies of software and then loading these onto each
machine in a network, the ASP provides the programs only as needed. The maintenance,
correcting and updating of dozens of copies of the software does not have to be done. Any
changes in setup can be made at one central location. Security and virus control is centralised.
Costs are also reduced. Instead of paying for all of the features of complex applications, the
enterprise pays for (rents) only the parts of the programs that are actually used. In some cases
savings can also be made on the type of computer being used with a thin-client (minimal
computer) running the remote applications.
A variation on the ASP in more general use is the web-app. These are programs ordinary users
can access on-line and run in their browser. The most popular of these web-apps is Google
Docs, which includes word processing, spreadsheet and presentation software, though there
are many other apps available. Most web-apps can also store the user's files on-line, and so
take up little local disc space.
To be effective both ASPs and web-apps rely on reliable broadband links between the provider
and user. This is so that programs are always available, and run as smoothly as if they were
hosted on the local machine.
Operating systems
When using a word processor we type at a keyboard and are not surprised to see letters appear
on the screen in front of us. Clicking on Print causes a copy of the screen to appear on paper.
But what determines how the keys we press are converted into images that are placed on the
screen, or emerge on paper? The answer is the operating system of the computer we are using.
The operating system (OS) is the program (or set of programs) that manages the flow of
information in a computer system. Data must come from the keyboard or the disc drive, into
memory or the processor, and from there must be sent to the screen, the printer or other
computers. This does not happen by accident, and in fact must be carefully controlled. The
movement is very fast and complex, and to avoid clashes, losses, or errors the operating system
must determine what goes where and when.
There are as many different forms of operating system as there are types of computer. The
system selected for your computer depends on both the level of user friendliness required, and
the purpose the computer is to be used for.
Currently common OSs include:
Microsoft Windows 7
Macintosh OS X
Unix and GNU/Linux
Nintendo, Playstation, etc. for games consoles
VMS and SAA/SNA for mainframe computers.
Despite the wide variety of operating systems there is little standardisation. Software can often
only run on one type of OS, e.g. Xbox games will not run on a Playstation.
At the personal computer level one effect of this is that Macintosh software will not run on
Windows computers without special emulation software. One way of emulating another OS is
to set up a virtual machine, i.e. where one computer is used to carry out or mimic the actions of
another. An example is Parallels Desktop for Mac, an application which permits the Windows
OS and programs to run in a window on the Macintosh.
Types of OS
The types of OS we are most familiar with are those that manage the standalone and the
networked computer.
A standalone is independent of other computers and has the advantage of being able to function
by itself even if there are problems with other nearby computers. The standalone is however
becoming rarer nowadays with the advantages networking offers such as the sharing of
information and resources.
More sophisticated OSs offer the options of multiuser, multitasking, or parallel processing.
A multiuser operating system is one in which more than one user is accessing the computer at a
given time. A good example is an ATM network for a bank. Here a mainframe computer
located in a capital city can control hundreds of ATMs. In sequence the mainframe polls
each ATM to see if it wants to do some processing (deposit, withdrawal, etc.) and, if it does,
gives it a few milliseconds of processing time. (This activity is called time slicing.) Each
user at a multiuser terminal appears to have exclusive use of the computer, yet all of the
terminals are sharing the same processor.
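The round-robin polling and time slicing described above can be illustrated with a short sketch in Python. The terminal names and job lengths are invented purely for illustration, and a real operating system measures its slices in milliseconds of hardware time rather than print statements.

    from collections import deque

    # A minimal sketch of round-robin time slicing: each waiting terminal is
    # polled in turn and given a short burst of processing time.
    jobs = deque([("ATM-1", 5), ("ATM-2", 2), ("ATM-3", 8)])  # (terminal, work units left)
    TIME_SLICE = 3  # work units each terminal receives per turn

    while jobs:
        terminal, remaining = jobs.popleft()
        done = min(TIME_SLICE, remaining)
        remaining -= done
        print(f"{terminal}: processed {done} unit(s), {remaining} left")
        if remaining > 0:
            jobs.append((terminal, remaining))   # back of the queue for another turn
    print("All terminals served")

Because the slices are so short, each terminal's user sees responses almost immediately and is unaware that the processor is being shared.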
In the past multiuser systems were common, especially in places such as universities or offices
where users all accessed one mainframe. Nowadays they have been replaced by networked
computers, with the move toward decentralisation and more local autonomy in processing.
A multitasking OS allows one processor to perform several jobs at once, to gain the maximum
benefit from a processor. The operating speeds of a processor are in nanoseconds (ns, billionths
of a second) while disc access is in milliseconds (ms, thousandths of a second). Rather than
have a processor stand idle for several milliseconds waiting for data from disc, it is often better
to get it to work on another job in the meantime. In this way a fast, powerful processor will
switch jobs in and out, working on several programs at once.
Another development is parallel or distributed processing where an OS uses more than one
processor to handle its task or tasks. This may be in one computer, or alternatively on
computers spread across a network (perhaps at night when they are idle). Naturally the OS that
can handle a series of tasks over many different processors must be very sophisticated and
complex.
File maintenance
Murphy's Law states: If anything can go wrong, it will go wrong, at the worst possible time.
This is never more apparent in computing than when vital files are lost.
Backing up files should be part of your working strategy in operating a computer. It is
important to make regular backups; the more significant the files the more regular the backup.
It is also a good idea either to leapfrog your backups (i.e. keep one or even two generations of
earlier backups before overwriting them) or to use incremental backups. This is where you
make a regular full backup, and then make more frequent, smaller backups of just the files that
have changed since the last full backup.
Backups can be made using special backup programs or by compressing the required files
with a zip (or similar) compression utility. Zipping is also useful for compressing files for
transfer over the Internet as it reduces file size.
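The incremental approach can be sketched in a few lines of Python using the standard zipfile library. This is only an illustration, not a replacement for proper backup software; the folder name and the timestamp file are assumptions made for the example.

    import os, time, zipfile

    SOURCE = "Documents"            # folder to protect (an assumed name)
    STAMP = "last_full_backup.txt"  # records when the last full backup ran (assumed)

    last_full = 0.0
    if os.path.exists(STAMP):
        with open(STAMP) as f:
            last_full = float(f.read())

    archive = time.strftime("incremental_%Y%m%d_%H%M.zip")
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for folder, _, files in os.walk(SOURCE):
            for name in files:
                path = os.path.join(folder, name)
                if os.path.getmtime(path) > last_full:   # changed since last full backup
                    zf.write(path)
    print("Backup written to", archive)
    # A full backup routine would also rewrite the stamp file with time.time().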
If you lose some files it will be time to call on your earlier backup and restore the lost data.
A virus is a self-duplicating program that copies itself onto computers and performs an action
the computer owner does not intend. Again it is part of your working strategy to regularly
check your hard drive(s) for viruses, and most importantly to check every flash drive, and every
internet download or email for viruses before you give them access to your computer.
There are often unwanted files on disc that it is a good idea to delete every now and then to
clear clutter and save disc space. Two places these can be found are the recycled/trash bin
and temp folders.
When files are deleted from a hard disc they are sent to a special folder (the Recycle Bin in
Windows). This is a precaution against accidental deletion, and it holds files in case you later
find you still want them. This folder needs clearing out every now and then.
Many applications, as they are running, write temporary files to the hard drive. These files hold
data that will not fit into memory (RAM). When the application is closed the temporary files
are normally deleted automatically. If, for any reason, the application does not close normally
(Reset button pressed, power failure, etc.) the temp files remain on disc. After a while these can
clutter up the hard disc and should be removed. In Windows computers these files are
recognised by the extension .tmp or by having a tilde (~) in front of the file name. In deleting any
files be very sure they are not files needed for the computer or applications to run.
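A cautious housekeeping sketch along these lines might simply list the candidates rather than delete them, leaving the final decision to the user. The sketch below uses Python's standard os and tempfile libraries; the temp folder it inspects is whatever the system reports, and the naming rules match the description above.

    import os, tempfile

    # List leftover temporary files rather than deleting them outright, since
    # a program that is still running may need some of them.
    temp_dir = tempfile.gettempdir()   # the system's temp folder

    for name in os.listdir(temp_dir):
        path = os.path.join(temp_dir, name)
        if os.path.isfile(path) and (name.lower().endswith(".tmp") or name.startswith("~")):
            size_kb = os.path.getsize(path) / 1024
            print(f"Candidate for removal: {path} ({size_kb:.1f} KB)")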
Finally, after time a disc will need to be defragmented. When files are written to disc they are
placed in the nearest available spare slot. As some files are deleted they leave gaps that parts of
new files are written into. Over time there will be parts of files inefficiently spread all over the
disc. The process of defragmentation collects these parts of files together into continuous
bands, which consolidates the free space and speeds up file access.
Activity 6.5 The computer at work
1. What type of application is each of the following programs?:
CorelDraw Word MYOB Excel
FireFox Access AutoCAD MS Project
DreamWeaver PowerPoint Flash Photoshop
2. a Re-read the task description about Dr Jill Foote and identify the areas of office and
financial procedures that could be computerised in her office.
b Suggest general purpose applications to do each of these tasks.
c What type of OS would best suit the operations of this office? Suggest a specific
operating system that could be used in the situation described.
3. Mike is about to start at UQ in Brisbane studying engineering. His parents have offered to
buy him a laptop to assist in the preparation of assignments, the formation of diagrams,
carrying out calculations and the recording of information he will need to be familiar with.
Mike will need to purchase software to do this.
a By looking on-line, through computer magazines, or by visiting computer shops
obtain a list of the applications (with prices) that you think he will need.
b As Mike is on Austudy he has limited money available. Find a list of free programs
that could be used as an alternative to the above list.
4. In your own words describe the function of an operating system.
5. Suggest a suitable operating system for each of the following situations:
a A high school student with a laptop computer.
b An insurance agent who wishes to communicate with clients.
c A bank system manager who wants to manage her IBM mainframe.
d A Nintendo DS handheld.
e An advertising copy writer preparing a newspaper advert.
f A classroom network.
6. a What is an app store?
b In what ways does this form of software distribution differ from more traditional
methods?
c What are the advantages in obtaining software in this way?
7. Explain what a virtual machine is, and explain why it might be needed.
Networking
The standalone computer is becoming a thing of the past. The convenience and efficiency of
connecting computers together has led most organisations to alter their work practices and to
form networks of computers. Home networks are now also common.
A network is a set of computers that are linked together to share data and communications.
There are three principal advantages of using a network in place of standalone computers:
data and programs can be shared between many computers so that for example
several people can work on one report, all users can access a central database, or
alternatively networked games can be played
resources such as printers, scanners, CD drives, or a broadband router do not have to
be attached to each computer, but can be communal
the ability to communicate with colleagues electronically through email, messaging or
video conferencing.
A group of computers in one location (an office, a school, a work site) is called a local area
network (LAN). Computers that are connected but not on the one site are called a wide area
network (WAN). Examples of WANs are the computers of an organisation located in different
cities or even countries, and bank ATMs. The ultimate WAN is the internet.
Network configuration
Each device connected to a network is called a node. Nodes include computers, printers, CD
drives, modems, laptops, scanners, and so on.
There are two principal configurations for a network:
peer-to-peer – each computer on the network is equal to the others and can be configured
to share peripherals (such as printers) and disc space, including disc space on other
computers
client-server – one computer acts as a file server or master computer that directs data,
communications or printing tasks for a group of other computers (a minimal sketch follows
below); the file server may be dedicated (do no other job) or may also be able to act as a work station itself.
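The client-server idea can be sketched with Python's standard socket library. This is a bare illustration only: the port number (5050) and the messages are invented, and a real file server does far more than echo a request.

    import socket, threading, time

    def server():
        # The server waits for a connection, reads a request and answers it.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind(("localhost", 5050))
            srv.listen()
            conn, _ = srv.accept()              # wait for one client to connect
            with conn:
                request = conn.recv(1024).decode()
                conn.sendall(("Server received: " + request).encode())

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.5)                             # give the server a moment to start

    # The client connects, sends a request and prints the reply.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("localhost", 5050))
        cli.sendall(b"please print my report")
        print(cli.recv(1024).decode())

In a peer-to-peer arrangement every machine would run both roles; in client-server only the central machine runs the server part.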
Peer-to-peer networks are cheaper to establish (no file server) and easier to set up and
administer. They are normally used in a LAN with up to 25 nodes. A peer-to-peer network is
ideal for situations where the main activity is using general applications such as word
processing and spreadsheets as in a small business, a classroom, or in the home. There is
usually a form of password access to log into the system but generally not much greater
security.
The disadvantages with a peer-to-peer network are:
it is limited to about 25 nodes
a particular work station with a peripheral attached can be slowed down as others use
the processor in, for example, a print job
there is also usually a high RAM overhead in running a peer-to-peer network, and
all machines need to stay switched on all the time for users to access all facilities and
data.
The client-server model can handle up to 200 nodes and can also operate over a WAN. While
more expensive to set up and more complex to manage the client-server configuration is more
suitable for businesses in that it can provide centralised data on the server. This data (e.g. stock,
employee data, order entry, billing system, etc.) can then be accessed by each user in the
organisation.
Security on a client-server network is obviously more of an issue with many people able to use
one set of data. This is generally handled by having levels of access so that most users are only
able to read data and only a few are able to modify or delete it.
Both peer-to-peer and client-server networks can have a print server attached. This is a printer
linked directly into the network. As such it does not work through a computer and hence does
not tie one up.
Over recent years with the influence of the internet the distinction between these two network
configurations has started to blur. It is common for systems in business to be a mixture of peer-
to-peer and client-server as each has advantages for some shared tasks and not others. In the
same way home computers are increasingly being used to connect to remote networks or
meshes to obtain work related or rich-media content using the best available method.
(As a side note peer-to-peer (P2P) file sharing refers to the fact that equal partners contribute to
the data exchange, and not to how the networking is structured.)
A variant of client-server that some organisations are using is called thin client. A thin client is
a computer that acts as little more than a screen and keyboard. A user's keyboard or mouse
actions on the client are transmitted to the server which does the work. Applications, data
storage and processing are carried out on the server. The results are then sent for display on the
screen of the thin client. Depending on the type of client, the amount of processing it can
undertake may vary from nil, to being able to act as a standalone if the network connection
goes down.
The advantages of a thin client network are that it is relatively cheap to set up, and very
simple to administer, with all programs and processing being in one central place.
Packet switching
Information that travels over a computer network is usually sent by a process called packet
switching. There are other forms of network transmission (circuit or message switching) but
packet switching is by far the most common. It is the mode used over the internet and most
intranets.
In this form of transmission each message to be sent is divided into parcels or chunks of data
called packets. Each packet has a header of addressing information and a tail to mark the end
of the packet.
[Diagram: a message divided into packets, each with a header of addressing information, a block of data, and a tail]
Packet switching is a very efficient form of data communication because the packets of many
messages can be interwoven onto one line so that full use is made of the bandwidth.
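A toy sketch (in Python) of how a message might be broken into packets is shown below. The field names, destination and packet size are invented for illustration; real protocols define these fields precisely.

    def packetise(message, destination, size=10):
        # Split the data into fixed-size chunks and wrap each chunk in a
        # header (destination and sequence number) and a tail marker.
        packets = []
        for seq, start in enumerate(range(0, len(message), size)):
            packets.append({
                "header": {"to": destination, "seq": seq},
                "data": message[start:start + size],
                "tail": "END",
            })
        return packets

    for p in packetise("Please send this report to the print server", "printer-1"):
        print(p)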
The process of packing parts of several messages onto one channel is called multiplexing.
[Diagram – Multiplexing packets: numbered packets from three messages (A, B and C) interleaved onto one communication channel]
In the diagram the packets from three separate messages have been multiplexed onto the one
communication channel. On receipt, the information in the packet header will be used to direct
each packet to its required destination.
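The way headers allow interleaved packets to be sorted back into their original messages can be illustrated with a short sketch. The packet contents below are invented; only the idea of grouping by message id and ordering by sequence number matters.

    # Packets from three messages (A, B and C) arrive interleaved on one channel.
    channel = [
        {"msg": "A", "seq": 0, "data": "Hel"}, {"msg": "B", "seq": 0, "data": "net"},
        {"msg": "A", "seq": 1, "data": "lo "}, {"msg": "C", "seq": 0, "data": "pay"},
        {"msg": "B", "seq": 1, "data": "work"}, {"msg": "A", "seq": 2, "data": "Kym"},
    ]

    # Group by message id, then rebuild each message in sequence order.
    messages = {}
    for packet in channel:
        messages.setdefault(packet["msg"], []).append(packet)

    for msg_id, packets in sorted(messages.items()):
        text = "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))
        print(msg_id, "->", text)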
Multiplexing small packets is very efficient, but packet switching has another advantage. All
packets do not have to follow the same pathway to their destination. If one communication line
is crowded or broken then some of the packets can be re-directed along other less busy
channels.
Bridges
When a LAN gets too crowded it can be divided into segments (sections) to prevent traffic
overload. A bridge is a device used to link LAN segments. LAN segments that are connected in
this way are sometimes called BLANs (bridged LANs).
A bridge is a combined hardware/software device that maintains a list of node addresses. Each
packet that reaches the bridge is checked and if necessary sent on to another segment. In this
way packets will stay inside their own segment unless they need to reach a node in another
segment. This reduces the network traffic because instead of every device checking the header
of every packet on the network it will only see packets on its own segment. All other packets
are filtered out by the bridge.
A simple bridge will have a look-up table of addresses of devices. Each new device on the
network will have to have its address added to the table. This can be a time consuming task for
network managers. Smart bridges are more common nowadays. These build their own table of
addresses by noting the source address where packets come from and add these to a dynamic
database of device locations. If they detect a new or unknown address they will broadcast it
over the whole network until a device responds. They will then update the table with the
responding address. Smart bridges use a method called the spanning tree algorithm and so they are
sometimes called spanning tree bridges.
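The learning behaviour of a smart bridge can be sketched as a simple table that fills itself in from the source addresses it sees. The device and segment names below are invented for illustration.

    table = {}   # device address -> segment it was last seen on

    def handle(packet, arrived_on):
        table[packet["src"]] = arrived_on           # learn where the sender lives
        dest_segment = table.get(packet["dst"])
        if dest_segment is None:
            print(f"{packet['dst']} unknown: broadcast to all segments")
        elif dest_segment == arrived_on:
            print(f"{packet['dst']} is local to {arrived_on}: packet filtered out")
        else:
            print(f"Forwarding packet to {dest_segment}")

    handle({"src": "PC-1", "dst": "PC-9"}, "segment-A")   # PC-9 not yet known
    handle({"src": "PC-9", "dst": "PC-1"}, "segment-B")   # bridge learns PC-9 is on B
    handle({"src": "PC-1", "dst": "PC-9"}, "segment-A")   # now forwarded to segment-B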
Bridges can also act as repeaters in that they can regenerate a signal as it passes through them.
Nowadays bridges are not used much on local networks, but are still important on larger
networks, including the internet.
Routers
While bridges are effective in linking segments of a LAN together, they are not sophisticated
enough to work over a wide area network. To work effectively over a WAN packets must be
sent directly to a node. Routers read the header information in a packet and extract the network
address of the destination device. They then direct (route) the packet to its destination. Routers
forward messages rather than filtering them as bridges do.
Using routers, packets can be distributed over wide-scale
networks; in fact, without routers the Internet could not work.
Some routers have additional functions such as prioritising
some messages, handling security or multicasting (sending the
same packet to many destinations).
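In essence a router keeps a table mapping destination networks to outgoing links. The sketch below follows the simplification used in the TCP/IP section later in this unit (the first two numbers of an IPv4 address identify the network); the network numbers and link names are invented.

    # A toy routing table: destination network -> link to forward on.
    routes = {
        "204.19": "link to Brisbane office",
        "204.71": "link to Sydney office",
    }
    DEFAULT = "link to the internet service provider"

    def route(ip_address):
        network = ".".join(ip_address.split(".")[:2])   # first two numbers = network
        return routes.get(network, DEFAULT)

    print(route("204.19.141.35"))   # sent over the Brisbane link
    print(route("8.8.8.8"))         # unknown network: use the default route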
For the home user access routers or modem/routers are used
to link several computers or laptops to the one internet
connection, sometimes using a WiFi (wireless) connection.
Switches
Bridges and routers tend to be slow and relatively expensive and have now mostly been
replaced by switches in general purpose settings like homes and small business. (Routers are
still important in enterprise networks with most sites having at least one, and often hundreds.)
Switches combine many of the functions of
bridges and routers, but operate at very much
faster speeds. Instead of waiting for a whole
packet to be received before sending it on, a switch
has a flow-through facility. The addressing
information in a header is read, and the packet
is piped directly into the required output port even before the rest of the packet arrives. This is
described as minimal latency.
Using specially designed circuitry switches can rapidly direct packets, assign different
bandwidth to various output ports or even handle parallel data flows. Advanced switches
incorporate data management functions, prioritisation of data flow and bandwidth allocation.
The main disadvantage a switch has, compared to a router, is that it does not support dynamic
routing. This means that it cannot transfer data from one network to another. It also has
no support for NAT (network address translation – the remapping of one IP address into
another) or advanced filtering.
[Photos: a wireless router; a simple network switch]
Network management
As networks with their associated advantages are becoming standard for computer systems they
also have a cost in terms of management. Someone has to take over the role of manager, and
depending on the type, size and complexity of the network, this may involve a full time
position, the need for a management team, or even an out-sourced management arrangement.
While each computer system may have different forms of management, the role is consistent. It
involves controlling users' access to network equipment and resources, overseeing resource
usage, responding to bottlenecks in the system, planning expansions of hardware, and general
maintenance as well as trouble shooting and fault fixing.
The role of network manager specifically involves:
user management, e.g. establishing users' accounts and level of access, and the
reporting of usage statistics
installation, customisation and maintenance of software, hardware and the operating
system
level of access, i.e. which individuals or groups can be permitted to read, create,
modify or delete which data
network file maintenance, e.g. the clearing away of unused files
establishing remote access (e.g. via modem) or email if required
preserving copyright
backup either to a tape backup unit or through using a mirrored HDD array (one that
copies the main drive constantly and is available if it goes down)
network security: preventing unauthorised access, physical security of the hardware,
data security, establishment of firewalls and avoiding viruses
user confidentiality and privacy.
Overall the network manager is responsible for the quality of service (QoS) of the network, i.e.
how well it supports the operation of the enterprise. The manager may also be responsible for
training and user liaison, and therefore good communication skills are important.
Activity 6.6 Inter-connectivity
1. a Why might a school computer coordinator decide to change the configuration in a
computer room from standalone computers to a network? Give at least three reasons
with examples.
b Would the introduction of thin client work stations be an advantage or a disadvantage
in a school situation?
2. a What is the distinction between a LAN and a WAN?
b Give an example of each that you know about.
3. a Explain what packet switching involves.
b What sort of information is stored in the header and tail of a data packet?
c What is multiplexing and how does it improve the efficiency of communication. Use
diagrams to assist in your answer.
4. Explain with examples why a client-server configuration is more suitable than peer to peer
for a medium sized business such as a local newspaper.
5. Explain the difference between bridges, routers and switches.
6. a Identify the areas that a network manager is responsible for.
b Of these areas which do you consider to be the three most important? Justify your
choices.
c What are the aspects of QoS that can be affected by poor management practices?
7. Bunyip high school has several hundred computers. Recently students have been
downloading games and playing them during class time. There has also been a spate of
derogatory comments sent via the school's email system.
While you may not see a problem with the odd game, and value student email as a vital
form of communication, it is your job as the system administrator to define what is
acceptable on the school computers. You have been asked by frustrated staff to write an
acceptable use policy (AUP) that defines what students should do on the network.
As it is not a formal agreement, the policy can be written as short bullet points, however it
must clearly state the limits on students. These should include what can be downloaded,
what behaviour is appropriate, and what non-class related activities are permitted.
Prepare the AUP.
Network cabling and protocols
Any device on a network that can receive or send data is called a node. Every node in a
network requires an adaptor or NIC (network interface card) installed in it. The adaptor or NIC
enables communication on the network by reading and sending packets using a network
protocol.
A protocol is a standardised way of communicating. Protocols are necessary because there are
so many variables in connecting devices that unless there is a common, agreed way of doing
things, only a garbled message will get through, if at all.
[Photo: a network interface card]
We will investigate protocols shortly, but the most common way of linking the nodes to each
other is to use cabling, which we will look at next.
Cabling
Nodes can be connected by cable so that signals can pass between them.
The principal forms of cable in use are:
twisted pair – this is basically unshielded copper telephone line (UTP), but it may be
shielded to prevent noise on the line (STP)
optical fibre – long threads of glass strand; information is sent by a flashing LED
(light emitting diode) at one end, and read at the other.
The most familiar form of UTP nowadays is the blue CAT5
(category 5) cable used for networking computers. While CAT5 is
most commonly used for 100 Mbit/s networks, such as 100BASE-
TX Ethernet, it can also be used for internal phone networks or for
streaming video.
Optical fibre cable, on the other hand, is now becoming the standard
for faster, more widespread data communication, especially with the
roll out of the national broadband network (NBN).
Despite being expensive optical fibre has the advantages of:
being fast (up to, and even exceeding, 40 Gbps)
having a wide bandwidth, able to carry voice, video and data at the same time
being secure in that it emits no radiation and is difficult to tap (intercept messages)
without disrupting the cable with a physical connection
being immune to electrical interference, so that it can be used even around generators,
etc., in factories, and is not affected by lightning
being usable over long distances.
The expense with optical fibre arises from the complexity of converting electrical signals to
light, and then back again.
The principal alternative form of connection to using cables is wireless (radio) communication
between nodes. We will look at wireless shortly, but before we do we will look at some of the
transmission protocols in use.
Ethernet
Ethernet is currently the protocol most widely in use for cabling and network access. It is fast,
reliable, cheap, and easy to install. As such it has become the industry standard for LANs.
Ethernet is a contention (or collision detection) based protocol. This means that nodes listen for
breaks in the network traffic and, if quiet, they then transmit their message and check to see if it
gets through. Special software is used to detect if there has been a signal collision – if another
device transmitted at the same instant. If there is a collision all messages cease and are re-sent
at random times. This first-come first-served method may seem inefficient, but in fact works
well if the network is working at anything below 80% capacity.
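The back-off behaviour after a collision can be imitated with a toy simulation. The time slots and delays below are invented; real Ethernet hardware handles this in microseconds.

    import random

    # Two nodes try to send in the same time slot; on a collision both wait
    # a random number of slots and try again.
    def attempt(slot_a, slot_b):
        while slot_a == slot_b:                       # both transmitted at the same instant
            print(f"Collision in slot {slot_a}: both nodes back off")
            slot_a += random.randint(1, 4)            # each waits a random number of slots
            slot_b += random.randint(1, 4)
        first = "Node A" if slot_a < slot_b else "Node B"
        print(f"{first} retransmits first; the other follows once the line is quiet")

    attempt(slot_a=1, slot_b=1)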
Ethernet originally ran at 10 Mbps over UTP (called 10Base-T) or thin ethernet (coaxial STP).
10 Mbps is nominally 10 million bits per second (equivalent to about 900 A4 pages of text),
but in practice most 10Base-T networks only achieve about 4 Mbps. This is adequate for file
and printer sharing, but not for streaming video.
Quicker standards with faster packet rates include Fast Ethernet (100Base-T) and Gigabit
Ethernet (1 000 Mbps). (For comparison most HDD access is in the 80-200 MBps range.) A
mixture of 100 Mbit and Gigabit is now standard for home networking.
TCP/IP
TCP/IP is the basis of communication over the Internet (and most intranets). It consists of two
parts: IP (internet protocol) and TCP (transmission control protocol).
IP takes care of the addressing of packets. An IP address consists of four numbers, each less
than 256, separated by dots (e.g. 204.19.141.35). The first two numbers usually specify the segment
address while the latter two point to an individual node on this segment. Every device on the
network is assigned its own IP, enabling other devices to send packets to it.
Since large numbers are not user friendly, internet IPs have been associated with words through
the Domain Name System (DNS). When a user types www.yahoo.com into a browser a DNS
server converts it to the IP 204.71.200.68 that the routers delivering the packets of data can
understand.
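You can watch the DNS at work from Python using the standard socket library. An internet connection is needed, and the address returned today will almost certainly differ from the older example quoted above.

    import socket

    # Ask the operating system's resolver (and so the DNS) for the IP address
    # behind each host name.
    for host in ("www.yahoo.com", "www.google.com"):
        try:
            print(host, "->", socket.gethostbyname(host))
        except socket.gaierror:
            print(host, "-> could not be resolved (no network connection?)")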
The TCP part of the protocol specifies how packets of data are constructed from a message to
be transmitted, and how they are recombined at the receiving device. It also checks for errors in
packets, makes sure all arrive, and arranges for re-transmission if necessary.
Wireless
A wireless network is a group of devices that are linked by microwave communication.
Typically there is an access point (AP or WAP) that connects to a wired network and the
internet. This broadcasts a signal that can be received by a wireless adaptor in devices such as
computers, laptops, printers, MP3 players, smart phones, game consoles, and other appliances.
Wireless connection between devices can be over a local area (WLAN) or a wide area network
(WWAN).
WLANs generally operate on the IEEE 802.11 protocol, generically referred to as Wi-Fi. In
addition to private use in homes and offices, Wi-Fi can also provide contact with other users, or
on-line access, at what are called hotspots. These are publicly available access points that are
provided either free, e.g. at coffee shops or restaurants like McDonald's, or to users who pay
an access fee, e.g. at airports or hotels.
WWANs generally use mobile phone technology, and protocols such as GPRS or 3G, for
signalling and data transmission.
The advantages of wireless are that there is no need for cabling, set-up costs are lower, and
there is the flexibility of being able to move the receiving device (e.g. a laptop) freely within
the access area. There are some limitations in using wireless in that there may be interference or blockage
to the signal which can restrict the range of coverage. There may also be problems of channel
pollution where there are too many wireless signals in the one location.
The biggest drawback to wireless is security. It is relatively easy to tap into a wireless network
without the owner being aware. This unofficial access can be used for purposes ranging from the
benign, such as free web surfing, to the more damaging, such as breaching security, stealing
information, or planting malware. The best defence against unauthorised access is the use of encryption,
whereby any data transmitted on the network is coded so that only verified users can read it.
Bluetooth
Bluetooth is a short range, low power, wireless communications protocol with good security.
Using Bluetooth, fixed and mobile devices can exchange data over short distances,
creating what is sometimes called a personal area network (PAN). This PAN can
include devices such as computers, laptops, mobile phones, printers, faxes, GPS
receivers, digital cameras, game consoles, kitchen appliances, and so on.
Bluetooth is a packet-based protocol with a master-slave structure. One device
acts as master and may communicate with up to seven slave devices in a piconet
(tiny network). Generally data is transferred between the master and one other device at a time,
switching from one slave to another in a round-robin fashion. While there is a broadcast option,
this is rarely used. If required at any time the devices can switch roles, and the slave can
become the master.
Other protocols
ATM (asynchronous transfer mode) is a very high speed protocol mainly used by telcos
(telecommunication providers) such as Telstra. Because of its speed and reliability it is used
mainly for high-bandwidth backbone services such as ADSL. ATM is still too expensive for
individual use but may one day become the dominant networking protocol.
Token Ring was a protocol devised by IBM that has largely fallen into disuse. In this protocol
an imaginary (electronic) token is passed between nodes. Only the node with the token may
transmit, the rest remain silent until they receive the token. It usually works through a hub
and hence forms a star-based network (despite the 'ring' in the name).
The major advantage of the token ring protocol is that selected nodes may be assigned a
priority and have longer use of the token or more frequent access to it. This format is more
expensive than Ethernet and operates at 4 or 16 Mbps. Cabling is usually UTP, although STP is
better. With the dominance of Ethernet it is rarely used nowadays.
IPX/SPX was the protocol used for local area communication before TCP/IP became dominant,
and was important up to the early 1990s. It is still used in some router and network software.
Activity 6.7 Getting together
1. a What advantages does optical fibre have over copper wire for connecting computers?
b Optical fibre is still mainly used for network backbones. Why is it not used for direct
connections to computers in a home or office?
c The National Broadband Network (NBN) is currently rolling out optical fibre to the
home throughout Australia. In what ways will fibre optic cable enable more access to
information than conventional cable?
2. a What is a protocol?
b Why are protocols needed in computer communication?
c List five different protocols used for networks.
3. a How is a token used in a token-ring network?
b What advantages does this type of network offer?
4. Using IPv4 an address consists of four groups of numbers, each less than 256, separated by
dots, e.g. 23.134.229.97.
a What is the largest number of devices that can be connected this way so that each has
a unique IP? (256 x 256 x 256 x 256)
b Will this number be enough to supply a unique IP for every device that will ultimately
be connected to the Internet?
c Use the Internet to find out what IPv6 is and how it overcomes this limitation. Also
see if you can find any ways that have been devised to work around IPv4's limitations.
5. Research and, in a paragraph, report on one of the following:
a Wardriving.
b Wi-Fi hotspots.
c Bluetooth vs Wi-Fi.
Internet cafe
As stated earlier any investigation of computer systems must be aware that they are operated by
people, and so we have been mindful of the social and ethical implications, and the human-
computer interaction that takes place.
To bring together much of what we have looked at with computer systems we will now
investigate the following scenario:
Two friends plan to open and manage a cafe for computer gaming and general internet access. Paul
'Zander' Price and James 'Higgs' Lloyd will be equal partners and have secured significant capital to
purchase equipment and pay expenses for the first several months of operation. They have chosen
a location in the centre of town for maximum exposure and they expect the cafe to be very popular
with the capacity for 35 users.
While no hardware has been purchased yet, and the business model is still being developed, they
have settled the role of manager, the software management package to be used, and customer
acceptable use policies. The grand opening is in just a month.
When compiling the business case for the cafe several surveys were taken to ascertain what the
local population expect to find when they come to the cafe. Several important points were extracted
from these surveys:
the majority of customers would be students from the local high schools and university, with
shoppers expected to make up only a tenth of the clientele
customers coming for internet usage wanted a range of software titles with the 15 most
popular costing collectively $700 per computer and satisfying 90% of those surveyed
those interested in gaming asked for an extensive list of software involving over 350
different applications; of these the top 15 cost $1 800 per computer but would only satisfy
40% of those surveyed
prospective customers under 35, in addition to standard computers, wanted gaming
consoles like the Wii and Playstation 3 to be available, on the understanding that these
would be free to play if there was ever a wait for access to a computer
local shoppers did not want loud or offensive music, while students wanted a mixture of
techno, top 100, and alternative music.
From the outset Paul and James decided that all programs available for use on the system would be
licensed. Users would be able to request new applications, though there would be no way to install
unauthorised software. The exception to this would be add-ons and custom game configurations.
The network as planned is to consist of a simple client-server arrangement with a central server
holding copies of all the software to be accessed. From this server each computer would load the
required software across the network. Every computer on the network would have full internet
access with a firewall monitoring traffic and a routing device slowing down users who exceeded
standard access limits.
Activity 6.8 Game time
1. a Describe a typical client you expect will use this internet cafe.
b List probable games this user would expect to play.
c In addition to games, what other activities would users carry out? Identify software
that would support these activities.
2. What advantages will Paul and James gain by having software on a central server?
Consider time, money, space, security, efficiency, effectiveness, and customer satisfaction
in your answer.
3. a Make a list of the hardware you think Paul and James will need to support the
software you identified in Q1.
b Use computer magazines, adverts, or web sites to prepare a cost effective system for
the internet cafe.
In doing this keep in mind:
Paul and James do not have a lot of money to spend so the system purchased will
need to be cost effective (i.e. pay for itself over time)
the computer system must be simple to use and easy to maintain
the system will need to be flexible so that it can be upgraded as time goes by.
4. a Suggest suitable hours of business for the internet cafe.
b Make a list of the different tasks you think Paul and James will find themselves
carrying out during the day. Include as many as you can including user support and
equipment maintenance.
c Will Paul and James be able to run this cafe by themselves? If not how many other
staff, doing what, do you think they will have to employ?
5. a What is meant by the term total cost of ownership?
b What items other than hardware and software might be included in the cost of setting
up this internet cafe?
6. What are some of the human factors Paul and J ames will need to take into account in
establishing this system? (Consider monitoring, security, management, OH&S, and other
similar issues.)
7. Make a diagram showing an appropriate layout design for the internet cafe indicating the
reception area, computer room, point of sale, and so on.
Activity 6.9 Computer system case study
In a small group undertake a case study of a working computer system in a local enterprise.
Suitable sites will need to be contacted beforehand and arrangements made to visit the various
systems. Your school's library is a good alternative if there is no outside system available.
To carry out the case study you will work through three stages:
as a group prepare a questionnaire to query the organisation your group is to visit
visit the computer system
comment on the system you have visited.
In completing this study you will follow the DDE cycle.
Design
Form a group of 3 or 4 who will visit one of the enterprises.
As a group prepare the questionnaire. It should address the following areas:
purpose what is the main activity of the organisation (its core business); in what
ways does the computer system support this; what other supplementary activities (if
any) does the system support (WP, accounting, inventory, etc.); who are the clients of
this enterprise; what procedures are in place for dealing with these clients; is any
outsourcing used
hardware what type of computers are used in the enterprise; determine the variety
of input and output devices in use; what forms of secondary storage are used; if
possible cost the system; what physical security is in place
software what operating system(s) is used; list some of the major programs
employed; identify the backup system (form, frequency, storage); determine software
maintenance and programming (if any)
human systems determine the variety of tasks performed in operating the system;
how is the system administered and maintained; what training of staff occurs; what
level of user support is in place; how are new staff recruited; what form does HR
management take; investigate user comfort (hardware, furniture, programs)
networking what type of network is in use; what is the number of users; how good is
network performance; is wireless networking used; establish the role of email and
internet in the organisation; what protocols are supported; what forms of network
security are in place.
Develop
Prepare the questionnaire. As a courtesy forward a copy to your host before you visit the
system.
Visit the system and have at least one member of your group (preferably two) complete the
questionnaire. Collect sufficient information to also be able to answer the evaluation questions
(see below). Upon completion thank your host.
On return prepare a neat copy of the questionnaire with details filled in.
Prepare and send a letter of thanks addressed to the host at the organisation you visited.
Evaluate
In addition to your completed group questionnaire you (individually) should submit answers to
the following:
1. Assess the overall effectiveness of the system you have visited. To do this you must
identify the purpose and function of the system. In your opinion does the system support
the operation of the enterprise significantly? If so, in what ways, if not how could it be
improved?
2. In what ways is the use of networking an advantage to the organisation?
3. Discuss the ease of use of the system. In your answer refer to the hardware and software
ergonomics, the support given to users, and the functionality of the user interface.
4. Comment on the security system in place. Is it in your opinion effective, or could it be
improved in some way?
SEI 6 The changing workplace
Automation is where human workers have some or all of their jobs taken over by machines or a
new computer system. Automation can take place in factories with the introduction of robotic
devices and computer controlled processes. It can also occur in offices and businesses with the
increased use of computers and other technology.
Automation can be costly. Installing new devices and systems has an initial set-up cost, and
there is usually a disruption to on-going activities while the new system is installed. It can also
be costly in human terms. If a person's job is automated and they have to undertake a new
activity, they are described as being displaced. Workers who cannot make the transition or
whose job disappears may be made redundant.
Automation however does not automatically mean workers lose their jobs. Often the boring or
dirty or dangerous jobs are taken by machines and the human worker takes a position of more
responsibility. Usually, if this happens, the worker will need retraining in the use of the new
technology.
In the next activity we will explore some of the social and ethical effects that automation can
have in the work place.
Activity 6.10 Automation
Read this description of job displacement and answer the questions that follow:
Peter Harris is over 50 and for the first time in his life he is afraid of unemployment. He is a carpenter
and for over 30 years has worked for The Furniture Factory in Brisbane. He knows at his age it will
be almost impossible to get another job if he is made redundant now.
The reason Peter fears for his job is the automation that has taken place in the factory. He first felt
the impact five years ago when the company converted furniture making to an assembly line process
to reduce costs and speed up production. The assembly line is fully automated, with robotic devices
cutting, holding and securing sections of furniture together. The whole process is computer
controlled.
Although the union objected at first, the company argued that robotic devices on the assembly line
could work around the clock without tiring or making mistakes. They felt that to stay competitive they
had no choice other than to install the assembly line and robots. After discussion and some
concessions the union agreed to the process. The initial cost of the assembly line, central computer
and robots was high, but they were able to pay for themselves within three years. The Furniture
Factory became a leader in its field and profits rose.
At the time Peter was not laid off but he was displaced; that is, his job disappeared and he had to
take on a new position. The new job was in quality control where he had to inspect the furniture and
ensure no poor joints were passed onto the next section. Although the new position required less
skill and gave little personal satisfaction he took the job as he had a family to support.
Now five years later Peter is again facing job displacement. The quality of work by the robotic
devices has been so high that it can now be checked by an automated system. The company has
offered to retrain Peter as a technician to service and maintain the robotic devices.
Peter knows very little about technology, and feels he is too old to learn a new trade. He wants to
keep working for the company, but as a carpenter, the job he has taken pride in for most of his life. He
also has his financial responsibilities.
Faced with what he wants to do, and the fear that he cannot handle the technician's position, he does
not know what to do.
1. a What advantages do automated workplaces have over non automated?
b What disadvantages are there for people who work in an automated environment?
2. a How can companies justify the high set-up costs of automating?
b Why did the Furniture Factory feel they had no choice other than to use robots?
3. a Which types of jobs are most likely to be taken over by robotic devices?
b Which are the least likely?
4. a Why might unions argue against automation?
b Why might unions argue for automation?
5. What is the difference between being made redundant (laid off) and experiencing job
displacement?
6. Describe how you think Peter would have felt:
a When he was first displaced – refer to deskilling, lack of job satisfaction, and the
morale effect of having to work alongside a machine.
b With his latest job displacement where he has to undergo retraining and will end up
as virtually the servant of the robotic devices.
7. What do you think Peter should do? Why?
The next extract describes a case of redundancy. Read it and answer the questions that follow:
The principal of Bunyip Primary School is faced with a dilemma: he has to make a difficult choice.
Much of the clerical work in the school is carried out by two teacher aides. Maud is a supporting
parent in her late thirties with children at the school. While she has worked at the school for a
number of years, and is very capable, she is not prepared to use computers, instead relying on her
electric typewriter and calculator. Jenny is a single woman of twenty-two who gained her position
after two years of unemployment. While unemployed she completed several Job Skill courses in
computers. Both are valuable workers and are cheerful and efficient.
The school has been offered a new computer-based administrative package developed by the
education department. This will greatly improve the efficiency of the running of the school, lightening
the load on existing staff and in turn improving the general quality of education given to the students.
The package is optional; only those schools that choose to take it up need do so. A condition of the
package is that the number of teacher aides must be halved, in this case from two to one.
The principal is keen to adopt the package which will bring greater effectiveness to the running of the
school, but is unable to offer an alternative position to the redundant aide. Neither would be likely to
find other employment. Maud depends on her salary to help support her family, while Jenny hopes to
get married soon if she can afford it.
8. a What options does the principal face?
b Explain why he would not wish to choose either option.
9. Decide (with reasons) which choice you think will be best for the school and the people
involved. (If you cannot come to a decision attempt to explain why you could not.)
10. Join a group of 3 to 4 other students and discuss the dilemma, each person to put forward
their view and reasons. As a class list the various reasons and decisions reached.
Handling data is crucial, so next we will see how this can best be done.