
INTRODUCTION TO COMPUTERS

ANIS HASSAN
MAJID AZIZ KALERI
Center of Excellence Hyderabad

INTRODUCTION TO COMPUTERS
Introduction:
The main purposes of this chapter are:

To introduce the amazing machine, the computer, its limitations and characteristics.
To introduce the history of the computer.
To learn about various types of computers in different aspects.
To know the advantages and impact of the computer on our society.
To give first-hand knowledge about programming languages and translators.

Computer: A computer is a machine that performs tasks, such as calculations or
electronic communication, under the control of a set of instructions called a
program. Programs usually reside within the computer and are retrieved and
processed by the computer's electronics. The program results are stored or
routed to output devices, such as video display monitors or printers.
Computers perform a wide variety of activities reliably, accurately, and quickly.
Hardware:
What is a computer?
A computer is defined as "an electronic device that accepts data (input),
performs arithmetic and logical operations using the data (process) to produce
information (output)."
Computers process and store digital signals based on binary arithmetic.
Binary arithmetic uses only two digits, 0 and 1, to represent any number, letter
or special character.
Data--Numbers, characters, images or other methods of recording, in a form
which can be assessed by a human or (especially) input into a computer,
stored and processed there.
Information--Data on its own has no meaning; only when interpreted by some
kind of data processing system does it take on meaning and become
information. For example, the number 1234.56 is data, but if it is output as
your bank balance then it is information.
A set of instructions, called a program, operates the computer.
Brief History:
The first large-scale electronic digital computer, ENIAC (Electronic Numerical
Integrator and Computer), was introduced in 1946. The ENIAC weighed thirty
tons, contained 18,000 vacuum tubes, and filled a 30 x 50 foot room, yet was
far less powerful than today's personal computers.
In 1955, Bell Labs introduced its first transistor computer. Transistors are faster,
smaller and create less heat than traditional vacuum tubes, making these
computers much more efficient.
In 1971, the first microprocessor, the Intel 4004, was designed. This single chip
contained all the basic parts of a central processor.
In 1977, Apple Computer Inc., Radio Shack, and Commodore all introduced
mass-market computers, beginning the PC era and the microcomputer race.
In 1980, IBM hired Paul Allen and Bill Gates to create an operating system for
a new PC. IBM allowed Allen and Gates to retain the marketing rights to the
operating system, called DOS. In 1981, IBM joined the personal computer race
with its IBM PC, which ran the new DOS operating system.
In 1990, Microsoft released Windows 3.0, a complete rewrite of previous
versions and one in which most desktop users would eventually spend most of
their time. Windows 3.0 used a graphical user interface (GUI).
In 1995, Microsoft released Windows 95, Microsoft Office 95 and the online
Microsoft Network. In 1997, Microsoft released Microsoft Office 97.
Classification of Computers
Mainframe Computers
Minicomputers
Microcomputers
Supercomputers
Mainframe computers: are very large, often filling an entire room. They can
store enormous amounts of information, can perform many tasks at the same
time, can communicate with many users at the same time, and are very expensive.
The price of a mainframe computer frequently runs into the millions of dollars.
Mainframe computers usually have many terminals connected to them.
These terminals look like small computers but they are only devices used to
send and receive information from the actual computer using wires.
Terminals can be located in the same room with the mainframe computer, but
they can also be in different rooms, buildings, or cities. Large businesses,
government agencies, and universities usually use this type of computer.
Minicomputers: are much smaller than mainframe computers and they are
also much less expensive. The cost of these computers can vary from a few
thousand dollars to several hundred thousand dollars. They possess most of
the features found on mainframe computers, but on a more limited scale.
They can still have many terminals, but not as many as the mainframes. They
can store a tremendous amount of information, but again usually not as much
as the mainframe. Medium and small businesses typically use these
computers.
Microcomputers: are the types of computers you are using in your classes at
Floyd College. These computers are usually divided into desktop models and
laptop models. They are terribly limited in what they can do when compared
to the larger models discussed above because they can only be used by one
person at a time, they are much slower than the larger computers, and they
cannot store nearly as much information, but they are excellent when used in
small businesses, homes, and school classrooms. These computers are
inexpensive and easy to use. They have become an indispensable part of
modern life.
Computer Tasks
Input
Storage
Processing
Output
When a computer is asked to do a job, it handles the task in a very special
way.
It accepts the information from the user. This is called input.
It stores the information until it is ready for use. The computer has memory
chips, which are designed to hold information until it is needed.
It processes the information. The computer has an electronic brain called the
Central Processing Unit, which is responsible for processing all data and
instructions given to the computer.
It then returns the processed information to the user. This is called output.
Every computer has special parts to do each of the jobs listed above.
Whether it is a multimillion-dollar mainframe or a thousand-dollar personal
computer, it has the following four components: Input, Memory, Central
Processing, and Output.
The central processing unit is made up of many components, but two of them
are worth mentioning at this point. These are the arithmetic and logic unit and
the control unit. The control unit controls the electronic flow of information
around the computer. The arithmetic and logic unit, ALU, is responsible for
mathematical calculations and logical comparisons.
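As a small illustration of this input-process-output cycle, here is a minimal sketch in Python (the language and the function name process are chosen only for this example): the program accepts two numbers as input, carries out the kind of arithmetic and logical comparison the ALU performs, and returns the result as output.

    # A minimal sketch of the input -> process -> output cycle.
    def process(a, b):
        total = a + b                 # arithmetic, the kind of work the ALU does
        larger = a if a > b else b    # logical comparison, also an ALU job
        return total, larger

    # Input: values supplied by the user.
    x = float(input("First number: "))
    y = float(input("Second number: "))

    # Processing and output.
    total, larger = process(x, y)
    print("Sum:", total, "Larger value:", larger)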
Input Devices
Keyboard
Mouse
Scanner
Microphone
CD-ROM
Joystick
Output Devices
Monitor
Speakers
Printer
Impact
Daisy Wheel
Dot Matrix
Non-Impact
Ink Jet
Laser

Storage Devices
Floppy disk
Tape drive
Local drive (C:)
Network drive (Z:)
CD-ROM
Zip disk
Memory
Read Only Memory (ROM)
ROM is a small area of permanent memory that provides startup instructions
when the computer is turned on. You cannot store any data in ROM. The
instructions in ROM are set by the manufacturer and cannot be changed by
the user. The last instruction in ROM directs the computer to load the
operating system.
Every computer needs an operating system. This is a special computer
program that must be loaded into memory as soon as the computer is turned
on. Its purpose is to translate your instructions in English into binary so that
the computer can understand them. The operating system also translates the
results generated by your computer back into English when it is finished so
that we can understand and use the results. The operating system comes
with the computer.
Random Access Memory (RAM)
This is the area of memory where data and program instructions are stored
while the computer is in operation. This is temporary memory. NOTE: The
data stored in RAM is lost forever when the power is turned off. For this
reason it is very important that you save your work before turning off your
computer. This is why we have peripheral storage devices like your
computers hard disk and floppy diskettes.
Permanent Memory (Auxiliary Storage)
Your files are stored in permanent memory only when saved to a floppy disk
in the A: drive or saved to your computer's hard disk (drive C:).
To better understand how a computer handles information, and to also
understand why information is lost if the power goes off, let's take a closer
look at how a computer handles information. Your computer is made of
millions of tiny electric circuits. For every circuit in a computer chip, there are
two possibilities:
an electric current flows through the circuit, or
an electric current does not flow through the circuit.

When an electric current flows through a circuit, the circuit is on. When no
electricity flows, the circuit is off. An on circuit is represented by the number
one (1) and an off circuit is represented by the number zero (0). The two
numbers 1 and 0 are called bits. The word bit comes from binary digit.
Each time a computer reads an instruction, it translates that instruction into a
series of bits, 1s and 0s. On most computers every character from the
keyboard is translated into eight bits, a combination of eight 1s and 0s. Each
group of eight bits is called a byte.
Byte The amount of space in memory or on a disk needed to store one
character. 8 bits = 1 Byte
Since computers can handle such large numbers of characters at one time,
metric prefixes are combined with the word byte to give some common
multiples you will encounter in computer literature.
Kilo means 1000
Mega means 1,000,000
Giga Means 1,000,000,000

kilobyte (KB) = 1,000 Bytes
megabyte (MB) = 1,000,000 Bytes
gigabyte (GB) = 1,000,000,000 Bytes

As a side note, the laptop computers that you are using at Floyd College have
32 MB of RAM.
At this point it would be good to point out why information stored in RAM is
lost if the power goes off. Consider the way the following characters are
translated into binary code for use by the computer.
A    01000001
B    01000010
C    01000011
X    01011000
Z    01011010
1    00110001
2    00110010
Consider the column at the right, which represents how the computer stores
information. Each of the 1s in the second column represents a circuit that is
on. If the power goes off, these circuits can NOT be on any more because
the electricity has been turned off and any data represented by these circuits
is lost.
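A quick way to see these 8-bit patterns for yourself is the short Python sketch below (Python is used here purely for illustration); it prints the binary code stored for each character, matching the table above, and recalls how the metric prefixes relate to byte counts.

    # Print the 8-bit ASCII pattern stored for each character.
    for ch in "ABCXZ12":
        bits = format(ord(ch), "08b")   # ord() gives the numeric code, "08b" formats it as 8 binary digits
        print(ch, bits)

    # One character needs 1 byte (8 bits); metric prefixes then give
    # 1 kilobyte = 1,000 bytes, 1 megabyte = 1,000,000 bytes, and so on.
    print(1_000_000, "bytes is one megabyte")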

ABACUS
Fascinating facts about the invention of the abacus by the Chinese in 3000 BC.
Calculation was a need from the early days when it was necessary to account
to others for individual or group actions, particularly in relation to maintaining
inventories (of flocks of sheep) or reconciling finances. Early man counted by
means of matching one set of objects with another set (stones and sheep).
The operations of addition and subtraction were simply the operations of
adding or subtracting groups of objects to the sack of counting stones or
pebbles. Early counting tables, named abaci, not only formalized this counting
method but also introduced the concept of positional notation that we use
today.
The next logical step was to produce the first "personal calculator", the
abacus, which used the same concepts of one set of objects standing in for
objects in another set, but also the concept of a single object standing for a
collection of objects: positional notation. The Chinese abacus was developed
about 5000 years ago. It was built out of wood and beads. It could be held
and carried around easily. The abacus was so successful that its use spread
from China to many other countries. The abacus is still in use in some
countries today.
The abacus does not actually do the computing, as today's calculators do. It
helps people keep track of numbers as they do the computing. People who
are good at using an abacus can often do calculations as quickly as a person
who is using a calculator.
This one-for-one correspondence continued for many centuries, even up
through the many years when early calculators used the placement of holes in
a dial to signify a count, such as in a rotary dial telephone. Although these
machines often had the number symbol engraved alongside the dial holes, the
user did not have to know the relationship between the symbols and their
numeric value.
Primitive people also needed a way to calculate and store information for
future use. To keep track of the number of animals killed, they collected small
rocks and pebbles in a pile. Each stone stood for one animal. Later they
scratched notches and symbols in stone or wood to record and store
information. Only when the process of counting and arithmetic became a
more abstract process and different sizes of groups were given a symbolic
representation so that the results could be written on a "storage medium"
such as papyrus or clay did the process of calculation become a process of
symbol manipulation.


AT A GLANCE:
Invention: abacus, c. 3000 BC
Function: a counting device; a mechanical device for making calculations,
consisting of a frame mounted with rods along which beads or balls are moved
Nationality: Chinese

The First Computer


Introduction
Seldom, if ever, in the history of technology has so long an interval separated
the invention of a device and its realization in hardware as that which elapsed
between Charles Babbage's description, in 1837, of the Analytical Engine, a
mechanical digital computer which, viewed with the benefit of a century and a
half's hindsight, anticipated virtually every aspect of present-day computers.
Charles Babbage (1792-1871) was an eminent figure in his day, elected
Lucasian Professor of Mathematics at Cambridge in 1828 (the same Chair
held by Newton and, in our days, Stephen Hawking); he resigned this
professorship in 1839 to devote his full attention to the Analytical Engine.
Babbage was a Fellow of the Royal Society and co-founder of the British
Association for the Advancement of Science, the Royal Astronomical Society,
and the Statistical Society of London. He was a close acquaintance of Charles
Darwin, Sir John Herschel, Laplace, and Alexander von Humboldt, and was author
of more than eighty papers and books on a broad variety of topics.
His vision of a massive brass, steam-powered, general-purpose, mechanical
computer inspired some of the great minds of the nineteenth century but failed
to persuade any backer to provide the funds to actually construct it. It was
only after the first electromechanical and later, electronic computers had been
built in the twentieth century, that designers of those machines discovered the
extent to which Babbage had anticipated almost every aspect of their work.
These pages are an on-line museum celebrating Babbage's Analytical Engine.
Here you will find a collection of original historical documents tracing the
evolution of the Engine from the original concept through concrete design,
ending in disappointment when it became clear it would never be built. You'll
see concepts used every day in the design and programming of modern
computers described for the very first time, often in a manner more lucid than
contemporary expositions. You'll get a sense of how mathematics, science,
and technology felt in the nineteenth century, and for the elegant language
used in discussing those disciplines, and thereby peek into the personalities
of the first computer engineer and programmer our species managed to
produce. If you are their intellectual heir, perhaps you'll see yourself and your
own work through their Victorian eyes.
Since we're fortunate enough to live in a world where Babbage's dream has
been belatedly realized, albeit in silicon rather than brass, we can not only
read about The Analytical Engine but experience it for ourselves. These pages
include a Java-based emulator for The Analytical Engine and a variety of
programs for it. You can run the emulator as an applet within a Web page or
as a command-line application on your own computer (assuming it is
equipped with a Java runtime environment). These pages are a museum, and
its lobby is the Table of Contents, to which all other documents are linked.
Rather than forcing you to follow a linear path through the various resources
here, you can explore in any order you wish, returning to the Table of
Contents to select the next document that strikes you as interesting. Every
page has a link to the Table of Contents at the bottom, so it's easy to get back
when you've finished reading a document or decided to put it aside and
explore elsewhere.
The Five Generations of Computers
The history of computer development is often referred to in reference to the
different generations of computing devices. Each generation of computer is
characterized by a major technological development that fundamentally
changed the way computers operate, resulting in increasingly smaller,
cheaper, more powerful and more efficient and reliable devices. Read about
each generation and the developments that led to the current devices that we
use today.
First Generation - 1940-1956: Vacuum Tubes
The first computers used vacuum tubes for circuitry and magnetic drums for
memory, and were often enormous, taking up entire rooms. They were very
expensive to operate and in addition to using a great deal of electricity,
generated a lot of heat, which was often the cause of malfunctions. First
generation computers relied on machine language to perform operations, and
they could only solve one problem at a time. Input was based on punched
cards and paper tape, and output was displayed on printouts.
The UNIVAC and ENIAC computers are examples of first-generation
computing devices. The UNIVAC was the first commercial computer delivered
to a business client, the U.S. Census Bureau in 1951.
Second Generation - 1956-1963: Transistors
Transistors replaced vacuum tubes and ushered in the second generation of
computers. The transistor was invented in 1947 but did not see widespread
use in computers until the late 50s. The transistor was far superior to the
vacuum tube, allowing computers to become smaller, faster, cheaper, more
energy-efficient and more reliable than their first-generation predecessors.
Though the transistor still generated a great deal of heat that subjected the
computer to damage, it was a vast improvement over the vacuum tube.
Second-generation computers still relied on punched cards for input and
printouts for output.
Second-generation computers moved from cryptic binary machine language
to symbolic, or assembly, languages, which allowed programmers to specify
instructions in words. High-level programming languages were also being
developed at this time, such as early versions of COBOL and FORTRAN.
These were also the first computers that stored their instructions in their
memory, which moved from a magnetic drum to magnetic core technology.
The first computers of this generation were developed for the atomic energy
industry.
Third Generation - 1964-1971: Integrated Circuits
The development of the integrated circuit was the hallmark of the third
generation of computers. Transistors were miniaturized and placed on silicon
chips, called semiconductors, which drastically increased the speed and
efficiency of computers.
Instead of punched cards and printouts, users interacted with third generation
computers through keyboards and monitors and interfaced with an operating
system, which allowed the device to run many different applications at one
time with a central program that monitored the memory. Computers for the
first time became accessible to a mass audience because they were smaller
and cheaper than their predecessors.
Fourth Generation - 1971-Present: Microprocessors
The microprocessor brought the fourth generation of computers, as thousands
of integrated circuits were built onto a single silicon chip. What in the first
generation filled an entire room could now fit in the palm of the hand. The Intel
4004 chip, developed in 1971, located all the components of the computer
(from the central processing unit and memory to input/output controls) on a
single chip.
In 1981 IBM introduced its first computer for the home user, and in 1984 Apple
introduced the Macintosh. Microprocessors also moved out of the realm of
desktop computers and into many areas of life as more and more everyday
products began to use microprocessors.
As these small computers became more powerful, they could be linked
together to form networks, which eventually led to the development of the
Internet. Fourth generation computers also saw the development of GUIs, the
mouse and handheld devices.
Fifth Generation - Present and Beyond: Artificial Intelligence
Fifth generation computing devices, based on artificial intelligence, are still in
development, though there are some applications, such as voice recognition,
that are being used today. The use of parallel processing and
superconductors is helping to make artificial intelligence a reality. Quantum
computation and molecular and nanotechnology will radically change the face
of computers in years to come. The goal of fifth-generation computing is to
develop devices that respond to natural language input and are capable of
learning and self-organization.
Computer Software

System Software
System software will come provided with each computer and is necessary
for the computer's operation. This software acts as an interpreter between
the computer and the user. It interprets your instructions into binary code and
likewise interprets binary code into language the user can understand. In the
past you may have used MS-DOS, or Microsoft Disk Operating System, which
was a command-line interface. This form of system software required specific
commands to be typed. Windows 95 is a more recent version of system
software and is known as a graphical interface. This means that it uses
graphics or "icons" to represent various operations. You no longer have to
memorize commands; you simply point to an icon and click.
Program Software
Program software is software used to write computer programs in specific
computer languages.
Application Software
Application software is any software used for specified applications such
as:
Word Processing
Spreadsheet
Database
Presentation Graphics
Communication
Tutorials
Entertainment, Games
THE INTERNET
The Internet is a global computer and resource network that provides the user
with a huge amount of information and services including such things as
database access, electronic mail (E-Mail), file transport, discussion lists, online news, weather information, bulletin boards, airline traffic, crop production,
on-line text books, and services offered by the World Wide Web, Gopher and
WAIS. Now, what does all of this mean to the practicing physician?
The main advantage of the Internet is the interactive exchange of information
regardless of location or medium. Researchers and commercial and federal
agencies were looking for methods to interconnect computers and various
attached devices to allow for this exchange and sharing of information.
Physically linking computers and other devices together with cables,
telephone lines, satellite links, and special electronic equipment into a
computer network has been accomplished, but at a cost: proprietary and often
complex, incompatible and expensive systems resulted. Interconnectivity of
individual networks and different technologies to form larger networks that
could span the world became prudent. Nowadays, the term "Information
Superhighway" is being used to illustrate that need, but that term is more a
political smoke screen than a truthful representation of the Internet. The
Internet (Note the capital I) is NOT a "network of networks." There is no such
thing. Nor is the Internet maintained or managed by someone or some
organization. These fallacies do injustice to the Internet since they only
address the physical layer and do not emphasize the functional integration of
the diverse and dispersed resources that this physical layer transports.
To appreciate the current state of the Internet, it is helpful to spend some time
on its roots. Interestingly, the United States Department of Defense early on
recognized and funded research in the area of interconnecting diverse
networks. Toward this goal, the Advanced Research Projects Agency (ARPA)
of the United States in the mid-1960's sponsored universities and other
research organizations in developing a protocol that would satisfy ARPA's
(military) requirements: being able to withstand a nuclear attack. During the
1970's, the Transmission Control Protocol and Internet Protocol (TCP/IP)
family of protocols was developed. The TCP/IP protocol that resulted is the
basis of the current Internet user community and defines how computers do
such things as start an interconnection, send data back and forth, or terminate
a connection. The initial "Internet" was primarily a private information access
tool for researchers and scientists at universities and federal agencies.
However, the ease of interconnecting TCP/IP networks together, combined
with the fact that TCP/IP networks are allowed to grow without disrupting
existing networks, and the policy to make the TCP/IP protocol available to
everyone in both the academic and research environment has stimulated
today's enormous popularity. Soon, networks based on the TCP/IP protocol
grew from just a few hundred computers to the world's largest network of
academic, government, commercial, and educational networks,
interconnecting millions of systems.
Who Pays for the Internet?
The federal government through the National Science Foundation subsidizes
the Internet. Academic institutions and commercial parties also bear some of
the cost to make an Internet connection available to their faculty, students,
and employees. Nowadays, Internet connections for home use are becoming
available from commercial providers. It should not come as a surprise in the
current climate of cost containment that the federal subsidies are being
reduced, if not phased out, and that private industry, such as CompuServe
and America Online, is filling this financial void. The other side of the coin is
that commercialization will increase on the Internet.
How does it Work?
When we pick up the phone to make a call, a dedicated connection between
our phone and the other phone is made. This type of dedicated connection is
called circuit-switched. All communication between the two stations uses the
established dedicated connection. Network connections rely on another type
of connection: packet-switched. Compared to circuit-switched, there is no
dedicated connection between the two systems. The information stream is
chopped into small pieces (packets) that are placed onto the network by the
network hardware of the source system. It is the responsibility of other
network hardware to deliver
that packet to the destination using the address of the destination system that
is contained in each packet. Once the destination receives the packets, they
must be re-assembled into the original information stream. The network
hardware only knows where to deliver the packet and need not know the
route the packet will travel.
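To make the idea concrete, here is a small Python sketch (illustrative only; the packet format, the example address and the function names are invented for this sketch, not a real protocol) that chops a message into numbered packets and reassembles it at the destination, even if the packets arrive out of order.

    # Illustrative packet switching: split a message into packets and reassemble it.
    def make_packets(message, size=8, dest="192.0.2.7"):
        # Each packet carries the destination address, a sequence number, and a chunk of data.
        return [{"dest": dest, "seq": i, "data": message[i:i + size]}
                for i in range(0, len(message), size)]

    def reassemble(packets):
        # The receiver sorts by sequence number and joins the chunks back together.
        return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

    packets = make_packets("Hello from a packet-switched network!")
    packets.reverse()                 # pretend the packets arrived in a different order
    print(reassemble(packets))        # the original message is recovered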
How then does the network know where to send the packet? The answer to
this question is: every computer system that is on the Internet has a unique
address referred to as the Internet address or IP number. (Please note that
this is NOT your E-Mail address.) This unique address is a 32-bit number, but
to make it more readable for us humans it is commonly written as four
numbers separated by periods, also called the dotted decimal notation.
(Figure: an Internet address written in the dotted decimal notation.)
When you want to connect your computer to the Internet you must obtain
such a unique address.
If you wish to communicate with another computer over the Internet, you need
to know the other system's IP number. We humans are better at remembering
names rather than numbers and, therefore, each computer on the Internet can
also be given a name. However, since the computer really uses the IP
number, the corresponding IP address needs to be found for each name,
similar to the way we use a telephone book.
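As a hedged illustration (the 32-bit value below is simply an example from the documentation address range), the Python sketch shows how the same number is written in dotted decimal notation, and how a name lookup, like a telephone book, returns the corresponding IP number.

    import socket
    import struct

    # A 32-bit Internet address and its dotted decimal form.
    number = 3221225985                        # an example 32-bit value
    dotted = socket.inet_ntoa(struct.pack("!I", number))
    print(number, "->", dotted)                # prints 192.0.2.1

    # Names are easier for humans; a lookup returns the IP number for a name.
    # print(socket.gethostbyname("example.com"))   # requires a network connection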
The TCP/IP family of protocols defines the standard of how to communicate
among the interconnected networks of the Internet. This set of rules allows
systems from various manufacturers, running different computer software on
incompatible hardware, to exchange information, provided the TCP/IP
standards were followed. These standards define how the packets look, how
computers set up a connection to exchange data, what to do in case of errors,
and so on. The versatility of the TCP/IP protocol was illustrated in 1978 when
a dumb terminal located in a mobile van driving along a California highway
made a connection to a computer located in London, England. Modern
computer networks consist of powerful personal computers instead of dumb
terminals, but the basic TCP/IP services (file transfer via the File Transfer
Protocol, FTP; remote access to a host computer, Telnet; and electronic mail,
E-Mail) are very much an essential part of any TCP/IP implementation.
The History of Computer Programming Languages
Ever since the invention of Charles Babbage's difference engine in
1822, computers have required a means of instructing them to perform a
specific task. This means is known as a programming language. Computer
languages were first composed of a series of steps to wire a particular
program; these morphed into a series of steps keyed into the computer and
then executed; later these languages acquired advanced features such as
logical branching and object orientation. The computer languages of the last
fifty years have come in two stages, the first major languages and the second
major languages, which are in use today.
In the beginning, Charles Babbage's difference engine could only be
made to execute tasks by changing the gears which executed the
calculations. Thus, the earliest form of a computer language was physical
motion. Eventually, physical motion was replaced by electrical signals when
the US Government built the ENIAC (completed in 1946). It followed many of the same
principles of Babbage's engine and hence, could only be "programmed" by
presetting switches and rewiring the entire system for each new "program" or
calculation. This process proved to be very tedious.
In 1945, John Von Neumann was working at the Institute for Advanced
Study. He developed two important concepts that directly affected the path of
computer programming languages. The first was known as "shared-program
technique" (www.softlord.com). This technique stated that the actual computer
hardware should be simple and not need to be hand-wired for each program.
Instead, complex instructions should be used to control the simple hardware,
allowing it to be reprogrammed much faster.
The second concept was also extremely important to the development
of programming languages. Von Neumann called it "conditional control
transfer" (www.softlord.com). This idea gave rise to the notion of subroutines,
or small blocks of code that could be jumped to in any order, instead of a
single set of chronologically ordered steps for the computer to take. The
second part of the idea stated that computer code should be able to branch
based on logical statements such as IF (expression) THEN, and looped such
as with a FOR statement. "Conditional control transfer" gave rise to the idea
of "libraries," which are blocks of code that can be reused over and over.
In 1949, a few years after Von Neumann's work, the language Short
Code appeared (www.byte.com). It was the first computer language for
electronic devices and it required the programmer to change its statements
into 0's and 1's by hand. Still, it was the first step towards the complex
languages of today. In 1951, Grace Hopper wrote the first compiler, A-0
(www.byte.com). A compiler is a program that turns the language's statements
into 0's and 1's for the computer to understand. This led to faster
programming, as the programmer no longer had to do the work by hand.
In 1957, the first of the major languages appeared in the form of
FORTRAN. Its name stands for FORmula TRANslating system. The language
was designed at IBM for scientific computing. The components were very
simple, and provided the programmer with low-level access to the computer's
innards. Today, this language would be considered restrictive as it only
included IF, DO, and GOTO statements, but at the time, these commands
were a big step forward. The basic types of data in use today got their start in
FORTRAN; these included logical variables (TRUE or FALSE), and integer,
real, and double-precision numbers.
Though FORTRAN was good at handling numbers, it was not so good at
handling input and output, which mattered most to business computing.
Business computing started to take off in 1959, and because of this, COBOL
was developed. It was designed from the ground up as the language for
businessmen. Its only data types were numbers and strings of text. It also
allowed for these to be grouped into arrays and records, so that data could be
tracked and organized better. It is interesting to note that a COBOL program is
built in a way similar to an essay, with four or five major sections that build into
an elegant whole. COBOL statements also have a very English-like grammar,
making it quite easy to learn. All of these features were designed to make it
easier for the average business to learn and adopt it.
In 1958, John McCarthy of MIT created the LISt Processing (or LISP)
language. It was designed for Artificial Intelligence (AI) research. Because it
was designed for such a highly specialized field, its syntax has rarely been
seen before or since. The most obvious difference between this language and
other languages is that the basic and only type of data is the list, denoted by a
sequence of items enclosed by parentheses. LISP programs themselves are
written as a set of lists, so that LISP has the unique ability to modify itself, and
hence grow on its own. The LISP syntax was known as "Cambridge Polish,"
as it was very different from standard Boolean logic (Wexelblat, 177) :
x V y - Cambridge Polish, what was used to describe the LISP program
OR(x,y) - parenthesized prefix notation, what was used in the LISP program
x OR y - standard Boolean logic
LISP remains in use today because of its highly specialized and abstract nature.
The Algol language was created by a committee for scientific use in
1958. Its major contribution is being the root of the tree that has led to such
languages as Pascal, C, C++, and Java. It was also the first language with a
formal grammar, known as Backus-Naur Form or BNF (McGraw-Hill
Encyclopedia of Science and Technology, 454). Though Algol implemented
some novel concepts, such as recursive calling of functions, the next version
of the language, Algol 68, became bloated and difficult to use
(www.byte.com). This led to the adoption of smaller and more compact
languages, such as Pascal.
Pascal was begun in 1968 by Niklaus Wirth. Its development was
mainly out of necessity for a good teaching tool. In the beginning, the
language designers had no hopes for it to enjoy widespread adoption.
Instead, they concentrated on developing good tools for teaching such as a
debugger and editing system and support for common early microprocessor
machines which were in use in teaching institutions.
Pascal was designed with a very orderly approach; it combined many of
the best features of the languages in use at the time: COBOL, FORTRAN,
and ALGOL. While doing so, many of the irregularities and oddball statements
of these languages were cleaned up, which helped it gain users (Bergin,
100-101). The combination of features, input/output and solid mathematical
features, made it a highly successful language. Pascal also improved the
"pointer" data type, a very powerful feature of any language that implements
it. It also added a CASE statement that allowed instructions to branch like a
tree.

Programming Languages
All computers have an internal machine language which they execute directly.
This language is coded in a binary representation and is very tedious to write.
Most instructions will consist of an operation code part and an address part.
The operation code indicates which operation is to be carried out while the
address part of the instruction indicates which memory location is to be used
as the operand of the instruction. For example in a hypothetical computer
successive bytes of a program may contain:
operation code    address     meaning
00010101          10100001    load c(129) into accumulator
00010111          10100010    add c(130) to accumulator
00010110          10100011    store c(accumulator) in location 131

where c( ) means 'the contents of' and the accumulator is a special register in
the CPU. This sequence of code then adds the contents of location 130 to the
contents of the accumulator, which has been previously loaded with the
contents of location 129, and then stores the result in location 131. Most
computers have no way of deciding whether a particular bit pattern is
supposed to represent data or an instruction.
Programmers using machine language have to keep careful track of which
locations they are using to store data, and which locations are to form the
executable program. Programming errors which lead to instructions being
overwritten with data, or erroneous programs which try to execute part of their
data are very difficult to correct. However the ability to interpret the same bit
pattern as both an instruction and as data is a very powerful feature; it allows
programs to generate other programs and have them executed.
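To see how such a machine might behave, the sketch below simulates the three-instruction fragment above in Python (the memory contents, opcode values and variable names are taken from the example or invented for illustration; this is not any real processor).

    # A tiny simulator for the hypothetical accumulator machine described above.
    LOAD, ADD, STORE = 0b00010101, 0b00010111, 0b00010110   # opcodes from the table

    memory = {129: 7, 130: 5, 131: 0}            # example contents: c(129) = 7, c(130) = 5
    program = [(LOAD, 129), (ADD, 130), (STORE, 131)]
    accumulator = 0

    for opcode, address in program:              # fetch and decode each instruction
        if opcode == LOAD:
            accumulator = memory[address]        # load c(address) into the accumulator
        elif opcode == ADD:
            accumulator += memory[address]       # add c(address) to the accumulator
        elif opcode == STORE:
            memory[address] = accumulator        # store c(accumulator) in the location

    print(memory[131])                           # 12, the sum of locations 129 and 130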
Assembly Language
The bookkeeping involved in machine language programming is very tedious.
If a programmer is modifying a program and decides to insert an extra data
item, the addresses of other data items may be changed. The programmer
will have to carefully examine the whole program deciding which bit patterns
represent the addresses which have changed, and modify them.
Human beings are notoriously bad at simple repetitive tasks; computers thrive
on them. Assembly languages are a more human friendly form of machine
language. Machine language commands are replaced by mnemonic
commands on a one-to-one basis. The assembler program takes care of
converting from the mnemonic to the corresponding machine language code.
The programmer can also use symbolic addresses for data items. The
assembler will assign machine addresses and ensure that distinct data items
do not overlap in storage, a depressingly common occurrence in machine
language programs. For example the short section of program above might be
written in assembly language as:

operation code    address
LOAD              A
ADD               B
STORE             C

Obviously this leaves less scope for error but since the computer does not
directly understand assembly language this has to be translated into machine
language by a program called an assembler. The assembler replaces the
mnemonic operation codes such as ADD with the corresponding binary codes
and allocates memory addresses for all the symbolic variables the
programmer uses. It is responsible for associating the symbols A, B, and C
with addresses, and ensuring that they are all distinct. Thus by making the
process of programming easier for the human being another level of
processing for the computer has been introduced. Assembly languages are
still used in some time-critical programs since they give the programmer very
precise control of what exactly happens inside the computer. Assembly
languages still require that the programmer should have a good knowledge of
the internal structure of the computer. For example, different ADD instructions
will be needed for different types of data item. Assembly languages are still
machine specific and hence the program will have to be re-written if it is to be
implemented on another type of computer.
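A very small assembler for the three-instruction example can be sketched as follows (Python is used only for illustration; the opcode values come from the machine-language table earlier, and the symbol handling is deliberately simplified).

    # A toy assembler: mnemonics become opcodes, symbols become distinct addresses.
    OPCODES = {"LOAD": 0b00010101, "ADD": 0b00010111, "STORE": 0b00010110}

    def assemble(source, first_address=129):
        symbols = {}                              # symbol table: name -> allocated address
        machine_code = []
        for line in source:
            mnemonic, operand = line.split()
            if operand not in symbols:            # allocate a fresh, distinct address
                symbols[operand] = first_address + len(symbols)
            machine_code.append((OPCODES[mnemonic], symbols[operand]))
        return machine_code, symbols

    program = ["LOAD A", "ADD B", "STORE C"]
    code, table = assemble(program)
    print(code)    # [(21, 129), (23, 130), (22, 131)]
    print(table)   # {'A': 129, 'B': 130, 'C': 131}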
High-level Languages
Very early in the development of computers attempts were made to make
programming easier by reducing the amount of knowledge of the internal
workings of the computer that was needed to write programs. If programs
could be presented in a language that was more familiar to the person solving
the problem, then fewer mistakes would be made. High-level programming
languages allow the specification of a problem solution in terms closer to
those used by human beings. These languages were designed to make
programming far easier, less error-prone and to remove the programmer from
having to know the details of the internal structure of a particular computer.
These high-level languages were much closer to human language. One of the
first of these languages was Fortran II which was introduced in about 1958. In
Fortran II our program above would be written as:
C = A + B
which is obviously much more readable, quicker to write and less error-prone.
As with assembly languages the computer does not understand these
high-level languages directly and hence they have to be processed by passing
them through a program called a compiler which translates them into internal
machine language before they can be executed.
Another advantage accrues from the use of high-level languages if the
languages are standardized by some international body. Then each
manufacturer produces a compiler to compile programs that conform to the
standard into their own internal machine language. Then it should be easy to

take a program which conforms to the standard and implement it on many
different computers merely by re-compiling it on the appropriate computer.
This great advantage of portability of programs has been achieved for several
high-level languages and it is now possible to move programs from one
computer to another without too much difficulty. Unfortunately many compiler
writers add new features of their own which means that if a programmer uses
these features then their program becomes non-portable. It is well worth
becoming familiar with the standard and writing programs which obey it, so
that your programs are more likely to be portable. As with assembly language,
human time is saved at the expense of the compilation time required to
translate the program to internal machine language. The compilation time
used in the computer is trivial compared with the human time saved, typically
seconds as compared with weeks.
Many high level languages have appeared since Fortran II (and many have
also disappeared!), among the most widely used have been:
COBOL    - Business applications
FORTRAN  - Engineering & Scientific Applications
PASCAL   - General use and as a teaching tool
C & C++  - General Purpose (currently most popular)
PROLOG   - Artificial Intelligence
JAVA     - General Purpose (gaining popularity rapidly)
All these languages are available on a large variety of computers.
Central Processing Unit
The Central Processing Unit (CPU) performs the actual processing of data.
The data it processes is obtained, via the system bus, from the main memory.
Results from the CPU are then sent back to main memory via the system bus.
In addition to computation the CPU controls and co-ordinates the operation of
the other major components. The CPU has two main components, namely:
The Control Unit -- controls the fetching of instructions from the main
memory and the subsequent execution of these instructions. Among other
tasks carried out are the control of input and output devices and the passing
of data to the Arithmetic/Logical Unit for computation.
The Arithmetic/Logical Unit (ALU) -- carries out arithmetic operations on
integer (whole number) and real (with a decimal point) operands. It can also
perform simple logical tests for equality and greater than and less than
between operands.
It is worth noting here that the only operations that the CPU can carry out are
simple arithmetic operations, comparisons between the result of a calculation
and other values, and the selection of the next instruction for processing. All
the rest of the apparently limitless things a computer can do are built on this
very primitive base by programming!
Modern CPUs are very fast. At the time of writing, the CPU of a typical PC is
capable of executing many tens of millions of instructions per second.

Computer bus
In computer architecture, a bus is a subsystem that transfers data or power
between computer components inside a computer or between computers.
Unlike a point-to-point connection, a bus can logically connect several
peripherals over the same set of wires. Each bus defines its set of connectors
to physically plug devices, cards or cables together.
Early computer buses were literally parallel electrical buses with multiple
connections, but the term is now used for any physical arrangement that
provides the same logical functionality as a parallel electrical bus. Modern
computer buses can use both parallel and bit-serial connections, and can be
wired in either a multidrop (electrical parallel) or daisy chain topology, or
connected by switched hubs, as in the case of USB.
Memory
The memory of a computer can hold program instructions, data values, and
the intermediate results of calculations. All the information in memory is
encoded in fixed size cells called bytes. A byte can hold a small amount of
information, such as a single character or a numeric value between 0 and
255. The CPU will perform its operations on groups of one, two, four, or eight
bytes, depending on the interpretation being placed on the data, and the
operations required.
There are two main categories of memory, characterised by the time it takes
to access the information stored there, the number of bytes which are
accessed by a single operation, and the total number of bytes which can be
stored. Main Memory is the working memory of the CPU, with fast access
and limited numbers of bytes being transferred. External memory is for the
long term storage of information. Data from external memory will be
transferred to the main memory before the CPU can operate on it. Access to
the external memory is much slower, and usually involves groups of several
hundred bytes.
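The sketch below (Python, for illustration only) shows the two points just made: a single byte holds a value between 0 and 255, and the same numeric value can be handled in groups of two or four bytes, with the interpretation placed on the bytes determining the value read back.

    import struct

    # One byte holds a value from 0 to 255.
    single = bytes([200])
    print(single[0], len(single), "byte")

    # The same numeric value packed into groups of 2 and 4 bytes.
    value = 1000
    two_bytes = struct.pack("<H", value)    # unsigned 16-bit (2 bytes)
    four_bytes = struct.pack("<I", value)   # unsigned 32-bit (4 bytes)
    print(len(two_bytes), len(four_bytes))  # 2 4

    # The interpretation placed on the bytes determines the value read back.
    print(struct.unpack("<I", four_bytes)[0])   # 1000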
Main memory
The main memory of the computer is also known as RAM, standing for
Random Access Memory. It is constructed from integrated circuits and needs
to have electrical power in order to maintain its information. When power is
lost, the information is lost too! It can be directly accessed by the CPU. The
access time to read or write any particular byte is independent of
whereabouts in the memory that byte is, and currently is approximately 50
nanoseconds (a nanosecond is a thousand-millionth of a second). This is broadly comparable
with the speed at which the CPU will need to access data. Main memory is
expensive compared to external memory so it has limited capacity. The
capacity available for a given price is increasing all the time. For example
many home Personal Computers now have a capacity of 16 megabytes
(million bytes), while 64 megabytes is commonplace on commercial
workstations. The CPU will normally transfer data to and from the main
memory in groups of two, four or eight bytes, even if the operation it is
undertaking only requires a single byte.
External Memory
External memory which is sometimes called backing store or secondary
memory, allows the permanent storage of large quantities of data. Some
method of magnetic recording on magnetic disks or tapes is most commonly
used. More recently optical methods which rely upon marks etched by a laser
beam on the surface of a disc (CD-ROM) have become popular, although they
remain more expensive than magnetic media. The capacity of external
memory is high, usually measured in hundreds of megabytes or even in
gigabytes (thousand million bytes) at present. External memory has the
important property that the information stored is not lost when the computer is
switched off.
The most common form of external memory is a hard disc which is
permanently installed in the computer and will typically have a capacity of
hundreds of megabytes. A hard disc is a flat, circular oxide-coated disc which
rotates continuously. Information is recorded on the disc by magnetising spots
of the oxide coating on concentric circular tracks. An access arm in the disc
drive positions a read/write head over the appropriate track to read and write
data from and to the track. This means that before accessing or modifying
data the read/write head must be positioned over the correct track. This time
is called the seek time and is measured in milliseconds. There is also a
small delay waiting for the appropriate section of the track to rotate under the
head. This latency is much smaller than the seek time. Once the correct
section of the track is under the head, successive bytes of information can be
transferred to the main memory at rates of several megabytes per second.
This discrepancy between the speed of access to the first byte required, and
subsequent bytes on the same track means that it is not economic to transfer
small numbers of bytes. Transfers are usually of blocks of several hundred
bytes or even more. Notice that the access time to data stored in secondary
storage will depend on its location.
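As a rough worked example (the seek time, latency and transfer rate figures below are illustrative assumptions, not measurements of any particular drive), the Python sketch estimates how long it takes to read a block from disc, and shows why transferring only a few bytes at a time is not economic.

    # Rough estimate of the time to read one block from a hard disc.
    seek_time = 0.010          # 10 milliseconds to position the read/write head (assumed)
    latency = 0.004            # 4 milliseconds average rotational delay (assumed)
    transfer_rate = 5_000_000  # 5 megabytes per second once the data is under the head (assumed)

    for block_size in (512, 65_536):
        total = seek_time + latency + block_size / transfer_rate
        print(block_size, "bytes ->", round(total * 1000, 2), "ms")

    # The seek time and latency dominate, so reading a larger block costs little extra.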
The hard disc will hold all the software that is required to run the computer,
from the operating system to packages like word-processing and spreadsheet
programs. All the user's data and programs will also be stored on the hard
disc. In addition most computers have some form of removable storage
device which can be used to save copies of important files etc. The most
common device for this purpose is a floppy disc which has a very limited
capacity. Various magnetic tape devices can be used for storing larger
quantities of data and more recently removable optical discs have been used.
It is important to note that the CPU can only directly access data that is in
main memory. To process data that resides in external memory the CPU must
first transfer it to main memory. Accessing external memory to find the
appropriate data is slow (milliseconds) in relation to CPU speeds but the rate
of transfer of data to main memory is reasonably fast once it has been
located.

Input/Output Devices
When using a computer the text of programs, commands to the computer and
data for processing have to be entered. Also information has to be returned
from the computer to the user. This interaction requires the use of input and
output devices.
The most common input devices used by the computer are the keyboard and
the mouse. The keyboard allows the entry of textual information while the
mouse allows the selection of a point on the screen by moving a screen
cursor to the point and pressing a mouse button. Using the mouse in this way
allows the selection from menus on the screen etc. and is the basic method of
communicating with many current computing systems. Alternative devices to
the mouse are tracker balls, light pens and touch sensitive screens.
The most common output device is a monitor which is usually a Cathode Ray
Tube device which can display text and graphics. If hard-copy output is
required then some form of printer is used.
Port
In computing, a port (derived from seaport) is usually a connection through
which data is sent and received. An exception is a software port (porting,
derived from transport), which is software that has been "transported" to
another computer system.
Serial port

(Figure: a male DE-9 serial port on the rear panel of a PC.)


In computing, a serial port is an interface on a computer system with which
information is transferred in or out one bit at a time (contrast parallel port).
Throughout most of the history of personal computers, this was accomplished
using the RS-232 standard over simple cables connecting the computer to a
device such as a terminal or modem. Mice, keyboards, and other devices
were also often connected this way.
Parallel port
In computing, a parallel port is an interface from a computer system where
data is transferred in or out in parallel, that is, on more than one wire. A
parallel port carries one bit on each wire thus multiplying the transfer rate
obtainable over a single cable (contrast serial port). There are also several
extra wires on the port that are used for control and status signals to indicate
when data is ready to be sent or received, initiate a reset, indicate an error
condition (such as paper out), and so forth. On many modern (2005)
computers, the parallel port is omitted for cost savings, and is considered to
be a legacy port.
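To illustrate the difference in a purely conceptual way (no real port is driven here; the functions are invented for this sketch), the Python code below "sends" one byte either serially, one bit at a time over a single wire, or in parallel, all eight bits in one step over eight wires.

    # Conceptual serial vs. parallel transfer of one byte (no real hardware involved).
    def send_serial(byte):
        # One wire: the eight bits are sent one after another.
        return [(byte >> i) & 1 for i in range(7, -1, -1)]            # 8 separate steps

    def send_parallel(byte):
        # Eight wires: all eight bits are presented in a single step.
        return [tuple((byte >> i) & 1 for i in range(7, -1, -1))]     # 1 step

    value = ord("K")
    print(len(send_serial(value)), "steps over a serial link")        # 8
    print(len(send_parallel(value)), "step over a parallel link")     # 1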
