Generations of Computer
By Najmi

From Wikipedia: "The term fifth generation was intended to convey the system as being a leap beyond existing machines. ...the fifth generation, it was widely believed at the time, would instead turn to massive numbers of CPUs for added performance." The project - Japan's Fifth Generation Computer Systems initiative - was canceled in 1993 with little appreciable lasting impact.
The tracks on a magnetic drum are assigned to channels located around the circumference of the drum, forming adjacent circular bands that wind around the drum. A single drum can have up to 200 tracks. As the drum rotates at a speed of up to 3,000 rpm, the device's read/write heads deposit magnetized spots on the drum during the write operation and sense these spots during a read operation. This action is similar to that of a magnetic tape or disk drive.
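As a back-of-the-envelope illustration of the figures above: a head must wait, on average, half a revolution for a given spot to come around, so the 3,000 rpm rotation speed fixes the drum's average access delay. A short Python sketch of that arithmetic (the drum itself, of course, predates any such software):

```python
# Average rotational latency of the drum described above: at 3,000 rpm a
# read/write head waits, on average, half a revolution for a spot to arrive.
RPM = 3000
seconds_per_revolution = 60 / RPM                    # 0.02 s per full turn
average_latency_ms = seconds_per_revolution / 2 * 1000
print(average_latency_ms)  # 10.0 (milliseconds)
```

Ten milliseconds per average access helps explain why drum memory limited early machines to one problem at a time.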
First-generation computers were very expensive to operate and, in addition to using a great deal of electricity, generated a lot of heat, which was often the cause of malfunctions. They relied on machine language to perform operations, and they could only solve one problem at a time. Machine languages are the only languages understood by computers. While easily understood by computers, machine languages are almost impossible for humans to use because they consist entirely of numbers. Programmers, therefore, use either a high-level programming language or an assembly language. An assembly language contains the same instructions as a machine language, but the instructions and variables have names instead of being just numbers.
Every CPU has its own unique machine language. Programs must therefore be rewritten or recompiled to run on different types of computers. Input was based on punched cards and paper tape, and output was displayed on printouts.
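The relationship between assembly and machine language can be sketched with a toy assembler. The mnemonics and opcode numbers below are invented for illustration and belong to no real CPU; the point is only that assembly gives names to what are, underneath, just numbers:

```python
# A toy assembler: mnemonics (names) translate one-for-one into the
# opcode numbers a hypothetical CPU would actually execute.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "HALT": 0xFF}

def assemble(lines):
    """Translate assembly mnemonics into numeric machine code."""
    code = []
    for line in lines:
        parts = line.split()
        code.append(OPCODES[parts[0]])           # mnemonic -> opcode number
        code.extend(int(p) for p in parts[1:])   # operands are already numbers
    return code

program = ["LOAD 10", "ADD 11", "STORE 12", "HALT"]
print(assemble(program))  # [1, 10, 2, 11, 3, 12, 255]
```

A different CPU would have a different OPCODES table, which is why programs written this close to the hardware must be rewritten for each machine.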
The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercial computer delivered to a business client, the U.S. Census Bureau, in 1951.
ENIAC, an acronym for Electronic Numerical Integrator And Computer, was the world's first operational electronic digital computer, developed by Army Ordnance to compute World War II ballistic firing tables. The ENIAC, weighing 30 tons, using 200 kilowatts of electric power and consisting of 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors, was completed in 1945. In addition to ballistics, the ENIAC's field of application included weather prediction, atomic-energy calculations, cosmic-ray studies, thermal ignition, random-number studies, wind-tunnel design, and other scientific uses. The ENIAC soon became obsolete as the need arose for faster computing speeds.
The transistor was invented in 1947 but did not see widespread use in computers until the late 1950s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output.
The first computers of this generation were developed for the atomic energy industry.
Silicon is the basic material used to make computer chips, transistors, silicon diodes and other electronic circuits and switching devices because its atomic structure makes the element an ideal semiconductor. Silicon is commonly doped, or mixed, with other elements, such as boron, phosphorus and arsenic, to alter its conductive properties.
Computer chips, both for CPU and memory, are composed of semiconductor materials. Semiconductors make it possible to miniaturize electronic components, such as transistors. Not only does miniaturization mean that the components take up less space, it also means that they are faster and require less energy.
Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.
In both cases - word size and clock speed - the higher the value, the more powerful the CPU. For example, a 32-bit microprocessor that runs at 50 MHz is more powerful than a 16-bit microprocessor that runs at 25 MHz.
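The comparison above amounts to a crude heuristic - multiply word size in bits by clock speed in MHz. Real performance depends on far more (instruction set, memory speed, and so on), so treat this only as an illustration of the arithmetic:

```python
# Crude "power" heuristic from the example above: word size times clock
# speed. An illustration only, not a real performance metric.
def rough_power(bits, mhz):
    return bits * mhz

a = rough_power(32, 50)   # the 32-bit, 50 MHz chip -> 1600
b = rough_power(16, 25)   # the 16-bit, 25 MHz chip -> 400
print(a > b, a // b)      # True 4
```

By this rough measure the first chip comes out four times more powerful, matching the intuition in the text.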
What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer - from the central processing unit and memory to input/output controls - on a single chip.
• The arithmetic logic unit (ALU), which performs arithmetic and logical operations.
• The control unit, which extracts instructions from memory and decodes and executes them, calling on the ALU when necessary.
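The control unit's fetch-decode-execute cycle can be sketched in a few lines of Python. The opcodes, memory layout, and single-accumulator design below are invented for illustration, not taken from any real machine:

```python
# A toy fetch-decode-execute loop: the "control unit" fetches an opcode,
# decodes it, and calls on the "ALU" (here, plain addition) when needed.
def run(memory):
    acc = 0   # accumulator register
    pc = 0    # program counter
    while True:
        op = memory[pc]                   # fetch the next instruction
        if op == 1:                       # LOAD addr: memory -> acc
            acc = memory[memory[pc + 1]]
            pc += 2
        elif op == 2:                     # ADD addr: ALU adds memory to acc
            acc += memory[memory[pc + 1]]
            pc += 2
        elif op == 3:                     # STORE addr: acc -> memory
            memory[memory[pc + 1]] = acc
            pc += 2
        elif op == 255:                   # HALT
            return memory
        else:
            raise ValueError(f"unknown opcode {op}")

# Program: load mem[10], add mem[11], store the sum into mem[12], halt.
mem = [1, 10, 2, 11, 3, 12, 255, 0, 0, 0, 4, 5, 0]
print(run(mem)[12])  # 9
```

Note how the loop itself never does arithmetic; it only decides when the ALU step (the `acc +=` line) runs, which is exactly the division of labor the two bullets describe.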
In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use microprocessors.
As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth generation computers also saw the development of GUIs, the mouse and handheld devices.
Currently, no computers exhibit full artificial intelligence (that is, are able to simulate human behavior). The greatest advances have occurred in the field of game playing. The best computer chess programs are now capable of beating humans. In May 1997, an IBM supercomputer called Deep Blue defeated world chess champion Garry Kasparov in a chess match.
In the area of robotics, computers are now widely used in assembly plants, but they are capable only of very limited tasks. Robots have great difficulty identifying objects based on appearance or feel, and they still move and handle objects clumsily.
In the early 1980s, expert systems were believed to represent the future of artificial intelligence and of computers in general. To date, however, they have not lived up to expectations. Many expert systems help human experts in such fields as medicine and engineering, but they are very expensive to produce and are helpful only in special situations.
Today, the hottest area of artificial intelligence is neural networks, which are proving successful in a number of disciplines such as voice recognition and natural-language processing.
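To make "neural network" concrete, here is the smallest possible example: a single artificial neuron (a perceptron) learning the logical AND function. Every name and constant below is an illustrative choice, and real networks for speech or language are vastly larger, but the learn-from-errors loop is the same idea:

```python
# A toy single-neuron "network" learning logical AND: nudge the weights
# a little toward the right answer each time the neuron is wrong.
def train_perceptron(samples, epochs=20, lr=0.1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + bias > 0 else 0
            err = target - out            # 0 when correct, +/-1 when wrong
            w0 += lr * err * x0
            w1 += lr * err * x1
            bias += lr * err
    return w0, w1, bias

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(data)
print([1 if w0 * x0 + w1 * x1 + b > 0 else 0 for (x0, x1), _ in data])  # [0, 0, 0, 1]
```

Nothing here is programmed with the rule for AND; the behavior emerges from repeated small corrections, which is what distinguishes this approach from conventional programming.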
There are several programming languages that are known as AI languages because they are used almost exclusively for AI applications. The two most common are LISP and Prolog.
Voice Recognition
The field of computer science that deals with designing computer systems that can recognize spoken words. Note that voice recognition implies only that the computer can take dictation, not that it understands what is being said. Comprehending human languages falls under a different field of computer science called natural language processing.
A number of voice recognition systems are available on the market. The most powerful can recognize thousands of words. However, they generally require an extended training session during which the computer system becomes accustomed to a particular voice and accent. Such systems are said to be speaker dependent.
Many systems also require that the speaker speak slowly and distinctly and separate each word with a short pause. These systems are called discrete speech systems. Recently, great strides have been made in continuous speech systems -- voice recognition systems that allow you to speak naturally. There are now several continuous-speech systems available for personal computers.
Because of their limitations and high cost, voice recognition systems have traditionally been used only in a few specialized situations. For example, such systems are useful in instances when the user is unable to use a keyboard to enter data because his or her hands are occupied or disabled. Instead of typing commands, the user can simply speak into a headset. Increasingly, however, as the cost decreases and performance improves, speech recognition systems are entering the mainstream and are being used as an alternative to keyboards.
Most computers have just one CPU, but some models have several. There are even computers with thousands of CPUs. With single-CPU computers, it is possible to perform parallel processing by connecting the computers in a network. However, this type of parallel processing requires very sophisticated software called distributed processing software.
Note that parallel processing differs from multitasking, in which a single CPU executes several programs at once.
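A minimal single-machine sketch of the parallel idea, using Python's standard multiprocessing module: several worker processes (one per CPU, on a multi-core machine) do pieces of the work at the same time. The function and the worker count are illustrative choices:

```python
# Parallel processing in miniature: a pool of worker processes splits
# the work of squaring a list of numbers across several CPUs.
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:            # four parallel workers
        results = pool.map(square, range(8))   # work divided among them
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

With one CPU, the same pool would merely take turns (closer to multitasking); the speedup comes only when multiple CPUs genuinely run at once, which is the distinction the paragraph above draws.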
Quantum computation and molecular and nanotechnology will radically change the face of computers in years to come. First proposed in the 1970s, quantum computing relies on quantum physics, taking advantage of certain properties of atoms or nuclei that allow them to work together as quantum bits, or qubits, to be the computer's processor and memory. By interacting with each other while being isolated from the external environment, qubits can perform certain calculations exponentially faster than conventional computers.
Qubits do not rely on the traditional binary nature of computing. Traditional computers encode information into bits using binary numbers, either a 0 or a 1, and can only do calculations on one set of numbers at once. Quantum computers instead encode information as a series of quantum-mechanical states, such as spin directions of electrons or polarization orientations of a photon. These states might represent a 1 or a 0, might represent a combination of the two, or might represent a number expressing that the state of the qubit is somewhere between 1 and 0 - a superposition of many different numbers at once. A quantum computer can do an arbitrary reversible classical computation on all the numbers simultaneously, which a binary system cannot do, and also has some ability to produce interference between various different numbers. By doing a computation on many different numbers at once, then interfering the results to get a single answer, a quantum computer has the potential to be much more powerful than a classical computer of the same size. In using only a single processing unit, a quantum computer can naturally perform myriad operations in parallel.
Quantum computing is not well suited for tasks such as word processing and email, but it is ideal for tasks such as cryptography and modeling and indexing very large databases.
Although research in this field dates back to Richard P. Feynman's classic talk in 1959, the term nanotechnology was first coined by K. Eric Drexler in 1986 in the book Engines of Creation.
In the popular press, the term nanotechnology is sometimes used to refer to any sub-micron process, including lithography. Because of this, many scientists are beginning to use the term molecular nanotechnology when talking about true nanotechnology at the molecular level.
Here natural language means a human language. For example, English, French, and Chinese are natural languages. Computer languages, such as FORTRAN and C, are not.