
What is an Operating System (OS)?

An operating system (OS) is software that manages computer hardware and software resources
and provides common services for computer programs. The operating system is an essential component of
the system software in a computer system. Application programs usually require an operating system to
function. There are several types of operating systems, for example real-time, multi-user, multi-tasking
versus single-tasking, distributed, templated, and embedded.
The earliest computers were mainframes that lacked any form of operating system. Each user had
sole use of the machine for a scheduled period of time and would arrive at the computer with program and
data, often on punched paper cards and magnetic or paper tape. The program would be loaded into the
machine, and the machine would be set to work until the program completed or crashed. Programs could
generally be debugged via a control panel using toggle switches and panel lights. Symbolic
languages, assemblers, and compilers were developed for programmers to translate symbolic program code into machine code that previously would have been hand-encoded. Later machines came
with libraries of support code on punched cards or magnetic tape, which would be linked to the user's
program to assist in operations such as input and output. This was the genesis of the modern-day
operating system. However, machines still ran a single job at a time. At Cambridge University in England
the job queue was at one time a washing line from which tapes were hung with different colored clothes-pegs to indicate job priority.
As machines became more powerful, the time to run programs diminished and the time to hand
off the equipment to the next user became very large by comparison. Accounting for and paying for
machine usage moved on from checking the wall clock to automatic logging by the computer. Run queues
evolved from a literal queue of people at the door, to a heap of media on a jobs-waiting table, or batches
of punch-cards stacked one on top of the other in the reader, until the machine itself was able to select and
sequence which magnetic tape drives processed which tapes. Where program developers had originally
had access to run their own jobs on the machine, they were supplanted by dedicated machine operators
who looked after the well-being and maintenance of the machine and were less and less concerned with
implementing tasks manually. When commercially available computer centers were faced with the
implications of data lost through tampering or operational errors, equipment vendors were put under
pressure to enhance the runtime libraries to prevent misuse of system resources. Automated monitoring
was needed not just for CPU usage but for counting pages printed, cards punched, cards read, disk storage
used and for signaling when operator intervention was required by jobs such as changing magnetic tapes
and paper forms. Security features were added to operating systems to record audit trails of which

programs were accessing which files and to prevent access to a production payroll file by an engineering
program, for example.
All these features were building up towards a fully capable operating system. In the 1940s and early
1950s, computers were built to perform a series of single tasks and functioned much like calculators. Basic
operating system features were developed in the 1950s, such as resident monitor functions that could
automatically run different programs in succession to speed up processing. Operating systems did not
exist in their modern and more complex forms until the early 1960s, when hardware features were added
that enabled the use of runtime libraries, interrupts, and parallel processing. When personal computers
became popular in the 1980s, operating systems were made for them that were similar in concept to those
used on larger computers.
In the 1940s, the earliest electronic digital systems had no operating systems. Electronic systems of this
time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were
special-purpose systems that, for example, generated ballistics tables for the military or controlled the
printing of payroll checks from data on punched paper cards. After programmable general purpose
computers were invented, machine languages (consisting of strings of the binary digits 0 and 1 on
punched paper tape) were introduced that sped up the programming process.
In the early 1950s, a computer could execute only one program at a time. Each user had sole use
of the computer for a limited period of time and would arrive at a scheduled time with program and data
on punched paper cards and/or punched tape. The program would be loaded into the machine, and the
machine would be set to work until the program completed or crashed. Programs could generally be
debugged via a front panel using toggle switches and panel lights. Later machines came with libraries
of programs, which would be linked to a user's program to assist in operations such as input and output
and generating computer code from human-readable symbolic code. This was the genesis of the modern-day operating system. However, machines still ran a single job at a time.

History of Operating Systems (OS)


1940s (First generations)

The earliest electronic digital computers had no operating systems. Machines of the time were so
primitive that programs were often entered one bit at a time on rows of mechanical switches (plug
boards). Programming languages were unknown (not even assembly languages). Operating systems
were unheard of.
1950s (Second generations)

By the early 1950s, the routine had improved somewhat with the
introduction of punch cards. The General Motors Research
Laboratories implemented the first operating system in the early
1950s for their IBM 701. The systems of the 1950s generally ran one
job at a time. These were called single-stream batch processing
systems because programs and data were submitted in groups or
batches.

1960s (Third generations)

The systems of the 1960s were also batch processing systems, but
they were able to take better advantage of the computer's resources
by running several jobs at once. Operating system designers therefore
developed the concept of multiprogramming, in which several jobs
are in main memory at once; the processor is switched from job to
job as needed to keep several jobs advancing while keeping the
peripheral devices in use. For example, on a system with no
multiprogramming, when the current job paused to wait for an
I/O operation to complete, the CPU simply sat idle until the I/O
finished. The solution that evolved was to partition memory into
several pieces, with a different job in each partition. While one job
was waiting for I/O to complete, another job could be using the CPU.

Another major feature of third-generation operating systems was the
technique called spooling (simultaneous peripheral operations on
line). In spooling, a high-speed device such as a disk is interposed
between a running program and a low-speed device involved in the
program's input/output. Instead of writing directly to a printer, for
example, output is written to the disk. Programs can run to
completion faster, and other programs can be initiated sooner; when
the printer becomes available, the output can be printed. Note that
spooling is much like thread being spun onto a spool so that it may
later be unwound as needed.

Another feature present in this generation was the time-sharing
technique, a variant of multiprogramming, in which each user has
an on-line terminal. Because the user is present and interacting
with the computer, the system must respond quickly to user
requests, or user productivity will suffer. Timesharing systems
were developed to multiprogram large numbers of simultaneous
interactive users.
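The multiprogramming and spooling ideas above can be sketched in a few lines of Python. This is only a toy simulation (the job names and step strings are invented for illustration): each job is a string of one-tick steps, "c" meaning it needs the CPU and "i" meaning it is waiting on an I/O device. While one job is blocked on I/O, another gets the CPU, and a finished job's output goes to a spool queue rather than straight to the printer.

```python
from collections import deque

# Toy simulation of multiprogramming with spooled output.
# Each job is a string of one-tick steps: "c" = needs the CPU,
# "i" = waiting on an I/O device.
def multiprogram(jobs):
    ready = deque(jobs.items())   # (name, remaining steps), FIFO order
    blocked = []                  # jobs currently waiting on I/O
    spool = []                    # disk queue standing in for the printer
    timeline = []                 # which job held the CPU on each tick
    while ready or blocked:
        # every blocked job makes one tick of I/O progress "in parallel"
        still_blocked = []
        for name, steps in blocked:
            steps = steps[1:]
            if steps and steps[0] == "i":
                still_blocked.append((name, steps))
            elif steps:
                ready.append((name, steps))   # I/O done, wants CPU again
            else:
                spool.append(name)            # job finished during I/O
        blocked = still_blocked
        if not ready:
            timeline.append("idle")   # without multiprogramming this is
            continue                  # all the CPU could do during I/O
        name, steps = ready.popleft()
        timeline.append(name)         # this job gets the CPU for one tick
        steps = steps[1:]
        if steps and steps[0] == "i":
            blocked.append((name, steps))
        elif steps:
            ready.append((name, steps))
        else:
            spool.append(name)        # output spooled to disk, printed later
    return timeline, spool

timeline, spool = multiprogram({"A": "ccicc", "B": "ccc"})
# While A waits on I/O, B keeps the CPU busy:
# timeline == ["A", "B", "A", "B", "A", "B", "A"], spool == ["B", "A"]
```

Running the same job alone (for example `{"A": "ciic"}`) produces an "idle" tick in the timeline, which is exactly the waste multiprogramming was invented to avoid.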

Fourth generations

By the fourth generation, personal computers had become popular, and
operating systems were made for them that were similar in concept to
those used on larger computers. Graphical user interfaces became the
design paradigm of choice, and networks of personal computers
running network and distributed operating systems began to appear;
these developments are described in the sections that follow.

The First Generation (1945-55): Vacuum Tubes and Plugboards

In these early days, a single group of people designed, built, programmed, operated, and maintained each
machine. All programming was done in absolute machine language, often by wiring up plugboards to
control the machine's basic functions. Programming languages were unknown (even assembly language
was unknown). Operating systems were unheard of. The usual mode of operation was for the programmer
to sign up for a block of time on the signup sheet on the wall, then come down to the machine room,
insert his or her plugboard into the computer, and spend the next few hours hoping that none of the
20,000 or so vacuum tubes would burn out during the run. Virtually all the problems were straightforward
numerical calculations, such as grinding out tables of sines, cosines, and logarithms.
By the early 1950s, the routine had improved somewhat with the introduction of punched cards. It was
now possible to write programs on cards and read them in instead of using plugboards; otherwise, the
procedure was the same.

The Second Generation (1955-65): Transistors and Batch Systems


The introduction of the transistor in the mid-1950s changed the picture radically. Computers became
reliable enough that they could be manufactured and sold to paying customers with the expectation that
they would continue to function long enough to get some useful work done. These machines, now
called mainframes, were locked away in specially air conditioned computer rooms, with staffs of
professional operators to run them. Only big corporations or major government agencies or universities
could afford the multimillion dollar price tag. To run a job (i.e., a program or set of programs), a
programmer would first write the program on paper (in FORTRAN or assembler), then punch it on cards.
He would then bring the card deck down to the input room and hand it to one of the operators and go
drink coffee until the output was ready. When the computer finished whatever job it was currently

running, an operator would go over to the printer and tear off the output and carry it over to the output
room, so that the programmer could collect it later. Then he would take one of the card decks that had
been brought from the input room and read it in. If the FORTRAN compiler was needed, the operator
would have to get it from a file cabinet and read it in. Much computer time was wasted while operators
were walking around the machine room. Given the high cost of the equipment, it is not surprising that
people quickly looked for ways to reduce the wasted time. The solution generally adopted was the batch
system. The idea behind it was to collect a tray full of jobs in the input room and then read them onto a
magnetic tape using a small, relatively inexpensive computer, such as the IBM 1401, which was very
good at reading cards, copying tapes, and printing output, but not at all good at numerical calculations.
Other, much more expensive machines, such as the IBM 7094, were used for the real computing.
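The batch workflow just described (collect a tray of jobs, copy them onto tape with a cheap machine, then run the whole tape on the expensive machine) can be sketched as follows; the function names and toy jobs are invented for illustration.

```python
# Sketch of the batch idea: a cheap "card reader" machine copies a
# whole tray of submitted jobs onto a tape (here, a list), and the
# expensive machine then runs the tape end to end, with no operator
# walking between jobs.
def read_cards_to_tape(card_decks):
    """IBM 1401-style step: gather every submitted deck onto one input tape."""
    return list(card_decks)

def run_batch(tape, execute):
    """IBM 7094-style step: run each job on the tape in order, writing
    results to an output tape for offline printing."""
    return [execute(job) for job in tape]

# toy jobs: strings that the "computer" merely upper-cases
tape = read_cards_to_tape(["payroll", "ballistics", "inventory"])
output_tape = run_batch(tape, str.upper)
# output_tape == ["PAYROLL", "BALLISTICS", "INVENTORY"]
```

The point of the split is that the expensive machine never waits on slow card readers or printers; it only ever touches the fast tape.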

Present Personal Computers


An interesting development that began taking place during the mid-1980s is the growth of networks of
personal computers running network operating systems and distributed operating systems (Tanenbaum
and Van Steen, 2002). In a network operating system, the users are aware of the existence of multiple
computers and can log in to remote machines and copy files from one machine to another. Each machine
runs its own local operating system and has its own local user (or users). Network operating systems are
not fundamentally different from single-processor operating systems. They obviously need a network
interface controller and some low-level software to drive it, as well as programs to achieve remote login
and remote file access, but these additions do not change the essential structure of the operating system. A
distributed operating system, in contrast, is one that appears to its users as a traditional uniprocessor
system, even though it is actually composed of multiple processors. The users should not be aware of
where their programs are being run or where their files are located; that should all be handled
automatically and efficiently by the operating system. True distributed operating systems require more
than just adding a little code to a uniprocessor operating system, because distributed and centralized
systems differ in critical ways. Distributed systems, for example, often allow applications to run on
several processors at the same time, thus requiring more complex processor scheduling algorithms in
order to optimize the amount of parallelism. Communication delays within the network often mean that
these (and other) algorithms must run with incomplete, outdated, or even incorrect information. This
situation is radically different from a single-processor system in which the operating system has complete
information about the system state.
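The contrast drawn above between a network operating system and a distributed one can be made concrete with a small sketch. Every class and method name here is invented for illustration: in the network-OS style the user must name both machines explicitly, while the distributed-OS style offers one location-transparent namespace.

```python
# Conceptual sketch only; all names are hypothetical.
class NetworkOS:
    """Network OS style: the user is aware of multiple machines and
    must name them explicitly to copy a file between them."""
    def __init__(self, machines):
        self.machines = machines          # machine name -> its local files

    def remote_copy(self, src_machine, path, dst_machine):
        data = self.machines[src_machine][path]   # user named the source
        self.machines[dst_machine][path] = data   # ...and the destination
        return data

class DistributedOS:
    """Distributed OS style: one transparent namespace; the system,
    not the user, works out which node actually holds the file."""
    def __init__(self, machines):
        self.machines = machines

    def open(self, path):
        for files in self.machines.values():      # location lookup is hidden
            if path in files:
                return files[path]
        raise FileNotFoundError(path)

nodes = {"alpha": {"/report": "Q3 numbers"}, "beta": {}}
NetworkOS(nodes).remote_copy("alpha", "/report", "beta")
# nodes["beta"] now also holds "/report"
print(DistributedOS(nodes).open("/report"))   # prints: Q3 numbers
```

A real distributed OS must of course do this lookup with incomplete and delayed information, which is exactly the scheduling difficulty the paragraph above describes.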

Evolution of Operating Systems Design/Networking and Security


When networking first started, connections between computers were simple one-directional links. Then
two-directional links were developed, then banks of connections became possible, and so on. Today
literally millions of computers are connected to one another over the Internet. At each expansion of
networking to a larger audience, the problems caused by connectivity increased.

Along the way, a vocabulary of networking and security mechanisms accumulated: trusted computers,
hubs and packet sniffers, demilitarized zones (DMZs), intrusion detection, network distributions,
firewalls, packet filters, and socket scanners.

Development of UNIX Operating System


In the 1970s a new operating system called Unix was developed at Bell Labs.
Unix was designed to be portable: it could be moved from one type of computer to another type that
used different hardware, without too much difficulty.
Unix has to be tailored for each machine, but it is designed to make this tailoring relatively
straightforward.
By the late 1980s, Unix had been implemented on every common make of computer, from the Digital
VAX to IBM mainframes, from the humble microcomputer to powerful supercomputers.
It is the only operating system that has been implemented on such a diverse range of hardware
platforms.
As a result, Unix, and more often Linux (a variant of Unix), is now one of the most commonly used
operating systems in the world.
Linux is free and comes with a wealth of software and software development tools.

Development of PC Operating System


In the microcomputer world, the MS-DOS operating system was very widely used on IBM
microcomputers (PCs) and their clones.
When IBM introduced their PC to the market in the early 1980s, many of their competitors in effect
copied the machine producing IBM compatibles or clones.

Their competitors then acquired the same operating system for the clones from a company called
Microsoft, which had developed the operating system for IBM!
The IBM PC operating system was called PC-DOS while that of its clones was called MS-DOS. For
practical purposes they were almost identical.
MS-DOS became the most widely used operating system of the 1980s and 1990s, since there were
hundreds of millions of PCs in use around the world running MS-DOS.
In the mid-1980s Apple developed the Macintosh with a successful GUI based on one developed
by Xerox.
The Macintosh GUI, with its WIMP (Windows, Icons, Mouse, Pull-down menus) technology,
established GUIs as the design paradigm of choice.
Microsoft then replaced MS-DOS with their Windows (GUI-based) operating system. Numerous
versions of Windows have been released over the years, e.g. Windows 95, Windows 98, Windows
ME, Windows 2000, Windows NT and Windows XP.

LINUX : Future Vision of Operating System

Linux is already successful on many different kinds of devices, and there are many technological
areas where Linux is still moving, even as desktop and server development continues to grow faster
than that of any other operating system today. Linux is being installed in the system BIOS of laptop and
notebook computers, which will enable users to turn their devices on in a matter of seconds, bringing up a
streamlined Linux environment. This environment will have Internet connectivity tools such as a web
browser and an e-mail client, allowing users to work on the Internet without having to boot all the way
into their device's primary operating system, even if that operating system is Windows.
At the same time, Linux is showing up on mobile Internet devices (MIDs). This includes embedded
devices such as smartphones and PDAs, as well as netbook devices--small laptop-type machines that
feature the core functionality of their larger counterparts in a smaller, more energy-efficient package.
The growth of cloud computing is a natural fit for Linux, which already runs many of the Internet's web
servers. Linux enables cloud services such as Amazon's EC2 to deliver online applications and
information to users.
Related to Linux's growth in cloud computing is its well-known success on supercomputers, in both
the high-performance computing (HPC) and high-availability (HA) areas, where academic researchers in
physics and bioengineering, and firms in the financial and energy industries, need reliable and scalable
computing power to accomplish their goals.
Many of the popular Web 2.0 services on the Internet, such as Twitter, LinkedIn, YouTube, and Google,
rely on Linux as their operating system. As new web services arrive in the future, Linux will
increasingly be the platform that drives them.

CONCLUSION

The next-generation operating system starts with the user. It ignores the underlying hardware, and as a
result such systems are inherently less efficient than today's primitive, machine-centered ones. Instead, it
reflects the shape of your life. Its role is to track your life event by event, moment by moment, thought by
thought.
A life is a sequence of events in time. The future of information management is narrative information
management, in which all of your stored documents are arranged as a "documentary history" of your life.
All the digital documents you create or receive, all the "community" documents you want or need to see,
are laid out in one narrative stream with a past, present and future. The stream flows because time flows;
the future (where your calendar notes, meeting reminders and plans are stored) flows into the present,
then into the past. E-mail shows up at your "now line" and flows into the past. Everything you've got is
on your stream. And it needs to be very simple in order to fulfill people's needs and make interactions
between them better.
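The "narrative stream" described in this conclusion can be sketched as a simple data structure. Everything here (the class name, the methods, the sample documents) is hypothetical, intended only to show a single time-ordered stream split by a "now line" into past and future.

```python
import bisect

# Minimal sketch of a narrative stream: every document sits on one
# timeline, and the "now line" splits it into past and future.
class Lifestream:
    def __init__(self):
        self._docs = []                    # (timestamp, document), kept sorted

    def add(self, timestamp, doc):
        bisect.insort(self._docs, (timestamp, doc))   # stream stays ordered

    def past(self, now):
        """Everything that has already flowed past the now line."""
        return [d for t, d in self._docs if t <= now]

    def future(self, now):
        """Calendar notes, reminders and plans still to arrive."""
        return [d for t, d in self._docs if t > now]

stream = Lifestream()
stream.add(5, "meeting reminder")
stream.add(1, "old e-mail")
stream.add(3, "memo")
# at now == 3: past == ["old e-mail", "memo"], future == ["meeting reminder"]
```

As time advances, calling `past` with a later "now" shows the future flowing into the past, which is the whole behavior the conclusion describes.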
