
Module VII: Introduction to Operating Systems
Introduction
An operating system, or OS, is a software program that enables the computer hardware to communicate
and operate with the computer software. Without a computer operating system, a computer would be
useless.

An operating system (sometimes abbreviated as "OS") is the program that, after being initially loaded
into the computer by a boot program, manages all the other programs in a computer. The other programs
are called applications or application programs. The application programs make use of the operating
system by making requests for services through a defined application program interface (API). In
addition, users can interact directly with the operating system through a user interface such as a
command language or a graphical user interface (GUI).
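The request-for-services idea above can be sketched in Python, whose os module is a thin wrapper over the underlying system-call API on POSIX systems; the filename demo.txt is just an illustrative choice, not something from the text:

```python
import os

# An application never touches the disk hardware directly; it asks the
# operating system for services through the system-call API.
fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
os.write(fd, b"hello, OS\n")   # request: transfer these bytes to the device
os.close(fd)                   # request: release the file descriptor

with open("demo.txt") as f:    # higher-level API, same OS services underneath
    print(f.read(), end="")
```

The same program runs unchanged on any machine with that system-call interface, which is exactly the portability argument made later in this module.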

1. The Need for an Operating System

At the simplest level, an operating system does two things:

1. It manages the hardware and software resources of the system. In a desktop computer, these resources
include such things as the processor, memory, disk space, etc. (On a cell phone, they include the keypad,
the screen, the address book, the phone dialer, the battery and the network connection.)
2. It provides a stable, consistent way for applications to deal with the hardware without having to know
all the details of the hardware.

The first task, managing the hardware and software resources, is very important, as various
programs and input methods compete for the attention of the central processing unit (CPU) and
demand memory, storage and input/output (I/O) bandwidth for their own purposes. In this capacity,
the operating system plays the role of the good parent, making sure that each application gets the
necessary resources while playing nicely with all the other applications, as well as husbanding the
limited capacity of the system to the greatest good of all the users and applications.

The second task, providing a consistent application interface, is especially important if there is to be

more than one of a particular type of computer using the operating system, or if the hardware making
up the computer is ever open to change. A consistent application program interface (API) allows a
software developer to write an application on one computer and have a high level of confidence that it
will run on another computer of the same type, even if the amount of memory or the quantity of
storage is different on the two machines.

Even if a particular computer is unique, an operating system can ensure that applications continue to
run when hardware upgrades and updates occur. This is because the operating system and not the
application is charged with managing the hardware and the distribution of its resources. One of the
challenges facing developers is keeping their operating systems flexible enough to run hardware
from the thousands of vendors manufacturing computer equipment. Today's systems can
accommodate thousands of different printers, disk drives and special peripherals in any possible
combination.

The operating system is the interface between the hardware and the user. Without an OS, the computer would be an expensive doorstop.

An operating system performs these services for applications:

o In a multitasking operating system, where multiple programs can be running at the same time, it determines which applications should run in what order and how much time each should be allowed before another application gets a turn.
o It manages the sharing of internal memory among multiple applications.
o It handles input and output to and from attached hardware devices, such as hard disks, printers, and dial-up ports.
o It sends messages to each application or interactive user (or to a system operator) about the status of operation and any errors that may have occurred.
o It can offload the management of batch jobs (for example, printing) so that the initiating application is freed from this work.
o On computers that can provide parallel processing, it can manage how to divide a program so that it runs on more than one processor at a time.
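The first service above, deciding which application runs next and for how long, can be sketched as a toy round-robin scheduler. This is a minimal simulation, not a real dispatcher; the job names and quantum are illustrative:

```python
from collections import deque

def scheduler(jobs, quantum):
    """Round-robin sketch: give each 'application' a fixed time slice
    (here, a number of steps), then move it to the back of the queue."""
    ready = deque(jobs)                # ready queue of (name, steps_remaining)
    trace = []
    while ready:
        name, remaining = ready.popleft()
        ran = min(quantum, remaining)  # run for at most one quantum
        trace.append((name, ran))
        if remaining - ran > 0:        # not finished: requeue at the back
            ready.append((name, remaining - ran))
    return trace

trace = scheduler([("editor", 3), ("compiler", 5)], quantum=2)
print(trace)
# [('editor', 2), ('compiler', 2), ('editor', 1), ('compiler', 2), ('compiler', 1)]
```

Each application gets a turn of at most one quantum, which is the "how much time should be allowed" decision described above.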

2. Types of Operating Systems

Batch Operating Systems


A batch system is one in which jobs are bundled together with the instructions necessary to allow them to be processed without intervention. Often jobs of a similar nature can be bundled together to further increase economy.

The monitor was system software responsible for interpreting and carrying out the instructions in the batch jobs. When the monitor started a job, it handed over control of the entire computer to the job, which then controlled the computer until it finished.

Often magnetic tapes and drums were used to store intermediate data and compiled programs.

1. Advantages of batch systems
o moved much of the operator's work to the computer
o increased performance, since a job could start as soon as the previous job finished
2. Disadvantages
o turnaround time can be large from the user's standpoint
o programs are more difficult to debug
o due to the lack of a protection scheme, one batch job can affect pending jobs (by reading too many cards, etc.)
o a job could corrupt the monitor, thus affecting pending jobs
o a job could enter an infinite loop

As mentioned above, one of the major shortcomings of early batch systems was that there was no
protection scheme to prevent one job from adversely affecting other jobs.

The solution to this was a simple protection scheme, in which certain regions of memory (e.g. where the monitor resides) were made off-limits to user programs. This prevented user programs from corrupting the monitor.

To keep user programs from reading too many (or too few) cards, the hardware was changed to allow the computer to operate in one of two modes: one for the monitor and one for user programs. I/O could only be performed in monitor mode, so that I/O requests from the user programs were passed to the monitor. In this way, the monitor could keep a job from reading past its own $EOJ card.

To prevent an infinite loop, a timer was added to the system and the $JOB card was modified so
that a maximum execution time for the job was passed to the monitor. The computer would
interrupt the job and return control to the monitor when this time was exceeded.
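The two protection mechanisms just described, I/O routed through the monitor and a timer-enforced limit taken from the $JOB card, can be simulated in a few lines of Python. The class and message names are hypothetical; this is a sketch of the scheme, not a real monitor:

```python
class MonitorError(Exception):
    pass

class Monitor:
    """Toy monitor: user jobs go through the monitor for all I/O and are
    cut off when their card deck or time budget is exhausted."""
    def __init__(self, cards, max_steps):
        self.cards = cards          # the job's own card deck, up to $EOJ
        self.max_steps = max_steps  # time limit from the $JOB card
        self.steps = 0

    def tick(self):                 # stands in for the hardware timer
        self.steps += 1
        if self.steps > self.max_steps:
            raise MonitorError("time limit exceeded: job interrupted")

    def read_card(self):            # I/O is performed only in monitor mode
        if not self.cards:
            raise MonitorError("read past $EOJ: request refused")
        return self.cards.pop(0)

mon = Monitor(cards=["DATA 1", "DATA 2"], max_steps=5)
log = []
try:
    while True:                     # a job stuck in an infinite loop
        mon.tick()
        log.append(mon.read_card() if mon.cards else "compute")
except MonitorError as e:
    log.append(str(e))
print(log)
```

The infinite loop is harmless here: after five steps the simulated timer fires and control returns to the monitor, just as described above.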

Spooling Batch Systems (mid 1960s - late 1970s)


One difficulty with simple batch systems is that the computer still needs to read the deck of cards before it can begin to execute the job. This means that the CPU is idle (or nearly so) during these relatively slow operations.

Since it is faster to read from a magnetic tape than from a deck of cards, it became common for computer centers to have one or more less powerful computers in addition to their main computer. The smaller computers were used to read decks of cards onto a tape, so that the tape would contain many batch jobs. This tape was then loaded on the main computer and the jobs on the tape were executed. The output from the jobs would be written to another tape, which would then be removed and loaded on a less powerful computer to produce any hardcopy or other desired output.

It was a logical extension of the timer idea described above to have a timer that would only let jobs execute for a short time before interrupting them so that the monitor could start an I/O operation. Since the I/O operation could proceed while the CPU was crunching on a user program, little degradation in performance was noticed.

Since the computer could now perform I/O in parallel with computation, it became possible to have the computer read a deck of cards onto a tape, drum, or disk, and to write output destined for the printer onto another tape, all while it was computing. This process is called SPOOLing: Simultaneous Peripheral Operation OnLine.
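The overlap of slow peripheral I/O with computation is essentially a producer/consumer arrangement. A minimal sketch using Python threads, with a reader thread standing in for the card-reading peripheral (the deck contents are illustrative):

```python
import queue
import threading

# The spool buffer sits between the slow peripheral and the CPU.
spool = queue.Queue()

def card_reader(deck):
    """Producer: the peripheral feeds cards into the spool."""
    for card in deck:
        spool.put(card)
    spool.put(None)              # sentinel: end of deck

reader = threading.Thread(target=card_reader,
                          args=(["JOB A", "JOB B", "JOB C"],))
reader.start()

# Consumer: the CPU drains the spool whenever it is ready, so reading
# and computing proceed in parallel instead of strictly in sequence.
processed = []
while True:
    card = spool.get()
    if card is None:
        break
    processed.append(card.lower())   # stand-in for "computation"
reader.join()
print(processed)                 # ['job a', 'job b', 'job c']
```

The queue plays the role of the intermediate tape, drum, or disk: the producer and consumer never have to wait for each other to finish an entire deck.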

Spooling batch systems were the first and are the simplest of the multiprogramming systems.

One advantage of spooling batch systems was that the output from jobs was available as soon as the
job completed, rather than only after all jobs in the current cycle were finished.

Multiprogramming Systems (1960s - present)

As machines with more and more memory became available, it was possible to extend the idea of
multiprogramming (or multiprocessing) as used in spooling batch systems to create systems that
would load several jobs into memory at once and cycle through them in some order, working on each
one for a specified period of time.

At this point the monitor is growing to the point where it begins to resemble a modern operating system. It is responsible for:

o starting user jobs
o spooling operations
o I/O for user jobs
o switching between user jobs
o ensuring proper protection while doing the above

Multiprogramming is a rudimentary form of parallel processing in which several programs are run at the same time on a uniprocessor. Since there is only one processor, there can be no true simultaneous execution of different programs. Instead, the operating system executes part of one program, then part of another, and so on. To the user it appears that all programs are executing at the same time.

If the machine has the capability of causing an interrupt after a specified time interval, then the
operating system will execute each program for a given length of time, regain control, and then
execute another program for a given length of time, and so on. In the absence of this mechanism, the
operating system has no choice but to begin to execute a program with the expectation, but not the
certainty, that the program will eventually return control to the operating system.

If the machine has the capability of protecting memory, then a bug in one program is less likely to interfere with the execution of other programs. In a system without memory protection, one program can change the contents of storage assigned to other programs or even the storage assigned to the operating system. The resulting system crashes are not only disruptive, they may be very difficult to debug, since it may not be obvious which of several programs is at fault.

Timesharing Systems (1970s - present)

Back in the days of the "bare" computers, without any operating system to speak of, the programmer had complete access to the machine. As hardware and software were developed to create monitors, simple and spooling batch systems, and finally multiprogrammed systems, the separation between the user and the computer became more and more pronounced.

Users, and programmers in particular, longed to be able to "get to the machine" without having to go
through the batch process. In the 1970s and especially in the 1980s this became possible two different
ways.

The first involved timesharing, or time slicing. The idea of multiprogramming was extended to allow for multiple terminals to be connected to the computer, with each in-use terminal being associated with one or more jobs on the computer. The operating system is responsible for switching between the jobs, now often called processes, in such a way as to favor user interaction. If the context switches occurred quickly enough, the user had the impression that he or she had direct access to the computer.

Interactive processes are given a higher priority so that when I/O is requested (e.g. a key is pressed), the associated process is quickly given control of the CPU so that it can process the input. This is usually done through the use of an interrupt that causes the computer to realize that an I/O event has occurred.
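The priority idea can be sketched with an ordinary priority queue: a lower number means a higher priority, so an interactive process with a pending keypress is dispatched ahead of batch work. The process names and priority values are illustrative, not from the text:

```python
import heapq

# Ready queue ordered by (priority, name); heapq pops the smallest
# tuple first, so priority 1 runs before priority 5.
ready = []
heapq.heappush(ready, (5, "batch-report"))
heapq.heappush(ready, (5, "batch-stats"))
heapq.heappush(ready, (1, "editor (keypress pending)"))  # interactive

# Dispatch order: the interactive process jumps the batch jobs.
order = [heapq.heappop(ready)[1] for _ in range(len(ready))]
print(order)
```

In a real timesharing system the keypress interrupt is what raises the editor to the front; here the low priority number stands in for that mechanism.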

It should be mentioned that there are several different types of time sharing systems. One type is
represented by computers like our VAX/VMS computers and UNIX workstations. In these computers
entire processes are in memory (albeit virtual memory) and the computer switches between
executing code in each of them. In other types of systems, such as airline reservation systems, a
single application may actually do much of the timesharing between terminals. This way there does
not need to be a different running program associated with each terminal.

Personal Computers

The second way that programmers and users got back to the machine was the advent of personal computers around 1980. Finally computers became small enough and inexpensive enough that an individual could own one, and hence have complete access to it.

Real-Time, Multiprocessor, and Distributed/Networked Systems

A real-time computer is one that executes programs that are guaranteed to have an upper bound on the time taken by the tasks they carry out. Usually it is desired that the upper bound be very small. Examples include guided missile systems and medical monitoring equipment. The operating system on real-time computers is severely constrained by the timing requirements.

Dedicated computers are special-purpose computers that are used to perform only one or a few tasks. Often these are real-time computers and include applications such as the guided missile mentioned above and the computer in modern cars that controls the fuel injection system.

A multiprocessor computer is one with more than one CPU. The category of multiprocessor computers can be divided into the following sub-categories:

o Shared memory multiprocessors have multiple CPUs, all with access to the same memory. Communication between the processors is easy to implement, but care must be taken so that memory accesses are synchronized.
o Distributed memory multiprocessors also have multiple CPUs, but each CPU has its own associated memory. Here, memory access synchronization is not a problem, but communication between the processors is often slow and complicated.
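The shared-memory synchronization problem mentioned above can be sketched with Python threads standing in for CPUs (a simulation of the hazard, not real multiprocessor code): several workers update one shared counter, and a lock makes each read-modify-write sequence atomic so no update is lost.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(increments):
    """Each thread plays the role of one CPU updating shared memory."""
    global counter
    for _ in range(increments):
        with lock:           # synchronize access to the shared location
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)               # 40000 with the lock; without it, updates can be lost
```

Without the lock, two "CPUs" can read the same old value, both add one, and write back, losing an increment; the lock is the synchronization that shared-memory designs must provide.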

Related to multiprocessors are the following:

o Networked systems consist of multiple computers that are networked together, usually with a common operating system and shared resources. Users, however, are aware of the different computers that make up the system.
o Distributed systems also consist of multiple computers, but differ from networked systems in that the multiple computers are transparent to the user. Often there are redundant resources and a sharing of the workload among the different computers, but this is all transparent to the user.
