OPERATING SYSTEMS

ASSIGNMENT # 1

History of Operating Systems


&
Directory Structure of Linux

Submitted By:
Abdul Basit
Roll# 925
BS(CS) 3rd Semester

Submitted To:

Miss Mobeen
CONTENTS:

History of Computer Operating Systems


• Stacked Job Batch Systems (mid 1950s-mid 1960s)
• Spooling Batch Systems (mid 1960s-late 1970s)
• Multiprogramming Systems (1960s-present)
• Timesharing Systems (1970s-present)
• Personal Computers
• Real-Time, Multiprocessor, and Distributed/Networked Systems

Directory Structure of LINUX


• "/"
• "/bin"
• "/boot"
• "/dev"
• "/etc"
• "/home"
• "/lib"
• "/mnt"
• "/opt"
• "/proc"
• "/root"
• "/sbin"
• "/tmp"
• "/usr"
• "/var"
History of Computer Operating Systems
Stacked Job Batch Systems (mid 1950s - mid 1960s)

A batch system is one in which jobs are bundled together with the instructions necessary
to allow them to be processed without intervention.

Often, jobs of a similar nature can be bundled together to further increase economy.

The basic physical layout of the memory of a batch job computer is shown below:

--------------------------------------
|                                    |
|   Monitor (permanently resident)   |
|                                    |
--------------------------------------
|                                    |
|             User Space             |
|  (compilers, programs, data, etc.) |
|                                    |
--------------------------------------

The monitor is system software that is responsible for interpreting and carrying out the
instructions in the batch jobs. When the monitor started a job, it handed over control of
the entire computer to the job, which then controlled the computer until it finished.

A sample of several batch jobs might look like:

    $JOB user_spec    ; identify the user for accounting purposes
    $FORTRAN          ; load the FORTRAN compiler
    source program cards
    $LOAD             ; load the compiled program
    $RUN              ; run the program
    data cards
    $EOJ              ; end of job

    $JOB user_spec    ; identify a new user
    $LOAD application
    $RUN
    data
    $EOJ

Often magnetic tapes and drums were used to store intermediate data and compiled
programs.

1. Advantages of batch systems
   o much of the work of the operator is moved to the computer
   o increased performance, since it was possible for a job to start as soon
     as the previous job finished
2. Disadvantages
   o turn-around time can be large from the user's standpoint
   o programs are more difficult to debug
   o due to the lack of a protection scheme, one batch job can affect
     pending jobs (by reading too many cards, etc.)
   o a job could corrupt the monitor, thus affecting pending jobs
   o a job could enter an infinite loop

As mentioned above, one of the major shortcomings of early batch systems was that there
was no protection scheme to prevent one job from adversely affecting other jobs.

The solution to this was a simple protection scheme, where certain memory regions (e.g. where the monitor resides) were made off-limits to user programs. This prevented user programs from corrupting the monitor.

To keep user programs from reading too many (or too few) cards, the hardware was changed to allow the computer to operate in one of two modes: one for the monitor and one for the user programs. IO could only be performed in monitor mode, so IO requests from the user programs had to be passed to the monitor. In this way, the monitor could keep a job from reading past its own $EOJ card.
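
A minimal sketch of this mediation in Python (the Monitor and user_job names are hypothetical; real systems enforce the monitor/user mode split in hardware, this only illustrates the idea):

    # Sketch: all IO goes through the monitor, which enforces job boundaries.
    class Monitor:
        def __init__(self, card_deck):
            self.cards = card_deck        # this job's portion of the input deck
            self.position = 0

        def read_card(self):
            # Refuse to let a job read past its own $EOJ card.
            card = self.cards[self.position]
            if card == "$EOJ":
                raise RuntimeError("job tried to read past its $EOJ card")
            self.position += 1
            return card

    def user_job(monitor):
        # User code may only perform IO by calling into the monitor.
        while True:
            print("read:", monitor.read_card())

    monitor = Monitor(["data card 1", "data card 2", "$EOJ"])
    try:
        user_job(monitor)
    except RuntimeError as err:
        print("monitor stops the job:", err)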

To prevent infinite loops, a timer was added to the system and the $JOB card was modified so that a maximum execution time for the job was passed to the monitor. The computer would interrupt the job and return control to the monitor when this time was exceeded.
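
On a Unix-like system, the same watchdog idea can be sketched in a few lines of Python with an alarm signal (the 2-second limit is an arbitrary stand-in for the value from the $JOB card):

    import signal

    def time_limit_exceeded(signum, frame):
        raise TimeoutError("maximum execution time exceeded")

    # Arrange for SIGALRM after the job's maximum execution time, much as
    # the hardware timer returned control to the monitor.
    signal.signal(signal.SIGALRM, time_limit_exceeded)
    signal.alarm(2)

    try:
        while True:                      # a job stuck in an infinite loop
            pass
    except TimeoutError as err:
        print("control returns to the monitor:", err)
    finally:
        signal.alarm(0)                  # cancel the timer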

Spooling Batch Systems (mid 1960s - late 1970s)

One difficulty with simple batch systems is that the computer still needs to read the deck of cards before it can begin to execute the job. This means that the CPU is idle (or nearly so) during these relatively slow operations.

Since it is faster to read from a magnetic tape than from a deck of cards, it became common for computer centers to have one or more less powerful computers in addition to their main computer. The smaller computers were used to read decks of cards onto a tape, so that the tape would contain many batch jobs. This tape was then loaded on the main computer and the jobs on the tape were executed. The output from the jobs would be written to another tape, which would then be removed and loaded on a less powerful computer to produce any hardcopy or other desired output.

It was a logical extension of the timer idea described above to have a timer that would only let jobs execute for a short time before interrupting them so that the monitor could start an IO operation. Since the IO operation could proceed while the CPU was crunching on a user program, little degradation in performance was noticed.

Since the computer could now perform IO in parallel with computation, it became possible to have the computer read a deck of cards to a tape, drum, or disk and to write output to a tape or printer while it was computing. This process is called SPOOLing: Simultaneous Peripheral Operations OnLine.

Spooling batch systems were the first and are the simplest of the multiprogramming
systems.

One advantage of spooling batch systems was that the output from jobs was available as
soon as the job completed, rather than only after all jobs in the current cycle were
finished.
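
A rough sketch of the spooling idea in Python, with a slow "card reader" thread queueing jobs in the background while the main thread plays the CPU (the timings are invented purely for illustration):

    import queue
    import threading
    import time

    job_queue = queue.Queue()

    def card_reader():
        # The slow peripheral: spool jobs onto the queue in the background.
        for n in range(1, 4):
            time.sleep(0.5)              # reading cards is slow
            job_queue.put("job %d" % n)
        job_queue.put(None)              # marks the end of the input tape

    threading.Thread(target=card_reader, daemon=True).start()

    # The CPU: execute jobs as they become available, overlapping with IO.
    while True:
        job = job_queue.get()
        if job is None:
            break
        print("executing", job)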

Multiprogramming Systems (1960s - present)

As machines with more and more memory became available, it was possible to extend
the idea of multiprogramming (or multiprocessing) as used in spooling batch systems to
create systems that would load several jobs into memory at once and cycle through them
in some order, working on each one for a specified period of time.

--------------------------------------
|              Monitor               |
|  (more like an operating system)   |
--------------------------------------
|           User program 1           |
--------------------------------------
|           User program 2           |
--------------------------------------
|           User program 3           |
--------------------------------------
|           User program 4           |
--------------------------------------

At this point the monitor has grown to the point where it begins to resemble a modern operating system. It is responsible for:

• starting user jobs
• spooling operations
• IO for user jobs
• switching between user jobs
• ensuring proper protection while doing the above

As a simple, yet common example, consider a machine that can run two jobs at once. Further, suppose that one job is IO-intensive and that the other is CPU-intensive. One way for the monitor to allocate CPU time between these jobs would be to divide time equally between them. However, the CPU would be idle much of the time the IO-bound process was executing.

A good solution in this case is to allow the CPU-bound process (the background job) to execute until the IO-bound process (the foreground job) needs some CPU time, at which point the monitor permits it to run. Presumably the foreground job will soon need to do some IO, and the monitor can then return the CPU to the background job.
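
A toy illustration of the policy described above in Python (the tick-by-tick behavior of the foreground job is simply made up for the example):

    # Each entry says whether the IO-bound foreground job needs the CPU on
    # that tick; the CPU-bound background job soaks up the remaining time.
    foreground_needs_cpu = [True, False, False, True, False, True]

    for tick, fg_ready in enumerate(foreground_needs_cpu):
        if fg_ready:
            print("tick %d: foreground (IO-bound) job runs briefly" % tick)
        else:
            print("tick %d: background (CPU-bound) job keeps the CPU busy" % tick)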

Timesharing Systems (1970s - present)

Back in the days of "bare" computers without any operating system to speak of, the programmer had complete access to the machine. As hardware and software were developed to create monitors, simple and spooling batch systems, and finally multiprogrammed systems, the separation between the user and the computer became more and more pronounced.

Users, and programmers in particular, longed to be able to "get to the machine" without having to go through the batch process. In the 1970s, and especially in the 1980s, this became possible in two different ways.

The first involved timesharing, or timeslicing. The idea of multiprogramming was extended to allow multiple terminals to be connected to the computer, with each in-use terminal being associated with one or more jobs on the computer. The operating system was responsible for switching between the jobs, now often called processes, in a way that favored user interaction. If the context switches occurred quickly enough, the user had the impression that he or she had direct access to the computer.

Interactive processes are given a higher priority so that when IO is requested (e.g. a key is pressed), the associated process is quickly given control of the CPU so that it can process the input. This is usually done through the use of an interrupt that causes the computer to realize that an IO event has occurred.
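
On a modern Unix-like system the same preference can be expressed from the other direction, by lowering the priority of non-interactive work; a minimal sketch (Unix-only):

    import os

    # Raise this process's niceness so the scheduler favors interactive
    # processes over this CPU-bound background work.
    os.nice(10)                          # higher niceness = lower priority
    total = sum(range(10_000_000))       # some CPU-bound busywork
    print("background result:", total)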

It should be mentioned that there are several different types of timesharing systems. One type is represented by computers such as VAX/VMS machines and UNIX workstations. In these computers, entire processes are in memory (albeit virtual memory) and the computer switches between executing code in each of them. In other types of systems, such as airline reservation systems, a single application may actually do much of the timesharing between terminals. This way, there does not need to be a different running program associated with each terminal.

Personal Computers

The second way that programmers and users got back to the machine was the advent of personal computers around 1980. Finally, computers became small enough and inexpensive enough that an individual could own one, and hence have complete access to it.
Real-Time, Multiprocessor, and Distributed/Networked Systems

A real-time computer is one that executes programs which are guaranteed to have an upper bound on the time taken for the tasks they carry out. Usually it is desired that the upper bound be very small. Examples include guided missile systems and medical monitoring equipment. The operating system on real-time computers is severely constrained by the timing requirements.

Dedicated computers are special-purpose computers that are used to perform only one or a few tasks. Often these are real-time computers and include applications such as the guided missile mentioned above and the computer in modern cars that controls the fuel injection system.

A multiprocessor computer is one with more than one CPU. The category of
multiprocessor computers can be divided into the following sub-categories:

• shared memory multiprocessors have multiple CPUs, all with access to the same memory. Communication between the processors is easy to implement, but care must be taken so that memory accesses are synchronized.
• distributed memory multiprocessors also have multiple CPUs, but each CPU has its own associated memory. Here, memory access synchronization is not a problem, but communication between the processors is often slow and complicated.

Related to multiprocessors are the following:

• networked systems consist of multiple computers that are networked together, usually with a common operating system and shared resources. Users, however, are aware of the different computers that make up the system.
• distributed systems also consist of multiple computers, but differ from networked systems in that the multiple computers are transparent to the user. Often there are redundant resources and a sharing of the workload among the different computers, but this is all transparent to the user.

Goals of an Operating System


 Job Management

Job management controls the order and time in which programs are run. It is more sophisticated in the mainframe environment, where scheduling the daily work has always been routine; IBM's job control language (JCL) was developed decades ago. In a desktop environment, batch files can be written to perform a sequence of operations, which can be scheduled to start at a given time (a small sketch follows below).
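
As a hedged sketch, a desktop "batch file" equivalent in Python might run a fixed sequence of commands one after another (the commands here are ordinary examples, not part of any real job stream):

    import subprocess

    # Run each command in order; check=True stops the sequence with an
    # error if any command fails.
    for command in (["date"], ["uname", "-a"], ["df", "-h"]):
        subprocess.run(command, check=True)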
 Task Management

Multitasking, which is the ability to execute multiple programs simultaneously, is available in all operating systems today. It is critical in the mainframe and large server environment, where applications can be prioritized to run faster or slower depending on their purpose. In the desktop world, multitasking is necessary just for keeping several applications open at the same time so that you can bounce back and forth among them.
 Data Management

Data management keeps track of the data on disk, tape and optical storage
devices. The application program deals with data by file name and a particular
location within the file. The operating system's file system knows where that data
are physically stored (which sectors on disk) and interaction between the
application and operating system is through the programming interface. Whenever
an application needs to read or write data, it makes a call to the operating system.
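
A small Python sketch of that division of labor (any readable file path will do; /etc/hostname is just an example):

    # The application names a file and a location within it; the operating
    # system's file system maps that name to physical sectors on disk.
    with open("/etc/hostname") as f:
        f.seek(0)                        # "a particular location within the file"
        data = f.read()                  # this read is a call into the OS
    print("contents:", data.strip())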
 Device Management

Device management controls peripheral devices by sending them commands in their own proprietary language. The software routine that knows how to deal with each device is called a "driver." The operating system contains all the drivers for the peripherals attached to the computer. When a new peripheral is added, that device's driver is installed into the operating system.
 Security

Multiuser operating systems provide password protection to keep unauthorized users out of the system. Large operating systems also maintain activity logs and accounting of each user's time for billing purposes. They also provide backup and recovery routines for starting over in the event of a system failure.
Linux Directory Structure
One of the best ways to come to terms with a new operating system is to get familiar with the way that the system organizes its files. Files are usually stored in directories, which are arranged in a hierarchical tree structure. This is commonly called the directory structure.

[Figure: an example of a common Linux directory structure, represented by the KDE desktop. This should look similar to what you would see with most other visual operating systems.]

Linux uses the Filesystem Hierarchy Standard. There will be slight variations, but a lot of what you see here is common to many Unix-based systems. Most Linux distributions, such as Red Hat, Mandrake, SUSE, Debian, etc., use this file system, or at least something very close to it. It should be noted that you can build your own Linux system any way that you want (it is free software, by definition). There is no strict requirement that you use any particular structure; however, other users will have difficulty using your system, and it could turn into a maintenance nightmare very quickly.

Some of the directories shown here will be of little interest to many Linux users. For most users, other than root, the primary concern will be their home directory, which can be structured any way that is convenient.

Windows users will notice that there are no hard drive distinctions. The directory structure shown actually represents a system which has 3 hard drives. With Unix-based systems, drives are not shown. File systems are mounted to particular drives, but for the user the actual implementation is transparent. This approach allows files to be presented to the user in a more centralized view even though the files may actually be spread out among several hard drives and partitions. The result is better security and protection against system crashes and data loss.

Let's take a look at each directory and what it is used for:

"/"
This is a mandatory directory. It is the beginning of the filesystem and includes all of the directories beneath it. If you type cd / and then ls, you will see a listing of all directories on the system (that you have permission to see). This should be similar to the figure above.
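
The same listing can be produced programmatically; a minimal Python equivalent of cd / followed by ls:

    import os

    # List the entries directly under the root of the filesystem.
    for name in sorted(os.listdir("/")):
        print(name)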

/bin
This is a mandatory directory. It contains the binaries that are used in single-user mode. For multi-user systems, these binaries are usually stored in the /usr/bin directory. When you type a command such as ls or chmod, it is usually resolved to one of these two directories, where the program exists.
/boot
This is a mandatory directory. It stores the files which are used for system startup, except for configuration files and the map installer. Frequently the kernel is stored here, especially if more than one kernel is installed.

/dev
This is a mandatory directory. The device files, sockets, and named pipes are stored
here.

/etc
This is a mandatory directory. This directory, pronounced "et-see", holds the
configuration files for the system. It is divided into many subdirectories.

/home
This is an optional but widely used directory. (The other variation on the /home directory is to use a subdirectory in the /var directory.) This is where users will do most of their work. Each user is given their own directory in the /home directory, which is theirs to organize and use as they choose. Frequently web server document roots are located in the /home directory (e.g. /home/public_html or /home/www/public_html). The /home directory is designed to host dynamically changing files and usually occupies one of the larger partitions on a hard disk.
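
A small Python sketch for locating the current user's home directory (on most Linux systems this resolves to a subdirectory of /home):

    from pathlib import Path

    home = Path.home()                   # e.g. /home/username
    print("home directory:", home)
    print("some entries:", [p.name for p in home.iterdir()][:5])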

/lib
This is a mandatory directory. Shared libraries needed at boot time, or needed by top-level commands, are stored here. Libraries which support user programs are usually stored in the /usr/lib directory.

/mnt
This is an optional but very popular directory. It contains mount points for external storage devices. To access a floppy disk drive, you cd to /mnt/floppy. Once an external drive is accessed, its file system is mounted to the host system in the /mnt directory.

/opt
This is an optional directory. It is a directory intended to contain software packages
which are added to the original system. On my system it is present, but empty.

/proc
This is an optional but widely used directory. It contains a virtual filesystem which is
created and used by the currently running kernel. It is deleted when the system is
shut down. Frequently, monitoring programs use the /proc directory to obtain
information on currently running processes and other environmental information.
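
For example, a few lines of Python can pull system information straight out of /proc (Linux-only; /proc/uptime and /proc/loadavg are standard entries):

    # /proc/uptime holds the system uptime in seconds as its first field;
    # /proc/loadavg holds the 1-, 5-, and 15-minute load averages.
    with open("/proc/uptime") as f:
        uptime_seconds = float(f.read().split()[0])
    print("system uptime: %.0f seconds" % uptime_seconds)

    with open("/proc/loadavg") as f:
        print("load averages:", f.read().split()[:3])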

/root
This is an optional but widely used directory. It is often created to eliminate clutter
from the "/" directory. It contains configuration files for the root user.

/sbin
This is a mandatory directory. This directory was originally a place to store static
binaries. It has been expanded to include administrative binaries which are used by
the root user only.
/tmp
This is a mandatory directory. This directory is used by programs to store temporary
files. Files which are located here are often flushed on reboot or flushed periodically.
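
Programs usually ask the system for a temporary file rather than naming one by hand; a minimal Python sketch (on Linux the file normally lands in /tmp):

    import tempfile

    # Create a scratch file in the system's temporary directory; it is
    # removed automatically when closed.
    with tempfile.NamedTemporaryFile(prefix="demo-") as tmp:
        print("temporary file created at:", tmp.name)
        tmp.write(b"scratch data")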

/usr
This is a mandatory directory. The /usr directory is designed to store static, sharable,
read-only data. Programs which are used by all users are frequently stored here.
Data which results from these programs is usually stored elsewhere (often /var).

/var
This is a mandatory directory. This directory stores variable data like logs, mail, and
process specific files. Most, but not all, subdirectories and files in the /var directory
are shared. This is another popular location for web server document roots.
REFERENCES

http://www.personal.kent.edu/~rmuhamma/OpSystems/Myos/osHistory.htm

http://www.math-cs.gordon.edu/courses/cs322/lectures/history.html

http://www.osdata.com/kind/history.htm

http://en.wikipedia.org/wiki/History_of_operating_systems

http://doc.vic.computerbank.org.au/tutorials/linuxdirectorystructure/

http://www.geocities.com/sunnylug/lindir.html

http://linux.omnipotent.net/article.php?article_id=9027

http://www.dynamic-apps.com/linux_directories.jsp

http://www.comptechdoc.org/os/linux/usersguide/linux_ugfilestruct.html

http://www.cae.wisc.edu/site/public/?title=lindir

http://www.tuxfiles.org/linuxhelp/linuxdir.html
