
Operating Systems, Stallings, Chapter 2 Notes

Derek Harter

Due: 9/6/2013
Electronic Version

This chapter starts with a brief history of the evolution of operating systems. This history does not just list some important systems and the dates on which they were created. The history of operating system evolution gives us some insight into the important developments and thought processes that went into the concepts and capabilities behind modern operating systems.
For the most part, I am not requiring you to read or remember the materials specific to particular OSes. In this chapter, sections 2.7-2.11 are specific discussions of the Microsoft Windows, Unix and Linux OSes. Of course, if interested, I encourage you to read these sections. For future chapters I will point out or let you know if there are parts of these OS-specific sections that you should be reading.
Learning Objectives

After studying this chapter, you should be able to:

- Summarize, at a top level, the key functions of an operating system.

- Discuss the evolution of operating systems from early simple batch systems to modern complex systems.

- Give a brief explanation of each of the major achievements in OS research as defined in Section 2.3.

- Discuss the key design areas that have been instrumental in the development of modern operating systems.

- Define and discuss virtual machines and virtualization.

- Understand the OS design issues raised by the introduction of multiprocessor and multicore organizations.

2 Operating System Overview

2.1 Operating System Objectives and Functions

The 3 objectives of OS design identified by Stallings:

1. Convenience
2. Efficiency
3. Ability to evolve

are pretty good general goals. One important thing to understand is that the design of an OS is an engineering problem that involves certain tradeoffs (as all engineering problems do). Some of the 3 listed objectives can be mutually exclusive. So for example, increasing convenience usually involves adding more code and complexity, which can often work against efficient use of system resources.
The layered hierarchy in Figure 2.1 makes several points. You should remember that one way of viewing the OS is as an interface between the User
(and/or their Application programs) and the hardware. Thus one fundamental purpose of an OS is to abstract away the complexity of the hardware,
and provide a more simplied picture of access to system resources. Another
thing to keep in mind about Figure 2.1 is that this is a generic view of the
typical layers of abstraction created by the OS and instruction set. The
details of particular OS and CPU instruction sets will vary.

The OS masks the details of the hardware from the programmer and
provides the programmer with a convenient interface for using the system.
It acts as a mediator and as a resource manager.
The OS as a system control mechanism is unusual in two respects. It functions in the same way as an ordinary application or program running on the system, and it frequently relinquishes control to other applications and must rely on the processor to allow it to regain control. This latter statement implies that, for correct and robust system implementation, there must be certain functions built into the hardware processor for the OS to function properly as a controller. We will see this includes both the ability to protect and deny/grant access to areas of memory based on which application is currently running, and the ability to interrupt programs and give control back to other programs (e.g. the OS).
And finally we should not forget the ability of the OS to grow as new hardware and applications are developed. While performance and ease of use by programmers and users are probably most important, in the long run a system that does not provide a good set of abstractions, able to handle and be easily ported to new hardware and applications, loses out in the marketplace. This is one reason for the success of Unix as an OS on phones, tablets, servers and desktop PCs: it has a very solid set of abstractions, making it easy to evolve and port.
2.2 The Evolution of Operating Systems

Section 2.2 gives a brief history of how the design of operating systems has evolved.

Serial Processing (1940s - mid 1950s)

- Basically one program at a time.
- Scheduling of programs was a human organizational task (e.g. a sign-up sheet).
- Had to continually set up the same things over and over.
- What if you couldn't finish in your block of allotted time? Lose all your work?
1st Generation Batch Systems (mid 1950s - 1970)

- Early computers were expensive; batch systems were an attempt to reduce the time wasted on scheduling and setup.
- The solution: let the machine do both.
- The monitor (a prototype OS) could set up the machine for typical tasks.
- And the monitor performed the scheduling function.
  * Every user submitted their jobs before starting.
  * These jobs are put in a queue.
  * The monitor takes the next job from the queue, runs it until done, and saves the result. Then it gets the next job, etc.
- As shown in figure 2.3, there are only two types of main memory in a batch system.
  * Memory used by the monitor (which is a kind of overhead, not available to the user program).
  * Memory available for the user program while running.
- In the first batch systems, if a user program went into an infinite loop, or overwrote the Monitor's memory, everything halted.
- This led to the realization (the first conceptual breakthrough) that certain hardware features were necessary:
  * Memory protection
  * A timer, so an interrupt could be set to force control to return back to the monitor if a user program was buggy
  * Privileged instructions
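The monitor's fetch-run-save loop described above can be modeled in a few lines of Python (a toy illustration of the control flow, not how a real monitor was coded):

```python
from collections import deque

def run_batch(jobs):
    """A toy batch monitor: take the next job from the queue,
    run it to completion (no preemption), save its result, repeat."""
    queue = deque(jobs)          # all jobs submitted up front
    results = []
    while queue:
        job = queue.popleft()    # monitor fetches the next job
        results.append(job())    # job runs until done
    return results

# Each "job" is just a callable standing in for a user program.
outputs = run_batch([lambda: "job 1 done", lambda: "job 2 done"])
print(outputs)
```

Note that nothing here can interrupt a running job, which is exactly why a buggy job could hang the whole machine.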
Multiprogrammed Batch Systems (1960s - 1970s)

- Even with automatic job sequencing, the CPU/system is still often idle when running a single job, because many user programs do I/O (which is much slower than performing CPU calculations), and must wait long periods of time (relatively speaking) for the I/O to complete.
- So this led to the second conceptual breakthrough: if the monitor ran multiple user programs at the same time, it could switch among them when one program is waiting on I/O, in order to better utilize the system CPU.
- Multiprogramming systems are fairly sophisticated (and correspondingly more complex) because of the need to switch between multiple programs and manage their memory and resources.
- This requires algorithms for scheduling (deciding which program to run next) and absolutely requires the hardware mechanisms to be present that we previously noted.
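To see why this breakthrough matters, a common back-of-envelope model (an independence assumption, not taken from the chapter itself) estimates the gain: if each program waits on I/O a fraction p of the time, the CPU is idle only when all n resident programs happen to be waiting at once.

```python
def cpu_utilization(p, n):
    """Estimated CPU utilization with n programs in memory, each
    waiting on I/O a fraction p of the time (waits assumed
    independent): the CPU is idle only when all n wait at once,
    which happens with probability p**n."""
    return 1 - p ** n

# A single program that is 80% I/O-bound keeps the CPU only 20% busy;
# five such programs together push utilization to about 67%.
print(cpu_utilization(0.8, 1))
print(cpu_utilization(0.8, 5))
```

The independence assumption is crude, but the qualitative point holds: keeping several jobs in memory dramatically reduces idle CPU time.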
Time-Sharing Systems (1970s)

- The most important conceptual breakthrough (the 3rd conceptual breakthrough) of time-sharing is the idea that computers can be used interactively.
- Prior to this time, all systems were used offline. You submitted your job all at once, and picked up results later.
- Time-sharing provides the illusion that each user has complete control of the machine.
  * By rapidly switching back and forth (multiprogramming) between interactive user sessions.
  * Known as time slicing.
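The time-slicing idea can be sketched as a simple round-robin loop (a toy model; a real scheduler tracks far more state than a remaining-time counter):

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs: dict mapping job name -> total time units needed.
    Each job runs for at most `quantum` units, then goes to the back
    of the ready queue if unfinished. Returns the order in which
    jobs receive the CPU."""
    ready = deque(jobs.items())
    schedule = []
    while ready:
        name, remaining = ready.popleft()
        schedule.append(name)                # this job gets a time slice
        remaining -= quantum
        if remaining > 0:
            ready.append((name, remaining))  # unfinished: back of the line
    return schedule

print(round_robin({"A": 3, "B": 2, "C": 1}, quantum=1))
# ['A', 'B', 'C', 'A', 'B', 'A']
```

With a small enough quantum and a fast enough switch, each user sees what looks like a dedicated machine.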

Modern desktop and server OSes basically incorporate all of these conceptual breakthroughs in their design and operation. They are multiprogramming, time-shared systems, capable of being interacted with by the user in real time. They require that memory protection, timers, interrupts and privileged vs. non-privileged modes of operation be supported by the CPU.
2.3 Major Achievements

The 4 major theoretical advances in OS design:

1. Processes

2. Memory management
3. Information protection and security
4. Scheduling and resource management
All of these are consequences of the evolving concepts of what makes a well-written OS control system (the conceptual breakthroughs we talked about).
The Process

The process is a central concept in the design of most OSes. It is implied by the need of multiprogramming to have multiple user processes running at the same time. Usually the process is conceptualized as a program in execution that has been assigned a block of memory for its use that is (mostly) private from all other running processes. As we will later see, a process can be further broken down into separate execution threads, which all share this common block of memory of the process.
Processes typically consist of a block of context information, and the data and program code being run by the process. The OS maintains information about the status and resource allocations of each running process (often referred to as the Process Control Block or PCB).
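As a rough picture, a PCB can be thought of as a small record of per-process bookkeeping. The fields below are illustrative only, not taken from any particular OS:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A toy Process Control Block: the per-process bookkeeping an OS
    keeps so it can suspend and resume the process at will."""
    pid: int
    state: str = "ready"        # e.g. ready, running, blocked
    program_counter: int = 0    # where to resume execution
    memory_base: int = 0        # start of the process's memory block
    memory_limit: int = 0       # size of that memory block
    open_files: list = field(default_factory=list)

p = PCB(pid=42, memory_base=0x4000, memory_limit=0x1000)
print(p.state)  # a new process starts out "ready"
```

The key point is that everything needed to pause one process and run another lives in a structure like this, which is what makes switching between processes possible at all.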
Memory Management

Without memory management, multiprogramming systems quickly devolve into chaos. Whether through accident or malicious intent, when a running process can write to any memory it wants, systems quickly become unstable. Therefore memory management is necessary for process isolation. Even if one program is buggy or misbehaving, when it crashes it is effectively isolated and kept from affecting the stability and performance of the rest of the running processes on the system.
Memory management is an important topic in the design of OSes. One particular memory management scheme, Virtual Memory or VM, is particularly important, because it allows programs to run with much smaller real memory footprints: programs are given a virtual address space, only some of which needs to be in real memory at any given time.
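The core mechanism can be sketched as a page-table lookup (a toy model with made-up numbers; real hardware performs this translation in the MMU, with caching):

```python
PAGE_SIZE = 4096  # a common page size; real systems vary

def translate(virtual_addr, page_table):
    """Map a virtual address to a physical one via a page table
    (a dict: virtual page number -> physical frame number). Pages
    absent from the table are not resident in real memory; a real
    OS would handle the fault by loading the page from disk."""
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    if vpn not in page_table:
        raise KeyError(f"page fault: virtual page {vpn} not resident")
    return page_table[vpn] * PAGE_SIZE + offset

table = {0: 5, 1: 2}           # only two pages occupy real frames
print(translate(4100, table))  # page 1, offset 4 -> frame 2 -> 8196
```

Because only the pages actually in use need real frames, a program's virtual address space can be far larger than the physical memory backing it.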
Information Protection and Security

With the real-time interaction of time-sharing systems, it became much more important to protect the information of users from being seen by other users who should not have access to it. Thus most modern OSes contain a concept of users, and assign access levels or privileges to all files and resources, which indicate which users can view or use them, and which users can't.
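As a minimal sketch of the idea (the field names and rules here are invented for illustration; real systems such as Unix use owner/group/other permission bits on each file):

```python
def can_read(user, resource):
    """Toy access check: a user may read a resource if they own it
    or appear in its set of permitted readers."""
    return user == resource["owner"] or user in resource["readers"]

notes = {"owner": "alice", "readers": {"bob"}}
print(can_read("bob", notes))    # True: bob is a permitted reader
print(can_read("carol", notes))  # False: carol has no access
```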
Scheduling and Resource Management

The need to manage and allocate all resources fairly and efficiently is thus implied by the first two primary goals of OS design. In particular the CPU should be viewed as another (very important) resource, whose use needs to be scheduled in such a way that it is as fair as possible to all running processes' needs, while being efficient and utilized as much as possible.

The book only identifies the 4 theoretical advances, but I might argue for adding a 5th: the concept of files. With the advance of larger and more powerful secondary storage devices, a way of organizing information was needed. Files are the fundamental organizing concept for information on secondary storage devices, as abstracted by the OS for use by user application programs.
2.4 Developments Leading to Modern Operating Systems

This section goes into some of the more modern innovations occurring in
the design and implementation of OSes. We already mentioned one of these,
threading or multithreading, where the idea of a process is further broken
down into 2 or more threads of execution.
Microkernel architectures and object-oriented kernel architectures are related, and have to do with the organization of the OS components. These are mostly software engineering concerns that help develop more robust and flexible OS system software. Microkernels have been criticised in the past as being less efficient than monolithic kernels, but this may no longer be true with more modern implementations of microkernel architectures.
Possibly the biggest change in the past decade has been the wider need to support multiple CPUs, or CPUs with multiple computing cores. We will discuss some of the factors driving these trends. But currently most hardware systems you may be familiar with are likely no longer simple single-CPU systems, but actually SMP (symmetric multiprocessing) systems. In fact some smartphone architectures are really pushing the envelope, with 16 and even more cores.

2.5 Virtual Machines and Virtualization

We probably won't get much into virtualization and virtual machine implementations in this class. However you should still read this section, as we will touch on it here and there. Basically I would want you to understand that a VM is the implementation of a virtual CPU or instruction set that runs on top of a VM monitor. VMs are very important to the development of computing as a service and cloud computing systems, such as Amazon's EC2 (Elastic Compute Cloud) service.
2.6 OS Design Considerations for Multiprocessor and Multicore

This section goes into more of the details of what is driving multi-CPU / multi-core systems. It lists some of the additional issues and complexity we must deal with when we have more than one computing processor.