
UNIVERSITY OF PANGASINAN

PHINMA Education Network


Dagupan City

COLLEGE OF ENGINEERING, ARCHITECTURE AND INFORMATION TECHNOLOGY


Department of Electronics Engineering

INSTRUCTIONAL PLAN

Course Name: ITE 076 – OPERATING SYSTEMS
Credit Units: 3
No. of Hours/Week: 3 hrs lecture per week
Term: SY 2010-2011

Prerequisites/Co-requisite: COMPUTER SYSTEM ORGANIZATION WITH ASSEMBLY LANGUAGE

Course Description: The course covers the different policies and strategies used by an operating system. Topics include operating system structures, process
management, storage management, file systems, and distributed systems.

Course Goals: The Operating Systems course seeks to give students an understanding of how modern operating systems evolved from the fundamental concepts
of memory, process, storage, and file system management to the complex algorithms now used to produce high-end yet user-friendly
interfaces coupled with high-speed processing and strong security. It also seeks to introduce the basic principles and key factors in the design
of operating systems.

Course Objectives: At the end of the course, the students should be able to:
1) Understand the goals of an operating system,
2) Discuss the different algorithms used for CPU scheduling,
3) Describe the different memory management techniques,
4) Understand different file system implementations,
5) Discuss deadlock avoidance and resolution,
6) Know the basic concepts of distributed operating systems.
COURSE CONTENT AND COURSE PLAN
Week No. | Topics (Chapter Outline) | Activities / Assessment | Assignments (Research Work) | Text
1. Computer System Overview
1.1 Basic Elements
1.2 Processor Registers
1.2.1 User-Visible Registers
1.2.2 Control and Status Registers
(Assessment: Results of Quizzes & Periodic Examination)
1.3 Instruction Execution
1.3.1 Instruction Fetch and Execute
1.3.2 I/O Function
1.4 Interrupts
(Weeks 1-4)
1.4.1 Interrupts and the Instruction Cycle
1.4.2 Interrupt Processing
1.4.3 Multiple Interrupts
1.4.4 Multiprogramming
1.5 The Memory Hierarchy
1.6 Cache Memory
1.6.1 Motivation
1.6.2 Cache Principles
1.6.3 Cache Design
1.7 I/O Communication Techniques
1.7.1 Programmed I/O
1.7.2 Interrupt-Driven I/O
1.7.3 Direct Memory Access

Chapter Objective
An operating system mediates among application
programs, utilities, and users, on the one hand, and
the computer system hardware on the other. To
appreciate the functionality of the operating system
and the design issues involved, one must have
some appreciation for computer organization and
architecture. This chapter provides a brief survey
of the processor, memory, and Input/Output (I/O)
elements of a computer system.
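The fetch-and-execute cycle named in Section 1.3.1 can be sketched as a toy interpreter. The accumulator machine and its opcodes below are invented for illustration; they are not from the text.

```python
# Toy fetch-execute loop for a hypothetical one-register (accumulator) machine.
# Opcodes (made up for illustration): LOAD addr, ADD addr, STORE addr, HALT.
LOAD, ADD, STORE, HALT = range(4)

def run(program, memory):
    """Repeat the instruction cycle: fetch the next instruction,
    advance the program counter, then execute."""
    pc, acc = 0, 0
    while True:
        opcode, operand = program[pc]   # fetch stage
        pc += 1
        if opcode == LOAD:              # execute stage
            acc = memory[operand]
        elif opcode == ADD:
            acc += memory[operand]
        elif opcode == STORE:
            memory[operand] = acc
        elif opcode == HALT:
            return memory

mem = {0: 3, 1: 4, 2: 0}
# Compute mem[2] = mem[0] + mem[1].
run([(LOAD, 0), (ADD, 1), (STORE, 2), (HALT, None)], mem)
print(mem[2])  # 7
```

A real processor would also check for pending interrupts after each execute stage (Section 1.4.1); that step is omitted here to keep the sketch minimal.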

Chapter Outline
2. Operating System Overview
2.1 Operating System Objectives and
Functions
2.1.1 The Operating System as a
User/Computer Interface
2.1.2 The Operating System as
Resource Manager
2.1.3 Ease of Evolution of an
Operating System
2.2 The Evolution of Operating Systems
2.2.1 Serial Processing
2.2.2 Simple Batch Systems
2.2.3 Multiprogrammed Batch Systems
2.2.4 Time-Sharing Systems
2.3 Major Achievements
2.3.1 The Process
2.3.2 Memory Management
2.3.3 Information Protection and
Security
2.3.4 Scheduling and Resource
Management
2.3.5 System Structure
2.4 Developments Leading to Modern
Operating Systems
2.5 Microsoft Windows Overview
2.5.1 History
2.5.2 Single-User Multitasking
2.5.3 Architecture
2.5.4 Client/Server Model
2.5.5 Threads and SMP
2.5.6 Windows Objects
2.6 Traditional UNIX Systems
2.6.1 History
2.6.2 Description
2.7 Modern UNIX Systems
2.7.1 System V Release 4 (SVR4)
2.7.2 BSD
2.7.3 Solaris 10
2.8 Linux
2.8.1 History
2.8.2 Modular Structure
2.8.3 Kernel Components

Chapter Objective
The topic of operating system (OS) design covers a
huge territory, and it is easy to get lost in the
details and lose the context of a discussion of a
particular issue. This chapter provides an overview
of the objectives and functions of an operating
system. Then some historically important systems
and OS functions are described. This discussion
presents some fundamental OS design principles in
a simple environment so that the relationship
among various OS functions is clear. The chapter
next highlights important characteristics of modern
operating systems. The discussion in this chapter
alerts the students to the blend of established and
recent design approaches that must be addressed.
Finally, it presents an overview of Windows,
UNIX, and Linux; this discussion establishes the
general architecture of these systems, providing
context for the detailed discussions to follow.

Chapter Outline
3. Process Description and Control
3.1 What Is a Process?
3.1.1 Background
3.1.2 Processes and Process Control
Blocks
3.2 Process States
3.2.1 A Two-State Process Model
3.2.2 The Creation and Termination of
Processes
3.2.3 A Five-State Model
3.2.4 Suspended Processes
3.3 Process Description
3.3.1 Operating System Control Structures
3.3.2 Process Control Structures
3.4 Process Control
3.4.1 Modes of Execution
3.4.2 Process Creation
3.4.3 Process Switching
3.5 Execution of the Operating System
3.5.1 Non-process Kernel
3.5.2 Execution within User Processes
3.5.3 Process-Based Operating System
3.6 Security Issues
3.6.1 System Access Threats
3.6.2 Countermeasures

Chapter Description
The focus of a traditional operating system is the
management of processes. Each process is, at any
time, in one of a number of execution states,
including Ready, Running, and Blocked. The
operating system keeps track of these execution
states and manages the movement of processes
among the states. For this purpose the operating
system maintains rather elaborate data structures
describing each process. The operating system
must perform the scheduling function and provide
facilities for process sharing and synchronization.
This chapter looks at the data structures and
techniques used in a typical operating system for
process management.
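The five-state model of Section 3.2.3 can be captured as a small transition table. The state and event names below follow the common New/Ready/Running/Blocked/Exit convention; the code is an illustrative sketch, not an implementation from the text.

```python
# Five-state process model: only legal state transitions are permitted.
TRANSITIONS = {
    ("New", "admit"): "Ready",
    ("Ready", "dispatch"): "Running",
    ("Running", "timeout"): "Ready",       # preempted by the scheduler
    ("Running", "event_wait"): "Blocked",  # e.g. waiting for I/O completion
    ("Blocked", "event_occurs"): "Ready",
    ("Running", "release"): "Exit",
}

class Process:
    def __init__(self, pid):
        # A minimal process control block: identifier plus current state.
        self.pid, self.state = pid, "New"

    def signal(self, event):
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"illegal transition: {key}")
        self.state = TRANSITIONS[key]

p = Process(1)
for e in ("admit", "dispatch", "event_wait", "event_occurs"):
    p.signal(e)
print(p.state)  # Ready
```

A real process control block carries much more (registers, priority, memory pointers, accounting data); the point here is that the operating system only ever moves a process along these defined edges.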

Chapter Outline
4. Threads, SMP, and Microkernels
4.1 Processes and Threads
4.1.1 Multithreading
4.1.2 Thread Functionality
4.1.3 User-Level and Kernel-Level
Threads
4.1.4 Other Arrangements
4.2 Symmetric Multiprocessing
4.2.1 SMP Architecture
4.2.2 SMP Organization
4.2.3 Multiprocessor Operating System
Design Considerations
4.3 Microkernels
4.3.1 Microkernel Architecture
4.3.2 Benefits of a Microkernel
Organization
4.3.3 Microkernel Performance
4.3.4 Microkernel Design
Chapter Description
This chapter covers three areas that characterize
many contemporary operating systems and that
represent advances over traditional operating
system design. In many operating systems, the
traditional concept of process has been split into
two parts: one dealing with resource ownership
(process) and one dealing with the stream of
instruction execution (thread). A single process
may contain multiple threads. A multithreaded
organization has advantages both in the structuring
of applications and in performance. The chapter
also examines the symmetric multiprocessor
(SMP), which is a computer system with multiple
processors, each of which is able to execute all
application and system code. SMP organization
enhances performance and reliability. SMP is often
used in conjunction with multithreading but can
have powerful performance benefits even without
multithreading. Finally, this chapter examines the
microkernel, which is a style of operating system
design that minimizes the amount of system code
that runs in kernel mode. The advantages of this
approach are analyzed.
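The split between resource ownership (process) and execution stream (thread) described above can be demonstrated with Python's threading module: several threads of one process share the same address space. The worker function and shared list below are illustrative.

```python
import threading

# Threads of a single process share its address space, so all of them
# can append to the same list without copying data between them.
results = []
lock = threading.Lock()

def worker(n):
    # Each thread computes independently, then updates shared state.
    value = n * n
    with lock:                 # mutual exclusion on the shared list
        results.append(value)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                   # wait for every thread to finish
print(sorted(results))  # [0, 1, 4, 9]
```

On an SMP machine these threads could run on different processors simultaneously; the program text does not change, which is part of the appeal of the multithreaded organization.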

Chapter Outline
5. Concurrency: Mutual Exclusion and
Synchronization
5.1 Principles of Concurrency
5.1.1 A Simple Example
5.1.2 Race Condition
5.1.3 Operating System Concerns
5.1.4 Process Interaction
5.1.5 Requirements for Mutual Exclusion
5.2 Mutual Exclusion: Hardware Support
5.2.1 Interrupt Disabling
5.2.2 Special Machine Instructions
5.3 Semaphores
5.3.1 Mutual Exclusion
5.3.2 The Producer/Consumer Problem
5.3.3 Implementation of Semaphores
5.4 Monitors
5.4.1 Monitor with Signal
5.4.2 Alternate Model of Monitors with
Notify and Broadcast
5.5 Message Passing
5.5.1 Synchronization
5.5.2 Addressing
5.5.3 Message Format
5.5.4 Queuing Discipline
5.5.5 Mutual Exclusion

Chapter Description
The two central themes of modern operating
systems are multiprogramming and distributed
processing. Fundamental to both these themes, and
fundamental to the technology of operating system
design, is concurrency. This chapter looks at two
aspects of concurrency control: mutual exclusion
and synchronization. Mutual exclusion refers to the
ability of multiple processes (or threads) to share
code, resources, or data in such a way that only
one process has access to the shared object at a
time. Related to mutual exclusion is
synchronization: the ability of multiple processes
to coordinate their activities by the exchange of
information. This chapter provides a broad
treatment of issues related to concurrency,
beginning with a discussion of the design issues
involved. The chapter provides a discussion of
hardware support for concurrency and then looks
at the most important mechanisms to support
concurrency: semaphores, monitors, and message
passing.
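The producer/consumer problem of Section 5.3.2 is the classic showcase for semaphores. The sketch below uses the standard three-semaphore solution (free slots, filled slots, and a binary semaphore for the buffer); the buffer size and item count are arbitrary.

```python
import threading, collections

# Bounded-buffer producer/consumer with counting semaphores.
BUF_SIZE = 3
buffer = collections.deque()
empty = threading.Semaphore(BUF_SIZE)  # counts free slots
full = threading.Semaphore(0)          # counts filled slots
mutex = threading.Semaphore(1)         # mutual exclusion on the buffer

consumed = []

def producer(items):
    for item in items:
        empty.acquire()        # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()         # signal one more filled slot

def consumer(count):
    for _ in range(count):
        full.acquire()         # wait for a filled slot
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()        # signal one more free slot

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed)  # [0, 1, ..., 9] in order (single producer, single consumer)
```

Note the ordering discipline: the producer waits on `empty` before taking `mutex`, never the reverse; swapping those two acquires is a textbook route to deadlock.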

Chapter Outline
6. Concurrency: Deadlock and Starvation
6.1 Principles of Deadlock
6.1.1 Reusable Resources
6.1.2 Consumable Resources
6.1.3 Resource Allocation
6.1.4 Graphs
6.1.5 The Conditions for Deadlock
6.2 Deadlock Prevention
6.2.1 Mutual Exclusion
6.2.2 Hold and Wait
6.2.3 No Pre-emption
6.2.4 Circular Wait
6.3 Deadlock Avoidance
6.3.1 Process Initiation Denial
6.3.2 Resource Allocation Denial
6.4 Deadlock Detection
6.4.1 Deadlock Detection
6.4.2 Algorithm
6.4.3 Recovery
6.5 An Integrated Deadlock Strategy
6.6 Dining Philosophers Problem
6.6.1 Solution Using Semaphores
6.6.2 Solution Using a Monitor
Chapter Description
This chapter looks at two additional aspects of
concurrency control. Deadlock refers to a situation
in which a set of two or more processes are waiting
for other members of the set to complete an
operation in order to proceed, but none of the
members is able to proceed. Deadlock is a difficult
phenomenon to anticipate, and there are no easy
general solutions to this problem. The chapter
looks at the three major approaches to dealing with
deadlock: prevention, avoidance, and detection.
Starvation refers to a situation in which a process
is ready to execute but is continuously denied
access to a processor in deference to other
processes. In large part, starvation is dealt with as a
scheduling issue. Starvation is addressed here in the
context that solutions to deadlock must also avoid the
problem of starvation.
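Deadlock avoidance by resource allocation denial (Section 6.3.2) is usually illustrated with the banker's algorithm: grant a request only if the resulting state is safe. The safety check below is a standard sketch; the process and resource matrices are a made-up example.

```python
def is_safe(available, allocation, need):
    """Banker's safety check: return True if some ordering lets every
    process run to completion with the currently available resources."""
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can finish and release everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Hypothetical state: 3 processes, 2 resource types.
allocation = [[1, 0], [0, 1], [1, 1]]   # currently held
need = [[1, 1], [1, 0], [0, 1]]         # still required to finish
print(is_safe([1, 1], allocation, need))  # True: a safe sequence exists
print(is_safe([0, 0], allocation, need))  # False: no process can proceed
```

Avoidance is conservative: an unsafe state is not necessarily deadlocked, but the operating system refuses to enter it because deadlock can no longer be ruled out.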

Chapter Outline
7. Memory Management
7.1 Memory Management Requirements
7.1.1 Relocation
7.1.2 Protection
7.1.3 Sharing
7.1.4 Logical Organization
7.1.5 Physical Organization
7.2 Memory Partitioning
7.2.1 Fixed Partitioning
7.2.2 Dynamic Partitioning
7.2.3 Buddy System
7.2.4 Relocation
7.3 Paging
7.4 Segmentation
7.5 Security Issues
7.5.1 Buffer Overflow Attacks
7.5.2 Defending against Buffer Overflows

Chapter Description
This chapter provides an overview of the
fundamental mechanisms used in memory
management. First, the basic requirements of any
memory management scheme are summarized.
Then the use of memory partitioning is introduced.
This technique is not much used except in special
cases, such as kernel memory management.
However, a review of memory partitioning
illuminates many of the design issues involved in
memory management. The remainder of the
chapter deals with two techniques that form the
basic building blocks of virtually all memory
management systems: paging and segmentation.
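Dynamic partitioning (Section 7.2.2) needs a placement algorithm to choose which hole to allocate from. A first-fit sketch over a hypothetical free list:

```python
def first_fit(free_blocks, size):
    """Allocate `size` units from the first hole large enough.
    free_blocks is a list of (start, length) holes; returns the
    allocated start address (or None) and the updated free list."""
    for i, (start, length) in enumerate(free_blocks):
        if length >= size:
            remaining = length - size
            new_free = free_blocks[:i] + free_blocks[i + 1:]
            if remaining:
                # Keep the leftover fragment as a smaller hole.
                new_free.insert(i, (start + size, remaining))
            return start, new_free
    return None, free_blocks  # external fragmentation: no hole fits

holes = [(0, 100), (200, 50), (400, 300)]
addr, holes = first_fit(holes, 120)
print(addr)    # 400: the first hole of at least 120 units
print(holes)   # [(0, 100), (200, 50), (520, 180)]
```

Best-fit and next-fit differ only in which hole they pick; all of them leave behind the shrinking fragments that motivate paging, where fixed-size frames make external fragmentation disappear.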

Chapter Outline
8. Virtual Memory
8.1 Hardware and Control Structures
8.1.1 Locality and Virtual Memory
8.1.2 Paging
8.1.3 Segmentation
8.1.4 Combined Paging and Segmentation
8.1.5 Protection and Sharing
8.2 Operating System Software
8.2.1 Fetch Policy
8.2.2 Placement Policy
8.2.3 Replacement Policy
8.2.4 Resident Set Management
8.2.5 Cleaning Policy
8.2.6 Load Control

Chapter Description
Virtual memory, based on the use of either paging
or the combination of paging and segmentation, is
the almost universal approach to memory
management on contemporary machines. Virtual
memory is a scheme that is transparent to the
application processes and allows each process to
behave as if it had unlimited memory at its
disposal. To achieve this, the operating system
creates for each process a virtual address space,
or virtual memory, on disk. Part of the
virtual memory is brought into real main memory
as needed. In this way, many processes can share a
relatively small amount of main memory. For
virtual memory to work effectively, hardware
mechanisms are needed to perform the basic
paging and segmentation functions, such as
address translation between virtual and real
addresses. This chapter begins with an overview of
these hardware mechanisms. The remainder of the
chapter is devoted to operating system design
issues relating to virtual memory.
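The replacement policy of Section 8.2.3 decides which resident page to evict on a fault. A least-recently-used sketch over a reference string (the string and frame count below are an arbitrary example):

```python
from collections import OrderedDict

def lru_faults(references, frames):
    """Count page faults under a least-recently-used replacement
    policy with a fixed number of page frames."""
    resident = OrderedDict()   # pages in memory, least recently used first
    faults = 0
    for page in references:
        if page in resident:
            resident.move_to_end(page)        # refresh recency on a hit
        else:
            faults += 1
            if len(resident) == frames:
                resident.popitem(last=False)  # evict least recently used
            resident[page] = True
    return faults

print(lru_faults([2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2], 3))  # 7 faults
```

True LRU needs per-reference bookkeeping that is expensive in hardware, which is why practical systems approximate it (e.g. with use bits and the clock policy).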

Chapter Outline
9. Uniprocessor Scheduling
9.1 Types of Processor Scheduling
9.1.1 Long-Term Scheduling
9.1.2 Medium-Term Scheduling
9.1.3 Short-Term Scheduling
9.2 Scheduling Algorithms
9.2.1 Short-Term Scheduling
Criteria
9.2.2 The Use of Priorities
9.2.3 Alternative Scheduling
Policies
9.2.4 Performance Comparison
9.2.5 Fair-Share Scheduling

Chapter Description
This chapter concerns scheduling on a system with
a single processor. In this limited context, it is
possible to define and clarify many design issues
related to scheduling. It begins with an
examination of the three types of processor
scheduling: long term, medium term, and short
term. The bulk of the chapter focuses on short-term
scheduling issues. The various algorithms that
have been tried are examined and compared.
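The short-term scheduling criteria of Section 9.2.1, such as turnaround time, are easiest to compare by simulation. A first-come-first-served sketch with a hypothetical workload:

```python
def fcfs(jobs):
    """jobs: list of (arrival, service) pairs sorted by arrival time.
    Runs each job to completion in arrival order and returns the
    per-job turnaround times (finish time minus arrival time)."""
    clock, turnaround = 0, []
    for arrival, service in jobs:
        start = max(clock, arrival)   # CPU may sit idle until arrival
        clock = start + service
        turnaround.append(clock - arrival)
    return turnaround

# Arbitrary example workload.
jobs = [(0, 3), (2, 6), (4, 4), (6, 5), (8, 2)]
t = fcfs(jobs)
print(t)                # [3, 7, 9, 12, 12]
print(sum(t) / len(t))  # 8.6 average turnaround time
```

Replacing the arrival-order loop with a shortest-service-first pick, or with time-sliced round robin, gives the alternative policies of Section 9.2.3; running them on the same workload is what the performance comparison in Section 9.2.4 amounts to.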

Chapter Outline
10. Multiprocessor and Real-Time
Scheduling
10.1 Multiprocessor Scheduling
10.1.1 Granularity
10.1.2 Design Issues
10.1.3 Process Scheduling
10.1.4 Thread Scheduling
10.2 Real-Time Scheduling
10.2.1 Background
10.2.2 Characteristics of Real-Time
Operating Systems
10.2.3 Real-Time Scheduling
10.2.4 Deadline Scheduling
10.2.5 Rate Monotonic Scheduling

Chapter Description
The chapter looks at two areas that are the focus of
contemporary scheduling research. The presence
of multiple processors complicates the scheduling
decision and opens up new opportunities. In
particular, with multiple processors it is possible
simultaneously to schedule for execution multiple
threads within the same process. The first part of
the chapter provides a survey of multiprocessor
and multithreaded scheduling. The remainder of
the chapter deals with real-time scheduling. Real-
time requirements are the most demanding for a
scheduler to meet, because requirements go
beyond fairness or priority by specifying time
limits for the start or finish of given tasks or
processes.
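Rate monotonic scheduling (Section 10.2.5) comes with a classic sufficient schedulability test due to Liu and Layland: the total processor utilization must not exceed n(2^(1/n) - 1) for n periodic tasks. The task set below is hypothetical.

```python
def rm_schedulable(tasks):
    """tasks: list of (execution_time, period) pairs for periodic tasks.
    Sufficient (not necessary) rate monotonic test:
    sum of C_i / T_i must be <= n * (2**(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / p for c, p in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound, utilization, bound

ok, u, bound = rm_schedulable([(20, 100), (40, 150), (100, 350)])
print(ok)                              # True: the set passes the test
print(round(u, 3), round(bound, 3))    # utilization vs. the bound
```

Because the test is only sufficient, a task set that exceeds the bound may still be schedulable; an exact answer requires response-time analysis of each task at its priority level.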

1. Introduction
1.1 Computer hardware structures
1.2 Operating systems concepts
2. Operating Systems Services
2.1 Process management
2.1.1 Process model and control
2.1.2 Threads
2.1.3 Concurrency
2.1.4 Deadlocks
2.2 Memory Management
2.2.1 Stores and store management
2.2.2 Paging and virtual memory
2.3 Scheduling
2.4 I/O management
2.5 File management
2.6 Command interpreter
3. Distributed Systems
3.1 Basic concepts
3.2 Distributed processing
3.3 Distributed process management

THE LEARNING ENVIRONMENT


Instructional Facilities: Chalk and blackboard, PowerPoint presentations, etc.
Teaching Methods: Lecture, discussions, reports
Required Texts: None
Recommended Texts/Resources:
1. Operating Systems: Internals and Design Principles, 6th Edition, by William Stallings
Required Materials and Attire: Provided Materials:
Description of Learners: BS CpE 5th year students
ASSESSMENT, EVALUATION AND GRADING
Course Requirements: Attend the class regularly. Must pass the major examination.
Participate in all class activities. Perform & participate in all laboratory activities.
Grading Procedure: The grading system of the university applies.
Grading Scale Method:
Grade Posting: Class Record
Missed Assessment/Requirement: Students will be given incomplete grades for missed requirements.
Support Services: 2 hrs/wk will be allotted for student consultation.
GENERAL INFORMATION
Instructional Plan Amendments:
Last IP Revision Date:
Student Conduct/Class Policies:
Students are required to attend class regularly; those who fail to do so will be dropped from the class.
All assignments and research work must be submitted on the assigned due date. Any assignment not handed in on the due date will not be accepted unless a valid reason is presented.
All examinations must be taken at the scheduled time. Incomplete grades will be given to students for missed examinations.
Students must participate in all laboratory activities.
Additional Information:

FACULTY MEMBER INFORMATION


Instructor’s Name/s: Engr. Lyndon G. Padama, CpE (MSME student)
Contact Information: 326 Pogoruac, Burgos, Pangasinan / 09086595451 / alloc_21_lyndon@yahoo.com.ph
Teaching Philosophy:
