“All the code you didn’t write” in order to implement your application
2. The OS abstracts hardware into logical resources and well-defined interfaces to those
resources:
a. Computation (CPU)
b. Files (disk)
c. Sockets (network)
1. Application benefits
a. Programming simplicity
2. User benefits
i. Safety
7. Reliability: what happens if something goes wrong (either with hardware or with a program)?
10. Concurrency: how are parallel activities (computation and I/O) created and controlled?
12. Persistence: how do you make data last longer than program execution?
14. Accounting: how do we keep track of resource usage, and perhaps charge for it?
2. Operating system: controls and coordinates the use of the hardware among the various application
programs for the various users.
3. Applications programs: define the ways in which the system resources are used to solve the
computing problems of the users (compilers, database systems, video games, business programs)
[Figure: abstract view of system components - application programs, operating system, computer hardware]
2. Control program: controls the execution of user programs and operations of I/O devices.
3. Kernel: the one program running at all times (all else being application programs)
Serial processing
1. No operating system
2. Machines run from a console with display lights and toggle switches, input device and printer.
3. Scheduling of machine time (via sign-up sheets)
4. Setup included loading the compiler and the source program, saving the compiled program, and
loading and linking.
1. Monitors
Multiprogramming
1. When one job needs to wait for I/O, the processor can switch to the other job
2. Several jobs are kept in main memory at the same time, and the CPU is multiplexed among
them.
[Figure: memory layout for a multiprogramming system - operating system plus Jobs 1-4 resident in main memory]
OS features needed for multiprogramming
2. Memory management – the system must allocate memory to several jobs.
3. CPU scheduling – the system must choose among several jobs ready to run.
4. Allocation of devices.
4. The CPU is multiplexed among several jobs that are kept in memory and on disk (the CPU is
allocated to a job only if the job is in memory).
Desktop systems
4. Can adopt technology developed for larger operating systems.
5. Individuals have sole use of the computer and do not need advanced CPU utilization or protection
features.
Parallel systems
a. Increased throughput
b. Economical
c. Increased reliability
i. Graceful degradation
ii. Fail-soft systems
Asymmetric multiprocessing
1. Each processor is assigned a specific task; a master processor schedules and allocates work to slave
processors.
Distributed systems
2. Loosely coupled system-each processor has its own local memory; processors communicate with
one another through various communications lines, such as high-speed buses or telephone lines.
a. Resource sharing
c. Reliability
d. Communications
Clustered systems
3. Asymmetric clustering: one server runs the application while the other servers stand by.
Real-time systems
1. Often used as a control device in a dedicated application such as controlling scientific experiments,
medical imaging systems, industrial control systems and some display systems.
Hard real-time
1. Secondary storage limited or absent, data stored in short term memory, or read-only memory (ROM)
Soft real-time
Handheld systems
2. Cellular telephones
3. Issues:
a. Limited memory
b. Slow processors
A process is a program in execution. A process needs certain resources, including CPU time,
memory, files, and I/O devices, to accomplish its task.
The operating system is responsible for the following activities in connection with process
management.
Process synchronization.
Process communication.
Main-Memory Management
Memory is a large array of words or bytes, each with its own address. It is a repository of quickly
accessible data shared by the CPU and I/O devices.
Main memory is a volatile storage device. It loses its contents in the case of system failure.
The operating system is responsible for the following activities in connection with memory
management.
Keep track of which parts of memory are currently being used and by whom.
File Management
A file is a collection of related information defined by its creator. Commonly, files represent
programs (both source and object forms) and data.
The operating system is responsible for the following activities in connection with file management.
Secondary-Storage Management
Since main memory (primary storage) is volatile and too small to accommodate all data and
programs permanently, the computer system must provide secondary storage to back up main
memory.
Most modern computer systems use disks as the principal on-line storage medium, for both programs
and data.
The operating system is responsible for the following activities in connection with disk management:
Storage allocation.
Disk scheduling.
A distributed system is a collection of processors that do not share memory or a clock. Each processor
has its own local memory.
Computation speed-up.
Increased availability.
Enhanced reliability.
Protection system
1. Protection refers to a mechanism for controlling access by programs, processes, or users to both
system and user resources.
Command-interpreter system
Many commands are given to the operating system by control statements which deal with:
1. Process creation and management
2. I/O handling
3. Secondary-storage management
4. Main-memory management
5. File-system access
6. Protection
7. Networking
8. The program that reads and interprets control statements is called variously
a. Command-line interpreter
b. Shell (in UNIX)
9. Its function is to get and execute the next command statement.
Additional functions exist not for helping the user, but rather for ensuring efficient system operations.
1. Resource allocation-allocating resources to multiple users or multiple jobs running at the same time.
2. Accounting-keep track of and record which users use how much and what kinds of computer
resources for account billing or for accumulating usage statistics.
3. Protection-ensuring that all access to system resources is controlled.
System calls
1. System calls provide the interface between a running program and the operating system.
2. Generally available as assembly-language instructions.
3. Languages defined to replace assembly language for systems programming allow system calls to be
made directly (e.g., C, C++).
4. Three general methods are used to pass parameters between a running program and
the operating system.
a. Pass parameters in registers.
b. Store the parameters in a table in memory, and the table address is passed as a parameter in a
register.
c. Push (store) the parameter onto the stack by the program, and pop off the stack by operating
system.
[Figure: passing of parameters as a table]
Types of system calls:
1. Process control
2. File management
3. Device management
4. Information maintenance
5. Communications
System programs
1. System programs provide a convenient environment for program development and execution. They
can be divided into:
a. File manipulation
b. Status information
c. File modification
d. Programming language support
e. Program loading and execution
f. Communications
g. Application programs
Most users’ view of the operating system is defined by system programs, not the actual system calls.
1. UNIX-limited by hardware functionality, the original UNIX operating system had limited
structuring. The UNIX OS consists of two separable parts.
2. Systems programs
3. The kernel
a. Consists of everything below the system-call interface and above the physical hardware.
b. Provides the file system, CPU scheduling, memory management, and other operating system
functions, a large number of functions for one level.
1. The operating system is divided into a number of layers (levels), each built on top of lower layers.
The bottom layer (layer 0), is the hardware, the highest (layer N) is the user interface.
2. With modularity, layers are selected such that each uses functions (operations) and services of only
lower-level layers.
1. A virtual machine takes the layered approach to its logical conclusion. It treats hardware and the
operating system kernel as though they were all hardware.
2. A virtual machine provides an interface identical to the underlying bare hardware.
3. The operating system creates the illusion of multiple processes, each executing on its own processor
with its own (virtual) memory.
4. The resources of the physical computer are shared to create the virtual machines.
a. CPU scheduling can create the appearance that users have their own processor.
b. Spooling and a file system can provide virtual card readers and virtual line printers.
c. A normal user time-sharing terminal serves as the virtual machine operator’s console.
System models
Advantages/Disadvantages of virtual machines
1. The virtual machine concept provides complete protection of system resources since each virtual
machine is isolated from all other virtual machines. This isolation, however, permits no direct
sharing of resources.
2. A virtual-machine system is a perfect vehicle for operating systems research and development.
System development is done on the virtual machine, instead of on a physical machine and so does
not disrupt normal system operation.
3. The virtual machine concept is difficult to implement due to the effort required to provide an exact
duplicate of the underlying machine.
1. Compiled Java programs are platform-neutral bytecodes executed by a Java virtual machine (JVM).
2. JVM consists of
a. Class loader
b. Class verifier
c. Runtime interpreter
d. Just-In-Time (JIT) compilers increase performance
1. User goals-operating system should be convenient to use, easy to learn, reliable, safe and fast.
2. System goals-OS should be easy to design, implement and maintain, as well as flexible, reliable,
error-free and efficient.
System implementation
What’s in a process?
A process consists of (at least):
An address space
The code for the running program
The data for the running program
An execution stack and stack pointer (SP)
Traces state of procedure calls made
The program counter (PC), indicating the next instruction.
A set of general-purpose processor registers and their values
A set of OS resources
Open files, network connections, sound channels ………
The process is a container for all of this state
A process is named by a process ID (PID)
Just an integer (actually, typically a short)
Process State
As a process executes, it changes state
1. New: The process is being created.
2. Running: Instructions are being executed.
3. Waiting: The process is waiting for some event to occur.
4. Ready: The process is waiting to be assigned to a processor.
5. Terminated: The process has finished execution.
[Figure: Process State Transition Diagram - new, ready, running, waiting, terminated]
Process Control Block (PCB)
Information associated with each process
Process state
Program counter
CPU registers
CPU scheduling information
Memory-management information
Accounting information
I/O status information
[Figure: Process Control Block - pointer, process state, process number, program counter, registers, memory limits, list of open files, ...]
[Figure: CPU switch from process to process - the operating system saves the state of P0 and loads the saved state of P1]
[Figure: Representation of Process Scheduling - ready queue and I/O device queues with head/tail pointers linking PCBs, feeding the CPU]
[Figure: queueing diagram - jobs leave the CPU when the time slice expires, when they fork a child, or on an I/O request, and re-enter the ready queue; partially executed swapped-out processes are swapped back in]
Context Switch
When CPU switches to another process, the system must save the state of the old process and
load the saved state for the new process.
Context-switch time is overhead; the system does no useful work while switching.
Time dependent on hardware support.
Process Creation
Parent process creates children processes, which, in turn create other processes, forming a tree of
processes.
Resource sharing
Parent and children share all resources.
Children share subset of parent’s resources.
Parent and child share no resources.
Execution
Parent and children execute concurrently.
Parent waits until children terminate.
Address space
Child duplicate of parent.
Child has a program loaded into it.
UNIX examples
fork system call creates a new process.
exec system call used after a fork to replace the process's memory space with a new program (see
the sketch below).
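A minimal C sketch of this pattern (not from the notes; /bin/ls is just an example program to load):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();            /* create a child process */
    if (pid < 0) {                 /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {         /* child: replace memory space with a new program */
        execlp("/bin/ls", "ls", (char *)NULL);
        perror("execlp");          /* reached only if exec fails */
        exit(1);
    } else {                       /* parent: wait until the child terminates */
        wait(NULL);
        printf("child complete\n");
    }
    return 0;
}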
The scheduler is the module that moves jobs from queue to queue
The scheduling algorithm determines which job(s) are chosen to run next, and which queues they
should wait on.
The scheduler is typically run when:
A job switches from running to waiting
when an interrupt occurs
especially a timer interrupt
when a job is created or terminated
There are two major classes of scheduling systems
In preemptive systems, the scheduler can interrupt a job and force a context switch.
In non-preemptive systems, the scheduler waits for the running job to explicitly
(voluntarily) block
Scheduling Goals
Scheduling algorithms can have many different goals (which sometimes conflict)
maximize CPU utilization
maximize job throughput (#job/s)
minimize job turnaround time (Tfinish - Tstart)
minimize job waiting time (Avg(Twait): average time spent on the wait queue)
minimize response time (Avg(Tresp): average time from request submission to first response)
Goals may depend on type of system
batch systems: strive to maximize job throughput and
minimize turnaround time
interactive systems: minimize response time of interactive jobs (such as editors or
web browsers)
Scheduler Non-goals
Schedulers typically try to prevent starvation
Starvation occurs when a process is prevented from making progress, because another
process has a resource it needs
A poor scheduling policy can cause starvation
-- e.g., if a high-priority process always prevents a low-priority process from running on the CPU
Synchronization can also cause starvation
CPU Scheduler
Selects from among the processes in memory that are ready to execute, allocates the CPU to one of
them.
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state.
2. Switches from running to ready state.
3. Switches from waiting to ready.
4. Terminates.
Scheduling under 1 and 4 is non-preemptive.
All other scheduling is preemptive.
Dispatcher
Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this
involves:
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that program
Dispatch latency - time it takes for the dispatcher to stop one process and start another running
Scheduling Criteria
CPU utilization–keep the CPU as busy as possible
Throughput - # of processes that complete their execution per time unit
Turnaround time – amount of time to execute a particular process
Waiting time – amount of time a process has been waiting in the ready queue
Response time – amount of time it takes from when a request was submitted until the first response
is produced, not output (for time-sharing environment)
Optimization Criteria
Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time
FCFS example: processes P1, P2, P3 with CPU burst times 24, 3, and 3.
Schedule P1, P2, P3: | P1 | P2 | P3 | with boundaries 0, 24, 27, 30
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0+24+27)/3 = 17
Schedule P2, P3, P1: | P2 | P3 | P1 | with boundaries 0, 3, 6, 30
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6+0+3)/3 = 3
Much better than the previous case
Convoy effect: short processes stuck behind a long process
Problems of FCFS:
Average response time and turnaround time can
be large
E.g., small jobs waiting behind long ones
Results in high turnaround time
May lead to poor overlap of I/O and CPU
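A small C sketch (illustrative, not from the notes) of the FCFS waiting-time arithmetic in the
example above, assuming all three jobs arrive at time 0:

#include <stdio.h>

int main(void)
{
    int burst[] = {3, 3, 24};          /* P2, P3, P1 in arrival order */
    int n = 3, clock = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += clock;           /* each job waits for all earlier bursts */
        clock += burst[i];
    }
    printf("average waiting time = %.1f\n", (double)total_wait / n);  /* 3.0 */
    return 0;
}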
SJF example: processes P1, P2, P3, P4 with arrival times 0, 2, 4, 5 and burst times 7, 4, 1, 4.
SJF (non-preemptive): | P1 | P3 | P2 | P4 | with boundaries 0, 7, 8, 12, 16
Average waiting time = (0+6+3+7)/4 = 4
SJF (preemptive): | P1 | P2 | P3 | P2 | P4 | P1 | with boundaries 0, 2, 4, 5, 7, 11, 16
Average waiting time = (9+1+0+2)/4 = 3
Problems of SJF:
impossible to know the size of the future CPU burst
(from your theory class: equivalent to the halting problem)
Can you make a reasonable guess?
yes, for instance by looking at the past as a predictor of the future (see the sketch below)
But this might lead to starvation in some cases!
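One common predictor is exponential averaging of past bursts, tau_(n+1) = alpha * t_n + (1 - alpha) * tau_n.
A C sketch with made-up burst values and alpha:

#include <stdio.h>

int main(void)
{
    double alpha = 0.5;                      /* weight on the most recent burst */
    double tau = 10.0;                       /* initial guess for the next burst */
    double burst[] = {6.0, 4.0, 6.0, 13.0};  /* observed CPU bursts */
    for (int i = 0; i < 4; i++) {
        tau = alpha * burst[i] + (1.0 - alpha) * tau;
        printf("after burst %.0f, predicted next burst = %.2f\n", burst[i], tau);
    }
    return 0;
}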
Priority Scheduling
A priority number (integer) is associated with each process
The CPU is allocated to the process with the highest priority (smallest integer = highest priority).
Preemptive
Non-Preemptive
SJF is a priority scheduling where priority is the predicted next CPU burst time.
Problem ≡ Starvation – low-priority processes may never execute.
Solution ≡ Aging – as time progresses, increase the priority of the process (see the sketch below).
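A toy C sketch of aging (entirely illustrative; smaller number = higher priority, as stated above):

#include <stdio.h>

#define NPROC 3

int main(void)
{
    int priority[NPROC] = {5, 60, 127};     /* 127 = lowest priority */
    for (int tick = 0; tick < 127; tick++)  /* one pass per scheduling tick */
        for (int i = 0; i < NPROC; i++)
            if (priority[i] > 0)
                priority[i]--;              /* age: raise priority over time */
    for (int i = 0; i < NPROC; i++)
        printf("P%d priority is now %d\n", i, priority[i]);
    return 0;
}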
Round Robin example (time quantum = 20): processes P1, P2, P3, P4 with burst times 53, 17, 68, 24.
| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
0 20 37 57 77 97 117 121 134 154 162
Typically, higher average turnaround than SJF, but better response.
[Figure: how a smaller time quantum increases context switches - a 10-unit job needs 0, 1, or 9 context switches for quanta of 12, 6, and 1]
[Figure: Turnaround Time Varies With the Time Quantum]
Multilevel Queue
Ready queue is partitioned into separate queues:
Foreground (interactive)
Background (batch)
Each queue has its own scheduling algorithm,
Foreground – RR
Background – FCFS
Scheduling must be done between the queues.
Fixed priority scheduling; (i.e., serve all from foreground then from background). Possibility
of starvation.
Time slice – each queue gets a certain amount of CPU time which it can schedule amongst
its processes; i.e., 80% to foreground in RR
20% to background in FCFS
[Figure: multilevel queue scheduling - separate queues for interactive processes, batch processes, and student processes]
Multilevel Feedback Queue
A process can move between the various queues; aging can be implemented this way.
Multilevel-feedback-queue scheduler defined by the following parameters:
number of queues
scheduling algorithms for each queue
method used to determine when to upgrade a process
method used to determine when to demote a process
method used to determine which queue a process will enter when that process needs service
[Figure: multilevel feedback queues - Q0 with quantum 8, Q1 with quantum 16, Q2 served FCFS]
Multiple-processor Scheduling
Real-Time Scheduling
Hard real-time systems – required to complete a critical task within a guaranteed amount of time.
Soft real-time computing – requires that critical processes receive priority over less fortunate ones
5. Process Synchronization
Background
Shared data (used by both the producer and the consumer):
#define BUFFER_SIZE 10
typedef struct {
DATA data;
} item;
item buffer[BUFFER_SIZE];
int in = 0; /* location of next input to buffer */
int out = 0; /* location of next removal from buffer */
Algorithm 2
Shared variables:
boolean flag[2];
Initially flag[0] = flag[1] = false.
flag[i] = true means Pi is ready to enter its critical section.
do {
flag[i] = true;
while (flag[j]);
critical section
flag[i] = false;
remainder section
} while (1);
Are the three critical section requirements met?
Algorithm 3
do {
flag[i] = true;
turn = j;
while (flag[j] && turn == j);
critical section
flag[i] = false;
remainder section
} while (1);
Are the three critical section requirements met?
Semaphores:
PURPOSE:
We want to be able to write more complex constructs and so need a language to do so. We thus define
semaphores, which we assume are atomic operations:
WAIT(S):
while (S <= 0);
S = S - 1;
SIGNAL(S):
S = S + 1;
As given here, these are not atomic as written in "macro code". We define these operations,
however, to be atomic (protected by a hardware lock).
FORMAT:
Mutual exclusion (mutex initialized to 1):
wait(mutex);
CRITICAL SECTION
signal(mutex);
REMAINDER
Semaphores can be used to force synchronization (precedence) if the predecessor does a signal at the
end, and the follower does a wait at the beginning. For example, here we want P1 to execute before P2:
P1:
statement 1;
signal(synch);
P2:
wait(synch);
statement 2;
typedef struct {
int value;
struct process *list; /* linked list of process-table entries waiting on S */
} SEMAPHORE;

SEMAPHORE s;

wait(s) {
s.value = s.value - 1;
if (s.value < 0) {
add this process to s.list;
block;
}
}

signal(s) {
s.value = s.value + 1;
if (s.value <= 0) {
remove a process P from s.list;
wakeup(P);
}
}
It’s critical that these be atomic - in uniprocessors we can disable interrupts, but in multiprocessors other
mechanisms for atomicity are needed.
Popular incarnations of semaphores are as “event counts” and “lock managers”.
DEADLOCKS
May occur when two or more processes try to get the same multiple resources at the same time.
P1:
wait(S);
wait(Q);
...
signal(S);
signal(Q);
P2:
wait(Q);
wait(S);
...
signal(Q);
signal(S);
void consumer(void)
{
int item;
while (TRUE) {            /* repeat forever */
down(&full);              /* wait for a full slot */
down(&mutex);             /* enter critical section */
item = remove_item();     /* take an item from the buffer */
up(&mutex);               /* leave critical section */
up(&empty);               /* one more empty slot */
consume_item(item);       /* use the item */
}
}
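For symmetry, a sketch of the producer mirroring the consumer above (produce_item and
insert_item are assumed helpers, like remove_item and consume_item):

void producer(void)
{
int item;
while (TRUE) {            /* repeat forever */
item = produce_item();    /* generate the next item */
down(&empty);             /* wait for an empty slot */
down(&mutex);             /* enter critical section */
insert_item(item);        /* put the item in the buffer */
up(&mutex);               /* leave critical section */
up(&full);                /* one more full slot */
}
}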
Critical regions
Regions referring to the same shared variable exclude each other in time.
When a process tries to execute the region statement, the Boolean expression B is
evaluated. If B is true, statement S is executed. If it is false, the process is delayed until B
becomes true and no other process is in the region associated with v.
Example-Bounded Buffer
Shared data:
struct buffer {
int pool[n];
int count, in, out;
};
Implementation
Keep track of the number of processes waiting on first-delay and second-delay, with
first-count and second-count respectively.
The algorithm assumes a FIFO ordering in the queuing of processes for a semaphore.
For an arbitrary queuing discipline, a more complicated implementation is required.
Monitors
High-level synchronization construct that allows the safe sharing of an abstract data
type among concurrent processes.
Monitor is a software module.
Chief characteristics
o Local data variables are accessible only by the monitor.
o Process enters monitor by invoking one of its procedures.
o Only one process may be executing in the monitor at a time.
[Figure: schematic view of a monitor - local data, condition variables, procedures 1..N, initialization code]
Syntax
monitor monitor-name
{
shared variable declarations
procedure body P1 (…) {
}
procedure body P2 (…) {
}
procedure body Pn (…) {
}
{
initialization code
}
}
To allow a process to wait within the monitor, a condition variable must be declared, as
condition x, y;
Condition variables can only be used with the operations wait and signal.
o The operation x.wait();
means that the process invoking this operation is suspended until another
process invokes x.signal(); (see the sketch below)
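A small sketch, in the monitor syntax above, of how x.wait() and x.signal() coordinate access to a
single resource (the monitor name and procedures are illustrative, not from the notes):

monitor SingleResource
{
boolean busy = false;
condition x;
procedure acquire() {
if (busy)
x.wait();      /* suspend until another process signals */
busy = true;
}
procedure release() {
busy = false;
x.signal();    /* resume one suspended process, if any */
}
}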
[Figure: monitor with condition variables - shared data, operations, initialization code, with queues associated with the x and y conditions]
Deadlock example - P0 and P1 each hold one semaphore and wait for the other:
P0:
wait(A);
wait(B);
P1:
wait(B);
wait(A);
System Model
Resource types R1, R2, …, Rm
CPU cycles, memory space, I/O devices
Each resource type Ri has Wi instances.
Each process utilizes a resource as follows:
• Request
• Use
• Release
Deadlock Characterization
Mutual exclusion: only one process at a time can use a resource.
Hold and wait: a process holding at least one resource is waiting to acquire additional
resources held by other processes.
No preemption: a resource can be released only voluntarily by the process holding it, after
that process has completed its task.
Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is
waiting for a resource that is held by P1, P1 is waiting for a resource that is held by
P2, …, Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource
that is held by P0.
Resource-Allocation Graph
V is partitioned into two types:
• P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.
• R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.
Request edge - directed edge Pi → Rj
Assignment edge - directed edge Rj → Pi
[Figure: resource-allocation graph symbols - a process Pi drawn as a circle, a resource type with instances drawn as a rectangle with dots; example graph with processes P1, P2, P3]
Resource-Allocation Graph Algorithm
Claim edge Pi → Rj indicates that process Pi may request resource Rj; represented by a
dashed line.
Claim edge converts to request edge when a process requests a resource.
When a resource is released by a process, assignment edge reconverts to a claim edge.
Resources must be claimed a priori in the system.
[Figure: resource-allocation graph for deadlock avoidance, and an unsafe state in it - processes P1, P2 contending for resources R1, R2]
Banker’s Algorithm
Multiple instances
Each process must a priori claim its maximum use.
When a process requests a resource, it may have to wait.
When a process gets all its resources, it must return them in a finite amount of time.
Safety algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
Finish[i] = false for i = 1, 2, …, n
2. Find an i such that both:
a. Finish[i] == false
b. Needi <= Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state (see the sketch below).
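A compact C sketch of this safety check (the matrix contents and the sizes N and M are assumed
inputs, not from the notes):

#define N 5   /* number of processes */
#define M 3   /* number of resource types */

int is_safe(const int available[M], const int need[N][M], const int allocation[N][M])
{
    int work[M], finish[N] = {0};
    for (int j = 0; j < M; j++)
        work[j] = available[j];                      /* Work = Available */

    for (;;) {                                       /* repeat step 2 */
        int found = 0;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            int ok = 1;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { ok = 0; break; }   /* Needi <= Work? */
            if (ok) {
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];     /* Work = Work + Allocationi */
                finish[i] = 1;
                found = 1;
            }
        }
        if (!found) break;                           /* no such i: go to step 4 */
    }
    for (int i = 0; i < N; i++)
        if (!finish[i]) return 0;                    /* some process cannot finish */
    return 1;                                        /* safe state */
}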
[Figure: dynamic relocation using a relocation register - logical address 346 from the CPU plus relocation register 14000 gives physical address 14346 in memory]
Dynamic Loading
Dynamic Linking:
Linking postponed until execution time.
Small piece of code, stub, used to locate the appropriate memory-resident library routine.
Stub replaces itself with the address of the routine, and executes the routine.
Operating system is needed to check if the routine is in the process's memory space.
Dynamic linking is particularly useful for libraries.
Overlays:
Keep in memory only those instructions and data that are needed at any given time.
Needed when process is larger than amount of memory allocated to it.
Implemented by user, no special support needed from operating system, programming design
of overlay structure is complex.
[Figure: overlays for a two-pass assembler - symbol table 20K, common routines 30K, overlay driver 10K, pass 1 70K, pass 2 80K]
Swapping:
1. A process can be swapped temporarily out of memory to a backing store and then brought
back into memory for continued execution.
2. Backing store: fast disk large enough to accommodate copies of all memory images for all
users; must provide direct access to these memory images.
3. Roll out, roll in—swapping variant used for priority based scheduling algorithms, lower-
priority process is swapped out so higher priority process can be loaded and executed.
4. Major part of swap time is transfer time: total transfer time is directly proportional to the
amount of memory swapped.
5. Modified versions of swapping are found on many systems, e.g., UNIX, Linux and Windows.
[Figure: swapping of two processes using a disk as a backing store - swap out one process, swap in process P2; user space in main memory]
Contiguous allocation:
1. Main memory is usually divided into two partitions:
a. Resident operating system, usually held in low memory with the interrupt vector.
b. User processes, then held in high memory.
2. Single-partition allocation
a. Relocation-register scheme used to protect user processes from each other, and from
changing operating-system code and data.
b. Relocation register contains the value of the smallest physical address; limit register contains
the range of logical addresses - each logical address must be less than the limit register (see the
sketch after the figure below).
[Figure: hardware support for relocation and limit registers - the CPU's logical address is compared with the limit register; if less, the relocation register is added to form the physical address in memory; otherwise trap: addressing error]
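A C sketch of the check in the figure above (the register values are illustrative):

#include <stdio.h>
#include <stdlib.h>

int translate(int logical, int limit, int relocation)
{
    if (logical >= limit) {             /* outside the process's range */
        fprintf(stderr, "trap: addressing error\n");
        exit(1);
    }
    return logical + relocation;        /* relocated physical address */
}

int main(void)
{
    /* e.g., relocation register 14000: logical 346 -> physical 14346 */
    printf("%d\n", translate(346, 1000, 14000));
    return 0;
}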
1. Fixed partitioning
2. Equal-size partitions:
a. Any process whose size is less than or equal to the partition size can be loaded into an
available partition.
b. If all partitions are full, the operating system can swap a process out of a partition.
c. A program may not fit in a partition. The programmer must design the program with
overlays.
3. Main memory use is inefficient. Any program, no matter how small, occupies an entire
partition. This is called internal fragmentation.
[Figure: example of fixed partitioning of a 64-Mbyte memory - equal-size partitions of 8M versus unequal-size partitions of 2M, 4M, 6M, 8M, 8M, 12M, and 16M]
Dynamic partitioning:
1. Partitions are of variable length and number
2. Process is allocated exactly as much memory as required.
3. Eventually get holes in the memory. This is called external fragmentation.
4. Must use compaction to shift processes so they are contiguous and all free memory is in one
block.
[Figure: the effect of dynamic partitioning - an 8M operating system plus 56M of user space; processes of 14M, 18M, and 8M are loaded and swapped, leaving scattered holes (external fragmentation)]
BUDDY SYSTEM:
1. The entire space available is treated as a single block of size 2^U.
2. If a request of size s is such that 2^(U-1) < s <= 2^U, the entire block is allocated.
a. Otherwise the block is split into two equal buddies.
b. The process continues until the smallest block greater than or equal to s is generated (see the
sketch below).
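A C sketch (not from the notes) of finding the buddy-system block size for a request s, i.e. the
smallest power-of-two block that still holds s, by repeated splitting:

#include <stdio.h>

unsigned buddy_block(unsigned total, unsigned s)
{
    unsigned block = total;
    while (block / 2 >= s)     /* keep splitting while the half still fits s */
        block /= 2;
    return block;
}

int main(void)
{
    /* e.g., a 100K request in a 1M (1024K) space gets a 128K block */
    printf("%uK\n", buddy_block(1024, 100));
    return 0;
}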
[Figure: example of the buddy system - a 1-Mbyte block serving requests A = 100K (given 128K), B = 240K (given 256K), C = 64K, D = 256K, and E = 75K, with blocks split on request and coalesced with their buddies on each release]
RELOCATION:
1. When a program is loaded into memory, the actual memory locations are determined.
2. A process may occupy different partitions, which means different absolute memory
locations during execution.
3. Compaction will also cause a program to occupy a different partition, which means
different absolute memory locations.
Addresses:
1. Logical
a. Reference to a memory location independent of the current assignment of data to memory.
b. A translation must be made to the physical address.
2. Relative
a. Address expressed as a location relative to some known point.
3. Physical
a. The absolute address or actual location in main memory.
[Figure: hardware support for relocation - the process control block supplies base and bounds values; an adder relocates relative addresses from the program, and a comparator checks them against the bounds for the program, data, and stack regions]
Paging:
1. Logical address space of a process can be noncontiguous; the process is allocated physical
memory whenever the latter is available.
2. Divide physical memory into fixed-sized blocks called frames.
3. Divide logical memory into blocks of the same size called pages.
4. Keep track of all free frames.
5. To run a program of size n pages, find n free frames and load the program.
6. Set up a page table to translate logical to physical addresses (see the sketch below).
7. Internal fragmentation may result.
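A C sketch of logical-to-physical translation under paging, using the page table from the
paging-model figure that follows (frames 1, 4, 3, 7); the 1 KB page size is an assumption for
illustration:

#include <stdio.h>

#define PAGE_SIZE 1024

int main(void)
{
    int page_table[] = {1, 4, 3, 7};    /* page number -> frame number */
    int logical = 2 * PAGE_SIZE + 100;  /* an address on page 2, offset 100 */

    int p = logical / PAGE_SIZE;        /* page number */
    int d = logical % PAGE_SIZE;        /* offset within the page */
    int physical = page_table[p] * PAGE_SIZE + d;

    printf("logical %d -> physical %d\n", logical, physical);  /* 2148 -> 3172 */
    return 0;
}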
[Figure: paging hardware - the CPU issues logical address (p, d); p indexes the page table to obtain frame number f; the physical address is (f, d)]
[Figure: paging model of logical and physical memory - pages 0-3 mapped through page table entries 1, 4, 3, 7 to frames in physical memory]
[Figure: paging example - a 16-byte logical memory (bytes a-p) with 4-byte pages mapped by page table entries 5, 6, 1, 2 into a 32-byte physical memory]
[Figure: free frames before and after allocation - a new four-page process is loaded into free frames 14, 13, 18, and 20, recorded in its page table]
Associative Memory:
Associative memory - parallel search
Page # Frame #
Address translation for (p, d):
If p is in an associative register, get the frame # out.
Otherwise get the frame # from the page table in memory (see the sketch below).
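A C sketch of the lookup just described - consult the associative registers (TLB) first, then fall
back to the page table; the structures and the TLB size are assumptions for illustration:

#define TLB_SIZE 4

struct tlb_entry { int page; int frame; int valid; };

int lookup_frame(const struct tlb_entry tlb[TLB_SIZE], const int page_table[], int p)
{
    for (int i = 0; i < TLB_SIZE; i++)        /* searched in parallel in hardware */
        if (tlb[i].valid && tlb[i].page == p)
            return tlb[i].frame;              /* TLB hit: no extra memory access */
    return page_table[p];                     /* TLB miss: read page table in memory */
}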
[Figure: paging hardware with TLB - the CPU issues (p, d); on a TLB hit the frame f is obtained directly, on a miss the page table is consulted]
Memory Protection:
[Figure: hashed page table - the virtual page number p is hashed into a chain of (page, frame) entries such as (q, s) and (p, r); a match on p yields frame r for the physical address]
[Figure: inverted page table - the CPU issues (pid, p, d); the table is searched for the (pid, p) entry, and its index i forms the physical address (i, d)]
Shared Pages:
Shared code
One copy of read-only (reentrant) code shared among processes (i.e., text editors, compilers,
window systems).
Shared code must appear in same location in the logical address space of all processes.
Private code and data:
Each process keeps a separate copy of the code and data.
The pages for the private code and data can appear anywhere in the logical address space.
Shared Pages Examples:
Ed 1
3
Ed 2 Data 1
4
Ed 3 Ed 1 3 Data 3
6
Data1 Ed 2 4 Ed 1
1
Ed 3 6 Ed 2
Data2 7
Ed 3
Data2
Ed 1 3
Ed 2 4
Ed 3 6
Data3 2
Segmentation
Memory-management scheme that supports user view of memory.
A program is a collection of segments. A segment is a logical unit such as:
main program,
procedure,
function,
method,
object,
local variables, global variables,
common block,
stack,
symbol table, arrays
[Figure: user's view of a program - segments for the subroutine, stack, symbol table, sqrt, and main program]
Segmentation Architecture:
Logical address consists of a two tuple:
<segment-number, offset>
Segment table – maps two-dimensional user-defined addresses into one-dimensional physical
addresses; each table entry has:
Base – contains the starting physical address where the segment resides in memory.
Limit – specifies the length of the segment.
Segment-table base register (STBR) points to the segment table's location in memory.
Segment-table length register (STLR) indicates the number of segments used by a program;
segment number s is legal if s < STLR.
Relocation.
dynamic
by segment table
Sharing.
shared segments
same segment number
Allocation.
first fit/best fit
external fragmentation
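A C sketch of translating a <segment-number, offset> pair; the checks mirror the STLR and limit
tests above, and the segment-table values come from the example figure further below:

#include <stdio.h>
#include <stdlib.h>

struct segment { int limit; int base; };

int translate(const struct segment table[], int stlr, int s, int d)
{
    if (s >= stlr || d >= table[s].limit) {   /* illegal segment number or offset */
        fprintf(stderr, "trap: addressing error\n");
        exit(1);
    }
    return table[s].base + d;                 /* physical address */
}

int main(void)
{
    struct segment table[] = {
        {1000, 1400}, {400, 6300}, {400, 4300}, {1100, 3200}, {1000, 4700}
    };
    /* segment 2, offset 53 -> 4300 + 53 = 4353 */
    printf("%d\n", translate(table, 5, 2, 53));
    return 0;
}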
Segmentation Hardware:
[Figure: segmentation hardware - the CPU issues (s, d); the segment table supplies (limit, base); if d < limit the physical address is base + d in memory, otherwise trap: addressing error]
Example of segmentation:
[Figure: example of segmentation - segments for the subroutine, stack, symbol table, sqrt, and main program, with segment table: segment 0 (limit 1000, base 1400), segment 1 (limit 400, base 6300), segment 2 (limit 400, base 4300), segment 3 (limit 1100, base 3200), segment 4 (limit 1000, base 4700)]
Sharing of segments:
[Figure: sharing of segments - two processes share the editor as segment 0 (limit 25286, base 43062); each has a private segment 1: data 1 (limit 4425, base 68348) and data 2 (limit 8850, base 90003)]
Segmentation with paging – MULTICS
The MULTICS system solved problems of external fragmentation and lengthy search times by
paging the segments.
Solution differs from segmentation in that the segment-table entry contains not the base address
of the segment, but rather the base address of a page table for this segment.