
Memory Management

Chapter 7
Background
Operating system responsibilities
Bring the program into memory and place it within a process for it to be executed.
Ensure adequate storage for processes.
Maintain an input queue (the collection of processes on disk waiting to be brought into main memory).
Bind instructions and data to memory addresses:
At compile time - if the memory location is known in advance, absolute code can be generated.
At load time - if the memory location is not known at compile time, the compiler generates relocatable code that is bound at load time.
At execution time - if the process can be moved from one memory segment to another during execution, binding is delayed until run time.
Dynamic loading (DL)

A routine is loaded only when it is invoked; an unused routine is never loaded.
Better memory-space utilization (memory is used only when needed).
Useful when large amounts of code are needed to handle infrequently occurring cases, e.g. error routines.
No special support from the operating system is required; DL is implemented through program design.
However, the OS may provide library routines that help implement DL.
Dynamic Linking
Linking is postponed until execution time.
Uses stubs (small pieces of code) to locate the appropriate memory-resident library routine to be executed.
The stub replaces itself with the address of the routine and then executes the routine.
The OS checks whether the routine is already in the process's memory address space.
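A minimal sketch of execution-time linking using the POSIX dlopen/dlsym interface (an illustration, not part of the original slides); the library name libm.so.6 and the symbol cos are arbitrary illustrative choices.

/* Sketch of run-time (dynamic) linking with the POSIX dlopen/dlsym API.
 * Build on Linux with:  gcc demo.c -ldl                                   */
#include <stdio.h>
#include <stdlib.h>
#include <dlfcn.h>

int main(void) {
    /* Load a shared library at run time (illustrative library name). */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return EXIT_FAILURE;
    }

    /* Resolve the routine's address, much like a stub replacing itself
     * with the real address of the memory-resident routine.             */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return EXIT_FAILURE;
    }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);
    return EXIT_SUCCESS;
}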
Overlays
Keeping in memory only those instructions
and data needed at any given time.
Used when the process is larger than the amount of memory allocated to it.
Large programs are divided into executable overlays.
Overlay structures are specified by the user and are complex to design.
Memory addressing
Logical / Virtual
Generated by the CPU (the only addresses seen by the user program).
Physical
Seen by the memory unit (loaded into the memory-address register).
Logical and physical addresses are the same under the compile-time and load-time binding schemes, but differ under the execution-time binding scheme.
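As an informal illustration (not from the slides): the address a C program can print for one of its own variables is a logical (virtual) address generated on the CPU side; the physical address it maps to is seen only by the memory unit.

/* Sketch: the only addresses a user program ever sees are logical ones. */
#include <stdio.h>

int main(void) {
    int x = 42;
    /* %p prints the logical (virtual) address of x; the physical address
     * it maps to is known only to the MMU and the memory unit.           */
    printf("logical address of x: %p\n", (void *)&x);
    return 0;
}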
Memory Management Unit (MMU)
A hardware device that maps virtual (logical) addresses to physical addresses at run time.
The set of all logical addresses generated by a program is its logical address space; the set of all physical addresses corresponding to these logical addresses is the physical address space.
Swapping
A process can be swapped temporarily out of
memory to a backing store & brought back into
memory for continued execution.
Backing store: a fast disk large enough to accommodate copies of all memory images for all users.
Used in multiprogramming environments with the round-robin scheduling algorithm.
Roll-out, roll-in: a swapping variant used with priority-based scheduling algorithms.
The major part of swap time is transfer time.
Contiguous Allocation

Main memory = resident OS + user processes.
The resident OS is usually in low memory together with the interrupt vectors; user processes are in high memory.
Memory is partitioned into blocks. The partitioning determines the number of processes in main memory, influencing the degree of multiprogramming, swapping, etc.
Keeping many programs in memory requires fixed or variable partitions.
Memory partition (diagram): the operating system occupies one region of memory and user processes occupy the other.
Single Partition Allocation
To protect the operating system from changes by user processes, a relocation register is used together with a limit register.
The relocation register contains the value of the smallest physical address.
The limit register contains the range of logical addresses.
Each logical address must be less than the value in the limit register.
The MMU maps the logical address dynamically by adding the value in the relocation register.
Hardware support for Relocation &
Limit registers.

Diagram: the CPU generates a logical address, which is compared with the limit register; if it is less than the limit, the value in the relocation register is added to it to form the physical address sent to memory, otherwise a trap (addressing error) is raised to the OS.
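A small software simulation of the relocation/limit check shown in the diagram (an illustration only; the register values are made up): the logical address is checked against the limit register, and if legal, the relocation register is added to form the physical address, otherwise the access would trap to the OS.

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical register values chosen for illustration only.            */
static const uint32_t LIMIT_REGISTER      = 4096;   /* partition size    */
static const uint32_t RELOCATION_REGISTER = 140000; /* smallest phys addr */

/* Returns true and fills *physical if the logical address is legal;
 * returns false to model a trap (addressing error) to the OS.           */
bool mmu_translate(uint32_t logical, uint32_t *physical) {
    if (logical >= LIMIT_REGISTER)
        return false;                      /* trap: address out of range */
    *physical = logical + RELOCATION_REGISTER;
    return true;
}

int main(void) {
    uint32_t phys;
    uint32_t probes[] = {0, 1000, 5000};
    for (int i = 0; i < 3; i++) {
        if (mmu_translate(probes[i], &phys))
            printf("logical %u -> physical %u\n", probes[i], phys);
        else
            printf("logical %u -> trap to OS\n", probes[i]);
    }
    return 0;
}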
Multiple Partition Allocation
Fixed partitions
Divide memory into a number of fixed-sized partitions.
Each partition contains exactly one process
When a partition is free, a process small enough to fit the hole is selected from the input queue and loaded; on completion it frees the partition for other processes.
A hole is a block of available memory.
The OS keeps a list of available holes.
Disadvantages
A process smaller than its partition wastes space.
Some partitions may never be used because they are too small or too big.
Variable Partitions
A varying number of processes and partitions.
Many processes can be fitted into a single large hole.
Allocation and deallocation are more complicated.
It is more difficult to keep track of memory.
Results in the creation of many small unusable holes.
Solution: use compaction.
Compaction
Shuffling memory contents to place all free
memory together in one large block.
Possible only if relocation is dynamic and
is done at execution time
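A toy sketch of compaction over a simulated block table (the table and sizes are made up for illustration): allocated blocks are slid toward low memory and their relocation bases updated, leaving one large free hole; this is only possible because relocation is dynamic.

#include <stdio.h>

#define NBLOCKS 4

/* Toy block table: base and size of each allocated block (illustrative). */
struct block { int base; int size; };

int main(void) {
    /* Allocated blocks with free holes scattered between them. */
    struct block blocks[NBLOCKS] = {
        {0, 100}, {300, 200}, {700, 150}, {1200, 300}
    };

    /* Compaction: slide every block down so it starts right after the
     * previous one. A real implementation would also copy the block
     * contents (e.g. with memmove); here only the relocation bases are
     * updated, which is what dynamic relocation makes possible.          */
    int next_free = 0;
    for (int i = 0; i < NBLOCKS; i++) {
        blocks[i].base = next_free;   /* new relocation base for the block */
        next_free += blocks[i].size;
    }

    for (int i = 0; i < NBLOCKS; i++)
        printf("block %d: base=%d size=%d\n", i, blocks[i].base, blocks[i].size);
    printf("one free hole from %d upwards\n", next_free);
    return 0;
}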
Keeping Track of Memory Use
Bit Maps
Linked Lists - dynamic storage allocation
Buddy System
Bit Maps
Memory divided into small units
1 bit in a map per unit
0 means unit not occupied
1 means unit occupied
If a process requests n units, the memory manager looks for n consecutive 0s in the map.
Example
0 0 1 1 1
0 1 1 0 0
0 0 0 1 0
1 0 0 1 1
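A minimal sketch of the scan described above, using the bit map from the example (stored one flag per char for readability; a real map would pack the bits):

#include <stdio.h>

#define UNITS 20

/* Bit map from the example: 0 = unit free, 1 = unit occupied. */
static const char map[UNITS] = {
    0,0,1,1,1,
    0,1,1,0,0,
    0,0,0,1,0,
    1,0,0,1,1
};

/* Return the index of the first run of n consecutive free units, or -1. */
int find_free_run(int n) {
    int run = 0;
    for (int i = 0; i < UNITS; i++) {
        run = (map[i] == 0) ? run + 1 : 0;
        if (run == n)
            return i - n + 1;   /* start of the run */
    }
    return -1;                  /* no hole large enough */
}

int main(void) {
    printf("4 free units start at unit %d\n", find_free_run(4));
    return 0;
}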
Linked Lists
Problem: how to satisfy a request of size n from a list of free holes?
Keep a linked list of allocated and free memory blocks.
First-Fit: allocate the first hole that is big enough.
Next-Fit: scan the list from the point of the last allocation and allocate the next hole that fits.
Best-Fit: allocate the smallest hole that is big enough.
Worst-Fit: allocate the largest hole.
Quick-Fit: maintain a separate list for each common request size, e.g. a list of 8K holes, a list of 16K holes and a list of 32K holes; the suitable list is chosen and the process is fitted from it. A sketch of the first-, best- and worst-fit scans follows below.
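A small sketch of the first-fit, best-fit and worst-fit scans. The array of hole sizes stands in for the linked list, the sizes match the worked example that follows, and only a single request is placed (the holes are not updated afterwards); all of this is a simplification for illustration.

#include <stdio.h>

#define NHOLES 4

int first_fit(const int holes[], int n, int request) {
    for (int i = 0; i < n; i++)
        if (holes[i] >= request) return i;      /* first hole big enough   */
    return -1;
}

int best_fit(const int holes[], int n, int request) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (best < 0 || holes[i] < holes[best]))
            best = i;                           /* smallest hole big enough */
    return best;
}

int worst_fit(const int holes[], int n, int request) {
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (worst < 0 || holes[i] > holes[worst]))
            worst = i;                          /* largest hole big enough */
    return worst;
}

int main(void) {
    int holes[NHOLES] = {500, 200, 300, 600};   /* in K, from the example  */
    int request = 212;                          /* in K                    */
    printf("first fit -> hole %d\n", first_fit(holes, NHOLES, request));
    printf("best  fit -> hole %d\n", best_fit(holes, NHOLES, request));
    printf("worst fit -> hole %d\n", worst_fit(holes, NHOLES, request));
    return 0;
}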
Example
Given four memory partitions of 500K, 200K, 300K and 600K (in that order), and processes needing 212K, 417K, 112K and 426K (arriving in that order), use the first-fit, best-fit and worst-fit algorithms to allocate memory to the processes.

First Fit
Memory   Process
500K     212K
200K     112K
300K     -----------
600K     417K
(426K must wait)

Best Fit
Memory   Process
500K     417K
200K     112K
300K     212K
600K     426K

Worst Fit
Memory   Process
500K     417K
200K     -----------
300K     112K
600K     212K
(426K must wait)
Buddy system
Takes advantage of the binary addressing used by computers.
The memory manager keeps separate lists of free blocks of different sizes (powers of two); a request is rounded up to the nearest power of two and, if necessary, a larger block is split into two equal buddies.
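A minimal sketch of the buddy-system idea (the request and block sizes are illustrative, not from the slides): the request is rounded up to the next power of two, and a larger free block is halved repeatedly into buddies until a block of the required size is obtained.

#include <stdio.h>

/* Round a request up to the next power of two (the buddy block size). */
unsigned next_pow2(unsigned n) {
    unsigned size = 1;
    while (size < n)
        size <<= 1;
    return size;
}

int main(void) {
    unsigned request = 3500;            /* bytes requested (illustrative) */
    unsigned block   = next_pow2(request);
    printf("request %u -> buddy block of %u bytes\n", request, block);

    /* Splitting: a 16K free block is halved into buddies until it fits. */
    unsigned free_block = 16384;
    while (free_block / 2 >= block) {
        free_block /= 2;                /* keep one half, free the buddy  */
        printf("split: keep %u, buddy of %u goes on the free list\n",
               free_block, free_block);
    }
    return 0;
}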
External Fragmentation
The algorithms described above suffer from
external fragmentation.
External fragmentation exists when enough total memory exists to satisfy a request, but it is not contiguous; the free space is scattered between processes.
The free memory is fragmented into pieces, none of which is large enough on its own to satisfy the new memory request.
The solution is compaction.
Internal Fragmentation
Internal fragmentation exists when there is a difference between the total memory requested and the total memory allocated: the allocated block is larger than the requested memory.
Pieces of unused memory exist within allocated partitions, e.g. a 112K process placed in a 200K partition leaves 88K of internal fragmentation.
