
Operating System

Memory Management
Chapter 4 BCT III/II
By: Shayak Raj Giri @ IOE, Pulchowk Campus

Introduction
Programs are getting bigger faster than memories. The part of the OS that manages the memory hierarchy is called the memory manager.
It keeps track of which parts of memory are in use and which are free, allocates and de-allocates memory to processes, and manages swapping between main memory and disk.

Memory Hierarchy
(Pyramid diagram: Cache Memory at the top, Main Memory in the middle, Secondary Memory at the bottom. Moving up the hierarchy, cost/bit increases and access speed increases, while storage capacity decreases.)

Memory Hierarchy
Ideally programmers want memory that is large, fast, and non-volatile.

Memory hierarchy
a small amount of fast, expensive cache memory
some medium-speed, medium-priced main memory
gigabytes of slow, cheap disk storage

The memory manager handles the memory hierarchy.



Monoprogramming Model

Only one program is in main memory at a time; it can use the whole of the available memory.

Multiprogramming with Fixed Partitions

Fixed memory partitions:
(a) separate input queues for each partition
(b) a single input queue for all partitions



Modeling Multiprogramming
(Probabilistic viewpoint of CPU usage) Let p = the fraction of time a process spends waiting for I/O to complete, and
n = the number of processes in memory at once. The probability that all n processes are waiting for I/O (CPU idle time) = p^n. So, CPU utilization = 1 - p^n.

The following diagram shows CPU utilization as a function of n (the degree of multiprogramming).

Modeling Multiprogramming

(Figure: CPU utilization, 1 - p^n, as a function of the number of processes in memory, i.e. the degree of multiprogramming, for several values of the I/O wait fraction p.)



Modeling Multiprogramming
Example: Let total memory = 1M = 1000K, memory occupied by the OS = 200K, and memory taken by one user program = 200K, with p = 0.8. NOW
Number of processes n = (1000 - 200)/200 = 4 [n = total user space / size of one user program]. CPU utilization = 1 - (0.8)^4 ≈ 60%.

Modeling Multiprogramming
Add another 1M of memory; then
n = (2000 - 200)/200 = 9, CPU utilization = 1 - (0.8)^9 ≈ 87%, improvement = (87 - 60)/60 ≈ 45%.

Again add another 1M; then
n = (3000 - 200)/200 = 14, CPU utilization = 1 - (0.8)^14 ≈ 96%, improvement = (96 - 87)/87 ≈ 10%. Conclusion: adding the last 1M is not worthwhile.
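A minimal sketch reproducing the calculation above (sizes and p = 0.8 are taken from the example; the 60%/87%/96% figures are rounded):

```c
#include <stdio.h>
#include <math.h>

/* CPU utilization = 1 - p^n, where p is the fraction of time a process
 * waits for I/O and n is the degree of multiprogramming. */
int main(void)
{
    double p = 0.8;                     /* 80% I/O wait, as in the example */
    int os_kb = 200, prog_kb = 200;     /* sizes from the example above */

    for (int mem_kb = 1000; mem_kb <= 3000; mem_kb += 1000) {
        int n = (mem_kb - os_kb) / prog_kb;
        double util = 1.0 - pow(p, n);
        printf("memory=%dK  n=%d  utilization=%.0f%%\n",
               mem_kb, n, util * 100.0);
    }
    return 0;
}
```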

Base and Limit Registers


A form of dynamic relocation.
The base register contains the beginning address of the program; the limit register contains the length of the program.
When the program references memory, the hardware adds the base address to the address generated by the process, and checks whether the address is larger than the limit. If so, it generates a fault.
Disadvantage: an addition and a comparison have to be done on every memory reference.
Used in the CDC 6600 and the Intel 8088.
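A hedged sketch of the idea (names like relocation_regs and translate are illustrative, not from any real hardware interface):

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t base;   /* physical address where the program begins */
    uint32_t limit;  /* length of the program */
} relocation_regs;

/* Returns true and fills *phys on success; false simulates a fault. */
bool translate(const relocation_regs *r, uint32_t virt, uint32_t *phys)
{
    if (virt >= r->limit)        /* comparison done on every reference */
        return false;            /* address beyond the program: fault */
    *phys = r->base + virt;      /* addition done on every reference */
    return true;
}
```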

How to run more programs than fit in main memory at once


We can't keep all processes in main memory: there may be too many (hundreds), or they may be too big (e.g. a 200 MB program). Two approaches:
Swapping: bring a program in and run it for a while
Virtual memory: allow a program to run even if only part of it is in main memory


Multiprogramming with Variable Partitions


It is a solution to the wastage in fixed partitions: it allows jobs to occupy as much space as they need.
(Memory maps: the OS plus Jobs A, B, and C in memory; Job A is swapped out and Job D is swapped in, leaving holes of varying sizes.)


Multiprogramming with Variable Partitions


A process can grow if there is an adjacent hole. Otherwise the growing process is moved to a hole large enough for it, or one or more processes are swapped out to disk. This scheme also has some degree of waste.

When a process finishes and leaves a hole, the hole may not be large enough to place a new job. Thus, in variable-partition multiprogramming, waste still occurs.

Two activities reduce this wastage of memory: (a) coalescing, (b) compaction.

Multiprogramming with Variable Partitions


(a) Coalescing: the process of merging two adjacent holes to form a single larger hole is called coalescing.
(Memory maps: a 20K hole adjacent to a 10K hole is merged into a single 20+10 = 30K hole; Process-A and a separate 10K free area are unchanged.)

Multiprogramming with Variable Partitions


(b) Compaction: even when holes are coalesced, no individual hole may be large enough to hold a job, although the sum of the holes is larger than the storage the process requires. It is possible to combine all the holes into one big one by moving all the processes downward as far as possible; this technique is called memory compaction.

Compaction
(Memory maps: before compaction, the OS is followed by a 20K hole, Process-A, a 10K hole, Process-B, and 10K free; after compaction, Process-A and Process-B sit directly after the OS, leaving one 20+10+10 = 40K free block.)

Drawbacks of Compaction
Relocation info must be maintained. The system must stop everything during compaction, and compaction requires lots of CPU time. For example, on a 256 MB machine that can copy 4 bytes in 40 nsec, copying all of memory takes (256M/4) x 40 nsec ≈ 2.7 sec.

Swapping
If there is not enough main memory to hold all the currently active processes, the excess processes must be kept on disk and brought in to run dynamically. Swapping consists of moving processes between main memory and disk. Relocation may be required during swapping.

Swapping, a picture

(Figure: memory allocation changes as processes are swapped in and out, leaving holes.) Holes can be compacted by copying programs into them, but this takes too much time.

Memory management with Bitmap

(Figure: a region of memory with processes and holes, the corresponding bitmap, and the same information as a linked list.)


With a bitmap, memory is divided into allocation units (from a few words to several KB). Corresponding to each allocation unit is a bit in the bitmap (0 indicates free, 1 indicates occupied). The smaller the allocation unit, the larger the bitmap.
Disadvantage: searching the bitmap for a run of k consecutive free units for a process is a slow operation.
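A small sketch of that slow search, assuming for simplicity one byte per allocation unit rather than a packed bit array:

```c
#include <stddef.h>

/* Scan for a run of k consecutive free (0) units. Returns the index of
 * the first unit of the run, or -1 if none exists. This linear scan is
 * exactly the slow operation noted above. */
long find_run(const unsigned char *bitmap, size_t units, size_t k)
{
    size_t run = 0;
    for (size_t i = 0; i < units; i++) {
        run = (bitmap[i] == 0) ? run + 1 : 0;   /* extend or reset the run */
        if (run == k)
            return (long)(i - k + 1);           /* start of the free run */
    }
    return -1;                                  /* no run of length k */
}
```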

Memory management with Linked Lists


Each entry in the list specifies a hole (H) or a process (P), the address at which it starts, the length, and a pointer to the next entry.
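As a sketch, one such entry might look like this in C (field names are illustrative):

```c
/* One list entry as described above: whether the region is a hole or a
 * process, its start address, its length, and a link to the next entry. */
struct segment {
    enum { HOLE, PROCESS } kind;
    unsigned long start;     /* address at which the region starts */
    unsigned long length;    /* length in allocation units */
    struct segment *next;    /* next entry, sorted by address */
};
```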

Memory management with Linked Lists

Four neighbor combinations for the terminating process X



Memory management with Linked Lists


In the above diagram: (a) process X is simply replaced by a hole; (b) and (c) two entries are coalesced into one; (d) three entries are merged into one.
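A hedged sketch of these four cases, assuming a doubly linked version of the segment list (the prev pointer and helper names are illustrative):

```c
#include <stdlib.h>

struct seg {
    enum { HOLE, PROC } kind;
    unsigned long start, length;
    struct seg *prev, *next;
};

static void absorb_next(struct seg *s)     /* merge s->next into s */
{
    struct seg *n = s->next;
    s->length += n->length;
    s->next = n->next;
    if (n->next) n->next->prev = s;
    free(n);
}

void terminate(struct seg *x)              /* process X exits */
{
    x->kind = HOLE;                        /* case (a): X becomes a hole */
    if (x->next && x->next->kind == HOLE)  /* cases (c),(d): hole after X */
        absorb_next(x);
    if (x->prev && x->prev->kind == HOLE)  /* cases (b),(d): hole before X */
        absorb_next(x->prev);
}
```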

Storage placement Strategies


1. First fit:
The memory manager allocates the first hole that is big enough, stopping the search as soon as it finds a free hole large enough (a sketch follows at the end of this list).
Advantage: it is a fast algorithm because it searches as little as possible.
Disadvantage: not good in terms of storage utilization.

2. Next fit: works the same way as first fit, except that it keeps track of where it left off, and starts the next search from that point rather than from the beginning.

Storage placement Strategies


3. Best fit:
Allocate the smallest hole that is big enough. Best fit searches the entire list and takes the smallest hole that can hold the new process, trying to find a hole close to the actual size needed.

Advantage: better storage utilization than first fit.
Disadvantage: slower than first fit, because it must search the whole list every time.

Storage placement Strategies


4. Worst fit:
Allocate the largest hole. It searches the entire list and takes the largest hole; rather than creating a tiny leftover hole, it produces the largest leftover hole, which may be more useful.

Advantage: sometimes better storage utilization than first fit and best fit.
Disadvantage: not good for either performance or utilization.
5. Quick fit: keeps separate lists for the commonly requested sizes.

Virtual Memory-the history


Keep multiple parts of programs in memory. Swapping whole programs is too slow (at a 100 MB/sec disk transfer rate, it takes 10 sec to swap out a 1 GB program). Overlays: the programmer breaks the program into pieces which are swapped in and out by an overlay manager. An ancient idea, no longer used: too hard to do, since the programmer has to break up the program by hand.

Virtual Memory
A program's address space is broken up into fixed-size pages, which are mapped onto physical memory. If an instruction refers to a page in memory, fine; otherwise the OS gets the page, reads it in, and restarts the instruction. While the page is being read in, another process gets the CPU.

Memory Management Unit (MMU)


The Memory Management Unit generates a physical address from the virtual address provided by the program.

Memory Management Unit

The MMU maps virtual addresses to physical addresses and puts them on the memory bus.

Pages and Page Frames


Virtual addresses are divided up into units called pages, and the corresponding units in physical memory are called page frames. Page sizes typically range from 512 bytes to 64 KB. The pages and page frames are always the same size. Transfers between RAM and disk are in whole pages.

Mapping of pages to page frames

16-bit addresses, 4 KB pages: 32 KB of physical memory gives 16 virtual pages and 8 page frames.

Example:
MOV REG,0 -> virtual address 0 is sent to the MMU. The MMU sees that this virtual address falls in page 0 (0 to 4095), which is mapped to page frame 2 (8192 to 12287). Thus it transforms the address to 8192 and outputs 8192 onto the bus. Similarly, MOV REG,8192 is effectively transformed into MOV REG,24576.

Page Fault Processing


A present/absent bit tells whether the page is in memory. What happens if the address is not in memory? Trap to the OS; the OS picks a page to write to disk, brings the page containing the needed address into memory, and restarts the instruction.

Page Table

Virtual address = {virtual page number, offset}. The virtual page number is used to index into the page table to find the page frame number. If the present/absent bit is set to 1, attach the page frame number to the front of the offset, creating the physical address, which is sent on the memory bus.

Mapping/Paging Mechanism
Let us see how the incoming virtual address 8196 (0010000000000100 in binary) is mapped to physical address 24580.
The incoming 16-bit virtual address is split into a 4-bit page number and a 12-bit offset. With 4 bits we can have 16 pages, and with 12 bits for the offset we can address 2^12 = 4096 bytes within a page. The page number is used as an index into the page table, yielding the page frame number corresponding to that virtual page. The page table entry also contains the present/absent bit.
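A minimal sketch of this translation; the first page table entries follow the figure's mapping (virtual page 2 -> frame 6), the rest are omitted, and present/absent handling is left out:

```c
#include <stdio.h>
#include <stdint.h>

#define OFFSET_BITS 12
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)   /* 4095 */

int page_table[16] = { 2, 1, 6, 0, 4, 3 /* ... remaining entries */ };

uint32_t mmu(uint16_t virt)
{
    unsigned page   = virt >> OFFSET_BITS;      /* top 4 bits  */
    unsigned offset = virt &  OFFSET_MASK;      /* low 12 bits */
    /* swap the page number for the frame number, keep the offset */
    return ((uint32_t)page_table[page] << OFFSET_BITS) | offset;
}

int main(void)
{
    printf("%u\n", (unsigned)mmu(8196));        /* prints 24580 */
    return 0;
}
```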

Mapping Mechanism

(Figure: the internal operation of the MMU with sixteen 4-KB pages.)

Structure of Page Table Entry

Frame number: the actual page frame number.
Present (1) / Absent (0) bit: whether the virtual page is currently mapped.
Protection bits: the kinds of access permitted; read/write/execute.
Modified bit: set when the page has been changed since it was loaded.
Referenced bit: set on any access; used by the replacement strategy.
Caching disabled: used for pages that map onto device registers rather than memory.
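A hypothetical layout of such an entry as C bit-fields (real hardware fixes its own bit positions and widths):

```c
#include <stdint.h>

typedef struct {
    uint32_t frame            : 20;  /* page frame number */
    uint32_t present          : 1;   /* 1 = mapped, 0 = absent */
    uint32_t protection       : 3;   /* read/write/execute permissions */
    uint32_t modified         : 1;   /* set when the page is written */
    uint32_t referenced       : 1;   /* set on any access; aids replacement */
    uint32_t caching_disabled : 1;   /* for pages mapping device registers */
} pte;
```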

Problems for paging


The virtual-to-physical mapping is done on every memory reference, so the mapping must be fast. The page table can also be extremely large: if the virtual address space is large, the page table will be large. For example, modern computers use virtual addresses of at least 32 bits; with, say, a 4-KB page size, a 32-bit address space has 1 million pages.

Multi-level page tables


We want to avoid keeping the entire page table in memory because it is too big, so we use a hierarchy of page tables: a page table of page tables.

Multilevel Page Tables

(a) A 32-bit address with two page table fields. (b) Two-level page tables.
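A sketch of the two-level lookup, assuming the 10+10+12 bit split shown in the figure; second-level tables exist only for regions actually in use:

```c
#include <stdint.h>
#include <stddef.h>

uint32_t *top_level[1024];               /* pointers to 2nd-level tables */

int lookup(uint32_t virt, uint32_t *frame)
{
    uint32_t pt1    = virt >> 22;              /* top 10 bits    */
    uint32_t pt2    = (virt >> 12) & 0x3FF;    /* middle 10 bits */
    uint32_t *table = top_level[pt1];

    if (table == NULL)                   /* whole region unmapped: fault */
        return -1;
    *frame = table[pt2];                 /* present/absent check omitted */
    return 0;
}
```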

Page Replacement Algorithms


When a page fault occurs, the operating system has to choose a page to remove from memory to make room for the page to be brought in. If the page to be removed has been modified, it must be rewritten back to the disk (secondary memory). The problem is to make the correct decision about which page to remove.

List of Page Replacement Algorithms


1) Optimal page replacement
2) Not recently used (NRU) page replacement
3) First-in, first-out (FIFO) page replacement
4) Second chance page replacement
5) Clock page replacement
6) Least recently used (LRU) page replacement
7) Working set page replacement
8) WSClock page replacement

Optimal Page Replacement Algorithm

Pick the page that will not be used for the longest time. Not possible unless we know when pages will be referenced, so it serves as an ideal reference algorithm. To obtain optimum performance, the page to replace is the one that will not be used again for the longest time. If one page will not be used for 8 million instructions and another page will not be used for 6 million instructions, choosing the former for replacement postpones the page fault the longest. Difficult to realize.

Not Recently Used(NRU)


Pages not used recently are not likely to be used in the near future. Two status bits, Referenced (R) and Modified (M), collect useful statistics about each page. Initially all R and M bits are set to 0. When a page is referenced, R is set to 1; when a page is modified, M is set to 1. The R bits are periodically cleared. When a page fault occurs, the OS inspects all the pages and divides them into four classes based on the current values of R and M:
Class 0: not referenced, not modified
Class 1: not referenced, modified
Class 2: referenced, not modified
Class 3: referenced, modified

Evict a page (at random) from the lowest-numbered nonempty class.
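A small sketch of the classification and victim choice (the page array and field names are illustrative; NRU proper may pick at random within the lowest class, while this picks the first):

```c
struct page { int r, m, in_memory; };

int pick_victim(struct page *pages, int n)
{
    int victim = -1, best = 4;           /* classes run 0..3 */
    for (int i = 0; i < n; i++) {
        if (!pages[i].in_memory)
            continue;
        int cls = 2 * pages[i].r + pages[i].m;  /* class = 2R + M */
        if (cls < best) { best = cls; victim = i; }
    }
    return victim;                       /* index of the page to evict */
}
```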

First-In, First-Out (FIFO)


The OS maintains a list of all pages currently in memory, with the page at the head of the list the oldest one and the page at the tail the most recent arrival. On a page fault, the page at the head is removed and the new page is added to the tail of the list. The problem with FIFO is that it is likely to replace heavily (constantly) used pages that are still needed for further processing.

Second Chance
A simple modification of FIFO that avoids throwing out a heavily used page by inspecting the R bit of the oldest page. If it is 0, the page is both old and unused, so it is replaced immediately. If the R bit is 1, the bit is cleared, the page is put onto the end of the list, and its load time is updated as though it had just arrived in memory.

Second Chance
In the following diagram, pages A-H are kept in the list sorted by their arrival time in memory.

The Clock Page Replacement Algorithm

Page frames are kept on a circular list, which removes the unnecessary list movement that occurred in second chance. When a page fault occurs, the page pointed to by the hand is inspected. If its R bit is 0, the page is evicted, the new page is inserted into the clock in its place, and the hand is advanced one position. If the R bit is 1, it is cleared and the hand is advanced to the next page.
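A sketch of the hand's sweep, assuming a circular list of frames (structure and names are illustrative):

```c
struct frame { int r; int page_no; struct frame *next; };

struct frame *hand;                      /* points into the circular list */

struct frame *clock_evict(void)
{
    for (;;) {
        if (hand->r == 0) {              /* old and unused: evict it */
            struct frame *victim = hand; /* caller installs the new page here */
            hand = hand->next;           /* advance the hand one position */
            return victim;
        }
        hand->r = 0;                     /* clear R: second chance */
        hand = hand->next;               /* advance to the next page */
    }
}
```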

Least Recently Used(LRU)


A good approximation to the optimal algorithm is based on the observation that pages that have been heavily used in the last few instructions will probably be heavily used again in the next few. Conversely, pages that have not been used for ages will probably remain unused for a long time. This idea suggests a realizable algorithm: LRU.

Least Recently Used(LRU)


When a page fault occurs, throw out the page that has been unused for the longest time. Although LRU is theoretically realizable, it is not cheap: to fully implement LRU, it is necessary to maintain a linked list of all the pages in memory, updated on every memory reference.
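One common software approximation, offered here as an assumption rather than the slide's method: stamp every page with a logical clock on each reference and evict the smallest stamp. The per-reference update is exactly what makes full LRU expensive:

```c
#include <stdint.h>

struct lru_page { uint64_t last_used; int in_memory; };

uint64_t now;                            /* incremented on each reference */

int lru_victim(struct lru_page *pages, int n)
{
    int victim = -1;
    uint64_t oldest = UINT64_MAX;
    for (int i = 0; i < n; i++)
        if (pages[i].in_memory && pages[i].last_used < oldest) {
            oldest = pages[i].last_used; /* least recently used so far */
            victim = i;
        }
    return victim;
}
```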

Least Frequently Used(LFU)


The page to replace is the page that is least frequently (least intensively) used. The referenced-bit status is accumulated in a counter, and every access increments that counter. Not perfect, because a page just brought into memory will also be among the least frequently used.

Random Page Replacement


A low-overhead strategy for page replacement that does not discriminate against any particular page: all pages in main memory have an equal likelihood of being selected for replacement. It may select the very page that is about to be referenced next, which is the worst possible selection. This algorithm is rarely used.

Working Set Model


Demand paging: bring a process into memory by trying to execute its first instruction and getting a page fault; continue until all the pages that the process needs to run are in memory (the working set). Pre-paging tries to make sure that the working set is in memory before letting the process run. Thrashing: memory is too small to contain the working set, so the process page-faults all of the time.

One dimensional address space

In a one-dimensional address space with growing tables, one table may bump into another.

Segmentation

A segmented memory allows each table to grow or shrink independently of the other tables.

Segmentation
Like variable partitioning, but free of the restriction that a program occupy a single contiguous region of memory. There are many completely independent address spaces, called segments. Different segments may have different lengths, any size from 0 to the maximum allowed. A segment is a logical entity, which the programmer is aware of and uses as a single logical unit.

Advantages of Segmentation
Simplifies handling of data structures which are growing and shrinking.
The address space of segment n is of the form (n, local address), where (n, 0) is its starting address.
Segments can be compiled separately from other segments.
A library can be put in a segment and shared.
Different segments can have different protections (read, write, execute).
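A sketch of segmented address translation under these assumptions: each entry in a per-process segment table holds a (base, limit) pair, so a reference (n, local address) behaves like base-and-limit relocation per segment:

```c
#include <stdint.h>
#include <stdbool.h>

struct seg_desc { uint32_t base, limit; };

bool seg_translate(const struct seg_desc *tbl, unsigned n,
                   uint32_t local, uint32_t *phys)
{
    if (local >= tbl[n].limit)       /* beyond the segment's length: fault */
        return false;
    *phys = tbl[n].base + local;     /* relocate within segment n */
    return true;
}
```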

Paging vs Segmentation

(Table: comparison of paging and segmentation — whether the programmer must be aware of it, number of linear address spaces, support for growing tables, sharing, and protection.)

Implementation of Pure Segmentation

(a)-(d) Development of checkerboarding. (e) Removal of the checkerboarding by compaction.

Ref: Modern Operating Systems, A. S. Tanenbaum

Next class

Deadlock
