
Memory Management

In a monoprogramming (uniprogramming) system, the main memory is divided into two parts:
one part for the operating system and the other for the job that is currently executing. Consider
figure 1 for better understanding.

Figure 1 Main memory partition


Partition 2 is allocated to the user process, but some part of partition 2 is wasted; this is
indicated by the blacked lines in the figure. In a multiprogramming environment the user space is
divided into a number of partitions, one partition per process. This subdivision is carried out
dynamically by the operating system, and the task is known as "Memory Management".
Efficient memory management is possible with multiprogramming: while a few processes are in main
memory, the remaining processes are waiting for I/O, and the processor would otherwise be idle. Thus
memory needs to be allocated efficiently, to pack as many processes into memory as possible.
LOGICAL VERSUS PHYSICAL ADDRESS SPACE
An address generated by the CPU is called a logical address, whereas the address seen by the
memory unit, after translation by the memory management unit, is called a physical address. For
example, let J1 be a program written by a user whose size is 100 KB, but which is loaded in main
memory from 2100 KB to 2200 KB; this actual loaded address in main memory is the physical address.
The set of all logical addresses generated by a program is referred to as the "Logical Address
Space". The set of physical addresses corresponding to these logical addresses is referred to as
the "Physical Address Space". In our example, 0 to 100 KB is the logical address space and 2100 to
2200 KB is the physical address space. Therefore:
Physical address = Logical address + Contents of the relocation register
2200 = 100 + 2100 (contents of the relocation register). Consider figure 2.

Figure 2

The runtime mapping from logical to physical addresses is done by the memory management
unit (MMU), which is a hardware device. The base register is also called the relocation register.
The value in the relocation register is added to every address generated by a user process at the
time it is sent to memory. For example, if the base is at 2100, then an attempt by the user to
address location 0 is dynamically relocated to location 2100; an access to location 100 is mapped
to location 2200.
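The relocation described above can be sketched in a few lines of C. This is only an illustration of what the MMU does in hardware; the relocation value 2100 is taken from the example in the text, and the function name physical_address is invented here.

#include <assert.h>
#include <stdio.h>

/* Relocation (base) register value from the example in the text. */
#define RELOCATION 2100u

/* Every logical address generated by the process is translated by
 * adding the relocation register, as the MMU does in hardware. */
unsigned physical_address(unsigned logical)
{
    return logical + RELOCATION;
}

int main(void)
{
    /* Logical address 0 maps to 2100; logical 100 maps to 2200. */
    assert(physical_address(0) == 2100);
    assert(physical_address(100) == 2200);
    printf("logical 0 -> physical %u\n", physical_address(0));
    return 0;
}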
SWAPPING
Swapping is a method to improve main memory utilization. For example, suppose main memory
holds 10 processes, which is its maximum capacity, and the CPU is currently executing
process 9. If, in the middle of execution, process 9 needs I/O, the CPU switches
to another job: process 9 is moved to disk and another process is loaded into main memory
in its place. When process 9 has completed its I/O operation, it is moved back into main
memory from the disk. Moving a process from main memory to disk
is called Swap out, and moving it from disk to main memory is called Swap in. This
mechanism is called Swapping. We can achieve efficient memory utilization with
swapping.
Swapping requires a Backing Store. The backing store is commonly a fast disk. It must be
large enough to accommodate copies of all process images for all users. When a process is
swapped out, its executable image is copied to the backing store. When it is swapped in, it is
copied into the new block allocated by the memory manager.
Memory Management Requirements

There are many methods and policies for memory management; observing these methods suggests
five requirements:

Relocation
Protection
Sharing
Logical Organization
Physical Organization

Relocation:
Relocation is the mechanism of converting a logical address into a physical address. An address
generated by the CPU is a logical address; the corresponding address in main memory is the
physical address:
Physical address = Contents of relocation register + Logical address
Relocation is necessary when a process is swapped back in from the backing store to main
memory. In most cases the process occupies the same location when it is swapped in, but
sometimes this is not possible, and at that time relocation is required.

Figure 3 Relocation

Protection
The word protection means providing security from unauthorized use of memory. The operating
system can protect memory with the help of base and limit registers. The base register contains
the starting address of the process. The limit register specifies the boundary of that job, so the
limit register is also called a fencing register. For better understanding consider the figure.

Figure 4 Main memory protection

The base register holds the smallest legal physical memory address, and the limit register
contains the size of the process. Figure 5 depicts the hardware protection
mechanism with base and limit registers. Each logical address is first compared with the
contents of the limit register; if it is less than the limit, it is an authorized access, and the
physical address (Logical + Base) is guaranteed to fall inside the process's partition, causing
no problems. Otherwise the access is illegal and there is a trap to the operating system.

Figure 5 Memory protection with base and limit
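The base-and-limit check can be sketched as follows. The register values are hypothetical (a process loaded at 2100 KB with size 100 KB, matching the earlier relocation example); the function names legal and relocate are invented for this sketch.

#include <assert.h>
#include <stdbool.h>

/* Hypothetical register values: process loaded at 2100, size 100. */
#define BASE  2100u   /* relocation (base) register */
#define LIMIT  100u   /* limit (fencing) register */

/* A logical address is legal only if it is below the limit register;
 * an illegal address would trap to the operating system. */
bool legal(unsigned logical)
{
    return logical < LIMIT;
}

/* A legal logical address is relocated by adding the base register. */
unsigned relocate(unsigned logical)
{
    return BASE + logical;
}

int main(void)
{
    assert(legal(0) && relocate(0) == 2100);
    assert(legal(99) && relocate(99) == 2199);
    assert(!legal(100));   /* out of bounds: would trap */
    return 0;
}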

Sharing
Generally, protection mechanisms are required when several processes access the same portion of
main memory. Accessing the same portion of main memory by a number of processes is called
"Sharing". If a number of processes are executing the same program, it is advantageous to allow
each process to access the same copy of the program rather than have its own copy; if each
process maintained a separate copy of that program, a lot of main memory would be wasted.
For example, if 3 users want to prepare their resumes using a word processor, the three users
share the word processor on the server, in place of individual copies of the word processor.
Consider the figure; it depicts the sharing of the word processor.
Figure 6 Memory Sharing

Logical organization
Main memory in a computer system is organized as a linear, one-dimensional address
space that consists of a sequence of bytes or words. Secondary memory, at its physical level, is
similarly organized. Although this organization closely mirrors the actual machine hardware, it
does not correspond to the way in which programs are typically constructed. Most programs are
organized into modules, some of which are unmodifiable (read-only or execute-only) and some of
which contain data that may be modified. If the operating system and computer hardware can
effectively deal with user programs and data in the form of modules of some sort, then a number
of advantages can be realized.
Physical organization
Computer memory is organized into at least two levels: main memory and secondary memory. Main
memory provides fast access at relatively high cost. In addition, main memory is volatile: that is,
it does not provide permanent storage. Secondary memory is slower and cheaper than main memory, and
it is usually not volatile. Thus secondary memory of large capacity can be provided to allow for
long-term storage of programs and data, while a smaller main memory holds programs and data
currently in use.
DYNAMIC LOADING AND DYNAMIC LINKING
The word 'loading' means loading a program (or module) from a secondary storage device (disk) into
main memory. Loading is of two types: compile-time loading and run-time loading. When all
the routines are loaded into main memory at compilation time, this is called static loading (or
compile-time loading). When the routines are loaded into main memory at execution (run) time, this
is called dynamic loading. For example, consider a small program in the C language.
#include <stdio.h>

int main(void)
{
    printf("Welcome to the world of O.S.\n");
    return 0;
}
Suppose the program occupies 1 KB in secondary storage but occupies hundreds of KB in main memory,
because some routines and header files (e.g., stdio.h) must be loaded into main memory to execute
the program. Generally this loading is done at execution time. The meaning of dynamic loading in a
single statement is: "With dynamic loading, a routine is not loaded until it is called". The main
advantage of dynamic loading is that an unused routine is never loaded, so we obtain better
memory-space utilization. This scheme is particularly useful when large amounts of code are needed
to handle infrequently occurring cases.

Figure 7 Step by step of user program

Linking the library files before execution is called static linking; linking at execution time is
called dynamic linking. You can observe these kinds of linking in figure 7. Most operating systems
support only static linking. The definition of dynamic linking in a single statement is: "The
linking is postponed until execution time".
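The idea that "a routine is not loaded until it is called" can be sketched with a stub that binds the routine only on its first use. This is a toy illustration: a real loader would fetch the routine from disk, and dynamic linking would go through an interface such as dlopen()/dlsym(). The names square, call_square, and load_count are invented for the sketch.

#include <assert.h>
#include <stddef.h>

static int square(int x) { return x * x; }   /* the "routine on disk" */

static int (*square_ptr)(int) = NULL;        /* stub: not loaded yet */
static int load_count = 0;                   /* how many loads happened */

/* Load the routine on the first call only, then jump through it. */
int call_square(int x)
{
    if (square_ptr == NULL) {
        square_ptr = square;                 /* "load" the routine */
        load_count++;
    }
    return square_ptr(x);
}

int main(void)
{
    assert(load_count == 0);      /* an unused routine is never loaded */
    assert(call_square(5) == 25);
    assert(call_square(6) == 36);
    assert(load_count == 1);      /* loaded exactly once, on first call */
    return 0;
}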
MEMORY ALLOCATION METHOD
Main memory must accommodate both the operating system and the
various user processes. The operating system is placed in either low
memory or high memory; it is common to place the operating system
in low memory. Consider the figure below.
There are many methods available for memory allocation. We are
going to discuss these methods in detail.

Figure 8

Single partition allocation


In this memory allocation method the operating system resides in low memory, and the remaining
memory is treated as a single partition. This single partition is available as user space, and only
one job can be loaded into this user space. Figure 9 depicts single partition allocation.
Memory allocation for a single partition
The short-term scheduler (CPU scheduler) selects a job from the ready queue for execution; the
dispatcher loads that job into main memory and connects the CPU to it. Main memory holds
only one process at a time, because the user space is treated as a single partition.
Advantages
The main advantage of this scheme is its simplicity; it does not require great expertise to understand.
Disadvantages
The main disadvantage of this scheme is that memory is not utilized fully; a lot of memory is
wasted. Further disadvantages are:
1. Poor utilization of the processor (waiting for I/O).
2. The user's job is limited to the size of available main memory.
3. Poor utilization of memory.
A flowchart of single-partition memory allocation is depicted in figure 9.

Figure 9 Flow Chart for Single Partition memory

Memory management function


Memory management is concerned with four functions:
1. Keeping track of memory.
2. Determining factors for memory policy.
3. Allocation of memory.
4. Deallocation of memory.

Now we can observe these four functions for single-partition memory management:
1. Keeping track of memory: total memory is allocated to the job.
2. Determining factors for memory policy: the job gets all memory when scheduled.
3. Allocation of memory: all of it is allocated to the job.
4. Deallocation of memory: when the job is done, the total memory is freed.

Multiple partitions memory management


This method can be implemented in 3 ways:
1. Fixed equal multiple partitions memory management,
2. Fixed variable multiple partitions memory management,
3. Dynamic multiple partitions memory management.

Now we discuss these schemes one by one.


Fixed equal multiple partitions MMS: In this memory management scheme the operating system occupies
the low memory, and the rest of main memory is available as user space. The user space is divided
into fixed partitions; the partition sizes depend on the operating system. For example, suppose the
total main memory size is 6 MB and 1 MB is occupied by the operating system. The remaining 5 MB is
partitioned into 5 equal fixed partitions (5 x 1 MB). J1, J2, J3, J4, J5 are the 5 jobs to be
loaded into main memory; their sizes are:

Job     Size
J1      450 KB
J2      1000 KB
J3      1024 KB
J4      1500 KB
J5      500 KB

Consider figure 10 for better understanding.

Internal and external fragmentation


Job 1 is loaded into partition 1. The maximum size of partition 1 is 1024 KB and the size of J1 is
450 KB, so 1024 - 450 = 574 KB of space is wasted; this wasted memory is called 'internal
fragmentation'. But there is not enough room to load Job 4, because the size of Job 4 is greater
than every partition, so an entire partition (partition 5) is wasted. This wasted memory is called
'external fragmentation'.
Therefore the total internal fragmentation is
= (1024 - 450) + (1024 - 1000) + (1024 - 500)
= 574 + 24 + 524
= 1122 KB

Figure 10 Main memory allocation

Therefore, the external fragmentation for this scheme is 1024 KB.


Memory wasted within a partition is called internal fragmentation, and
the wastage of an entire partition is called external fragmentation.
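The internal-fragmentation total above can be checked with a short calculation. The job sizes are from the example; the function name internal_frag is invented here. A job larger than a partition (J4) does not load at all, so it contributes external, not internal, fragmentation.

#include <assert.h>

#define NPART 5
#define PART_SIZE 1024u   /* each fixed partition is 1 MB = 1024 KB */

/* Sum the unused space inside each partition that holds a job. */
unsigned internal_frag(const unsigned *jobs, int n)
{
    unsigned frag = 0;
    for (int i = 0; i < n; i++)
        if (jobs[i] <= PART_SIZE)        /* job fits: the rest is wasted */
            frag += PART_SIZE - jobs[i];
    return frag;
}

int main(void)
{
    unsigned jobs[NPART] = {450, 1000, 1024, 1500, 500};
    /* (1024-450) + (1024-1000) + 0 + (1024-500) = 574 + 24 + 524 */
    assert(internal_frag(jobs, NPART) == 1122);
    return 0;
}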
Advantages
1. This scheme supports multiprogramming.
2. Efficient utilization of the processor and I/O devices.
3. It requires no special costly hardware.
4. Simple and easy to implement.

Disadvantages
1. This scheme suffers from internal as well as external fragmentation.
2. The single free area may not be large enough for a partition.
3. It requires more memory than the single partition method.
4. A job's partition size is limited to the size of physical memory.

Fixed variable partitions MMS: In this scheme the user space of main memory is divided into a
number of partitions, but the partitions are of different lengths. The operating system keeps a
table indicating which partitions of memory are available and which are occupied. When a process
arrives and needs memory, we search for a partition large enough for the process; if we find one,
we allocate the partition to that process. For example, assume we have 4000 KB of main memory
available, and the operating system occupies 500 KB. The remaining 3500 KB is left for user
processes, as shown below.

Job Queue
Job     Size      Arrival time
J1      825 KB    10 ms
J2      600 KB    5 ms
J3      1200 KB   20 ms
J4      450 KB    30 ms
J5      650 KB    15 ms

Partitions
Partition   Size
P1          700 KB
P2          400 KB
P3          525 KB
P4          900 KB
P5          350 KB
P6          625 KB

Ready queue (by arrival time): J2, J1, J5, J3, J4
1. Of the 5 jobs, J2 arrives first (5 ms). The size of J2 is 600 KB, so we search all partitions
from low memory to high memory for one large enough; P1 is greater than J2, so J2 is loaded into P1.
2. Of the remaining jobs, J1 arrives next (10 ms). Its size is 825 KB, so we search for a large
enough partition; P4 is large enough, so J1 is loaded into P4.
3. J5 arrives next; its size is 650 KB, and there is no partition large enough to load it, so J5
has to wait until a large enough partition is available.
4. J3 arrives next; its size is 1200 KB, and there is no partition large enough to load it.
5. J4 arrives last; its size is 450 KB, and partition P3 is large enough, so J4 is loaded into P3.
For better understanding consider the figure.

Figure 11 Memory Allocation

Partitions P2, P5, P6 are totally free; there are no processes in these partitions. This
wasted memory is external fragmentation. The total external fragmentation is 1375 KB (400
+ 350 + 625). The total internal fragmentation is
(700 - 600) + (525 - 450) + (900 - 825) = 250 KB.
The reader may have a doubt: the size of J2 is 600 KB and it is loaded into partition P1 in the
figure, giving an internal fragmentation of 100 KB; but if it were loaded into P6, the internal
fragmentation would be only 25 KB. Why is it loaded into P1? Three algorithms are available to
answer this question: the First-fit, Best-fit, and Worst-fit algorithms.
First-fit: Allocate the first partition that is big enough. Searching can start from either low
memory or high memory; we stop searching as soon as we find a free partition that is large enough.
Best-fit: Allocate the smallest partition that is big enough, i.e., select the partition giving the
least internal fragmentation.
Worst-fit: Search all the partitions and select the largest one that is big enough, i.e., the
partition giving the maximum internal fragmentation.
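The three placement algorithms can be sketched directly over the partition sizes from the example. The function names first_fit, best_fit, and worst_fit are invented; each returns the index of the chosen partition, or -1 if no partition is big enough.

#include <assert.h>

#define NPART 6

/* Partition sizes P1..P6 from the example (in KB). */
static const unsigned part[NPART] = {700, 400, 525, 900, 350, 625};

/* First-fit: first partition large enough, searching from low memory. */
int first_fit(unsigned size)
{
    for (int i = 0; i < NPART; i++)
        if (part[i] >= size) return i;
    return -1;
}

/* Best-fit: the smallest partition that is still big enough. */
int best_fit(unsigned size)
{
    int best = -1;
    for (int i = 0; i < NPART; i++)
        if (part[i] >= size && (best < 0 || part[i] < part[best]))
            best = i;
    return best;
}

/* Worst-fit: the largest partition that is big enough. */
int worst_fit(unsigned size)
{
    int worst = -1;
    for (int i = 0; i < NPART; i++)
        if (part[i] >= size && (worst < 0 || part[i] > part[worst]))
            worst = i;
    return worst;
}

int main(void)
{
    /* J2 (600 KB): first-fit picks P1 (700 KB), best-fit P6 (625 KB),
     * worst-fit P4 (900 KB) -- exactly the doubt raised in the text. */
    assert(first_fit(600) == 0);
    assert(best_fit(600)  == 5);
    assert(worst_fit(600) == 3);
    return 0;
}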
Advantages
1. This scheme supports multiprogramming.
2. Efficient processor utilization and memory utilization are possible.
3. Simple and easy to implement.
Disadvantages
1. This scheme suffers from internal and external fragmentation.
2. Large external fragmentation is possible.
Dynamic partitions memory management systems: To eliminate some of the problems with fixed
partitions, an approach known as dynamic partitioning was developed. In this method partitions are
created dynamically, so that each process is loaded into a partition of exactly the same size as
that process. In this scheme the entire user space is initially treated as one 'big hole'. The
boundaries of the partitions change dynamically, and they depend on the sizes of the processes.
Consider the previous example.
Job Queue
Job     Size      Arrival time
J1      825 KB    10 ms
J2      600 KB    5 ms
J3      1200 KB   20 ms
J4      450 KB    30 ms
J5      650 KB    15 ms

The job queue (by arrival time) is: J2, J1, J5, J3, J4.

Job J2 arrives first, so J2 is loaded into memory. Next J1 arrives and is loaded, followed by
J5, J3, and J4. Consider the figure for better understanding.

In parts (a), (b), (c), (d) of the figure, jobs J2, J1, J5, J3 are loaded. The last job is J4; the
size of J4 is 450 KB, but the available memory is only 225 KB. That is not enough to load J4, so
J4 has to wait until memory becomes available. Assume that after some time J5 finishes and
releases its memory. Afterwards the available memory is 225 + 650 = 875 KB, which is enough to
load J4. Consider parts (e) and (f) of the figure.

Advantages
1. In this scheme partitions change dynamically, so the scheme does not suffer from internal
fragmentation.
2. Efficient memory and processor utilization.
Disadvantages
1. This scheme suffers from external fragmentation.
Compaction
Compaction is a technique of collecting all the free spaces together into one block; this block
(or partition) can then be allotted to other jobs. For example, consider the memory allocation
figure for the previous example.
The total internal fragmentation in that scheme is 100 + 75 + 75 = 250 KB, and the total external
fragmentation is 400 + 350 + 625 = 1375 KB. Collecting the internal and external fragmentation
together into one block (250 + 1375 = 1625 KB) is called compaction. The compacted memory is
1625 KB; consider the figure.
Now the scheduler can load job J3 (1200 KB) into the compacted memory, so efficient memory
utilization is possible using compaction.
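The effect of compaction on the example can be checked with a short calculation: sliding the processes together merges every hole (the internal-fragmentation pieces and the free partitions) into one block. The function name compact is invented for this sketch.

#include <assert.h>

/* Merge all holes into one block; returns the size of the big hole. */
unsigned compact(const unsigned *holes, int n)
{
    unsigned total = 0;
    for (int i = 0; i < n; i++)
        total += holes[i];
    return total;
}

int main(void)
{
    /* Holes from the example: internal pieces (100, 75, 75) and the
     * free partitions (400, 350, 625), all in KB. */
    unsigned holes[] = {100, 75, 75, 400, 350, 625};
    unsigned big = compact(holes, 6);
    assert(big == 1625);   /* 250 + 1375 = 1625 KB compacted */
    assert(big >= 1200);   /* now job J3 (1200 KB) can be loaded */
    return 0;
}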

Figure 12

Paging
Paging is an efficient memory management scheme because it is a non-contiguous memory allocation
method. The previous methods (single partition, multiple partition) support contiguous memory
allocation: the entire process is loaded into one partition. In paging, the process is divided into
small parts, which are loaded anywhere in main memory.
The basic idea of paging is that physical memory (main memory) is divided into fixed-sized blocks
called frames, and the logical address space (the user job) is divided into fixed-sized blocks
called pages; the page size and the frame size must be equal. The size of a frame or page depends
on the operating system. Generally the page size is 4 KB.
In this scheme the operating system maintains a data structure called the page table, which is
used for mapping. The page table provides useful information: it tells which frames are allocated,
which frames are available, how many total frames there are, and so on. The general page table
consists of two fields: a page number and a frame number. Each operating system has its own method
for storing page tables; most allocate a page table for each process.
Every address generated by the CPU is divided into two parts: a 'page number' and a 'page offset'
(displacement). The page number is used as an index into the page table. For better understanding
consider the figure. The logical address space, that is, the CPU-generated address space, is
divided into pages, each page having a page number (P) and displacement (D). The pages are loaded
into available free frames in physical memory.

Figure 13 Structure of Paging Scheme

The mapping between page numbers and frame numbers is done by the page map table. The page map
table specifies which page is loaded into which frame; the displacement is carried over unchanged.
For better understanding consider the following example. There are two jobs in the ready queue;
the job sizes are 16 KB and 24 KB, and the page size is 4 KB. The available main memory is 72 KB
(18 frames), so job 1 is divided into 4 pages and job 2 into 6 pages. Each process maintains a
page table. Consider the figure for better understanding.

Figure 14 Example for paging

The four pages of job 1 are loaded into different locations in main memory. The OS provides a page
table for each process; the page table specifies each page's location in main memory. The capacity
of main memory in our example is 18 frames, but there are only two jobs (10 pages), so the
remaining 8 frames are free. The scheduler can use these free frames for other jobs.
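The page-number/offset translation described above can be sketched as follows. The 4 KB page size is from the text; the page table contents (frame_of) are made-up values for a hypothetical 16 KB job with 4 pages.

#include <assert.h>

#define PAGE_SIZE 4096u   /* 4 KB pages, as in the text */

/* Hypothetical page table: page i of the job lives in frame frame_of[i]. */
static const unsigned frame_of[4] = {5, 2, 11, 7};

/* Split a logical address into page number and offset, then map the
 * page to its frame; the offset (displacement) is carried over as-is. */
unsigned translate(unsigned logical)
{
    unsigned page   = logical / PAGE_SIZE;
    unsigned offset = logical % PAGE_SIZE;
    return frame_of[page] * PAGE_SIZE + offset;
}

int main(void)
{
    /* Address 0 is page 0, offset 0 -> frame 5, offset 0. */
    assert(translate(0) == 5 * PAGE_SIZE);
    /* Address 4100 is page 1, offset 4 -> frame 2, offset 4. */
    assert(translate(4100) == 2 * PAGE_SIZE + 4);
    return 0;
}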
Advantages
1. It supports time-sharing systems.
2. It does not suffer from external fragmentation.
3. It supports virtual memory.
4. Sharing of common code is possible.

Disadvantages
1. This scheme may suffer from 'page breaks'. For example, if the logical address space is 17 KB
and the page size is 4 KB, the job requires 5 frames; but the fifth frame contains only 1 KB, so
the remaining 3 KB is wasted. This is called a page break.
2. If the number of pages is high, it is difficult to maintain the page tables.
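The page-break arithmetic above generalizes to any job size: the number of pages is the ceiling of size/page, and the waste is whatever is left of the last page. The function names pages_needed and page_break_kb are invented for this sketch.

#include <assert.h>

#define PAGE_KB 4u   /* 4 KB pages, as in the example */

/* Ceiling division: pages needed to hold a job of size_kb. */
unsigned pages_needed(unsigned size_kb)
{
    return (size_kb + PAGE_KB - 1) / PAGE_KB;
}

/* Unused space in the last page: the "page break". */
unsigned page_break_kb(unsigned size_kb)
{
    unsigned rem = size_kb % PAGE_KB;
    return rem ? PAGE_KB - rem : 0;
}

int main(void)
{
    assert(pages_needed(17) == 5);    /* 17 KB job needs 5 frames */
    assert(page_break_kb(17) == 3);   /* 3 KB of the 5th frame wasted */
    assert(page_break_kb(16) == 0);   /* exact multiple: no waste */
    return 0;
}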


Shared pages
In a multiprogramming environment it is possible for a number of processes to share common code at
the same time, instead of maintaining multiple copies of the same code. The logical address space
is divided into pages, and these pages can be shared by a number of processes at a time. Pages
shared by a number of processes are called 'shared pages'. For example, consider a multiprogramming
environment with 10 users, of whom 3 wish to run a text editor to prepare their bio-data. Assume
the text editor requires 150 KB and each user's bio-data occupies 50 KB of data space. Without
sharing they would need 3 x (150 + 50) = 600 KB. But with shared paging the text editor is common
to all three jobs, so it requires 150 KB, and the user files require 50 x 3 = 150 KB.
Therefore 150 + 150 = 300 KB is enough to manage these three jobs instead of 600 KB, so the shared
pages mechanism saves 300 KB of space. Consider the figure for better understanding.
The page size and the frame size are 50 KB, so 3 frames are required for the text editor and 3
frames for the user files. Each process (P1, P2, P3) has a page table showing the frame numbers;
the first 3 frame numbers in each page table (3, 8, 0) are common, which means the three processes
share those three pages. The main advantage of shared pages is that efficient memory utilization
is possible.

Figure 15 Sharing code in a paging environment

Segmentation

Figure 16 Segmented address space

A segment can be defined as a logical grouping of instructions, such as a subroutine, an array, or
a data area. Every program (job) is a collection of these segments, and segmentation is the
technique for managing these segments. For example, consider the figure.
Each segment has a name and a length. The address of a segment specifies both the segment name
and the offset within the segment. For example, the length of the segment 'Main' is 100 KB, and
'Main' is the name of the segment. The operating system searches main memory for free space to
load a segment; this mapping is done by the segment table. Each entry of the segment table has a
segment 'base' and a segment 'limit'.
The segment base contains the starting physical address where the segment resides in memory, and
the segment limit specifies the length of the segment. See the figure for the segmentation
hardware.

Figure 17 Segmentation hardware

The logical address consists of two parts: a segment number (s) and an offset into that segment
(d). The segment number (s) is used as an index into the segment table. For example, consider the
figure.
The logical address space (a job) is divided into 4 segments, numbered 0 to 3. Each segment has an
entry in the segment table: the 'limit' specifies the size of the segment and the 'base' specifies
its starting address. For example, segment 0 is loaded in main memory from 1500 KB to 2500 KB, so
1500 KB is the base and 2500 - 1500 = 1000 KB is the limit.
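The segment-table lookup can be sketched as follows. The entry for segment 0 (base 1500, limit 1000) follows the example in the text; the other entries are made-up values, and the function names seg_legal and seg_translate are invented.

#include <assert.h>
#include <stdbool.h>

/* One segment-table entry: base (start address) and limit (length). */
struct seg { unsigned base, limit; };

static const struct seg seg_table[4] = {
    {1500, 1000},   /* segment 0: 1500 KB .. 2500 KB, as in the text */
    {3200,  400},   /* segments 1..3: hypothetical values */
    {4000,  250},
    {4700,  100},
};

/* An offset is legal only if it is within the segment's limit;
 * otherwise the hardware traps to the operating system. */
bool seg_legal(unsigned s, unsigned d)
{
    return d < seg_table[s].limit;
}

/* A legal <s, d> address maps to base + offset. */
unsigned seg_translate(unsigned s, unsigned d)
{
    return seg_table[s].base + d;
}

int main(void)
{
    assert(seg_legal(0, 0)   && seg_translate(0, 0)   == 1500);
    assert(seg_legal(0, 999) && seg_translate(0, 999) == 2499);
    assert(!seg_legal(0, 1000));   /* beyond the limit: would trap */
    return 0;
}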
Advantages of segmentation
1. Eliminates fragmentation: by moving segments around, fragmented memory space can be combined
into a single free area.
2. Segmentation supports virtual memory.
3. Allows dynamically growing segments.
4. Facilitates shared segments.
5. Dynamic linking and loading of segments.

Figure 18 Example of Segmentation

Segmentation with paging


Both paging and segmentation have advantages and disadvantages, and it is possible to combine the
two schemes to improve on each. The combined scheme is 'paging the segments': each segment is
divided into pages, and each segment maintains a page table. The logical address is therefore
divided into 3 parts <S, P, D>: the segment number (S), the page number (P), and the displacement
or offset (D). For better understanding consider the figure.
The logical address space is divided into 3 segments, numbered 0 to 2. Each segment maintains a
page table, and the mapping between pages and frames is done by the page table. For example, frame
number 8 shows the address (1, 3): 1 stands for the segment number and 3 stands for the page
number. The main advantages of this scheme are that it avoids fragmentation and supports the
user's view of memory. The hardware support for this combined scheme is shown in the figure.
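The <S, P, D> translation can be sketched with per-segment page tables. The entry mapping (segment 1, page 3) to frame 8 follows the example in the text; every other table entry and the page size are made-up values, and the function name spd_translate is invented.

#include <assert.h>

#define PAGE_SIZE 1024u   /* page size chosen for this sketch */

/* Per-segment page tables: page_table[s][p] is the frame holding
 * page p of segment s.  Values are hypothetical, except (1, 3) -> 8. */
static const unsigned page_table[3][4] = {
    { 2,  5,  1,  6},
    { 4,  9,  3,  8},   /* (s=1, p=3) -> frame 8, as in the text */
    {10, 12,  7,  0},
};

/* Translate a <segment, page, displacement> logical address. */
unsigned spd_translate(unsigned s, unsigned p, unsigned d)
{
    return page_table[s][p] * PAGE_SIZE + d;
}

int main(void)
{
    assert(spd_translate(1, 3, 0) == 8 * PAGE_SIZE);
    assert(spd_translate(0, 0, 100) == 2 * PAGE_SIZE + 100);
    return 0;
}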

Figure 20 Paged segmentation

Figure 19 Paged segmentation Memory Management Schemes

Protection
The word 'protection' means security from unauthorized references to main memory. To provide
protection, the operating system uses different types of mechanisms. In the paging memory
management scheme, the operating system provides protection bits associated with each frame;
generally these bits are kept in the page table. One such bit is the valid/invalid bit.
Consider the figure below for better understanding.
The page map table contains a field holding the valid/invalid bit. If the bit is set to valid, the
particular page is loaded in main memory; if it is invalid, the page is not in the process's
logical address space. Illegal addresses are trapped using the valid/invalid bit: the operating
system sets this bit for each page to allow or disallow access to that page.
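The valid/invalid check can be sketched as follows. The page-table contents are made-up values; the names pte, page_valid, and page_frame are invented for this sketch.

#include <assert.h>
#include <stdbool.h>

#define NPAGES 6

/* Page-table entry with a valid/invalid protection bit: the frame
 * number is meaningful only when the page is marked valid. */
struct pte { unsigned frame; bool valid; };

static const struct pte page_table[NPAGES] = {
    {3, true}, {7, true}, {0, false},   /* page 2 not in address space */
    {5, true}, {0, false}, {1, true},
};

/* Access is allowed only for a valid page; an invalid page (or a page
 * number beyond the table) is an illegal reference and would trap. */
bool page_valid(unsigned page)
{
    return page < NPAGES && page_table[page].valid;
}

unsigned page_frame(unsigned page)
{
    return page_table[page].frame;
}

int main(void)
{
    assert(page_valid(0) && page_frame(0) == 3);
    assert(!page_valid(2));   /* trapped by the valid/invalid bit */
    assert(!page_valid(9));   /* outside the page table entirely */
    return 0;
}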

Figure 21 Protection in paging

COMPARISONS BETWEEN PAGING AND SEGMENTATION


Paging and segmentation both have advantages and disadvantages; sometimes paging is
useful and sometimes segmentation is useful. Consider the table below for a comparison.
1. Paging: The main memory is partitioned into frames (or blocks).
   Segmentation: The main memory is partitioned into segments.
2. Paging: The logical address space is divided by the compiler or the memory management unit (MMU).
   Segmentation: The logical address space is divided into segments, as specified by the programmer.
3. Paging: This scheme suffers from internal fragmentation (page breaks).
   Segmentation: This scheme suffers from external fragmentation.
4. Paging: The operating system maintains a free-frames list; it need not search for a free frame.
   Segmentation: The operating system maintains the particulars of available memory.
5. Paging: The operating system maintains a page map table for mapping between pages and frames.
   Segmentation: The operating system maintains a segment map table for mapping purposes.
6. Paging: This scheme does not support the user's view of memory.
   Segmentation: This scheme supports the user's view of memory.
7. Paging: The processor uses the page number and displacement (P, D) to calculate the absolute address.
   Segmentation: The processor uses the segment number and displacement (S, D) to calculate the absolute address.
8. Paging: Multilevel paging is possible.
   Segmentation: Multiple segmentation is possible but of no use.
