Memory Management
Background
Program must be brought (from disk) into memory and placed within a process for it to be run
Main memory and registers are the only storage the CPU can access directly
The memory unit sees only a stream of addresses: read requests, or address + data for write requests
A pair of base and limit registers define the logical address space
CPU must check every memory access generated in user mode to be sure it is between base and
limit for that user
Address Binding
Programs on disk, ready to be brought into memory to execute, form an input queue
Address binding of instructions and data to memory addresses can happen at three different stages
Compile time: If memory location known a priori, absolute code can be generated; must
recompile code if starting location changes
Load time: Must generate relocatable code if memory location is not known at compile time
Execution time: Binding delayed until run time if the process can be moved during its execution
from one memory segment to another
Need hardware support for address maps (e.g., base and limit registers)
The concept of a logical address space that is bound to a separate physical address space is central to
proper memory management
Logical and physical addresses are the same in compile-time and load-time address-binding schemes;
logical (virtual) and physical addresses differ in execution-time address-binding scheme
Logical address space is the set of all logical addresses generated by a program
Physical address space is the set of all physical addresses corresponding to those logical addresses
To start, consider simple scheme where the value in the relocation register is added to every address
generated by a user process at the time it is sent to memory
The user program deals with logical addresses; it never sees the real physical addresses
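The relocation scheme above can be sketched in a few lines; a minimal sketch, with made-up register values (14000 and 3000 are illustrative, not from the slides):

```python
# Sketch of dynamic relocation with a limit check, as performed by a
# simple MMU. Register values are illustrative examples.
RELOCATION = 14000   # relocation register: added to every logical address
LIMIT = 3000         # limit register: logical addresses must be < LIMIT

def translate(logical_addr):
    """Map a logical address to a physical address, trapping on overflow."""
    if logical_addr < 0 or logical_addr >= LIMIT:
        raise MemoryError(f"addressing error: {logical_addr} outside [0, {LIMIT})")
    return RELOCATION + logical_addr

print(translate(346))  # 14346 -- the user program only ever saw 346
```

The user process never observes the 14000 base; it is added transparently on every memory reference.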
Dynamic Linking
Static linking: system libraries and program code combined by the loader into the binary program image
Small piece of code, stub, used to locate the appropriate memory-resident library routine
Stub replaces itself with the address of the routine, and executes the routine
Swapping
A process can be swapped temporarily out of memory to a backing store, and then brought
back into memory for continued execution
Total physical memory space of processes can exceed physical memory
Backing store: fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images
Roll out, roll in: swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed
Major part of swap time is transfer time; total transfer time is directly proportional to the amount
of memory swapped
System maintains a ready queue of ready-to-run processes which have memory images on disk
Does the swapped out process need to swap back in to same physical addresses?
Depends on address binding method
Plus consider pending I/O to / from process memory space
Modified versions of swapping are found on many systems (e.g., UNIX, Linux, and Windows)
If the next process to be put on the CPU is not in memory, need to swap out a process and swap in the target process
Swap time can be reduced by reducing the amount of memory swapped, i.e., by knowing how much memory is really being used
System calls to inform the OS of memory use, e.g., via request_memory() and release_memory()
Pending I/O: can't swap the process out, as the I/O would occur to the wrong process's memory
Contiguous Allocation
Resident operating system, usually held in low memory with interrupt vector
Relocation registers used to protect user processes from each other, and from changing operating-system
code and data
Limit register contains the range of logical addresses: each logical address must be less than the limit register
Can then allow actions such as kernel code being transient and kernel changing size
Multiple-partition allocation
Hole: block of available memory; holes of various sizes are scattered throughout memory
When a process arrives, it is allocated memory from a hole large enough to accommodate it
[Figure: multiple-partition allocation over time; as processes such as process 8 terminate and process 9 and process 10 arrive, holes of various sizes open up between the OS and the allocated partitions]
First-fit: Allocate the first hole that is big enough
Best-fit: Allocate the smallest hole that is big enough; must search entire list, unless ordered by size
Worst-fit: Allocate the largest hole; must also search entire list
First-fit and best-fit better than worst-fit in terms of speed and storage utilization
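The three strategies can be compared on a small example; a sketch with a made-up hole list (sizes are illustrative):

```python
# Sketch comparing first-fit, best-fit and worst-fit on a made-up hole list.
holes = [100, 500, 200, 300, 600]   # hole sizes in KB, in address order
request = 212                       # size of the arriving process, in KB

# First-fit: first hole (in address order) that is big enough
first_fit = next(h for h in holes if h >= request)
# Best-fit: smallest hole that is big enough
best_fit = min(h for h in holes if h >= request)
# Worst-fit: largest hole
worst_fit = max(h for h in holes if h >= request)

print(first_fit, best_fit, worst_fit)  # 500 300 600
```

Note how best-fit leaves the smallest leftover fragment (88 KB here), while worst-fit leaves the largest.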
Fragmentation
External Fragmentation: total memory space exists to satisfy a request, but it is not contiguous
Internal Fragmentation: allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition that is not being used
First-fit analysis reveals that for every N blocks allocated, another 0.5 N blocks are lost to fragmentation, so as much as one-third of memory may be unusable (the 50-percent rule)
Fragmentation (Cont.)
Compaction: shuffle memory contents to place all free memory together in one large block; possible only if relocation is dynamic and done at execution time
I/O problem: cannot move a job while it is involved in I/O into its memory; either latch the job in memory, or do I/O only into OS buffers
Segmentation
[Figure: user's view of a program as a collection of segments (1-4) in user space]
Segmentation Architecture
Segment table: maps two-dimensional user-defined addresses into one-dimensional physical addresses; each table entry has:
base: contains the starting physical address where the segment resides in memory
limit: specifies the length of the segment
Segment-table base register (STBR) points to the segment table's location in memory
Protection
read/write/execute privileges
Protection bits associated with segments; code sharing occurs at segment level
Segmentation Hardware
Paging
Physical address space of a process can be noncontiguous; process is allocated physical memory
whenever the latter is available
To run a program of size N pages, need to find N free frames and load program
Address generated by the CPU is divided into:
Page number (p): used as an index into a page table which contains the base address of each page in physical memory
Page offset (d): combined with the base address to define the physical memory address that is sent to the memory unit
For a logical address space of size 2^m and pages of size 2^n, the layout is: page number p (m-n bits), then page offset d (n bits)
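The page-number/offset split is a pair of shift-and-mask operations; a sketch assuming m = 16 and n = 10 (a 1 KB page, values chosen for illustration):

```python
# Sketch: splitting a logical address into page number and offset.
# Assumes a 2^m = 2^16 logical address space with 2^n = 2^10 (1 KB) pages.
m, n = 16, 10

def split(logical_addr):
    page = logical_addr >> n                # top m-n bits: page number p
    offset = logical_addr & ((1 << n) - 1)  # low n bits: page offset d
    return page, offset

print(split(0x2A7F))  # (10, 639): page 10, offset 0x27F
```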
Paging Hardware
Paging Example
Page 32
Paging (Cont.)
Page 33
Free Frames
[Figure: free-frame list before and after allocation (courtesy of Operating System Concepts by Silberschatz, Galvin and Gagne)]
Page 34
In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data / instruction
The two-memory-access problem can be solved by the use of a special fast-lookup hardware cache called associative memory
or translation look-aside buffers (TLBs)
Some TLBs store address-space identifiers (ASIDs) in each TLB entry; an ASID uniquely identifies each process and is used to provide address-space protection for that process
On a TLB miss, the value is loaded into the TLB for faster access next time
Page 35
Page 36
Effective Access Time (EAT)
Hit ratio: percentage of times that a page number is found in the associative registers; ratio related to number of associative registers
EAT = hit ratio x (TLB search + memory access) + (1 - hit ratio) x (TLB search + 2 x memory access)
Consider hit ratio = 80%, 20ns for TLB search, 100ns for memory access:
EAT = 0.80 x 120 + 0.20 x 220 = 140ns
Consider a more realistic hit ratio -> 99%, 20ns for TLB search, 100ns for memory access:
EAT = 0.99 x 120 + 0.01 x 220 = 121ns
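The effective-access-time arithmetic is easy to check; a sketch using the 20 ns TLB search and 100 ns memory access figures from the text:

```python
# Sketch: effective access time with a TLB (20 ns search, 100 ns memory).
def eat(hit_ratio, tlb=20, mem=100):
    hit = tlb + mem       # TLB hit: one memory access for the data
    miss = tlb + 2 * mem  # TLB miss: page-table access + data access
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(round(eat(0.80), 1))  # 140.0 ns
print(round(eat(0.99), 1))  # 121.0 ns
```

Raising the hit ratio from 80% to 99% cuts the average penalty of the slow path nearly to zero.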
Memory Protection
Memory protection implemented by associating protection bit with each frame to indicate if read-only or
read-write access is allowed
Valid-invalid bit attached to each entry in the page table:
valid: indicates that the associated page is in the process's logical address space, and is thus a legal page
invalid: indicates that the page is not in the process's logical address space
Memory structures for paging can get huge using straightforward methods
Consider a 32-bit logical address space with a 4 KB page size: the page table would have about 1 million entries; if each entry is 4 bytes -> 4 MB of physical memory for the page table alone
Hierarchical Paging
A logical address (on 32-bit machine with 1K page size) is divided into:
a page number consisting of 22 bits
a page offset consisting of 10 bits
Since the page table is paged, the page number is further divided into:
a 12-bit outer index p1 and a 10-bit inner index p2
Layout: p1 (12 bits) | p2 (10 bits) | d (10 bits)
where p1 is an index into the outer page table, and p2 is the displacement within the page of the inner page
table
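The 12/10/10 split above can be sketched directly with shifts and masks (the sample address is arbitrary):

```python
# Sketch: two-level split of a 32-bit logical address with a 1 KB page,
# using the 12/10/10 bit layout described above.
def split_two_level(addr):
    d = addr & 0x3FF            # low 10 bits: page offset
    p2 = (addr >> 10) & 0x3FF   # next 10 bits: inner page-table index
    p1 = addr >> 20             # top 12 bits: outer page-table index
    return p1, p2, d

print(split_two_level(0x00ABC123))  # (10, 752, 291)
```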
Address-Translation Scheme
Hashed Page Tables: common in address spaces larger than 32 bits; the virtual page number is hashed into a page table
This page table contains a chain of elements hashing to the same location
Each element contains (1) the virtual page number, (2) the value of the mapped page frame, and (3) a pointer to the next element
Virtual page numbers are compared in this chain, searching for a match
Clustered page tables: similar to hashed, but each entry refers to several pages (such as 16) rather than 1
Especially useful for sparse address spaces (where memory references are non-contiguous and scattered)
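The chained-bucket structure can be sketched as a list of chains; a minimal sketch (bucket count, names and sample values are illustrative):

```python
# Sketch of a hashed page table: each bucket holds a chain of
# (virtual page number, frame number) pairs.
NUM_BUCKETS = 16
table = [[] for _ in range(NUM_BUCKETS)]

def map_page(vpn, frame):
    table[hash(vpn) % NUM_BUCKETS].append((vpn, frame))

def lookup(vpn):
    # Walk the chain, comparing virtual page numbers until a match is found
    for v, frame in table[hash(vpn) % NUM_BUCKETS]:
        if v == vpn:
            return frame
    raise KeyError(f"no mapping for VPN {vpn}")

map_page(0x1A2B3, 42)
print(lookup(0x1A2B3))  # 42
```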
Virtual Memory
Background
Logical address space can therefore be much larger than physical address space
Virtual memory can be implemented via:
Demand paging
Demand segmentation
Demand Paging
Bring a page into memory only when it is needed:
Less I/O needed, less memory needed
Faster response
More users
Lazy swapper: never swaps a page into memory unless the page will be needed; a swapper that deals with pages is a pager
Valid-Invalid Bit
[Figure: page table listing a frame number and a valid-invalid bit per entry; memory-resident pages are marked v, the rest i]
Page Fault
If there is a reference to a page, the first reference to that page will trap to the operating system: a page fault
1. Operating system looks at another table to decide: invalid reference -> abort; just not in memory -> continue
2. Find a free frame
3. Swap the page into the frame via a scheduled disk operation
4. Reset tables to indicate the page is now in memory; set validation bit = v
5. Restart the instruction that caused the page fault
OS sets instruction pointer to first instruction of process, non-memory-resident -> page fault
Actually, a given instruction could access multiple pages -> multiple page faults
Instruction restart
Instruction Restart
Consider an instruction that could access several different locations, such as a block move
Worst-case steps in handling a page fault:
1. Trap to the operating system
2. Save the user registers and process state
3. Determine that the interrupt was a page fault
4. Check that the page reference was legal and determine the location of the page on the disk
5. Issue a read from the disk to a free frame:
1. Wait in a queue for this device until the read request is serviced
2. Wait for the device seek and/or latency time
3. Begin the transfer of the page to a free frame
6. While waiting, allocate the CPU to some other user
7. Receive an interrupt from the disk I/O subsystem (I/O completed)
8. Save the registers and process state for the other user
9. Determine that the interrupt was from the disk
10. Correct the page table and other tables to show page is now in memory
11. Wait for the CPU to be allocated to this process again
12. Restore the user registers, process state, and new page table, and then resume the interrupted instruction
Page fault rate 0 <= p <= 1: if p = 0, no page faults; if p = 1, every reference is a fault
EAT = (1 - p) x memory access + p x (page fault overhead + swap page out + swap page in)
Example: memory access time = 200ns, average page-fault service time = 8 milliseconds
EAT = (1 - p) x 200 + p x 8,000,000 = 200 + p x 7,999,800
If we want performance degradation of less than 10 percent: 220 > 200 + p x 7,999,800, i.e., p < .0000025 (less than one page fault in every 400,000 memory accesses)
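The bound on p follows from simple arithmetic; a sketch using a 200 ns memory access and an 8 ms page-fault service time:

```python
# Sketch: effective access time under demand paging.
MEM_NS = 200            # memory access time, ns
FAULT_NS = 8_000_000    # page-fault service time: 8 ms in ns

def eat(p):
    return (1 - p) * MEM_NS + p * FAULT_NS

# Largest p that keeps the slowdown under 10% (EAT < 220 ns)
p_bound = (1.1 * MEM_NS - MEM_NS) / (FAULT_NS - MEM_NS)
print(p_bound)  # about 2.5e-06
```

Even a tiny fault rate dominates the average, because a fault costs 40,000 times a normal access.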
Demand page in from program binary on disk, but discard rather than paging out when freeing frame
Page replacement: find some page in memory, but not really in use, and page it out
Performance: want an algorithm which will result in the minimum number of page faults
Page Replacement
Prevent over-allocation of memory by modifying page-fault service routine to include page replacement
Use modify (dirty) bit to reduce overhead of page transfers: only modified pages are written to disk
Page replacement completes the separation between logical memory and physical memory: a large virtual memory can be provided on a smaller physical memory
Basic page replacement:
1. Find the location of the desired page on disk
2. Find a free frame: if there is a free frame, use it; if there is no free frame, use a page-replacement algorithm to select a victim frame, and write the victim frame to disk if dirty
3. Bring the desired page into the (newly) free frame; update the page and frame tables
4. Continue the process by restarting the instruction that caused the trap
Note: now potentially 2 page transfers per page fault, increasing the EAT
Page-replacement algorithm
Evaluate algorithm by running it on a particular string of memory references (reference string) and
computing the number of page faults on that string
Repeated access to the same page does not cause a page fault
First-In-First-Out (FIFO) Algorithm
[Figure: FIFO replacement with 3 frames on the example reference string yields 15 page faults]
Can vary by reference string: consider 1,2,3,4,1,2,5,1,2,3,4,5
Belady's Anomaly: adding more frames can cause more page faults
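The anomaly can be demonstrated directly on the reference string above; a small FIFO simulation (function names are illustrative):

```python
# Sketch: counting FIFO page faults to illustrate Belady's anomaly.
from collections import deque

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:          # page fault
            faults += 1
            if len(frames) == nframes:  # frames full: evict oldest page
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -- more frames, yet more faults
```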
Optimal Algorithm
Replace page that will not be used for longest period of time
Least Recently Used (LRU): replace the page that has not been used for the longest period of time
Counter implementation
Every page entry has a counter; every time page is referenced through this entry, copy the clock into
the counter
When a page needs to be changed, look at the counters to find smallest value
Stack implementation: keep a stack of page numbers in a doubly linked form
Page referenced: move it to the top (requires 6 pointers to be changed); no search needed for replacement
LRU and OPT are cases of stack algorithms that don't have Belady's Anomaly
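An LRU simulation is a small variation on FIFO; a sketch using an ordered dict as the "stack" (most recently used pages move to the end):

```python
# Sketch: counting LRU page faults; the OrderedDict plays the role of the
# doubly linked stack described above.
from collections import OrderedDict

def lru_faults(refs, nframes):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # referenced: move to top
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10
```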
Working-Set Model
Storage Management
Magnetic Disks
Performance: transfer rate and positioning time (seek time plus rotational latency)
Average rotational latency = half a rotation = (60 / RPM) / 2 seconds
(From Wikipedia)
Access Latency = Average access time = average seek time + average latency
Average I/O time = average access time + (amount to transfer / transfer rate) + controller overhead
For example, to transfer a 4KB block on a 7200 RPM disk with a 5ms average seek time, a 1Gb/sec transfer rate, and a .1ms controller overhead:
average I/O time = 5ms + 4.17ms (half rotation at 7200 RPM) + 4KB / 1Gb/sec + 0.1ms = 9.27ms + 0.033ms, or about 9.3ms
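The example works out as follows; a sketch with the figures from the text (4 KB block, 7200 RPM, 5 ms seek, 1 Gb/s transfer, 0.1 ms controller overhead):

```python
# Sketch: average I/O time for the disk example in the text.
RPM, SEEK_MS, RATE_BPS, CTRL_MS = 7200, 5.0, 1e9, 0.1
BLOCK_BITS = 4 * 1024 * 8                 # 4 KB block, in bits

latency_ms = (60_000 / RPM) / 2           # half a rotation, in ms
transfer_ms = BLOCK_BITS / RATE_BPS * 1000
io_ms = SEEK_MS + latency_ms + transfer_ms + CTRL_MS
print(round(io_ms, 2))  # 9.3 ms
```

Note that seek and rotational latency dominate; the actual transfer of 4 KB contributes only about 0.03 ms.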
Solid-State Disks
Less capacity
Buses can be too slow -> connect directly to PCI, for example
Disk Scheduling
The operating system is responsible for using hardware efficiently; for the disk drives, this means having a fast access time and high disk bandwidth
Disk bandwidth is the total number of bytes transferred, divided by the total time between the first request
for service and the completion of the last transfer
SCAN and C-SCAN perform better for systems that place a heavy load on the disk
Less starvation
The disk-scheduling algorithm should be written as a separate module of the operating system, allowing it to
be replaced with a different algorithm if necessary
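A disk-scheduling algorithm such as SCAN is easy to evaluate in terms of total head movement; a sketch for a hypothetical request queue (cylinder numbers and the starting position are made-up, and the head is assumed to sweep toward cylinder 0 first):

```python
# Sketch: total head movement under SCAN for a hypothetical request queue.
def scan_movement(start, requests, max_cyl=199):
    # Head sweeps down to cylinder 0, then reverses and travels up
    # to the highest pending request.
    highest = max((r for r in requests if r > start), default=0)
    return start + highest

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(scan_movement(53, queue))  # 236 cylinders of head movement
```

Requests below the start are served on the way down at no extra cost, which is why SCAN performs well under heavy load.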
File
Page 87
File Attributes
Time, date, and user identification data for protection, security, and usage monitoring
Information about files is kept in the directory structure, which is maintained on the disk
Page 88
File Operations
Create
Write
Read
Delete
Truncate
Open(Fi): search the directory structure on disk for entry Fi, and move the content of the entry to memory
Close(Fi): move the content of entry Fi in memory to the directory structure on disk
Page 89
Grouping: logical grouping of files by properties (e.g., all Java programs, all games, ...)
Page 90
Single-Level Directory
Naming problem
Grouping problem
Page 91
Two-Level Directory
Path name
Can have the same file name for different users
Efficient searching
No grouping capability
Page 92
Tree-Structured Directories
Efficient searching
Grouping capability
Current (working) directory: cd /spell/mail/prog; type list
Page 93
Delete a file: rm <file-name>
[Figure: tree-structured directory with subdirectories and files such as mail, prog, copy, prt, exp, count]
Acyclic-Graph Directories
Entry-hold-count solution
Garbage collection
Every time a new link is added, use a cycle-detection algorithm to determine whether it is OK
Protection
File owner/creator should be able to control what can be done, and by whom
Types of access
Read
Write
Execute
Append
Delete
List
Mode of access: read, write, execute
Three classes of users on Unix / Linux:
a) owner access: RWX = 111 (7)
b) group access: RWX = 110 (6)
c) public access: RWX = 001 (1)
Ask manager to create a group (unique name), say G, and add some users to the group.
For a particular file (say game), define an appropriate access:
chmod 761 game
Attach a group to a file: chgrp G game
File-System
What is the fundamental difference between different file systems, like NTFS and ext4?
Goals of Protection
Each object has a unique name and can be accessed through a well-defined set of operations
Protection problem - ensure that each object is accessed correctly and only by those processes that are
allowed to do so
Principles of Protection
Programs, users and systems should be given just enough privileges to perform their tasks
Rough-grained privilege management: easier and simpler, but least privilege is now applied in large chunks
For example, traditional Unix processes either have the abilities of the associated user, or of root
Fine-grained management: file ACLs, role-based access control (RBAC)
Access Matrix
If a process in Domain Di tries to do op on object Oj, then op must be in the access matrix
User who creates an object can define the access column for that object
Special access right: owner of Oi
Mechanism
It ensures that the matrix is only manipulated by authorized agents and that rules are strictly enforced
Policy
System secure if resources used and accessed as intended under all circumstances
Unachievable
Breach of confidentiality
Breach of integrity
Theft of service
Breach of availability
Replay attack
Man-in-the-middle attack
Intruder sits in data flow, masquerading as sender to receiver and vice versa
Session hijacking
Impossible to have absolute security, but make cost to perpetrator sufficiently high to deter most intruders
Physical
Human
Operating System
Network
Program Threats
Trojan Horse
Exploits mechanisms for allowing programs written by users to be executed by other users
Trap Door
Logic Bomb
Stack and buffer overflow: write past the arguments on the stack into the return address on the stack
Thanks