
Operating System

Shadow tables:
The operating system can sometimes be thought of as an extension of the abstractions
provided by the hardware. However, when the table format is defined by the hardware (such
as for a page table entry), you cannot change that format.
The general solution is a technique that is sometimes called a shadow table.
The idea of a shadow table is simple (and familiar to Fortran programmers!):
● Consider the hardware defined data structure as an array.
● For the new information that you want to add, define a new (shadow) array.
● There is one entry in the shadow array for each entry in the hardware array.
● For each new item you want to add to the data structure, you add a new data member
to the shadow array.
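The parallel-array idea can be sketched in a few lines (the entry fields and names here are illustrative, not any particular architecture's format):

```python
# Hardware-defined page-table entries: their format is fixed by the MMU,
# so no fields can be added to them. (Illustrative layout only.)
hw_page_table = [
    {"frame": 7, "present": True},
    {"frame": 3, "present": False},
    {"frame": 9, "present": True},
]

# Shadow table: one entry per hardware entry, holding the extra
# OS-private data members we want to associate with each page.
shadow_table = [{"last_ref_time": 0, "pinned": False} for _ in hw_page_table]

def touch(page, now):
    # Record OS bookkeeping without altering the hardware-defined format.
    shadow_table[page]["last_ref_time"] = now

touch(1, now=42)
print(shadow_table[1]["last_ref_time"])  # 42
```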

Memory Hierarchy
Memory can be generalized into a five-level hierarchy based upon intended use and speed:

a) Registers: Registers are typically Static RAM in the processor that hold a data word,
which on modern processors is typically 64 or 128 bits. The most important register,
found in all processors, is the program counter.

b) Cache: Cache is also usually found within the processor, but occasionally it may be
a separate chip. Divided into levels, cache holds frequently used chunks of data from
main memory. The more often a piece of data is used, the closer its average access
latency approaches cache speed.

c) Main Memory: Main memory is relatively fast and holds most of the data and
instructions that are needed by currently running programs. This used to be a precious
resource up until the mid to late 2000s. Today, memory is so plentiful that modern-day
operating systems will use unused portions of main memory to store the core parts of
programs that are used a lot (which is also called caching) so that when opening that
program, it appears to load a lot faster.

d) Secondary Memory: Secondary memory is where data can be permanently stored,
usually a hard disk drive (HDD) or solid-state drive (SSD). The unfortunate thing is
that secondary memory is separated from main memory by a huge latency gap, and a lot
of work goes into closing it.

e) Removable memory: Data that's intended to move around resides on removable
memory. Examples include floppy disks, CDs and DVDs, and USB thumb drives.

Page Faults
A page fault occurs when a program attempts to access a page that is not currently stored
in physical memory.
Handling of a Page Fault
1. Check the location of the referenced page in the page map table.
2. If a page fault occurred, call on the operating system to fix it.
3. Using a page-replacement algorithm, find a free frame (or select a victim frame).
4. Read the data from disk into memory.
5. Update the page map table for the process.
6. The instruction that caused the page fault is restarted when the process resumes
execution.

Effective Access Time (EAT)

The formula for Effective Access Time is

EAT = (1 - p) * memory access time + p * (page fault overhead + swap page out + swap page in +
restart overhead)
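As a sketch, the formula is easy to evaluate once the per-fault costs are lumped into one overhead figure (the numbers below are illustrative):

```python
def effective_access_time(p, mem_access, fault_overhead):
    # EAT = (1 - p) * memory access + p * total page-fault cost, where
    # fault_overhead bundles fault service, swap out, swap in and restart.
    return (1 - p) * mem_access + p * fault_overhead

# Example: 1 us memory access, 10,000 us total fault cost, p = 0.0001.
print(effective_access_time(0.0001, 1, 10_000))  # ~1.9999 us
```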

Page selection and Replacement:

First In First Out


This is the simplest page replacement algorithm. In this algorithm, the operating system keeps
track of all pages in memory in a queue, with the oldest page at the front. When a
page needs to be replaced, the page at the front of the queue is selected for removal.

For example, consider page reference string 1, 3, 0, 3, 5, 6 and 3 page slots.

Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3
Page Faults.
When 3 comes, it is already in memory, so —> 0 Page Faults.
Then 5 comes; it is not in memory, so it replaces the oldest page, i.e. 1 —> 1
Page Fault.
Finally 6 comes; it is also not in memory, so it replaces the oldest page, i.e. 3 —> 1 Page Fault.
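The walk-through above can be checked with a small FIFO simulator (a sketch, using Python's deque as the queue):

```python
from collections import deque

def fifo_faults(refs, nframes):
    # Count page faults under FIFO replacement.
    frames = deque()          # the front of the queue holds the oldest page
    faults = 0
    for page in refs:
        if page in frames:
            continue          # already in memory: no fault
        faults += 1
        if len(frames) == nframes:
            frames.popleft()  # evict the oldest page
        frames.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6], 3))  # 5 faults (3 + 0 + 1 + 1)
```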

Belady’s anomaly
Belady’s anomaly shows that it is possible to have more page faults when increasing the
number of page frames while using the First In First Out (FIFO) page replacement algorithm.
For example, for the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 and 3 slots, we get 9 total
page faults, but if we increase the slots to 4, we get 10 page faults.
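The anomaly is easy to reproduce with a short simulation (a sketch; the counting logic mirrors the FIFO description above):

```python
from collections import deque

def fifo_faults(refs, nframes):
    # Count page faults under FIFO replacement.
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()      # evict the oldest page
            frames.append(page)
    return faults

refs = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10
```

With 3 frames the string causes 9 faults; giving the process a fourth frame raises the count to 10.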

 Optimal Page replacement


In this algorithm, the page replaced is the one that will not be used for the longest duration of
time in the future.

Let us consider page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 and 4 page slots.

Initially all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots —> 4 Page
Faults.
0 is already there, so —> 0 Page Faults.
When 3 comes it takes the place of 7, because 7 is not used for the longest duration of time
in the future —> 1 Page Fault.
0 is already there, so —> 0 Page Faults.
4 takes the place of 1 —> 1 Page Fault.
The remaining references (2, 3, 0, 3, 2) are all hits, for 6 Page Faults in total.
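This walk-through can be verified with a small simulation (a sketch; a real system cannot implement OPT because it requires future knowledge, which is why OPT is used only as a benchmark):

```python
def optimal_faults(refs, nframes):
    # Evict the resident page whose next use lies farthest in the future.
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
            continue
        def next_use(p):
            # A page never referenced again sorts as "infinitely far away".
            future = refs[i + 1:]
            return future.index(p) if p in future else float("inf")
        frames.remove(max(frames, key=next_use))
        frames.append(page)
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))  # 6
```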

Least Recently Used


In this algorithm, the page that has been least recently used is replaced.

Let us consider the page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 and 4 page slots.

Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4
Page Faults.
0 is already there, so —> 0 Page Faults.
When 3 comes it takes the place of 7, because 7 is the least recently used page —> 1 Page Fault.
0 is already in memory, so —> 0 Page Faults.
4 takes the place of 1 —> 1 Page Fault.
The rest of the reference string causes —> 0 Page Faults, because those pages are already
in memory.
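The LRU trace can be verified the same way (a sketch; a list ordered by recency stands in for the hardware reference bits or counters a real implementation would use):

```python
def lru_faults(refs, nframes):
    # frames[0] is always the least recently used page.
    frames, faults = [], 0
    for page in refs:
        if page in frames:
            frames.remove(page)   # refresh: move to most-recent position
            frames.append(page)
            continue
        faults += 1
        if len(frames) == nframes:
            frames.pop(0)         # evict the least recently used page
        frames.append(page)
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))  # 6
```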

1. The minimum number of frames to be allocated to a process is decided by:


a) the amount of available physical memory
b) operating System
c) instruction set architecture
d) none of the mentioned

Answer: c)

2. When a page fault occurs before an executing instruction is complete :


a) the instruction must be restarted
b) the instruction must be ignored
c) the instruction must be completed ignoring the page fault
d) none of the mentioned

Answer a)

3. Consider a machine in which all memory reference instructions have only one
memory address, for them we need at least _____ frame(s).
a) one
b) two
c) three
d) none of the mentioned
Answer: b)
Explanation: At least one frame for the instruction and one for the memory reference.

4. The maximum number of frames per process is defined by :


a) the amount of available physical memory
b) operating System
c) instruction set architecture
d) none of the mentioned

Answer: a)

5.The minimum number of page frames that must be allocated to a running process in a
virtual memory environment is determined by (GATE CS 2004)
a) the instruction set architecture
b) page size
c) physical memory size
d) number of processes in memory
Answer (a)
Each process needs a minimum number of pages based on the instruction set architecture. Example,
IBM 370: 6 pages to handle the MVC (storage to storage move) instruction:
The instruction is 6 bytes, so it might span 2 pages.
2 pages to handle the source (from) operand.
2 pages to handle the destination (to) operand.

6.A demand paging system takes 100 time units to service a page fault and 300 time
units to replace a dirty page. Memory access time is 1-time unit. The probability of a
page fault is p. In case of a page fault, the probability of page being dirty is also p. It is
observed that the average access time is 3 time units. Then the value of p is
(A) 0.194
(B) 0.233
(C) 0.514
(D) 0.981

Answer:a)

Page fault service time = 100 (clean page)
Page fault service time = 300 (dirty page)
Probability of a page fault = p; probability that the faulting page is dirty = p
Average page fault service time (ps) = p(300) + (1 - p)(100) = 200p + 100
EAT = p(page fault service time + memory access time) + (1 - p)(memory access time)
3 = p(200p + 100) + 1, i.e. 200p² + 100p + 1 = 3
Solving the quadratic, p ≈ 0.019
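The quadratic from the last step can be solved numerically as a check (a sketch):

```python
import math

# From 200p^2 + 100p + 1 = 3, solve 200p^2 + 100p - 2 = 0 for the
# positive root using the quadratic formula.
a, b, c = 200, 100, -2
p = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
print(round(p, 3))  # 0.019
```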

7.Consider a main memory with five page frames and the following sequence of page
references: 3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3. Which one of the following is true with
respect to the page replacement policies First-In-First-Out (FIFO) and Least Recently Used
(LRU)?

A. Both incur the same number of page faults


B. FIFO incurs 2 more page faults than LRU
C. LRU incurs 2 more page faults than FIFO
D. FIFO incurs 1 more page fault than LRU
Solution:
Answer: (A)
Explanation:
3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3

In both FIFO and LRU, after considering 3, 8, 2, 3, 9, 1 the frames hold 3 8 2 9 1
(5 page faults so far).

FIFO
6 replaces 3 —> 8 2 9 1 6
3 replaces 8 —> 2 9 1 6 3
8 replaces 2 —> 9 1 6 3 8
2 replaces 9 —> 1 6 3 8 2
No more page faults; 9 faults in total.

LRU
6 replaces 8 —> 3 2 9 1 6
8 replaces 2 —> 3 9 1 6 8
2 replaces 1 —> 3 9 6 8 2
1 replaces 8 —> 3 9 6 2 1
No more page faults; 9 faults in total.

8.Consider a virtual memory system with FIFO page replacement policy. For an
arbitrary page access pattern, increasing the number of page frames in main memory
will (GATE CS 2001)
a) Always decrease the number of page faults
b) Always increase the number of page faults
c) Some times increase the number of page faults
d) Never affect the number of page faults
Answer: (c)
Increasing the number of page frames doesn’t always decrease the number of page faults
(Belady’s Anomaly).

9. Which of the following statements is false? (GATE 2001)


a) Virtual memory implements the translation of a program’s address space into physical
memory address space
b) Virtual memory allows each program to exceed the size of the primary memory
c) Virtual memory increases the degree of multiprogramming
d) Virtual memory reduces the context switching overhead
Answer: (d)
In a system with virtual memory context switch includes extra overhead in switching of
address spaces.

10.Suppose the time to service a page fault is on the average 10 milliseconds, while a
memory access takes 1 microsecond. Then a 99.99% hit ratio results in average memory
access time of (GATE CS 2000)
(a) 1.9999 milliseconds
(b) 1 millisecond
(c) 9.999 microseconds
(d) 1.9999 microseconds
Answer: (d)
Explanation:
Average memory access time =
     [(% of page miss) * (time to service a page fault) +
                 (% of page hit) * (memory access time)] / 100

So, the average memory access time in microseconds is

(99.99 * 1 + 0.01 * 10 * 1000)/100 = (99.99 + 100)/100 = 199.99/100 = 1.9999 µs

11. Consider the virtual page reference string


1, 2, 3, 2, 4, 1, 3, 2, 4, 1
on a demand-paged virtual memory system running on a computer with a main
memory of 3 page frames, which are initially empty. Let LRU, FIFO and
OPTIMAL denote the number of page faults under the corresponding page
replacement policy. Then
(A) OPTIMAL < LRU < FIFO
(B) OPTIMAL < FIFO < LRU
(C) OPTIMAL = LRU
(D) OPTIMAL = FIFO
Answer (B)
The OPTIMAL will be 5, FIFO 6 and LRU 9.

12.Let the page fault service time be 10ms in a computer with average memory access
time being 20ns. If one page fault is generated for every 10^6 memory accesses, what is
the effective access time for the memory?
(A) 21ns
(B) 30ns
(C) 23ns
(D) 35ns
Answer (B)
Let p be the page fault rate.
Effective Memory Access Time = p * (page fault service time) + (1 - p) * (memory access
time)
= (1/10^6) * 10 ms + (1 - 1/10^6) * 20 ns ≈ 10 ns + 20 ns = 30 ns
13.A system uses FIFO policy for page replacement. It has 4 page frames with no pages
loaded to begin with. The system first accesses 100 distinct pages in some order and
then accesses the same 100 pages but now in the reverse order. How many page faults
will occur? (GATE CS 2010)

(A) 196
(B) 192
(C) 197
(D) 195
Answer (A)
Accessing the 100 distinct pages will cause 100 page faults. When these pages are accessed in
reverse order, the first four accesses will not cause page faults, because the last four pages are
still in the frames. All of the other 96 accesses will cause page faults. So the total number of
page faults will be 100 + 96 = 196.
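A quick FIFO simulation confirms the count (a sketch; pages are simply numbered 0 to 99):

```python
from collections import deque

def fifo_faults(refs, nframes):
    # Count page faults under FIFO replacement.
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()
            frames.append(page)
    return faults

refs = list(range(100)) + list(reversed(range(100)))
print(fifo_faults(refs, 4))  # 196
```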

14.A multilevel page table is preferred in comparison to a single level page table for
translating virtual address to physical address because
(A) It reduces the memory access time to read or write a memory location.
(B) It helps to reduce the size of page table needed to implement the virtual address space of
a process.
(C) It is required by the translation lookaside buffer.
(D) It helps to reduce the number of page faults in page replacement algorithms.
Answer (B)
The size of a single-level page table may become too big. That is why page tables are typically
divided into levels.

15.Which of the following page replacement algorithms suffers from Belady’s anomaly?
(A) FIFO
(B) LRU
(C) Optimal Page Replacement
(D) Both LRU and FIFO

Answer: (A)

Explanation: Belady’s anomaly shows that it is possible to have more page faults when
increasing the number of page frames while using the First In First Out (FIFO) page
replacement algorithm.

16.What is the swap space in the disk used for?


(A) Saving temporary html pages
(B) Saving process data
(C) Storing the super-block
(D) Storing device drivers

Answer: (B)

17.Determine the number of page faults when references to pages occur in the following
order : 1, 2, 4, 5, 2, 1, 2, 4. Assume that the main memory can accommodate 3 pages and
the main memory already has the pages 1 and 2, with page 1 having been brought
earlier than page 2. (LRU algorithm is used)
(A) 3
(B) 5
(C) 4
(D) None of these

Answer c)

18.In a paged memory, the page hit ratio is 0.35. The time required to access a page in
secondary memory is equal to 100 ns. The time required to access a page in primary
memory is 10 ns. The average time required to access a page is
a) 3.0 ns
b) 68.0 ns
c) 68.5 ns
d) 78.5 ns

Answer:
c)
Explanation:
0.35 x 10 + (1 - 0.35) x 100 = 68.5 ns

19.The first-fit, best-fit and the worst-fit algorithm can be used for
a)contiguous allocation of memory
b)linked allocation of memory
c)indexed allocation of memory
d)all of the above

Answer:a)

20.Consider following page trace :


4, 3, 2, 1, 4, 3, 5, 4, 3, 2, 1, 5

Number  of page faults that would occur if FIFO page replacement algorithm is used
with Number of frames for the Job M=3, will be

a)8
b)9
c)10
d)12

Answer: b) 9
(With 3 frames this string causes 9 page faults; with 4 frames it causes 10, making it the
classic Belady’s anomaly string.)

21.A page fault occurs when


A. the Deadlock happens
B. the Segmentation starts
C. the page is found in the memory
D. the page is not found in the memory
Answer: d)

22.Which of the following concepts is best for preventing page faults?


A. Paging
B. The working set
C. Hit ratios
D. Address location resolution

Answer: B)


23.A processor uses 36 bit physical addresses and 32 bit virtual addresses, with a page
frame size of 4 Kbytes. Each page table entry is of size 4 bytes. A three level page table
is used for virtual to physical address translation, where the virtual address is used as
follows
• Bits 30-31 are used to index into the first level page table
• Bits 21-29 are used to index into the second level page table
• Bits 12-20 are used to index into the third level page table, and
• Bits 0-11 are used as offset within the page
The number of bits required for addressing the next level page table (or page frame) in
the page table entry of the first, second and third level page tables are respectively
(A) 20, 20 and 20
(B) 24, 24 and 24
(C) 24, 24 and 20
(D) 25, 25 and 24
Answer (D)
Virtual address size = 32 bits
Physical address size = 36 bits, so physical memory size = 2^36 bytes
Page frame size = 4 Kbytes = 2^12 bytes
Number of bits required to address a physical memory frame = 36 - 12 = 24
So in the third-level page table, 24 bits are required to point to a page frame.
9 bits of the virtual address are used to index a second-level (or third-level) page table, and
each entry is 4 bytes, so the size of such a table is (2^9)*4 = 2^11 bytes. That means there are
(2^36)/(2^11) = 2^25 possible locations to store such a table, so 25 bits are required to
address it. Hence the first- and second-level entries each need 25 bits, and the third-level
entry needs 24 bits: (D) 25, 25 and 24.
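The bit-count arithmetic can be sketched as:

```python
phys_bits, offset_bits, pte_bytes, index_bits = 36, 12, 4, 9

# A third-level entry stores a physical frame number:
frame_bits = phys_bits - offset_bits                 # 36 - 12 = 24

# A second- or third-level table holds 2^9 entries of 4 bytes = 2^11
# bytes, so it can sit at any of 2^36 / 2^11 = 2^25 aligned locations.
table_bytes = (2 ** index_bits) * pte_bytes          # 2048 = 2^11
table_bits = phys_bits - (table_bytes.bit_length() - 1)  # 36 - 11 = 25

print(table_bits, table_bits, frame_bits)  # 25 25 24
```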

24.A virtual memory system uses First In First Out (FIFO) page replacement policy and
allocates a fixed number of frames to a process. Consider the following statements:
P: Increasing the number of page frames allocated to a process sometimes increases the
page fault rate.
Q: Some programs do not exhibit locality of reference.
Which one of the following is TRUE?
(A) Both P and Q are true, and Q is the reason for P
(B) Both P and Q are true, but Q is not the reason for P.
(C) P is false, but Q is true
(D) Both P and Q are false.
Answer (B)
P is true: increasing the number of page frames allocated to a process may increase the
number of page faults (Belady’s Anomaly).
Q is also true, but Q is not the reason for P, as Belady’s Anomaly occurs for some specific
patterns of page references.

Allocation of Frames:
Demand paging necessitates the development of a page-replacement algorithm and a frame
allocation algorithm.

Frame allocation algorithms


The two algorithms commonly used to allocate frames to a process are:
1.Equal allocation: In a system with m frames and n processes, each process gets an equal
number of frames, m/n.

2.Proportional allocation: Frames are allocated to each process in proportion to the process
size.
For a process pi of size si, the number of allocated frames is ai = (si/S)*m, where S is the
sum of the sizes of all the processes and m is the number of frames in the system.
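Both rules can be sketched in a few lines (integer division floors each share; the process sizes below are illustrative):

```python
def equal_allocation(nprocs, m):
    # Every process gets the same share of the m frames.
    return [m // nprocs] * nprocs

def proportional_allocation(sizes, m):
    # ai = (si / S) * m, with S the total size of all processes.
    S = sum(sizes)
    return [si * m // S for si in sizes]

# Example: 62 frames split between a 10-page and a 127-page process.
print(proportional_allocation([10, 127], 62))  # [4, 57]
```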

Global vs Local Allocation


The number of frames allocated to a process can also dynamically change depending on
whether you have used global replacement or local replacement for replacing pages in case
of a page fault.

a) Local replacement: When a process needs a page which is not in the memory, it can bring
in the new page and allocate it a frame from its own set of allocated frames only.

b) Global replacement: When a process needs a page which is not in the memory, it can
bring in the new page and allocate it a frame from the set of all frames, even if that frame is
currently allocated to some other process; that is, one process can take a frame from another.

1.The algorithm in which we split m frames among n processes, to give everyone an
equal share of m/n frames, is known as:
a) proportional allocation algorithm
b) equal allocation algorithm
c) split allocation algorithm
d) none of the mentioned

Answer: b)

2.The algorithm in which we allocate memory to each process according to its size is
known as :
a) proportional allocation algorithm
b) equal allocation algorithm
c) split allocation algorithm
d) none of the mentioned

Answer: a)

3. The minimum number of page frames that must be allocated to a running process in
a virtual memory environment is determined by
(A) the instruction set architecture
(B) page size
(C) physical memory size
(D) number of processes in memory

Answer:(A)

Explanation:
There are two important tasks in virtual memory management: a page-replacement strategy
and a frame-allocation strategy. The frame-allocation strategy determines the minimum
number of frames that should be allocated. The absolute minimum number of frames that a
process must be allocated is dependent on the system architecture, and corresponds to the
number of pages that could be touched by a single (machine) instruction.
So, it is the instruction set architecture, i.e. option (A) is the correct answer.

4. Consider a machine in which all memory reference instructions have only one
memory address, for them we need at least _____ frame(s).
a) one
b) two
c) three
d) none of the mentioned

Answer: b
Explanation: At least one frame for the instruction and one for the memory reference.

5. _________ replacement allows a process to select a replacement frame from the set of
all frames, even if the frame is currently allocated to some other process.
a) Local
b) Universal
c) Global
d) Public

Answer: c

6. One problem with the global replacement algorithm is that :


a) it is very expensive
b) many frames can be allocated to a process
c) only a few frames can be allocated to a process
d) a process cannot control its own page – fault rate

Answer: d

Clock Algorithm, Thrashing


Clock algorithm: keep a "use" bit for each page frame; hardware sets the appropriate bit on
every memory reference. The operating system clears the bits from time to time in order to
figure out how often pages are being referenced. To find a page to throw out, the OS
circulates through the physical frames like a clock hand, clearing use bits until one is found
that is already zero, and replaces that page.
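A minimal sketch of the sweep (the class and field names are illustrative; in a real kernel the use bit lives in the hardware page-table entry):

```python
class ClockReplacer:
    # Clock (second-chance) replacement over a fixed set of frames.

    def __init__(self, nframes):
        self.use = [0] * nframes   # one "use" bit per frame
        self.hand = 0              # the clock hand

    def reference(self, frame):
        self.use[frame] = 1        # hardware would set this on access

    def pick_victim(self):
        # Sweep, clearing use bits, until a frame with a zero bit is found.
        while self.use[self.hand] == 1:
            self.use[self.hand] = 0
            self.hand = (self.hand + 1) % len(self.use)
        victim = self.hand
        self.hand = (self.hand + 1) % len(self.use)
        return victim

clock = ClockReplacer(4)
for f in (0, 1, 3):                # frames 0, 1 and 3 recently referenced
    clock.reference(f)
print(clock.pick_victim())  # 2: the first frame whose use bit was zero
```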
If all pages from all processes are lumped together by the replacement algorithm, then it is
said to be a global replacement algorithm. Under this scheme, each process competes with all
of the other processes for page frames.
A per process replacement algorithm allocates page frames to individual processes: a page
fault in one process can only replace one of that process' frames. This relieves interference
from other processes.
A per job replacement algorithm has a similar effect (e.g. if you run vi it may cause your
shell to lose pages, but will not affect other users). In per-process and per-job allocation, the
allocations may change, but only slowly.

Thrashing:
 If a process cannot maintain its minimum required number of frames, then it must be
swapped out, freeing up frames for other processes. This is an intermediate level of CPU
scheduling.
 But what about a process that can keep its minimum, but cannot keep all of the frames
that it is currently using on a regular basis? In this case it is forced to page out pages that
it will need again in the very near future, leading to large numbers of page faults.
 A process that is spending more time paging than executing is said to be thrashing

1.In the working-set strategy, which of the following is done by the operating system to
prevent thrashing?

I. It initiates another process if there are enough extra frames.


II. It selects a process to suspend if the sum of the sizes of the working-sets exceeds
the total number of available frames.

 
(A) I only
(B) II only
(C) Neither I nor II
(D) Both I and II
Answer: (D)

Explanation: According to concept of thrashing,

 I is true because to prevent thrashing we must provide processes with as many frames
as they really need “right now”. If there are enough extra frames, another process can
be initiated.
II is true because the total demand, D, is the sum of the sizes of the working sets for
all processes. If D exceeds the total number of available frames, then at least one
process is thrashing, because there are not enough frames available to satisfy its
minimum working set. If D is significantly less than the currently available frames,
then additional processes can be launched.

2. Thrashing
(A) reduces page I/O
(B) decreases the degree of multiprogramming
(C) implies excessive page I/O
(D) improves the system performance

Answer: (C)

3.Thrashing means
a)a high paging activity is called thrashing
b)a high executing activity is called thrashing
c)a extremely long process is called thrashing
d)an extremely long virtual memory is called thrashing

Answer:a)

I/O and file system:

I/O software is often organized in the following layers −

 User Level Libraries − This provides simple interface to the user program to perform
input and output. For example, stdio is a library provided by C and C++ programming
languages.

 Kernel Level Modules − This provides device driver to interact with the device
controller and device independent I/O modules used by the device drivers.

 Hardware − This layer includes actual hardware and hardware controller which
interact with the device drivers and makes the hardware work.
Device drivers:
A device driver performs the following jobs
To accept requests from the device-independent software above it.
To interact with the device controller to take and give I/O, and perform the required error
handling.
To make sure that the request is executed successfully.

Interrupt handlers
An interrupt handler, also known as an interrupt service routine or ISR, is a piece of software,
more specifically a callback function in an operating system or a device driver, whose
execution is triggered by the reception of an interrupt.

Device-Independent I/O Software:


Following is a list of functions of device-independent I/O Software
 Uniform interfacing for device drivers
 Device naming - Mnemonic names mapped to Major and Minor device numbers
 Device protection
 Providing a device-independent block size
Buffering, because data coming off a device cannot always be stored directly in its final destination.
 Storage allocation on block devices
 Allocation and releasing dedicated devices

User-Space I/O Software


These are libraries that provide a richer and simplified interface to access the
functionality of the kernel, or ultimately to interact with the device drivers. Most of the user-
level I/O software consists of library procedures, with some exceptions like the spooling
system, which is a way of dealing with dedicated I/O devices in a multiprogramming system.

Kernel I/O Subsystem


The Kernel I/O Subsystem is responsible for providing many services related to I/O. The
following are some of the services provided:

a) Scheduling − The kernel schedules a set of I/O requests to determine a good order in
which to execute them. When an application issues a blocking I/O system call, the
request is placed on the queue for that device.

b) Buffering − The Kernel I/O Subsystem maintains a memory area known as a buffer
that stores data while it is transferred between two devices or between a device and
an application.

c) Caching − The kernel maintains a cache, which is a region of fast memory that holds
copies of data. Access to the cached copy is more efficient than access to the original.

d) Spooling and Device Reservation − A spool is a buffer that holds output for a
device, such as a printer, that cannot accept interleaved data streams. The spooling
system copies the queued spool files to the printer one at a time

e) Error Handling − An operating system that uses protected memory can guard
against many kinds of hardware and application errors.

I/O Hardware
An I/O system is required to take an application I/O request and send it to the physical
device, then take whatever response comes back from the device and send it to the
application. I/O devices can be divided into two categories

 Block devices − A block device is one with which the driver communicates by
sending entire blocks of data. For example, Hard disks, USB cameras, Disk-On-Key
etc.

 Character devices − A character device is one with which the driver communicates
by sending and receiving single characters (bytes, octets). For example, serial ports,
parallel ports, sounds cards etc.

Device Controllers:
Device controllers are the hardware components that actually operate I/O devices. Device
drivers are software modules that can be plugged into an OS to handle a particular device;
the operating system takes help from device drivers to communicate with the device
controllers and handle all I/O devices.
Synchronous vs asynchronous I/O

 Synchronous I/O − In this scheme CPU execution waits while I/O proceeds

 Asynchronous I/O − I/O proceeds concurrently with CPU execution

Communication to I/O Devices

The CPU must have a way to pass information to and from an I/O device. There are three
approaches available for communication between the CPU and a device.

 Special Instruction I/O


 Memory-mapped I/O
 Direct memory access (DMA)

Special Instruction I/O


This uses CPU instructions that are specifically made for controlling I/O devices. These
instructions typically allow data to be sent to an I/O device or read from an I/O device.

Memory-mapped I/O:
When using memory-mapped I/O, the same address space is shared by memory and I/O
devices. The device is connected directly to certain main memory locations so that the I/O
device can transfer blocks of data to/from memory without going through the CPU.

Direct Memory Access(DMA):


Direct Memory Access (DMA) means the CPU grants an I/O module the authority to read
from or write to memory without CPU involvement. The DMA module itself controls the
exchange of data between main memory and the I/O device. The CPU is involved only at the
beginning and end of the transfer, and is interrupted only after the entire block has been
transferred.

A computer must have a way of detecting the arrival of any type of input. There are two ways
that this can happen, known as polling and interrupts.
Polling I/O: Polling is the simplest way for an I/O device to communicate with the processor.
The process of periodically checking the status of the device, to see if it is time for the next
I/O operation, is called polling.

Interrupts I/O:An alternative scheme for dealing with I/O is the interrupt-driven method. An
interrupt is a signal to the microprocessor from a device that requires attention.

File Systems
A file is a collection of related information that is recorded on secondary storage, or, put
differently, a collection of logically related entities. From the user’s perspective, a file is the
smallest allotment of logical secondary storage.

File Directories:
A collection of files is a file directory. The directory contains information about the files,
including attributes, location and ownership. Much of this information, especially that
concerned with storage, is managed by the operating system. The directory is itself a file,
accessible by various file management routines.

Information contained in a device directory includes:


 Name
 Type
 Address
 Current length
 Maximum length
 Date last accessed
 Date last updated
 Owner id
 Protection information

Operations performed on a directory are:


 Search for a file
 Create a file
 Delete a file
 List a directory
 Rename a file
 Traverse the file system

Advantages of maintaining directories are:


 Efficiency: A file can be located more quickly.
Naming: It becomes convenient for users, as two users can have the same name for
different files, or different names for the same file.
 Grouping: Logical grouping of files can be done by properties e.g. all java programs,
all games etc.

Single-Level Directory:
In this scheme a single directory is maintained for all the users.
Naming problem: users cannot have the same name for two files.
Grouping problem: users cannot group files according to their needs.

Two-Level Directory:
In this scheme a separate directory is maintained for each user.
Path name: Due to the two levels there is a path name for every file, used to locate that file.
Now we can have the same file name for different users.
Searching is efficient in this method.

Tree-Structured-Directory:
Directory is maintained in the form of a tree. Searching is efficient and also there is grouping
capability.

FILE ALLOCATION METHODS


1.Contiguous Allocation:
A single contiguous set of blocks is allocated to a file at the time of file creation.
Thus, this is a pre-allocation strategy, using variable size portions.
The file allocation table needs just a single entry for each file, showing the starting
block and the length of the file.
This method is best from the point of view of the individual sequential file.
Multiple blocks can be read in at a time to improve I/O performance for sequential
processing. It is also easy to retrieve a single block.

Disadvantage
External fragmentation will occur, making it difficult to find contiguous blocks of
space of sufficient length.
A compaction algorithm will be necessary to free up additional space on the disk.
Also, with pre-allocation, it is necessary to declare the size of the file at the time of
creation.

2. Linked Allocation (Non-contiguous Allocation):


 Allocation is on an individual block basis. Each block contains a pointer to the next
block in the chain.
 Again the file table needs just a single entry for each file, showing the starting block
and the length of the file.
 Although pre-allocation is possible, it is more common simply to allocate blocks as
needed.
Any free block can be added to the chain. The blocks need not be contiguous.
 Increase in file size is always possible if free disk block is available.
 There is no external fragmentation because only one block at a time is needed but
there can be internal fragmentation but it exists only in the last disk block of file.

Disadvantages:
 Internal fragmentation exists in the last disk block of the file.
 There is an overhead of maintaining the pointer in every disk block.
 If the pointer of any disk block is lost, the file will be truncated.
 It supports only sequential access of files.
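The pointer chain can be sketched as below; the `next_block` table and `read_chain` helper are hypothetical, but the loop shows both why only sequential access is possible and why a lost pointer truncates the file:

```python
# Each disk block stores a "next" pointer; -1 ends the chain.
next_block = {4: 7, 7: 2, 2: -1}     # a file occupying blocks 4 -> 7 -> 2

def read_chain(start):
    """Collect a file's blocks by chasing pointers from the start block.
    Reaching block k requires visiting all k-1 blocks before it, so
    only sequential access is supported."""
    blocks, b = [], start
    while b != -1:
        blocks.append(b)
        b = next_block.get(b, -1)    # a lost pointer silently ends (truncates) the file
    return blocks

assert read_chain(4) == [4, 7, 2]
```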

3. Indexed Allocation:
 It addresses many of the problems of contiguous and chained allocation.
 In this case, the file allocation table contains a separate one-level index for each file.
 The index has one entry for each block allocated to the file.
 Allocation may be on the basis of fixed-size blocks or variable-size blocks.
 Allocation by blocks eliminates external fragmentation, whereas allocation by
variable-size blocks improves locality.
 This allocation technique supports both sequential and direct access to the file and
thus is the most popular form of file allocation.
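A toy index block (the `index_block` list here is invented for illustration) shows why indexed allocation gives direct access in a single lookup, unlike a linked chain:

```python
# The index block lists every data block of the file in order, so
# file block k is one array lookup away from its disk block.
index_block = [9, 3, 14, 6]          # file block k lives in disk block index_block[k]

def file_block_to_disk_block(k):
    """Direct access: O(1) lookup instead of chasing a pointer chain."""
    return index_block[k]

assert file_block_to_disk_block(2) == 14                 # direct access
assert [file_block_to_disk_block(k) for k in range(4)] \
       == [9, 3, 14, 6]                                  # sequential access also works
```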

Disk Free Space Management


The following are the approaches used for free space management.
1) Bit Tables: This method uses a vector containing one bit for each block on the disk. An
entry of 0 corresponds to a free block and an entry of 1 corresponds to a block in use.
For example: 00011010111100110001.

2) Free Block List: In this method, each block is assigned a number sequentially and the list
of the numbers of all free blocks is maintained in a reserved block of the disk.
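Both approaches can be sketched against the example bit vector above; `first_free` and `free_block_list` are illustrative helper names, not standard APIs:

```python
# Bit table from the example: '0' = free block, '1' = block in use.
bitmap = "00011010111100110001"

def first_free(bitmap):
    """Index of the first free block, or -1 if the disk is full."""
    return bitmap.find("0")

def free_block_list(bitmap):
    """The equivalent free block list: the numbers of all free blocks."""
    return [i for i, bit in enumerate(bitmap) if bit == "0"]

assert first_free(bitmap) == 0
assert free_block_list(bitmap)[:5] == [0, 1, 2, 5, 7]
```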

1. A file system with a 300 GByte disk uses a file descriptor with 8 direct block addresses,
1 indirect block address and 1 doubly indirect block address. The size of each disk block
is 128 Bytes and the size of each disk block address is 8 Bytes. The maximum possible
file size in this file system is
(A) 3 Kbytes
(B) 35 Kbytes
(C) 280 Bytes
(D) Dependent on the size of the disk
Answer (B)
Total number of possible addresses stored in a disk block = 128/8 = 16
Maximum number of addressable bytes due to direct address block = 8*128
Maximum number of addressable bytes due to 1 single indirect address block = 16*128
Maximum number of addressable bytes due to 1 double indirect address block = 16*16*128
The maximum possible file size = 8*128 + 16*128 + 16*16*128 = 35840 Bytes = 35 KB
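The arithmetic can be checked with a short script (the variable names are ours):

```python
block_size = 128                      # bytes per disk block
addr_size = 8                         # bytes per disk block address
ptrs = block_size // addr_size        # 16 addresses fit in one block

direct          = 8 * block_size              # 8 direct blocks
single_indirect = ptrs * block_size           # 1 indirect block -> 16 data blocks
double_indirect = ptrs * ptrs * block_size    # 1 doubly indirect -> 256 data blocks

max_file = direct + single_indirect + double_indirect
assert max_file == 35 * 1024          # 35 KB, option (B)
```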


2.A computer system supports 32-bit virtual addresses as well as 32-bit physical
addresses. Since the virtual address space is of the same size as the physical address
space, the operating system designers decide to get rid of the virtual memory entirely.
Which one of the following is true?
(A) Efficient implementation of multi-user support is no longer possible
(B) The processor cache organization can be made more efficient now
(C) Hardware support for memory management is no longer needed
(D) CPU scheduling can be made more efficient now
Answer (C)
For supporting virtual memory, special hardware support is needed from the Memory
Management Unit. Since the operating system designers decided to get rid of virtual memory
entirely, hardware support for memory management is no longer needed.

3. Which of the following is the major part of the time taken when accessing data on the disk?
(A) Settle time
(B) Rotational latency
(C) Seek time
(D) Waiting time

Answer: (C)

Explanation: Seek time is the time taken by the head to travel to the track of the disk where
the data to be accessed is stored.


4.Increasing the RAM of a computer typically improves performance because:


(A) Virtual memory increases
(B) Larger RAMs are faster
(C) Fewer page faults occur
(D) Fewer segmentation faults occur

Answer: (C)

Explanation: When there is more RAM, more virtual pages are mapped into physical
memory, hence fewer page faults. A page fault causes performance degradation because the
page has to be loaded from secondary storage.


6. Identify the form of communication that best describes the I/O mode amongst the
following:
(A) Programmed mode of data transfer
(B) DMA
(C) Interrupt mode
(D) Polling

Answer: (D)

7. From amongst the following given scenarios, determine the right ones to justify the
interrupt mode of data transfer:
(A) Bulk transfer of several kilobytes
(B) Moderately large data transfer of more than 1 KB
(C) Short events like mouse actions
(D) Keyboard inputs

Answer: (C) and (D)

8. Virtual memory is
(A) Large secondary memory
(B) Large main memory
(C) Illusion of large main memory
(D) None of the above

Answer: (C)

Explanation: Virtual memory is an illusion of a large main memory.

9.Normally user programs are prevented from handling I/O directly by I/O instructions
in them. For CPUs having explicit I/O instructions, such I/O protection is ensured by
having the I/O instructions privileged. In a CPU with memory mapped I/O, there is no
explicit I/O instruction. Which one of the following is true for a CPU with memory
mapped I/O? (GATE CS 2005)
(A) I/O protection is ensured by operating system routine(s)
(B) I/O protection is ensured by a hardware trap
(C) I/O protection is ensured during system configuration
(D) I/O protection is not possible

Answer (A)
Memory-mapped I/O means accessing I/O via general memory accesses, as opposed to
specialized I/O instructions. An example:
unsigned int volatile * const pMappedAddress = (unsigned int volatile *)0x100;

So the programmer can directly access any memory location. To prevent such access, the
OS (kernel) divides the address space into kernel space and user space. A user application
can directly access only user space. To access kernel space, system calls (traps) are
needed.


11. In MS-DOS, relocatable object files and load modules have extensions
a) .OBJ and .COM or .EXE respectively
b) .COM and .OBJ respectively
c) .EXE and .OBJ respectively
d) .DAS and .EXE respectively
Answer: a)
The Relocatable Object Module Format (OMF) is an object file format used primarily for
software intended to run on Intel 80x86 microprocessors. It was originally developed by Intel
under the name Object Module Format, and is perhaps best known to DOS users as an .OBJ
file.

12. A file is sometimes called a


a) collection of input data
b) data set
c) temporary place to store data
d) program

Answer: b)

13. A program P reads and processes 1000 consecutive records from a sequential file F
stored on device D without using any file system facilities. Given the following:
(i) Size of each record = 3200 bytes.
(ii) Access time of D = 10 msec.
(iii) Data transfer rate of D = 800 x 10^3 bytes/sec.
(iv) CPU time to process each record = 3 msec.
What is the elapsed time of P if F contains unblocked records and P does not use buffering?

a)12 sec
b)14 sec
c)17 sec
d)21 sec
Answer: c)

Explanation:
Since P does not use buffering, the access, transfer and processing of each record happen
strictly one after another.

Elapsed time = (Access time + Transfer time + Processing time) x (Number of records)
Here Access time = 10 ms (given)
Transfer time = 3200 / (800 x 10^3) sec = 0.004 sec = 4 ms
Therefore Elapsed time = (10 + 4 + 3) x 1000 msec = 17 sec.
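The same arithmetic, checked in a short script (the names are ours):

```python
records     = 1000
record_size = 3200                                  # bytes
access_ms   = 10                                    # access time per record
transfer_ms = record_size / (800 * 10**3) * 1000    # 4 ms to transfer one record
cpu_ms      = 3                                     # CPU time per record

# Unblocked records, no buffering: the three phases cannot overlap.
elapsed_s = (access_ms + transfer_ms + cpu_ms) * records / 1000
assert elapsed_s == 17.0                            # option (c)
```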

14. The file structure that redefines its first record at a base of zero uses the term
a) relative organization
b) key fielding
c) dynamic reallocation
d) hashing

Answer: a)

15. Which structure prohibits the sharing of files and directories?


a) tree structure
b) one-level structure
c) two-level structure
d) none of these
Answer: a)

16. The activity of a file


a) is a low percentage of the number of records that are added to or deleted from a file
b) is a measure of the percentage of existing records updated during a run
c) refers to how closely the file fits into the allocated space
d) is a measure of the number of records added or deleted from a file compared with the
original number of records

Answer: a)

17.Number of minimal set of required file operations are


a)two
b)four
c)five
d)six
Answer: d)   
       
Explanation:            
Six basic file operations.
1. Creating a file.
2. Writing a file.
3. Reading a file.
4. Repositioning within a file.
5. Deleting a file.
6. Truncating a file.

18.Access time is the highest in the case of


a) floppy disk
b) cache
c) swapping devices
d) magnetic disks

Answer: d)   

19. Access to moving head disks requires three periods of delay before information is
brought into memory. The response that correctly lists the three time delays for the
physical access of data in the order of the relative speed from the slowest to the fastest is
a) latency time, cache overhead time, seek time
b) transmission time, latency time, seek time
c) seek time, latency time, transmission time
d) cache overhead time, latency time, seek time

Answer: c)
       
Explanation:            
Seek time is the time required to move the disk arm to the required track. Rotational
delay or latency is the time it takes for the beginning of the required sector to reach the
head. Sum of seek time (if any) and latency is the access time. Time taken to actually
transfer a span of data is transfer time or transmission time.



22. Poor response times are caused by


a) Processor busy
b) High I/O rate
c) High paging rates
d) All of these

Answer: d)

23. A certain moving arm disk storage with one head has the following specifications:
number of tracks per recording surface = 200, disk rotation speed = 2400 rpm, track
storage capacity = 62500 bits.
The average latency time (assume the head can move from one track to another only by
traversing the entire track) is
a) 2.5 s
b) 2.9 s
c) 3.1 s
d) 3.6 s
       
       
Answer: a)

Explanation:
In 60 seconds the disk rotates 2400 times, so one rotation takes 25 ms.
There are 200 tracks; on average, 100 tracks need to be traversed to reach a specific track.
So 25 ms x 100 tracks = 2.5 s is taken on average to reach a particular track.
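Checking the arithmetic (assuming the 200 tracks used in the explanation, with the head crossing one whole track per revolution):

```python
rpm = 2400
tracks = 200
ms_per_rotation = 60_000 / rpm            # 25 ms per revolution

# On average the head must traverse half the tracks, one revolution each.
avg_latency_s = (tracks / 2) * ms_per_rotation / 1000
assert avg_latency_s == 2.5               # option a)
```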

24. A certain moving arm disk storage with one head has the following specifications:
number of tracks per recording surface = 200, disk rotation speed = 2400 rpm, track
storage capacity = 62500 bits.
The transfer rate will be
a) 2.5 Mbits/s
b) 4.25 Mbits/s
c) 1.5 Mbits/s
d) 3.75 Mbits/s

Answer: a)       


       
Explanation:
In 25 ms one rotation is completed and all 62500 bits are transferred. So in 1 ms, 2500 bits
are transferred; accordingly, in 1 s, 2.5 Mbits are transferred. That is why the rate is
2.5 Mbits/s.
(MB = Mega Bytes, Mb = Mega bits.)
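Checking the arithmetic:

```python
bits_per_track = 62500
ms_per_rotation = 25              # from 2400 rpm (60 s / 2400 = 25 ms)

# One full track's worth of bits passes under the head per rotation.
rate_mbits = bits_per_track / (ms_per_rotation / 1000) / 10**6
assert rate_mbits == 2.5          # Mbits/s, option a)
```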
