Anna University Engineering Question Bank
• Unit - I
• Unit - II
• Unit - III
• Unit - IV
• Unit - V
PART - A
Process state transitions:
• Transition 2 occurs when the scheduler decides that the running process has run long enough and it is time to let another process have CPU time.
• Transition 3 occurs when all other processes have had their share and it is time for the first process to run again.
• Transition 4 occurs when the external event for which a process was waiting (such as the arrival of input) happens.
1. What are Concurrent Processes?
Concurrent processes are processes that overlap in time: each one starts before the others have finished, so their executions are interleaved on one processor or run in parallel on several.
2. What are the uses of Resource Allocation?
The uses of resource allocation are:
1. We will focus on shared variables.
2. Printers are actually controlled by one process, called the printer server or printer daemon. Modern operating systems do not use mutual exclusion techniques for controlling the printer.
3. List out the Low-Level Techniques for Mutual Exclusion.
• In UNIX, the operating system uses this technique when it is about to make changes to its kernel data structures (e.g., create a new process by changing the process table).
• Turn off interrupts; the process cannot lose the processor, because the short-term scheduler runs only in response to a timer interrupt, so the process gets uninterrupted time on the processor.
• Change the data structure.
• Keep the code very short and free of bugs; usually only one or two changes are needed here.
• Allow interrupts again.
• Used on Hercules.
• Great for single-processor machines.
4. Define Critical Section
The key to preventing trouble involving shared storage is to find some way to prohibit more than one process from reading and writing the shared data simultaneously. The part of the program where the shared memory is accessed is called the Critical Section. To avoid race conditions and flawed results, one must identify the code that forms a Critical Section in each thread.
The characteristic properties of the code that forms a Critical Section are:
• Code that references one or more variables in a “read-update-write” fashion while any of those variables is possibly being altered by another thread.
• Code that alters one or more variables that are possibly being referenced in “read-update-write” fashion by another thread.
• Code that uses a data structure while any part of it is possibly being altered by another thread.
• Code that alters any part of a data structure while it is possibly in use by another thread.
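As an illustration (a minimal Python sketch; the `counter` and `worker` names are invented for this example), a lock can keep two threads from executing their read-update-write critical section at the same time:

```python
import threading

counter = 0                      # shared variable
lock = threading.Lock()          # guards the critical section

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # enter critical section
            counter += 1         # read-update-write on shared data
        # critical section left automatically at the end of the with-block

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # 200000 with the lock; without it, updates can be lost
```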
5. What is the Solution to the Critical Section Problem?
Any solution to the critical section problem must satisfy:
Mutual exclusion
Progress
Bounded waiting
6. Define Semaphores.
A semaphore is a protected variable whose value can be accessed and altered only by the operations P and V and by an initialization operation ('semaphore initialize').
Binary semaphores can assume only the value 0 or the value 1; counting semaphores, also called general semaphores, can assume any nonnegative value.
7. List out the type of semaphores.
• Counting Semaphores
• Binary Semaphores
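Both kinds can be sketched with Python's threading.Semaphore (a hedged illustration; the resource-use scenario and names here are invented): a counting semaphore initialised to 3 limits concurrent holders, while a semaphore initialised to 1 behaves as a binary semaphore.

```python
import threading

counting = threading.Semaphore(3)    # counting semaphore: 3 permits
binary = threading.Semaphore(1)      # binary semaphore: values 0 and 1 only

acquired = []

def use_resource(i):
    counting.acquire()               # P operation: decrement, block if zero
    with binary:                     # binary semaphore used as a mutex
        acquired.append(i)
    counting.release()               # V operation: increment, wake a waiter

threads = [threading.Thread(target=use_resource, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(acquired))              # [0, 1, 2, 3, 4]
```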
7. What are the Techniques for the Critical Section Problem?
Software
Peterson's Algorithm: based on busy waiting
Semaphores: a general facility provided by the operating system
Monitors: a programming-language technique
Hardware
Exclusive access to a memory location: always assumed
Interrupts that can be turned off: must have only one processor for mutual exclusion
Test-and-Set: a special machine-level instruction
Swap: atomically swaps the contents of two words
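Peterson's algorithm from the software list above can be sketched as follows. This is an illustrative Python version, not production code: it assumes sequentially consistent memory, which CPython's global interpreter lock effectively provides for this sketch.

```python
import threading

# Peterson's algorithm for two processes (0 and 1), based on busy waiting.
flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # whose turn it is to yield
count = 0               # shared variable protected by the algorithm

def process(i, n):
    global turn, count
    other = 1 - i
    for _ in range(n):
        flag[i] = True
        turn = other                      # give priority to the other process
        while flag[other] and turn == other:
            pass                          # busy wait
        count += 1                        # critical section
        flag[i] = False                   # exit section

t0 = threading.Thread(target=process, args=(0, 10_000))
t1 = threading.Thread(target=process, args=(1, 10_000))
t0.start(); t1.start()
t0.join(); t1.join()
print(count)                              # 20000
```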
8. What do you mean by Monitors?
Monitors are a higher-level construct, used in .NET (C#'s lock) and Java (synchronized), that allows operation declarations inside of them. These operations are all mutually exclusive (they all share the same critical section).
9. List out the Functions of monitors.
The two functions monitors offer to the operation declarations are wait(condition) and signal(condition), which act as a mutex's down and up operations; both are only accessible from inside the monitor. Both of those internal operations can act on any of the conditions defined by the monitor.
10. What problems are solved using monitors?
Classical problems solved using monitors are:
Bounded Buffer
Producer-Consumer
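A bounded-buffer producer-consumer can be sketched monitor-style with Python's threading.Condition (a hedged illustration; the class and names are invented): the condition's lock makes the operations mutually exclusive, and wait()/notify() play the roles of the monitor's wait(condition) and signal(condition).

```python
import threading
from collections import deque

class BoundedBuffer:
    """Monitor-style bounded buffer: every method runs under one lock."""
    def __init__(self, capacity):
        self.buf = deque()
        self.capacity = capacity
        self.cond = threading.Condition()

    def put(self, item):
        with self.cond:                        # enter the monitor
            while len(self.buf) == self.capacity:
                self.cond.wait()               # buffer full: wait
            self.buf.append(item)
            self.cond.notify_all()             # signal waiting consumers

    def get(self):
        with self.cond:
            while not self.buf:
                self.cond.wait()               # buffer empty: wait
            item = self.buf.popleft()
            self.cond.notify_all()             # signal waiting producers
            return item

bb = BoundedBuffer(2)
consumed = []
producer = threading.Thread(target=lambda: [bb.put(i) for i in range(10)])
consumer = threading.Thread(target=lambda: [consumed.append(bb.get()) for _ in range(10)])
producer.start(); consumer.start()
producer.join(); consumer.join()
print(consumed)        # items come out in the order they were produced
```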
11. Define Deadlock
A set of processes is in a deadlock state if each process in the set is waiting for an event that can be caused only by another process in the set. In other words, each member of the set of deadlocked processes is waiting for a resource that can be released only by a deadlocked process. None of the processes can run, none of them can release any resources, and none of them can be awakened. It is important to note that the number of processes and the number and kind of resources possessed and requested are unimportant.
12. What is meant by Preemptable and Non-Preemptable Resources?
Resources come in two flavors: preemptable and non-preemptable. A preemptable resource is one that can be taken away from a process with no ill effects. Memory is an example of a preemptable resource.
A non-preemptable resource is one that cannot be taken away from a process without causing ill effects. For example, CD resources are not preemptable at an arbitrary moment.
Reallocating resources can resolve deadlocks that involve preemptable resources. Deadlocks that involve non-preemptable resources are difficult to deal with.
13. What are the necessary and sufficient deadlock conditions?
The necessary and sufficient deadlock conditions are:
Mutual Exclusion Condition
Hold and Wait Condition
No-Preemption Condition
Circular Wait Condition
14. How can the deadlock problem be dealt with?
There are four strategies for dealing with the deadlock problem:
The Ostrich Approach
Deadlock Detection and Recovery
Deadlock Avoidance
Deadlock Prevention
15. List out the Deadlock Prevention condition.
The Deadlock prevention conditions are:
Elimination of “Mutual Exclusion” Condition
Elimination of “Hold and Wait” Condition
Elimination of “No-preemption” Condition
Elimination of “Circular Wait” Condition
16. What does Deadlock Avoidance mean?
This approach to the deadlock problem anticipates deadlock before it actually occurs. It employs an algorithm to assess the possibility that deadlock could occur and acts accordingly. This method differs from deadlock prevention, which guarantees that deadlock cannot occur by denying one of the necessary conditions of deadlock.
If the necessary conditions for a deadlock are in place, it is still possible to avoid deadlock by being careful when resources are allocated. Perhaps the most famous deadlock avoidance algorithm, due to Dijkstra [1965], is the Banker's algorithm, so named because the process is analogous to that used by a banker in deciding if a loan can be safely made.
17. What is meant by Deadlock Detection?
Deadlock detection is the process of actually determining that a deadlock exists and identifying the processes and resources involved in the deadlock.
The basic idea is to check allocation against resource availability for all possible allocation sequences to determine if the system is in a deadlocked state. Of course, the deadlock detection algorithm is only half of this strategy.
18. What is needed to recover once a deadlock is detected?
To recover from a detected deadlock, the system can:
• Temporarily take resources away from deadlocked processes.
• Roll a process back to some checkpoint, allowing preemption of a needed resource, and restart the process from the checkpoint later.
• Successively kill processes until the system is deadlock-free.
19. What is the Hold and Wait Condition?
A requesting process already holds resources while waiting for requested resources.
Explanation: there must exist a process that is holding a resource already allocated to it while waiting for additional resources that are currently being held by other processes.
20. What is the Circular Wait Condition?
The processes in the system form a circular list or chain where each process in the list is waiting for a resource held by the next process in the list.
21. What is a thread?
A thread, otherwise called a lightweight process (LWP), is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack. It shares with other threads belonging to the same process its code section, data section, and operating-system resources such as open files and signals.
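The sharing described above can be illustrated with a small Python sketch (the names are invented for the example): the `totals` dictionary plays the role of the shared data section, while each thread's `local_sum` lives on that thread's own private stack.

```python
import threading

totals = {}                  # shared data section: visible to all threads

def worker(name, items):
    local_sum = 0            # lives on this thread's private stack
    for x in items:
        local_sum += x
    totals[name] = local_sum # each thread writes its own key in the shared dict

t1 = threading.Thread(target=worker, args=("a", [1, 2, 3]))
t2 = threading.Thread(target=worker, args=("b", [10, 20]))
t1.start(); t2.start()
t1.join(); t2.join()
print(totals)                # {'a': 6, 'b': 30}
```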
22. What are the benefits of multithreaded programming?
The benefits of multithreaded programming can be broken down into four major categories:
Responsiveness
Resource sharing
Economy
Utilization of multiprocessor architectures
23. Compare user threads and kernel threads.
User threads:
• Supported above the kernel and implemented by a thread library at the user level.
• Thread creation and scheduling are done in user space, without kernel intervention; therefore they are fast to create and manage.
• A blocking system call will cause the entire process to block.
Kernel threads:
• Supported directly by the operating system.
• Thread creation, scheduling, and management are done by the operating system; therefore they are slower to create and manage compared to user threads.
• If a thread performs a blocking system call, the kernel can schedule another thread in the application for execution.
24. Define thread cancellation & target thread.
The thread cancellation is the task of terminating a thread before it has completed. A thread
that is to be cancelled is often referred to as the target thread. For example, if multiple
threads are concurrently searching through a database and one thread returns the result,
the remaining threads might be cancelled.
25. What are the different ways in which a thread can be cancelled?
Cancellation of a target thread may occur in two different scenarios:
Asynchronous cancellation: one thread immediately terminates the target thread.
Deferred cancellation: the target thread periodically checks whether it should terminate, allowing it an opportunity to terminate itself in an orderly fashion.
26. Write some classical problems of synchronization?
The Bounded-Buffer Problem
The Readers-Writers Problem
The Dining Philosophers Problem
27. When do errors occur in the use of semaphores?
i. When a process interchanges the order of the wait and signal operations on the semaphore mutex.
ii. When a process replaces signal(mutex) with wait(mutex).
iii. When a process omits wait(mutex), signal(mutex), or both.
28. Define the term critical regions.
Critical regions are small and infrequent, so that system throughput is largely unaffected by their existence. A critical region is a control structure for implementing mutual exclusion over a shared variable.
29. What are the drawbacks of monitors?
1. The monitor concept lacks an implementation in most commonly used programming languages.
2. There is the possibility of deadlocks in the case of nested monitor calls.
30. What are the two levels in threads?
Threads are implemented in two ways:
1. User level
2. Kernel level
31. What are the methods for handling deadlocks?
The deadlock problem can be dealt with in one of three ways:
Use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlock state.
Allow the system to enter a deadlock state, detect it, and then recover.
Ignore the problem altogether, and pretend that deadlocks never occur in the system.
32. What are a safe state and an unsafe state?
A state is safe if the system can allocate resources to each process in some order and still avoid a deadlock. A system is in a safe state only if there exists a safe sequence. A sequence of processes <P1, P2, …, Pn> is a safe sequence for the current allocation state if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all Pj, with j < i. If no such sequence exists, then the system state is said to be unsafe.
33. What is the Banker's algorithm?
The Banker's algorithm is a deadlock-avoidance algorithm applicable to a resource-allocation system with multiple instances of each resource type. The two algorithms used for its implementation are:
Safety algorithm: the algorithm for finding out whether or not a system is in a safe state.
Resource-request algorithm: if the resulting resource-allocation state is safe, the transaction is completed and process Pi is allocated its resources. If the new state is unsafe, Pi must wait and the old resource-allocation state is restored.
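The safety algorithm can be sketched in Python as follows (a hedged illustration; the matrices are illustrative textbook-style numbers, not taken from this document):

```python
def safe_sequence(available, max_claim, allocation):
    """Return a safe sequence of process indices, or None if the state is unsafe."""
    n = len(allocation)
    need = [[m - a for m, a in zip(max_claim[i], allocation[i])] for i in range(n)]
    work = list(available)            # resources currently free
    finished = [False] * n
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # Pretend Pi runs to completion and releases its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finished) else None

# Illustrative 5-process, 3-resource example:
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maxc  = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(safe_sequence([3, 3, 2], maxc, alloc))   # a safe sequence exists here
print(safe_sequence([0, 0, 0], maxc, alloc))   # None: no process can finish
```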
34. Differentiate deadlock and starvation.
A set of processes is in a deadlock state when every process in the set is waiting for an event that can be caused only by another process in the set.
Starvation, or indefinite blocking, is a situation where processes wait indefinitely within the semaphore.
PART-B
1. Give a detailed description of deadlocks and their characterization.
2. Explain about the methods used to prevent deadlocks
3. Write in detail about deadlock avoidance.
4. Explain the Banker’s algorithm for deadlock avoidance.
5. Give an account about deadlock detection.
6. What are the methods involved in recovery from deadlocks?
7. Explain what semaphores are, their usage, their implementation to avoid busy waiting, and binary semaphores.
8. Explain the classic problems of synchronization.
9. Write about critical regions and monitors.
10. Explain the Dining Philosophers problem of synchronization and its algorithm.
11. Explain the Producer-Consumer problem and its algorithm.
Part-A
1. What is Memory Management Storage Hierarchy ?
Medium-term scheduler (used in time-sharing systems)
• Manages processes waiting for resources
o Examples: waiting to be placed in main memory, waiting for I/O, or waiting to be placed in the ready list of the short-term scheduler.
• Integrated with memory management routines (especially processes waiting for main
memory)
2. What is Volatile Storage?
• volatile - contents lost if power is interrupted
• non-volatile - can withstand power failures and system crashes
3. What is meant by Cache Memory?
• Source: FutureShop.ca web site
• A cache is a very fast block of memory that speeds up the performance of another device.
Frequently used data are stored in the cache. The computer looks in the cache first to see if
what it needs is there.
• Level 1 Cache is located directly inside the CPU itself, and stores frequently used data or
commands. Although relatively small, Level 1 Cache has the most direct effect on overall
performance.
• Level 2 Cache is located on the motherboard. It stores frequently used data from the computer's main memory (RAM). In Intel Pentium chips, Advanced Transfer Cache is an improved version of the Level 2 Cache, in which the cache memory operates at the same speed as the processor, which is as much as four times the speed of a standard Level 2 Cache.
4. Define Contiguous Memory Allocation.
Straight-forward method to allocate memory to several processes
Memory is divided into two partitions: Resident OS in lower part, and user processes in
upper part
Assumes a single hunk of memory per process
Relocation-register scheme used to protect user processes from each other, and from
changing operating-system code and data
5. List out the Dynamic Storage Allocation Problem.
First fit – allocate the first hole that fits
Best fit – allocate the best-fitting hole
Worst fit – allocate the largest hole
First and best fit perform better than worst fit
These methods cause external fragmentation
o The 50-percent rule gives 33% memory loss
May be solved by compaction
o Code must be relocated at execution time
o Time consuming
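The three fit policies can be sketched in Python as follows (a hedged illustration; the hole list and function names are invented for the example, with holes given as (start, size) pairs):

```python
def first_fit(holes, size):
    """Return the first hole large enough for the request, or None."""
    return next((h for h in holes if h[1] >= size), None)

def best_fit(holes, size):
    """Return the smallest hole that still fits the request, or None."""
    fits = [h for h in holes if h[1] >= size]
    return min(fits, key=lambda h: h[1], default=None)

def worst_fit(holes, size):
    """Return the largest hole that fits the request, or None."""
    fits = [h for h in holes if h[1] >= size]
    return max(fits, key=lambda h: h[1], default=None)

# Illustrative hole list: (start address, size)
holes = [(0, 100), (200, 500), (800, 200), (1100, 300), (1500, 600)]
print(first_fit(holes, 212))   # (200, 500)  – first hole big enough
print(best_fit(holes, 212))    # (1100, 300) – smallest adequate hole
print(worst_fit(holes, 212))   # (1500, 600) – largest hole
```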
6. What is Swapping?
Move entire processes between main and secondary memory.
Informally, the term "swapping" is also applied to the movement of partial processes.
There are more processes than will fit in main memory.
A simple memory management policy.
Done by moving a complete process in or out of memory.
o Process data moved include:
Process control block (PCB)
Data variables (i.e., heap, stack)
Instructions (in machine language)
7. What is Segmentation?
Divide a process up into logically connected chunks which vary in size
- e.g. segment: a logically connected chunk of a process' address space
- it is the virtual address space that is being divided up
8. What is Virtual Memory Operating System?
OS takes care of all aspects of address translation
Gives the user process the illusion that it has its own address space starting from 0
The user process address space can be larger than main memory
Thus, the actual size of the user process ( 7 pages in our previous example) can be larger
than main memory
Load the pieces of the process that are needed immediately during execution
Other parts of the process can be stored in the swap space (backing store)
9. What are the advantages of Virtual Memory?
Can have processes larger than available memory
OS, not the programmer, manages the details
Performance is satisfactory when we have "locality of reference"
10.Write the disadvantage of virtual memory?
Can be slow because the running process stops whenever a required page is in the swap
space
11. What is Static binding?
Bind before execution time
Process must execute in the same area of memory (loader loads it into this area)
i) Absolute code - compiler generates code with the physical address
ii) Relocatable code - loader translates the address in the object module into the physical
address
12.What is Dynamic Binding?
Delay address binding until execution time
Use relocation registers (called segment base registers ; based on the beginning of the
program or page)
Binds a unit of code every time it re-enters main memory
Sets the base register
All addresses have the value from the base register added on before being used
13. What does Swap Space mean?
Divided into pages
Assume the complete process is first loaded into the swap space
More pages go back and forth between swap space and main memory
When a program is loaded, put it into the swap space
Memory and page size is always a power of 2
14. What is a Page Fault?
A page fault occurs when a process tries to access an address whose page is not currently in memory.
The process must be suspended; it leaves the processor and the ready list, and its status is now "waiting for main memory".
15.List out the Paging Operation.
Fetch policy
o Demand fetching
o Anticipatory fetching
Placement Policy
Replacement Policy
16. List out the Page Replacement Algorithms.
RAND (Random)
MIN (minimum) or OPT (optimal) :
FIFO (First In, First Out):
LRU (Least Recently Used):
NRU (Not Recently Used):
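FIFO and LRU from the list above can be compared by counting faults on a reference string (a hedged Python sketch; the reference string is a commonly used textbook example, and the function names are invented):

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    mem, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(queue.popleft())   # evict the oldest-loaded page
            mem.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, frames):
    """Count page faults under LRU replacement."""
    mem, faults = OrderedDict(), 0             # insertion order tracks recency
    for p in refs:
        if p in mem:
            mem.move_to_end(p)                 # referenced: now most recent
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)        # evict least recently used page
            mem[p] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3), lru_faults(refs, 3))   # 15 12
```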
17. What is Pool Method?
Once a page has been selected for replacement/removal, it joins a pool of pages waiting
for removal
If referenced while in the pool, it is immediately reactivated and removed from the pool
Dirty pages in the pool are written to disk whenever an i/o device is available,
If a page has an up to date copy on disk, it is called a clean page
The "page cleaning process" is the process that writes the dirty pages to disk
When the pool is full and a page is needed, remove any clean page from the pool and
reallocate it to another process
Only have to stop the running process and wait for a page out if there are no clean pages
Page faults take twice as long if we have to write dirty pages to disk
Often save a number of pages and write them out together in order of disk address
Used in Windows NT and Windows 2000.
18. What are Shared Pages?
Pages can be marked as READ-ONLY, READ-WRITE, etc., according to what segment they are in.
Processes can share pages that are marked as "READ-ONLY".
For example, the pages containing the executable code will be the same for several users who are all running the same program ("vi").
A page is kept if any process needs it.
Other pages are marked as "READ-WRITE" or "COPY-ON-WRITE", which means that a separate copy will be made for any sharing process that changes the page.
With COPY-ON-WRITE pages, if any process tries to change a page, then a copy is made, that process's page table is updated, and then execution continues.
19. What is Paged Segmentation?
Divide big segments into pages
Described in some more detail under Segmentation
20. What is Segmentation?
Divide a process's data into logical units, each called a segment.
Better for error checking, because all the data in one segment is of the same type. E.g. instructions cannot be changed (read-only).
E.g. an array (not in UNIX, but in VAX) – checking bounds on an array[1 ... 100]:
Make a segment large enough to hold exactly 100 entries.
Then an array reference of array[101] gives a segmentation fault.
Internal fragmentation – none – each segment can be defined to be the proper size (e.g. a word-aligned segment boundary, say divisible by 256).
External fragmentation can be a serious problem
Table overhead and transportation costs are similar to paging
21. What is Segmentation Placement?
Placement: where to locate (place) a segment
best fit: smallest hole (contiguous space) the segment can fit in
first fit: first hole (contiguous space) the segment can fit in
worst fit: largest hole (contiguous space) the segment can fit in
- this idea is perhaps good, but simulation experiments show that after a while, it tends to
eliminate all large holes and only be able to satisfy requests for small holes
22. What is Replacement?
Replacement: which segment to choose for removal when there is inadequate space
available
Compaction is an alternative action (no disk operations and relatively fast)
segment of a terminated process
segment of a blocked process
segment of a ready process
Could design LRU, FIFO, NRU, etc. algorithms
Paging is the winning approach, wastes little space with little management overhead
Paged segmentation is most common in current Operating Systems
First divide process into relatively large segments -- read/write/etc
Now divide segments into pages
Wastes part of a page at the end of every segment
Acts like a paging algorithm
Part B
1. Explain segmentation placement.
2. Discuss paged segmentation.
3. Explain shared pages.
4. Write a note on the Pool method.
5. Explain the Page Replacement Algorithms.
6. Discuss the Effective Memory Access Time.
7. Write briefly about Memory Addressing.
8. Explain Swapping.
9. Write a note on Fixed Partitions.
10. Explain Variable Partitions.
Part- A
1. Define Disk Scheduling
The ideal storage device is
o Fast
o Big (in capacity)
o Cheap
o Impossible
Disks are big and cheap, but slow.
2. List out the components of Disk Hardware
Show a real disk opened up and illustrate the components
Platter
Surface
Head
Track
Sector
Cylinder
Seek time
Rotational latency
Transfer time
3. What is meant by Error Handling?
Disk error rates have dropped in recent years. Moreover, bad-block forwarding is done by the controller (or disk electronics), so this topic is no longer as important for the OS.
4. Define Track Caching
Often the disk/controller caches a track, since the seek penalty has already been paid. In fact, modern disks have megabyte caches that hold recently read blocks. Since modern disks cheat and don't have the same number of blocks on each track, it is better for the disk electronics (and not the OS or controller) to do the caching, since it is the only part of the system to know the true geometry.
5. What are RAM Disks?
Fairly clear. Organize a region of memory as a set of blocks and pretend it is a disk.
A problem is that memory is volatile.
Often used during OS installation, before disk drivers are available (there are many types of
disk but all memory looks the same so only one ram disk driver is needed).
6. What are Memory-Mapped Terminals?
Less dated. But it still discusses the character not graphics interface.
Today, the idea is to have the software write into video memory the bits to be put on the
screen and then the graphics controller converts these bits to analog signals for the monitor
(actually laptop displays and very modern monitors are digital).
But it is much more complicated than this. The graphics controllers can do a great deal of
video themselves (like filling).
This is a subject that would take many lectures to do well.
7. What is Terminal Hardware?
Quite dated. It is true that modern systems can communicate with a hardwired ASCII terminal, but most don't. Serial ports are used, but they are normally connected to modems, and then some protocol (SLIP, PPP) is used, not just a stream of ASCII characters.
8. List out the File structure.
Byte stream
Record stream
Varied and complicated beast
9. List out the File types
(Regular) files.
Directories: studied below.
Special files (for devices). Uses the naming power of files to unify many actions.
dir # prints on screen
dir > file # result put in a file
dir > /dev/tape # results written to tape
"Symbolic" links (similar to "shortcuts"): also studied below.
"Magic number": identifies an executable file.
There can be several different magic numbers for different types of executables.
unix: #!/usr/bin/perl
10. What is File access?
There are basically two possibilities, sequential access and random access (a.k.a. direct access).
Previously, files were declared to be sequential or random. Modern systems do not do this.
Instead all files are random and optimizations are applied when the system dynamically
determines that a file is (probably) being accessed sequentially.
With sequential access the bytes (or records) are accessed in order (i.e., n-1, n, n+1, ...). Sequential access is the most common and gives the highest performance. For some devices (e.g. tapes) access "must" be sequential.
With random access, the bytes are accessed in any order. Thus each access must specify which bytes are desired.
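The two access patterns can be illustrated with Python file I/O (a hedged sketch; the file name and contents are invented): sequential reads consume bytes in order, while seek() implements random access by specifying the desired offset.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "records.bin")
with open(path, "wb") as f:
    f.write(b"AAAABBBBCCCCDDDD")      # four 4-byte "records"

with open(path, "rb") as f:
    first = f.read(4)                 # sequential: record 0
    second = f.read(4)                # sequential: record 1 (next in order)
    f.seek(3 * 4)                     # random: jump straight to record 3
    fourth = f.read(4)

print(first, second, fourth)          # b'AAAA' b'BBBB' b'DDDD'
```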
11. List out the File operations
Create: Essential if a system is to add files. Need not be a separate system call (can be merged
with open).
Delete: Essential if a system is to delete files.
Open: Not essential. An optimization in which the translation from file name to disk locations is performed only once per file rather than once per access.
Close: Not essential. Frees resources.
Read: Essential. Must specify file name, file location, number of bytes, and a buffer into which the data is to be placed. Several of these parameters can be set by other system calls, and in many OSs they are.
Write: Essential if updates are to be supported. See read for parameters.
Seek: Not essential (could be in read/write). Specify the offset of the next (read or write)
access to this file.
Get attributes: Essential if attributes are to be used.
Set attributes: Essential if attributes are to be user settable.
Rename: Tanenbaum has strange words here. Copy-and-delete is not acceptable for big files; moreover, copy-delete is not atomic. Indeed, link-delete is not atomic, so even if link (discussed below) is provided, renaming a file adds functionality.
12. What are Memory-Mapped Files?
Conceptually simple and elegant. Associate a segment with each file, and then normal memory operations take the place of I/O. Thus copyfile does not need fgetc/fputc (or read/write); instead it is just like memcpy:
while ((*dest++ = *src++));
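The same idea can be illustrated with Python's mmap module (a hedged sketch; the file name and contents are invented): once the file is mapped, ordinary slice operations replace read/write calls.

```python
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.txt")
with open(path, "wb") as f:
    f.write(b"hello, mapped world")

with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 0)      # map the whole file into memory
    first_word = m[0:5]               # reading memory reads the file
    m[0:5] = b"HELLO"                 # writing memory updates the file
    m.flush()
    m.close()

with open(path, "rb") as f:
    contents = f.read()
print(first_word, contents)           # b'hello' b'HELLO, mapped world'
```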
13. What are Path Names?
You can specify the location of a file in the file hierarchy by using either an absolute or a relative path to the file.
An absolute path starts at the (or, if we have a forest, a) root.
A relative path starts at the current (a.k.a. working) directory.
The special directories . and .. represent the current directory and the parent of the current directory, respectively.
14. List out the Directory operations
Create: Produces an "empty" directory. Normally the directory created actually contains . and .., so it is not really empty.
Delete: Requires the directory to be empty (i.e., to contain just . and ..). Commands are normally written that will first empty the directory (except for . and ..) and then delete it. These commands make use of file and directory delete system calls.
Opendir: Same as for files (creates a "handle").
Closedir: Same as for files.
Readdir: In the old days (of unix) one could read directories as files so there was no special
readdir (or opendir/closedir). It was believed that the uniform treatment would make
programming (or at least system understanding) easier as there was less to learn.
However, experience has taught that this was not a good idea since the structure of directories
then becomes exposed. Early unix had a simple structure (and there was only one). Modern
systems have more sophisticated structures and more importantly they are not fixed across
implementations.
Rename: As with files
Link: Add a second name for a file; discussed below.
Unlink: Remove a directory entry. This is how a file is deleted. But if there are many links
and just one is unlinked, the file remains.
PART - A
1. What is UNIX?
UNIX is an operating system that nowadays runs on most computer systems. An operating system is merely a computer program through which the user interacts with the computer and its components and peripheral devices (processor, processes, files, disks, terminals, printers, plotters, etc.).
2. What is meant by the kernel?
The kernel is very small and always resides in main memory. It consists of about 10,000 lines of C code and about 1,000 lines of assembly code. The small size of the kernel makes it easy to understand, debug, or enhance.
The UNIX kernel is broadly divided into two parts:
Information Management
Process Management
3.List out different types of file in UNIX.
• Ordinary Files
• Directory Files
• Special Files
• FIFO Files
4. Define Mounting a File System
Mounting is usually done at the time of booting UNIX by the system administrator. If all users with all their directories were to be supported all the time in one file system, it would be a difficult task.
The mounting facility gives the system manager the flexibility to change or tune the file system as needed. It also aids security.
5. What is the logical layout of the file system?
The logical layout of the file system is:
Part 1 - Boot Block
Part 2 - Super Block
Part 3 - Inode Block
Part 4 - Data Block
6. Define the OPEN System Call
When a process wants to perform any operation on a file, it has to open it first. The format of this system call is as follows:
fd = open(pathname, mode, flag, permissions)
fd - file descriptor
pathname - pathname of the file
mode - whether the file is to be opened in read or write mode
flag - indicator
permissions - access rights to be granted for reading or writing the file
7. What is meant by lseek?
lseek is also called random seek. The read and write system calls allow sequential reading or writing of bytes with respect to the offset maintained for the file in the appropriate file table entry; the lseek system call allows a random seek instead. The syntax of this call is:
position = lseek(fd, offset, reference)
fd - file descriptor
offset - the new relative byte number (RBN)
reference - an indicator of how the offset is interpreted
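The open/lseek pattern described above can be illustrated with Python's os-level calls, which mirror the UNIX system calls (a hedged sketch; the file name, contents, and offsets are invented):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "sample.txt")
with open(path, "wb") as f:
    f.write(b"0123456789")

fd = os.open(path, os.O_RDONLY)       # fd = open(pathname, mode)
pos = os.lseek(fd, 4, os.SEEK_SET)    # position = lseek(fd, offset, reference)
data = os.read(fd, 3)                 # read 3 bytes starting at the new offset
os.close(fd)
print(pos, data)                      # 4 b'456'
```

Here os.SEEK_SET plays the role of the reference argument: it says the offset is measured from the beginning of the file (SEEK_CUR and SEEK_END measure from the current position and the end).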
8. List out the data structures maintained by UNIX.
The data structures maintained by UNIX are:
• Process Table (PT)
• u-area
• Per Process Region Table (Pregion)
• Region Table(RT)
• Page Map Tables (PMT)
• Kernel Stack(KS)
9. What is meant by swapping in UNIX?
A swap device is a part of the disk. Only the kernel can read data from the swap device or write it back. The kernel allocates one block at a time to ordinary files, but in the case of the swap device this allocation is done contiguously to achieve higher I/O speed while performing the functions of swapping in or out.
10. Define Demand Paging.
In demand paging, the process image is divided into equal-sized pages and the physical memory is divided into page frames of the same size.
The process image resides on the disk in the “executable file”. The blocks allocated to this file by the kernel need not be contiguous. When the process starts executing, depending upon the free physical page frames, an equal number of pages are loaded and execution begins.
PART- B
1. What happens when
• A file is opened?
• A file with three links is deleted?
• A file requests for additional blocks?
• A shell program is invoked?
2. Describe the salient features of the file system of UNIX.
3. How does UNIX provide file protection?
4. Explain the merits and demerits of file protection.
5. Explain MOUNT / UNMOUNT in UNIX. What is its purpose?