
SUBMITTED TO: Ms. RAJDEEP KAUR, Lect., CSE Dept.
SUBMITTED BY: Kamal, B.TECH IT
Q1: Is disk scheduling other than FCFS scheduling useful in a single-
user environment? Justify your answer.

Ans:
In a single-user environment, the I/O queue is usually empty or nearly
empty. Requests generally arrive from a single process for one block or for a
sequence of consecutive blocks. In these cases, FCFS is an economical method
of disk scheduling. But LOOK is nearly as easy to program and will give much
better performance when multiple processes are performing concurrent I/O,
such as when a web browser retrieves data in the background while the
operating system is paging and another application is active in the foreground.
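To make the comparison concrete, here is a small Python sketch that totals head movement under FCFS and LOOK. The request queue and starting cylinder are the classic textbook example values, chosen only for illustration:

```python
# Compare total head movement (in cylinders) for FCFS vs LOOK scheduling.

def fcfs(start, requests):
    """Total head movement when requests are served in arrival order."""
    total, pos = 0, start
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

def look(start, requests, direction=1):
    """Total head movement under LOOK: sweep one way to the last
    request, then reverse and serve the rest."""
    total, pos = 0, start
    ahead = sorted(r for r in requests if r >= start)
    behind = sorted((r for r in requests if r < start), reverse=True)
    first, second = (ahead, behind) if direction == 1 else (behind, ahead)
    for r in first + second:
        total += abs(r - pos)
        pos = r
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]  # illustrative cylinder numbers
print(fcfs(53, queue))   # 640 cylinders of movement
print(look(53, queue))   # 299 cylinders: sweep up to 183, then down to 14
```

Even on this tiny queue, LOOK cuts the head movement by more than half, which is why it pays off as soon as multiple processes queue requests concurrently.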

Q2: What are the tradeoffs involved in rereading code pages from the file
system versus using swap space to store them?

Ans:
The tradeoffs involved in rereading code pages from the file system versus
using swap space to store them are as follows. If code pages are stored in
swap space, they can be transferred more quickly to main memory, because
swap-space allocation is tuned for faster performance than general file-system
allocation.
On the other hand, using swap space can add start-up time if the pages are
copied there at process invocation rather than just being paged out to swap
space on demand. Also, more swap space must be allocated if it is used for
both code and data pages.

Q3: If we use a shared stack for parameter passing, are there any
security threats? Justify your answer with an example.

Ans:

Yes, using a shared stack for parameter passing introduces security threats:
a malicious caller can overwrite another routine's parameters or return
addresses on the shared stack, as in a classic buffer-overflow attack. A
related example of a system that trades security for convenience is NFS.
Based on the RPC (Remote Procedure Call) protocol, the Network File System
was originally created by Sun Microsystems in the 1980s to share files
between disparate Unix systems.

NFS is a client/server implementation that makes remote disks transparently


available on a local client. It utilizes several daemons and configuration files to
enable file sharing. By default, this process is all undertaken without any
separate authentication, which makes NFS a security risk.

To make this work, NFS runs on the UDP protocol, which is a connectionless protocol


because it does not require any acknowledgement of packet delivery. NFS tries
to make up for this by forcing an acknowledgement of every command it sends.
If the acknowledgement arrives, it continues sending data; if none is received
within a certain amount of time, the data is retransmitted.

NFS involves not only the NFS protocol, but also the MOUNT protocol. These
protocols are implemented in NFS in the form of the daemons: rpc.mountd, nfsd
and portmap on the server end. rpc.mountd tells nfsd what file systems are
available to be mounted by client hosts. portmap handles the RPC-based
services.

On the client end, NFS is employed through the biod, rpc.statd and rpc.lockd
daemons. biod does read-ahead and write-behind performance optimizations for
the client, running in multiple instances. rpc.statd and rpc.lockd maintain the
file locking and lock recovery. Key files include the /etc/exports file, which
defines what shares are available and to whom, and /etc/fstab, which maintains
the mounted file system list for the client.
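As an illustration of these two files (the host names and paths here are hypothetical), a server-side /etc/exports entry and the matching client-side /etc/fstab line might look like:

```
# /etc/exports on the server: export a directory read-write to one client
/home/shared    client.example.com(rw,sync)

# /etc/fstab on the client: mount that export at boot
server.example.com:/home/shared  /mnt/shared  nfs  defaults  0  0
```

Note that access control here is by host name alone, which is exactly the weak default authentication mentioned above.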

Q4: If the users are given the privilege of using input and output
operations on their own, will the system remain protected?

Ans:

When users are given the privilege of performing input and output operations
on their own, the system becomes less protected: there is no restriction on
the users, and they can perform various operations which in some cases can
harm the system. A related example of centralized administrative control is
the Network Information Service (NIS), formerly known as the Yellow Pages, a
distributed database system that centralizes commonly accessed UNIX files
like /etc/passwd, /etc/group, or /etc/hosts. The master server maintains the
files, while the clients seamlessly access the information across the network.

The information accessed in NIS is housed in files called maps. In addition to


the central master server, where all maps are maintained, and the clients that
access them, slave servers exist.

These slaves can handle client requests for map access, but no changes to the
maps are made on the slaves. Changes are made only at the master server, and
then distributed from the master.

Clients know to access the NIS maps when a + is placed in their local files.
Also, the /etc/nsswitch.conf file specifies the order in which to look up
name-service information: through DNS (Domain Name Service), NIS, or local files.
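For example (an illustrative fragment, not a complete file), an /etc/nsswitch.conf might direct lookups to local files first, then NIS, then DNS:

```
# /etc/nsswitch.conf: lookup order for each name-service database
passwd:  files nis
group:   files nis
hosts:   files nis dns
```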

NIS is implemented through several daemons as well. Ypserv is the daemon on


the server side, and ypbind on the client side for making NIS requests. Maps are
transferred manually to slaves after updates are made (using yppush) or through
ypxfrd automatically (slaves check timestamps on the master and update
accordingly).

So it is better to have some restrictions on public computers. For example,
in our university we have separate accounts for students and teachers.

Q5: Discuss a means by which managers of systems connected to the
Internet could have designed their systems to limit or eliminate the
damage done by the worm?

Ans:
Here are means by which managers of systems connected to the Internet
could design their systems to limit or eliminate the damage done by a worm.
Modern operating systems have fewer vulnerabilities that can lead to massive
Internet worms. For instance, during 2002-2005, Microsoft Windows worms
like Blaster, Nachi, Sasser and Zotob infected a large number of systems on
the Internet. There have not been any new large-scale worms targeting
Windows services since 2005.
On the other hand, vulnerabilities found in anti-virus, backup or other
application software can also result in worms. Most notable was the worm
exploiting the Symantec anti-virus buffer-overflow flaw last year.
Users who are allowed by their employers to browse the Internet have become a
source of major security risk for their organizations. A few years back securing
servers and services was seen as the primary task for securing an organization.
Today it is equally important, perhaps even more important, to prevent users
having their computers compromised via malicious web pages or other client-
targeting attacks.
Attackers are finding more creative ways to obtain sensitive data from
organizations. Therefore, it is now critical to check the nature of any data
leaving an organization's boundary.

Q6: There are many RAID levels available to the user. Which RAID
level suits the user best, and why?

Ans:
There are many RAID levels available to the user, but in my opinion
RAID 0 suits users best.

• RAID 0 is the simplest level, as it just involves striping. Data
redundancy is not present in this level, so it is not recommended for
applications where data is critical.
• This level offers the highest levels of performance out of any single level.
It also offers the lowest cost since no extra storage is involved. At least 2
hard drives are required, preferably identical, and the maximum depends
on the controller.
• None of the space is wasted as long as the hard drives used are identical.
This level has become popular with the mainstream market for its
relatively low cost and high performance gain.
• This level is good for most people who don't need any data redundancy.
There are many SCSI and IDE/ATA implementations available. Finally,
it's important to note that if any of the hard drives in the array fails, you
lose everything.
• RAID 1, by contrast, is usually implemented as mirroring: two identical
copies of the data are stored on two drives.
• When one drive fails, the other drive still has the data to keep the system
going. Rebuilding a lost drive is very simple since you still have the
second copy. This adds data redundancy to the system and provides some
safety from failures. Some implementations add an extra controller to
increase the fault tolerance even more.
• It is ideal for applications that use critical data. Even though the
performance benefits are not great, some might just be concerned with
preserving their data. The relative simplicity and low cost of
implementing this level has increased its popularity in mainstream
controllers. Most controllers nowadays implement some form of RAID.
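The difference between these two levels can be sketched with in-memory "drives" (plain Python lists of fixed-size chunks; the stripe size and data are made up for illustration):

```python
# Toy model of RAID 0 (striping) and RAID 1 (mirroring).

STRIPE = 4  # bytes per stripe unit (illustrative)

def raid0_write(data, drives):
    """Stripe data round-robin across the drives (no redundancy).
    Returns the number of chunks written."""
    chunks = [data[i:i + STRIPE] for i in range(0, len(data), STRIPE)]
    for i, chunk in enumerate(chunks):
        drives[i % len(drives)].append(chunk)
    return len(chunks)

def raid0_read(drives, nchunks):
    """Reassemble striped data in the same round-robin order."""
    out, idx = [], [0] * len(drives)
    for i in range(nchunks):
        d = i % len(drives)
        out.append(drives[d][idx[d]])
        idx[d] += 1
    return b"".join(out)

def raid1_write(chunk, mirrors):
    """Mirror every chunk to all drives; any surviving copy serves reads."""
    for m in mirrors:
        m.append(chunk)

drives = [[], []]
n = raid0_write(b"ABCDEFGHIJKL", drives)
print(raid0_read(drives, n))  # b'ABCDEFGHIJKL'
```

Losing either list in `drives` makes `raid0_read` unrecoverable, while after `raid1_write` either mirror alone still holds the full data, which is exactly the performance-versus-redundancy tradeoff between the two levels.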
