
Name: Kaiwalya A. Kulkarni    Reg. No.: 2015BCS019    Roll No.: A-18

Assignment - 4
Q1: Describe how connectionless communication between a client and a server proceeds when
using sockets.

Ans:
Both the client and the server create a socket, but only the server binds the socket to a local
endpoint. The server can then subsequently do a blocking read call in which it waits for incoming
data from any client. Likewise, after creating the socket, the client simply does a blocking call to
write data to the server. There is no need to close a connection.
Connectionless sockets do not establish a connection over which data is transferred. Instead,
the server application makes its name (address) known so that clients can send requests to it.
Connectionless sockets use the User Datagram Protocol (UDP) instead of TCP.
The following figure illustrates the client/server relationship of the socket APIs used in the
examples for a connectionless socket design.

Socket flow of events: Connectionless server


The following sequence of socket calls describes the figure and the relationship between
the server and client applications in a connectionless design. The first example, a connectionless
server, uses the following sequence of API calls:
1. The socket() API returns a socket descriptor, which represents an endpoint. The statement
also identifies that the Internet Protocol address family (AF_INET) with the UDP transport
(SOCK_DGRAM) is used for this socket.
2. After the socket descriptor is created, a bind() API gives the socket a unique name. In this
example, s_addr is set to zero (INADDR_ANY), which means that UDP port 3555 is bound
to all IPv4 addresses on the system.
3. The server uses the recvfrom() API to receive incoming data. The recvfrom() API waits
indefinitely for data to arrive.
4. The sendto() API echoes the data back to the client.
5. The close() API ends any open socket descriptors.
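The five server-side steps above can be sketched in Python, whose socket module closely mirrors the C socket API named in the text. This is a minimal, self-contained sketch, not the original example program: it binds an ephemeral port instead of the example's 3555 to avoid conflicts, and adds a throwaway client socket so the echo can complete in one process.

```python
import socket

# 1. socket(): AF_INET with SOCK_DGRAM selects IPv4 with the UDP transport.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# 2. bind(): an s_addr of zero ("0.0.0.0") binds all local IPv4 addresses;
#    port 0 asks the system for an ephemeral port (the text uses 3555).
server.bind(("0.0.0.0", 0))
port = server.getsockname()[1]

# A throwaway client socket so this sketch is self-contained.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", port))

# 3. recvfrom(): blocks until a datagram arrives; it also yields the
#    sender's address, which a connectionless server needs to reply.
data, addr = server.recvfrom(1024)

# 4. sendto(): echo the data back to the client's address.
server.sendto(data, addr)
echoed, _ = client.recvfrom(1024)
print(echoed)  # b'hello'

# 5. close(): release both socket descriptors.
client.close()
server.close()
```

Note that, unlike a connection-oriented server, the server here must capture the peer address from recvfrom() itself, since there is no accepted connection to reply over.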
Socket flow of events: Connectionless client
The second example of a connectionless client uses the following sequence of API calls.
1. The socket() API returns a socket descriptor, which represents an endpoint. The statement
also identifies that the Internet Protocol address family (AF_INET) with the UDP transport
(SOCK_DGRAM) is used for this socket.
2. In the client example program, if the server string that was passed into the inet_addr() API
was not a dotted decimal IP address, then it is assumed to be the host name of the server. In
that case, use the gethostbyname() API to retrieve the IP address of the server.
3. Use the sendto() API to send the data to the server.
4. Use the recvfrom() API to receive the data from the server.
5. The close() API ends any open socket descriptors.
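Step 2 of the client flow, the dotted-decimal check followed by a host-name lookup, can be sketched as follows. The helper name is hypothetical; `socket.inet_aton()` stands in for the C `inet_addr()` test and `socket.gethostbyname()` matches the API named in the text.

```python
import socket

# If the server string is not a dotted-decimal IP address, treat it as a
# host name and resolve it, mirroring the inet_addr()/gethostbyname() logic.
def resolve_server(server: str) -> str:
    try:
        socket.inet_aton(server)   # parses only (dotted-)decimal addresses
        return server              # already an IP address: use it as-is
    except OSError:
        return socket.gethostbyname(server)  # otherwise resolve the name

print(resolve_server("127.0.0.1"))  # '127.0.0.1'
print(resolve_server("localhost"))  # usually '127.0.0.1'
```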

Q2: An alternative definition for a distributed system is that of a collection of independent
computers providing the view of being a single system, that is, it is completely hidden from
users that there are even multiple computers. Give an example where this view would come in
very handy.

Ans:
What immediately comes to mind is parallel computing. If one could design programs that
run without any serious modification on distributed systems that appear to be the same as
nondistributed systems, life would be so much easier. However, achieving a single-system view is
by now considered virtually impossible when performance is in play.

Q3: Assume a client calls an asynchronous RPC to a server, and subsequently waits until the
server returns a result using another asynchronous RPC. Is this approach the same as
letting the client execute a normal RPC? What if we replace the asynchronous RPCs with
one-way RPCs?

Ans:
No, this is not the same. An asynchronous RPC returns an acknowledgment to the
caller, meaning that after the first call by the client, an additional message is sent across
the network. Likewise, the server receives an acknowledgment that its response has been delivered
to the client. Two one-way RPCs would be the same only if reliable communication is guaranteed,
which is generally not the case.

Q4: Suppose that you could make use of only transient asynchronous communication
primitives, including only an asynchronous receive primitive. How would you implement
primitives for transient synchronous communication?

Ans:
Consider a synchronous send primitive. A simple implementation is to send a message to the
server using asynchronous communication, and subsequently let the caller continuously poll for an
incoming acknowledgment or response from the server. If we assume that the local operating
system stores incoming messages into a local buffer, then an alternative implementation is to block
the caller until it receives a signal from the operating system that a message has arrived, after which
the caller does an asynchronous receive.
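The polling variant described above can be sketched as follows. The channel is simulated with in-process queues and the primitive names are hypothetical; the point is only that a blocking (synchronous) send is built from an asynchronous send plus a non-blocking receive that the caller polls.

```python
import queue
import threading
import time

# Simulated message channels (an assumption of this sketch).
to_server = queue.Queue()
to_client = queue.Queue()

def async_send(q, msg):
    q.put(msg)                     # returns immediately, never blocks

def async_recv(q):
    try:
        return q.get_nowait()      # non-blocking: None if nothing arrived
    except queue.Empty:
        return None

def server():
    msg = to_server.get()          # server handles the one request ...
    async_send(to_client, ("ack", msg))  # ... and acknowledges it

def sync_send(msg):
    async_send(to_server, msg)     # asynchronous send of the request
    while True:                    # caller continuously polls for the ack
        reply = async_recv(to_client)
        if reply is not None:
            return reply
        time.sleep(0.01)

threading.Thread(target=server, daemon=True).start()
result = sync_send("hello")
print(result)  # ('ack', 'hello')
```

The alternative in the answer, blocking until the OS signals message arrival, replaces the polling loop with a wait on that signal, avoiding the busy-waiting shown here.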
Q5: What is the difference between a network operating system and a distributed operating
system?

Ans:

1. Network OS: made up of software and associated protocols that allow a set of networked
   computers to be used together.
   Distributed OS: an ordinary centralized operating system that runs on multiple
   independent CPUs.

2. Network OS: users are aware of the multiplicity of machines.
   Distributed OS: users are not aware of the multiplicity of machines.

3. Network OS: control over file placement is done manually by the user.
   Distributed OS: file placement can be done automatically by the system itself.

4. Network OS: performance is badly affected if certain parts of the hardware start
   malfunctioning.
   Distributed OS: more reliable and fault tolerant, i.e. the system keeps performing even if
   certain parts of the hardware start malfunctioning.

5. Network OS: remote resources are accessed either by logging into the desired remote
   machine or by transferring data from the remote machine to the user's own machine.
   Distributed OS: users access remote resources in the same manner as they access local
   resources.

Q6: In the text, we described a multithreaded file server, showing why it is better than a
single-threaded server and a finite-state machine server. Are there any circumstances in
which a single-threaded server might be better? Give an example.

Ans:
Yes. If the server is entirely CPU bound, there is no need to have multiple threads. It may
just add unnecessary complexity. As an example, consider a telephone directory assistance number
for an area with 1 million people. If each (name, telephone number) record is, say, 64 characters, the
entire database takes 64 megabytes, and can easily be kept in the server’s memory to provide fast
lookup.
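The sizing argument above can be checked with a quick back-of-the-envelope calculation, followed by a minimal single-threaded lookup over an in-memory table (the directory entries here are hypothetical):

```python
# 1 million records of 64 bytes each fit comfortably in memory, so a
# single-threaded server never blocks on disk I/O for a lookup and gains
# nothing from extra threads.
records = 1_000_000
record_size = 64  # bytes per (name, telephone number) record
total = records * record_size
print(total // 2**20, "MiB")  # 61 MiB (the text's "64 megabytes" uses 10^6)

# Minimal single-threaded lookup loop body over an in-memory dict.
directory = {"alice": "555-0100", "bob": "555-0101"}

def handle(request: str) -> str:
    return directory.get(request, "not found")

print(handle("alice"))  # 555-0100
```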

Q7: What is a three-tiered client-server architecture?

Ans:
A 3-tier architecture is a type of software architecture which is composed of three “tiers” or
“layers” of logical computing. They are often used in applications as a specific type of client-server
system. 3-tier architectures provide many benefits for production and development environments by
modularizing the user interface, business logic, and data storage layers. Doing so gives greater
flexibility to development teams by allowing them to update a specific part of an application
independently of the other parts. This added flexibility can improve overall time-to-market and
decrease development cycle times by giving development teams the ability to replace or upgrade
independent tiers without affecting the other parts of the system.
For example, the user interface of a web application could be redeveloped or modernized
without affecting the underlying functional business and data access logic underneath. This
architectural system is often ideal for embedding and integrating 3rd party software into an existing
application. This integration flexibility also makes it ideal for embedding analytics software into
pre-existing applications and is often used by embedded-analytics vendors for this reason. 3-tier
architectures are used in cloud-based or on-premises applications as well as in software-as-a-
service (SaaS) applications.
Presentation Tier - The presentation tier is the front-end layer in the 3-tier system and
consists of the user interface. This user interface is often a graphical one, accessible through a web
browser or web-based application, that displays content and information useful to an end user.
This tier is often built on web technologies such as HTML5, JavaScript, and CSS, or with other
popular web development frameworks, and communicates with the other layers through API calls.
Application Tier - The application tier contains the functional business logic which drives an
application’s core capabilities. It’s often written in Java, .NET, C#, Python, C++, etc.
Data Tier - The data tier comprises the database/data storage system and the data access layer.
Examples of such systems are MySQL, Oracle, PostgreSQL, Microsoft SQL Server, MongoDB, etc.
Data is accessed by the application layer via API calls.
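The separation of the three tiers can be illustrated with a toy sketch collapsed into one process: the presentation layer only formats, the application layer holds business logic, and the data layer answers queries. All names, prices, and the discount rule are hypothetical.

```python
# Data tier: stands in for a database reached through data-access calls.
_db = {"widget": 9.99, "gadget": 19.99}

def data_get_price(item):
    return _db.get(item)

# Application tier: business logic (a hypothetical 10% discount rule).
def app_discounted_price(item):
    price = data_get_price(item)
    return None if price is None else round(price * 0.9, 2)

# Presentation tier: renders the result for the end user; it never
# touches the data tier directly, only the application tier.
def present(item):
    price = app_discounted_price(item)
    return f"{item}: unavailable" if price is None else f"{item}: ${price}"

print(present("widget"))  # widget: $8.99
```

Because each tier calls only the one below it, the discount rule or the storage backend can be replaced without touching the presentation code, which is exactly the independence argument made above.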

Q8: Explain general design issues of client and server.

Ans:
Server Design Issues:
A server can be stateless or stateful. A stateless server does not maintain any information or
state about its clients, whereas a stateful server accumulates client information to function
properly. If a stateless server crashes, the client learns about it and can retry contacting it;
the server can simply be restarted and will function normally. However, if a stateful server
crashes in the middle of its operation, the server alone has the information needed to know where
to resume operation, so server crash recovery can be complicated. A stateful server also needs to
know about a client crash so that it can clean up the client information it holds. An
accelerator-control device server that sends back a number of replies for a single client request
needs to remember the client address and is therefore an example of a stateful server. A display
server, by contrast, is a stateless server, since it does not have to remember any client information.
Another issue in server design is security. Does the server need to identify the client before
accepting the request? If the server does employ some identification-checking scheme, it should
report security faults to some authority. Accelerator control facilities that give control-system
access to a large user community tend to have some kind of security scheme built into their system.
The issue of heterogeneity is important in server design. Several kinds of heterogeneity
need to be considered: machine-architecture independence, operating-system independence,
software-vendor implementation independence, and server-release independence. Different machine
architectures have different data representations; using higher-level languages can solve this
problem. Using standards and portable compilers gives operating-system independence.
Server-release independence implies that the client should be able to run independently of which
version of the service is available. Vendor dependencies must be eliminated to increase the
portability of the application.
Accelerator controls applications can be written in C or C++ to achieve
machine-architecture independence. Use of a portable compiler such as the GNU C or C++
compiler (provided by the Free Software Foundation) gives operating-system independence. Use of
standard libraries such as POSIX gives vendor independence. In accelerator control applications it
is common that a server and/or a client needs to be updated after it has been released, whether
because of added functionality or a bug fix in the server code. It is often desirable that the old
and new versions of a server coexist, such that the new server can service requests from both old
and new clients. Clients should be prepared to use the new server if it exists, or fall back to the
old one. Error reporting is one of the important features of the server. A server needs to return
the good or bad status of the service executed, so a well-defined interface covering all
service-related errors is crucial.

Client Design Issues:


There are some issues to consider while determining the timeout values for the client.
Servers are likely to take varying amounts of time to service individual requests, depending on
factors such as server load, network routing and network congestion. The client should be prepared
for the worst conditions or for a variation of service timeouts.
A client can fail to communicate with a server for various reasons. For example, the client may
not find the address of the server, the network between the server and client may not be
operational, the machine on which the server runs may not be up, or the server itself may not be
running. The client needs to detect and report these errors in a well-defined fashion.
A client and a server running on two computers with different architectures pose a
data-interpretation problem. Various strategies can be used to overcome it. The client can convert
the data into a machine-independent format before sending it to the server; on receiving the
request, the server converts it into its native format. When sending the reply back to the client,
the server converts the data into the machine-independent format and the client converts it back
into its native format.
A second strategy is that the server always makes the data right, both after receiving and
before sending. This strategy assumes that the server knows its own native data formats as well
as the client's. Another strategy is that the client always makes the data conversions, before
sending and after receiving; in this case the client has to know its own native data format as
well as the server's. It is also possible to have the receiver always make the data right, in
which case both client and server have to know the architecture of the machine from which the
data came. Accelerator control applications can choose whichever of the above-mentioned
techniques suits their environment. However, the techniques of converting the data to a
machine-independent format, or of the receiver always making the data right, are supported by
standard industry tools such as RPC.
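The machine-independent-format strategy can be sketched with Python's struct module, whose `!` prefix selects network byte order (big-endian), analogous to htonl()/ntohl() in C. The three-integer message layout is a hypothetical example, not a format from the original text.

```python
import struct

# Sender side: convert native integers to the machine-independent wire
# format (network byte order), regardless of the sender's architecture.
def marshal(values):
    return struct.pack("!3i", *values)       # three 32-bit big-endian ints

# Receiver side: convert the wire format back into native integers,
# regardless of the receiver's architecture.
def unmarshal(wire):
    return list(struct.unpack("!3i", wire))

wire = marshal([1, -2, 300])
print(len(wire))        # 12 bytes: three 32-bit integers
print(unmarshal(wire))  # [1, -2, 300]
```

Because both sides convert to and from the same wire format, neither needs to know the other's architecture, which is why this strategy (used by RPC's external data representation) scales better than per-peer conversion.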

Q9: Would it make sense to limit the number of threads in a server process?

Ans:
Yes, for two reasons.
1) First, threads require memory for setting up their own private stack. Consequently, having many
threads may consume too much memory for the server to work properly.
2) A second, more serious reason is that, to an operating system, independent threads tend to operate
in a chaotic manner. In a virtual memory system it may be difficult to build a relatively stable
working set, resulting in many page faults and thus I/O. Having many threads may thus lead to a
performance degradation resulting from page thrashing.
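A common way to limit server threads in practice is a bounded thread pool: at most a fixed number of stacks exist at once, and extra requests queue instead of spawning new threads. A minimal sketch, with a hypothetical request handler:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for real request processing (hypothetical handler).
def handle(request_id):
    return request_id * 2

# max_workers puts a hard cap on concurrent threads: eight requests are
# served by at most four threads, bounding both stack memory and the
# working set the answer above worries about.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle, range(8)))

print(results)  # [0, 2, 4, 6, 8, 10, 12, 14]
```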

Q10: Having only a single lightweight process per process is also not such a good idea. Why
not?

Ans:
In this scheme, we effectively have only user-level threads, meaning that any blocking
system call will block the entire process.