UNIT-II
Contents
• Introduction
• Communication between Distributed Objects
• Remote Procedure call
• Events and Notifications
• Case Study: Java RMI
Communication between Distributed Objects
Object model:
Object references: Objects can be accessed via object references.
For example, in Java, a variable that appears to hold an object
actually holds a reference to that object. To invoke a method in an
object, the object reference and method name are given, together
with any necessary arguments. The object whose method is
invoked is sometimes called the target and sometimes the
receiver.
Interfaces: An interface provides a definition of the signatures of a
set of methods (that is, the types of their arguments, return values
and exceptions) without specifying their implementation.
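In Java, for example, an interface can be written down with signatures only. The sketch below is illustrative (the name Account and its methods are invented for this example); it follows the Java RMI convention of extending java.rmi.Remote, which is covered in the case study, and pairs the interface with a plain servant class implementing it.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical remote interface: only signatures are given - argument
// types, return types and the exception any remote invocation may raise.
interface Account extends Remote {
    void deposit(int amount) throws RemoteException;
    int getBalance() throws RemoteException;
}

// A servant class implementing the interface locally; this sketch shows
// only the interface/implementation split, with no networking involved.
class AccountServant implements Account {
    private int balance = 0;
    public void deposit(int amount) { balance += amount; }
    public int getBalance() { return balance; }
}
```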
Each process contains a collection of objects, some of which can receive both
local and remote invocations, whereas the other objects can receive only local
invocations, as shown in Figure.
We refer to objects that can receive remote invocations as remote objects. In Figure 5.12, the objects B and F are remote objects.
Exceptions: Any remote invocation may fail for reasons related to the invoked object being in a different process or computer from the invoker.
Design issues for RMI:
Two design issues are
• Choice of invocation semantics
• The level of transparency that is desirable for RMI
RMI Invocation Semantics:
The main choices are
• Retry request message: whether to retransmit the request message until
either a reply is received or the server is assumed to have failed.
• Duplicate filtering: when retransmissions are used, whether to filter out
duplicate requests at the server
• Retransmission of results: whether to keep a history of result messages to
enable lost results to be retransmitted without re-executing the operations
at the server.
Maybe semantics: With maybe semantics, the remote procedure call may be executed once or not at all. Maybe semantics arises when no fault-tolerance measures are applied, and it can suffer from the following types of failure:
• omission failures if the request or result message is lost;
• crash failures when the server containing the remote operation fails.
At-least-once semantics: With at-least-once semantics, the invoker receives
either a result, in which case the invoker knows that the procedure was
executed at least once, or an exception informing it that no result was received.
At-least-once semantics can be achieved by the retransmission of request
messages, which masks the omission failures of the request or result message.
At-least-once semantics can suffer from the following
types of failure:
• crash failures when the server containing the remote procedure fails;
• arbitrary failures – in cases when the request message is retransmitted, the
remote server may receive it and execute the procedure more than once,
possibly causing wrong values to be stored or returned.
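Combining retransmission of requests with duplicate filtering and retransmission of results avoids re-execution, giving at-most-once behaviour. The sketch below (all names illustrative) shows a server filtering duplicates by request identifier and replaying the stored result instead of re-executing the operation.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of duplicate filtering with a result history. A retransmitted
// request carries the same requestId; the server replays the stored
// result rather than executing the operation a second time.
class DedupServer {
    private final Map<Long, Integer> history = new HashMap<>();
    private int executions = 0;

    int handle(long requestId, int arg) {
        Integer cached = history.get(requestId);
        if (cached != null) {
            return cached;                 // duplicate: replay, do not re-execute
        }
        executions++;                      // the operation runs exactly once
        int result = arg * 2;              // stand-in for the real operation
        history.put(requestId, result);    // keep result for lost-reply retries
        return result;
    }

    int executions() { return executions; }
}
```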
Transparency:
RMI can offer location and access transparency, hiding the physical location of the (potentially remote) procedure and allowing local and remote procedures to be invoked in the same way.
Implementation of RMI:
Dispatcher: A server has one dispatcher and one skeleton for each class
representing a remote object. In our example, the server has a dispatcher and a
skeleton for the class of remote object B. The dispatcher receives request
messages from the communication module. It uses the operationId to select the
appropriate method in the skeleton, passing on the request message. The
dispatcher and the proxy use the same allocation of operationIds to the
methods of the remote interface.
Skeleton: The class of a remote object has a skeleton, which implements the
methods in the remote interface. These methods are implemented quite differently from
the methods in the servant that incarnates a remote object. A skeleton method
unmarshals the arguments in the request message and invokes the
corresponding method in the servant. It waits for the invocation to complete
and then marshals the result, together with any exceptions, in a reply message
to the sending proxy’s method.
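The dispatcher's use of the operationId can be sketched as follows. The id-to-method allocation here (0 = read, 1 = write) is an assumed convention that the client-side proxy would share, and all class names are illustrative.

```java
// A request message carrying an operationId and a (marshalled) argument.
class Request {
    final int operationId;
    final int arg;
    Request(int operationId, int arg) { this.operationId = operationId; this.arg = arg; }
}

// The servant that incarnates the remote object.
class Servant {
    private int state;
    int read() { return state; }
    void write(int v) { state = v; }
}

class Skeleton {
    private final Servant servant = new Servant();

    // Dispatch: use operationId to pick the skeleton method, which
    // unmarshals the argument and invokes the corresponding servant method.
    int dispatch(Request request) {
        switch (request.operationId) {
            case 0: return servant.read();
            case 1: servant.write(request.arg); return 0;
            default: throw new IllegalArgumentException("unknown operationId");
        }
    }
}
```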
Generation of the classes for proxies, dispatchers and skeletons:
The classes for the proxy, dispatcher and skeleton used in RMI are generated automatically by an interface compiler.
Remote Procedure Call
A remote procedure call (RPC) occurs when a client program calls a procedure in another program running in a server process.
Design issues for RPC:
The design issues are similar to those for RMI.
Implementation of RPC
Events and Notifications
An event occurs when some action happens at an object. If this information is to be sent to interested objects, it is sent in the form of a notification.
Distributed event-based systems extend the local event model by allowing multiple objects at different locations to be notified of events taking place at an object.
They use the publish-subscribe paradigm, in which an object that generates
events publishes the type of events that it will make available for observation by
other objects. Objects that want to receive notifications from an object that has
published its events subscribe to the types of events that are of interest to
them.
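The publish-subscribe interaction can be sketched in a few lines. This is a minimal, illustrative model (not a real event-service API): subscribers register a mailbox for a type of event, and publishing an event of that type delivers a notification to every matching mailbox.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal publish-subscribe sketch: event types are strings and a
// notification is delivered by appending it to each subscriber's mailbox.
class EventService {
    private final Map<String, List<List<String>>> subscriptions = new HashMap<>();

    // Subscribe: remember the mailbox interested in this type of event.
    void subscribe(String eventType, List<String> mailbox) {
        subscriptions.computeIfAbsent(eventType, t -> new ArrayList<>()).add(mailbox);
    }

    // Publish: deliver a notification to every subscriber of this type.
    void publish(String eventType, String notification) {
        for (List<String> mailbox : subscriptions.getOrDefault(eventType, new ArrayList<>())) {
            mailbox.add(notification);
        }
    }
}
```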
Distributed event-based systems have two main characteristics:
• heterogeneous: they can connect components that were not designed to interoperate
• asynchronous: notifications are sent asynchronously to subscribers, so publishers and subscribers need not synchronize
Simple Dealing Room System:
[Figure: external sources supply trading information to information provider processes, which send notifications to dealer processes running on the dealers’ computers.]
Consider a simple dealing room system whose task is to allow dealers using
computers to see the latest information about the market prices of the stocks
they deal in.
The market price for a single named stock is represented by an object with
several instance variables. The information arrives in the dealing room from
several different external sources in the form of updates to some or all of the
instance variables of the objects representing the stocks and is collected by
processes we call information providers.
Dealers are typically interested only in their specialist stocks.
A dealing room system could be modelled by processes with two different tasks:
• An information provider process continuously receives new trading
information from a single external source and applies it to the appropriate
stock objects. Each of the updates to a stock object is regarded as an event.
The stock object experiencing such an event notifies all of the dealers who have
subscribed to the corresponding stock. There will be a separate information
provider process for each external source.
• A dealer process creates an object to represent each named stock that the
user asks to have displayed. This local object subscribes to the object
representing that stock at the relevant information provider. It then receives
all the information sent to it in notifications and displays it to the user.
Participants in distributed event notifications:
The main component is an event service that maintains a database of published events and of subscribers’ interests. Events at an object of interest are published at the event service. Subscribers inform the event service about the types of events they are interested in; when an event occurs at an object of interest, a notification is sent to the subscribers to that type of event.
Roles of participating objects
Object of interest: this is an object that experiences changes of state, as a result
of its operations being invoked. Its changes of state might be of interest to other
objects.
Events: an event occurs at an object of interest as the result of the completion
of a method execution.
• Notification: a notification is an object that contains information about an
event.
• Subscriber: a subscriber is an object that has subscribed to some type of events in another object. It receives notifications about such events.
• Observer objects: the main purpose of an observer is to decouple an object
of interest from its subscriber. An object of interest can have many different
subscribers with different interests.
• Publisher: this is an object that declares that it will generate notifications of
particular types of event.
[Figure: three ways of delivering notifications from an object of interest to a subscriber, with and without an observer inside the event service.]
The figure shows three cases:
• An object of interest inside the event service without an observer. It sends
notifications directly to the subscribers.
• An object of interest inside the event service with an observer. The object of
interest sends notifications via the observer to the subscribers.
• An object of interest outside the event service. In this case an observer
queries the object of interest in order to discover when events occur. The
observer sends notifications to the subscribers.
Roles of observers:
• Forwarding: a forwarding observer may carry out all the work of sending notifications to subscribers on behalf of one or more objects of interest.
• Filtering of notifications: filters may be applied by an observer so as to reduce the number of notifications received, according to some predicate on the contents of each notification. For example, an event might relate to withdrawals from a bank account, but the recipient is interested only in those greater than $100.
• Patterns of events: when an object subscribes to events at an object of interest, it can specify the patterns of events that it is interested in. For example, a subscriber may be interested only when there are three withdrawals from a bank account without an intervening deposit.
• Notification mailboxes: notifications may need to be delayed until a potential subscriber is ready to receive them. For example, an observer may take on the role of a notification mailbox, receiving notifications on behalf of a subscriber and passing them on only when the subscriber is ready to receive them.
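The filtering role above can be sketched with a predicate over notification contents; the $100 withdrawal example becomes (class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Sketch of a filtering observer: it passes on only the notifications
// that satisfy a predicate on their contents.
class FilteringObserver {
    private final Predicate<Integer> filter;
    private final List<Integer> passedOn = new ArrayList<>();

    FilteringObserver(Predicate<Integer> filter) { this.filter = filter; }

    void onNotification(int withdrawalAmount) {
        if (filter.test(withdrawalAmount)) {
            passedOn.add(withdrawalAmount);  // forward to the subscriber
        }                                    // otherwise drop the notification
    }

    List<Integer> passedOn() { return passedOn; }
}
```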
Processes and Threads
• A thread is the operating system abstraction of an activity.
• An execution environment is the unit of resource management. That is, an execution environment consists of:
an address space;
thread synchronization and communication resources, such as semaphores and communication interfaces;
higher-level resources, such as open files.
• The central aim of having multiple threads of execution is to maximize the degree of concurrent execution between operations, thus enabling the overlap of computation with input and output, and enabling concurrent processing on multiprocessors.
1. Address spaces:
An address space is a process’s virtual memory; it is large and consists of one or more regions, separated by inaccessible areas of virtual memory.
A region is an area of contiguous virtual memory that is accessible by the
threads of the owning process.
Each region is specified by the following properties,
• Its extent (lowest virtual address and size)
• Read/write/execute permissions for the process’s threads
• Whether it can be grown upwards or downwards.
A typical address space has three regions:
a fixed, unmodifiable text region containing program code; a heap, part of which is initialized by values stored in the program’s binary file, and which is extensible towards higher virtual addresses; and a stack, which is extensible towards lower virtual addresses.
2. Creation of a new process
For a distributed system creation of a new process consists of two steps
• The choice of a target host
• The creation of an execution environment.
Choice of a target host:
Process allocation policies range from always running new processes at their originator’s workstation to sharing the processing load between a set of computers.
Policy categories for load sharing are
• Transfer policy, determines whether to situate a new process locally or
remotely. This may depend on whether the local node is lightly or heavily
loaded.
• Location policy, determines which node should host a new process selected
for transfer.
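A threshold-based transfer policy can be sketched as follows; the threshold value is an assumption for illustration, not taken from any particular system.

```java
// Sketch of a threshold-based transfer policy.
class TransferPolicy {
    private final double loadThreshold;

    TransferPolicy(double loadThreshold) { this.loadThreshold = loadThreshold; }

    // Transfer policy: run the new process locally unless the local node's
    // load exceeds the threshold; a separate location policy would then
    // choose which remote node hosts the transferred process.
    boolean runRemotely(double localLoad) {
        return localLoad > loadThreshold;
    }
}
```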
In sender-initiated load-sharing algorithms, the node that requires a new
process to be created is responsible for initiating the transfer decision. It
typically initiates a transfer when its own load crosses a threshold. By contrast,
in receiver-initiated algorithms, a node whose load is below a given threshold
advertises its existence to other nodes so that more heavily loaded nodes can
transfer work to it.
Migratory load-sharing systems can shift load at any time, not just when a new
process is created. They use a mechanism called process migration: the transfer
of an executing process from one node to another.
Creation of a new execution environment:
There are two approaches to defining and initializing the address space of a
newly created process. The first approach is used where the address space is of
a statically defined format. For example, it could contain just a program text
region, heap region and stack region.
Alternatively, the address space can be defined with respect to an existing
execution environment.
3. Threads
Consider the server shown in the figure. The server has a pool of one or more threads, each of which repeatedly removes a request from a queue of received requests and processes it.
Let us assume that each request takes, on average, 2 milliseconds of processing
plus 8 milliseconds of I/O (input/output) delay when the server reads from a
disk (there is no caching).
Consider the maximum server throughput, measured in client requests handled
per second, for different numbers of threads. If a single thread has to perform
all processing, then the turnaround time for handling any request is on average
2 + 8 = 10 milliseconds, so this server can handle 100 client requests per
second. Any new request messages that arrive while the server is handling a
request are queued at the server port.
Now consider what happens if the server pool contains two threads. We
assume that threads are independently schedulable – that is, one thread can be
scheduled when another becomes blocked for I/O. Then thread number two
can process a second request while thread number one is blocked, and vice
versa. This increases the server throughput. Unfortunately, in our example, the
threads may become blocked behind the single disk drive. If all disk requests are
serialized and take 8 milliseconds each, then the maximum throughput is
1000/8 = 125 requests per second.
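The throughput arithmetic above can be written out directly: with one thread each request takes 2 ms of processing plus 8 ms of disk I/O, giving 1000 / 10 = 100 requests per second, while with two threads the serialized disk becomes the bottleneck, giving 1000 / 8 = 125.

```java
// The two throughput bounds from the worked example (times in ms).
class ServerThroughput {
    static int singleThread(int cpuMs, int ioMs) {
        return 1000 / (cpuMs + ioMs);      // requests per second, one thread
    }

    static int diskBound(int ioMs) {
        return 1000 / ioMs;                // disk serializes all requests
    }
}
```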
Architectures for multi-threaded servers:
Figure shows one of the possible threading architectures, the worker pool
architecture. In its simplest form, the server creates a fixed pool of ‘worker’
threads to process the requests when it starts up. The module marked ‘receipt and queuing’ in the figure is typically implemented by an ‘I/O’ thread, which
receives requests from a collection of sockets or ports and places them on a
shared request queue for retrieval by the workers.
In the thread-per-request architecture the I/O thread spawns a new worker
thread for each request, and that worker destroys itself when it has processed
the request against its designated remote object.
The thread-per-connection architecture associates a thread with each
connection. The server creates a new worker thread when a client makes a
connection and destroys the thread when the client closes the connection.
The thread-per-object architecture associates a thread with each remote
object. An I/O thread receives requests and queues them for the workers, but
this time there is a per-object queue.
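The worker pool architecture can be sketched with a shared blocking queue. This is an illustrative model (names invented): requests go onto the queue, which a real server's I/O thread would fill, and a fixed pool of worker threads removes and processes them; processing here is a stand-in counter, and a negative request is used as a shutdown marker.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Worker pool sketch: a fixed pool of workers consumes a shared queue.
class WorkerPool {
    private final BlockingQueue<Integer> requestQueue = new ArrayBlockingQueue<>(100);
    private final AtomicInteger processed = new AtomicInteger();

    // Run 'workers' threads over 'requests' queued requests; returns the
    // number of requests actually processed.
    static int demo(int workers, int requests) {
        WorkerPool pool = new WorkerPool();
        Thread[] workerThreads = new Thread[workers];
        for (int i = 0; i < workers; i++) {
            workerThreads[i] = new Thread(() -> {
                try {
                    while (true) {
                        int request = pool.requestQueue.take(); // block until work arrives
                        if (request < 0) break;                 // shutdown marker
                        pool.processed.incrementAndGet();       // stand-in processing
                    }
                } catch (InterruptedException ignored) { }
            });
            workerThreads[i].start();
        }
        try {
            for (int r = 0; r < requests; r++) pool.requestQueue.put(r);
            for (int i = 0; i < workers; i++) pool.requestQueue.put(-1); // one marker per worker
            for (Thread worker : workerThreads) worker.join();
        } catch (InterruptedException ignored) { }
        return pool.processed.get();
    }
}
```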
Threads within clients: Consider a client whose first thread generates results to be passed to a server by remote method invocation but does not require a reply. Remote method invocations typically block the caller, even when there is strictly no need to wait. The client process can therefore incorporate a second thread, which performs the remote method invocations and blocks, while the first thread continues computing further results. The first thread places its results in buffers, which are emptied by the second thread; it is blocked only when all the buffers are full.
Threads versus multiple processes
We can summarize a comparison of processes and threads as follows:
• Creating a new thread within an existing process is cheaper than creating a
process.
• More importantly, switching to a different thread within the same process is
cheaper than switching between threads belonging to different processes.
• Threads within a process may share data and other resources conveniently and
efficiently compared with separate processes.
• But, by the same token, threads within a process are not protected from one
another.
Threads programming
Thread lifetimes
A new thread is created in the NEW (suspended) state.
After it is made RUNNABLE with the start() method, it executes the run() method of an object designated in its constructor.
A thread ends its life when it returns from the run() method. (Older APIs also offered a destroy() method, but it was never reliably implemented and has been removed from modern Java.)
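The lifetime above can be sketched in Java: the thread is constructed with a Runnable (whose body is its run() method), made runnable by start(), and ends when run() returns.

```java
// Sketch of a Java thread's lifetime from creation to termination.
class LifecycleDemo {
    static volatile boolean bodyRan = false;

    static boolean demo() {
        Thread thread = new Thread(() -> bodyRan = true); // created, not yet running
        thread.start();                                   // now RUNNABLE
        try {
            thread.join();                                // wait until run() returns
        } catch (InterruptedException e) {
            return false;
        }
        return bodyRan && !thread.isAlive();              // the thread has ended
    }
}
```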
Thread synchronization
Thread scheduling
In preemptive scheduling, a thread may be suspended at any point to make way
for another thread, even when the preempted thread would otherwise
continue running. In non-preemptive scheduling (sometimes called coroutine
scheduling), a thread runs until it makes a call to the threading system (for
example, a system call), when the system may deschedule it and schedule
another thread to run.