
A Multicast RPC Implementation for Java

Philip Russell and Chase Covello


Computer Science Department
University of California, Los Angeles
{prussell,chase}@cs.ucla.edu
http://code.google.com/p/multicast-rpc/

Abstract

We propose an extension of Remote Procedure Call (RPC) semantics to a multicast environment and discuss a Java implementation thereof. The implementation consists of a reliable, FIFO multicast provider, five interchangeable Total Order Broadcast algorithms, and an RPC client and server that make use of these technologies. We demonstrate that the implementation works correctly. Throughout the paper, we discuss future work that could be done on this project.

Introduction

Remote Procedure Calls (RPC) [1] are a convenient paradigm to use when programming a distributed system. A well-designed RPC implementation like Java’s RMI is transparent to the programmer – that is, remote procedure calls are made with identical syntax to local procedure calls [4]. This brings the ease of programming of shared memory multiprocessing systems to inexpensive multicomputer clusters built with commercial off-the-shelf components. Instead of explicitly passing messages between processes, the programmer simply sets the RPC mechanism up and performs method calls on objects. Threads can thus be moved to other computers without affecting method invocation semantics. This allows the programmer to focus on the higher level task of writing a correct program rather than worry about the explicit memory management required in distributed memory multiprocessors.

Point-to-point RPC between a client and a server system is mostly a solved problem [1], but RPC in a multicast environment is still an active field of research. RPC has been criticized for being inherently point-to-point [3], but we believe that the ability of a client system to execute an RPC call on many servers in parallel would be useful in many situations. For example, a master processor could send software updates or remote administration commands to a network of mobile devices with little effort on the part of the programmer. Thinking bigger, a network of mobile devices could be treated as a “grid,” bringing massively parallel computing power to any client within wireless range. Such a network could be scaled up or down without requiring any changes to client programs of the grid.

This paper demonstrates an implementation of multicast RPC in Java. Because it is written entirely in software without any modification to the language or compiler, our implementation does not have the transparency of Java RMI. However, it would be straightforward to integrate this code into the Java runtime system as an extension to RMI, providing more transparency. This project, then, is a proof of concept.

Multicast Implementation

Correct RPC semantics require both the sender and receiver to agree on the order in which procedures are executed. Multicast RPC is no different – in fact, all receivers should agree on the same ordering for procedure calls. This property, called Total Order Broadcast [2], is not provided in the Java standard library. Java’s network library supports the core transport-layer protocols of the Internet: TCP, UDP, and multicast UDP. Since multicast UDP does not provide any guarantee of reliable or in-order delivery, it is not a total order broadcast algorithm, and is thus unsuitable for use with RPC. Furthermore, we were unable to correctly route multicast packets on any of the networks we tested. Given the limited amount of time available for this project, we chose to implement a total order broadcast protocol in software atop TCP connections. This is not truly a multicast protocol, and in fact it is inefficient for production-quality multicast RPC, but we believe that the modular nature of the code lends itself to being replaced with a better implementation in the future, perhaps taking advantage of advanced multicast services in the operating system or network hardware.

Our total order broadcast implementation consists of the following modules:

• The multicast simulator. This is a reliable, FIFO multicast transport layer. It has a set of message sender and receiver classes, along with a multicast manager that tracks membership in the multicast group and relays the information to group members.

• The total order broadcast modules. These use the multicast simulator for their transport layer and add a total ordering guarantee to message transport. They are based on the pseudocode in [2]. Five implementations are available, which allows the RPC user to choose among the performance and reliability tradeoffs inherent in the algorithms.

Note that all the packages mentioned are relative to edu.ucla.cs.rpc.multicast.

Multicast Simulator

Our multicast code is located in the network package. Messages are sent using instances of the multicast sender, found in MulticastMessageSender.java. A message sender maintains a set of message receiver addresses. It sends a message by opening a TCP connection to each receiver in the set, serializing the message, and sending the serialized byte stream over the connection. The set of receiver addresses is kept up to date with the help of a separate thread that listens on a socket for broadcast updates.
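
To make the approach concrete, the following sketch shows how a sender of this kind might serialize a message and write it to each receiver over a fresh TCP connection. It is a minimal sketch, not the code in MulticastMessageSender.java; the Message type and its fields are assumptions made only for the illustration.

    import java.io.ObjectOutputStream;
    import java.io.Serializable;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.util.Set;
    import java.util.concurrent.CopyOnWriteArraySet;

    // Hypothetical message type; the real implementation uses its own Message class.
    class Message implements Serializable {
        final String payload;
        Message(String payload) { this.payload = payload; }
    }

    // Simplified sender: one TCP connection per receiver, one serialized object per send.
    class SimpleMulticastSender {
        // The receiver set is updated concurrently by a membership-listening thread.
        private final Set<InetSocketAddress> receivers = new CopyOnWriteArraySet<>();

        void addReceiver(InetSocketAddress addr) { receivers.add(addr); }
        void removeReceiver(InetSocketAddress addr) { receivers.remove(addr); }

        void send(Message m) {
            for (InetSocketAddress addr : receivers) {
                try (Socket s = new Socket(addr.getAddress(), addr.getPort());
                     ObjectOutputStream out = new ObjectOutputStream(s.getOutputStream())) {
                    out.writeObject(m);   // Java serialization of the message
                    out.flush();
                } catch (Exception e) {
                    // A production sender would retry or report the failed receiver.
                    System.err.println("send to " + addr + " failed: " + e);
                }
            }
        }
    }
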
Unicast messages are received with instances of MessageReceiver.java. This class runs a thread that listens on a socket for incoming messages. The messages are passed to the receiver’s message handler, a type of extensible event handler that the user provides. A handler might print the message to the console, queue it, or hand it off to a waiting processing thread, for example.
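
The handler abstraction might look roughly like the sketch below, which reuses the Message type from the previous sketch. The interface name and the queueing handler are illustrative assumptions; the actual API in the network package may differ.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Hypothetical callback interface invoked by the receiver thread for each message.
    interface MessageHandler {
        void handle(Message m);
    }

    // Example handler: enqueue messages for a separate processing thread.
    class QueueingHandler implements MessageHandler {
        private final BlockingQueue<Message> queue = new LinkedBlockingQueue<>();

        @Override
        public void handle(Message m) {
            queue.offer(m);              // never blocks; the queue is unbounded
        }

        Message nextMessage() throws InterruptedException {
            return queue.take();         // the processing thread blocks until a message arrives
        }
    }
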
Multicast messages arrive at instances of MulticastMessageReceiver.java. This class extends MessageReceiver.java to maintain membership in the multicast group, including sets of the sender and receiver unique identifiers. Updates to these sets are accomplished in the same way as for multicast senders.

A core concept in our multicast simulator is that of the multicast manager. It is implemented in MulticastManager.java. The multicast manager listens on a TCP socket for Join and Leave messages from the senders and receivers. Upon receiving such a message, it updates its internal membership sets and sends updates to members of the group. Multicast senders need to know the addresses of the group’s receivers, so those are broadcast to the senders whenever a receiver joins or leaves the group. Similarly, sender and receiver membership changes are broadcast to all receivers in the group.
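
A manager loop in this style could be sketched as follows. The Join/Leave message format and the broadcastMembership helper are assumptions made for the illustration; they are not the exact types used in MulticastManager.java.

    import java.io.ObjectInputStream;
    import java.net.InetSocketAddress;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.Set;
    import java.util.concurrent.CopyOnWriteArraySet;

    // Simplified membership manager: accepts Join/Leave requests over TCP and
    // rebroadcasts the updated membership sets to the group.
    class SimpleMulticastManager {
        enum Kind { JOIN_SENDER, LEAVE_SENDER, JOIN_RECEIVER, LEAVE_RECEIVER }

        // Hypothetical control message carrying the member's listening address.
        static class Membership implements java.io.Serializable {
            final Kind kind;
            final InetSocketAddress address;
            Membership(Kind kind, InetSocketAddress address) {
                this.kind = kind;
                this.address = address;
            }
        }

        private final Set<InetSocketAddress> senders = new CopyOnWriteArraySet<>();
        private final Set<InetSocketAddress> receivers = new CopyOnWriteArraySet<>();

        void run(int port) throws Exception {
            try (ServerSocket server = new ServerSocket(port)) {
                while (true) {
                    try (Socket s = server.accept();
                         ObjectInputStream in = new ObjectInputStream(s.getInputStream())) {
                        Membership m = (Membership) in.readObject();
                        switch (m.kind) {
                            case JOIN_SENDER:    senders.add(m.address);      break;
                            case LEAVE_SENDER:   senders.remove(m.address);   break;
                            case JOIN_RECEIVER:  receivers.add(m.address);    break;
                            case LEAVE_RECEIVER: receivers.remove(m.address); break;
                        }
                        broadcastMembership();   // push the new sets to senders and receivers
                    }
                }
            }
        }

        private void broadcastMembership() {
            // Omitted: send the receiver set to every sender and the membership
            // change to every receiver, as described in the text.
        }
    }
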
The combination of these four classes, along with Message objects, provides a reliable, FIFO multicast transport layer. Any single sender will observe a consistent ordering of its own messages among all the receivers in the group. For total ordering of all messages, however, one of the five total order broadcast algorithms is needed.

Fixed Sequencer

The simplest total ordering algorithm uses a single process through which all broadcast messages must be routed. This process, called a sequencer [2], receives one message at a time, so a total order is imposed automatically. The sequencer adds a sequence number to each incoming message, then broadcasts it to all receivers in the multicast group.

Our fixed sequencer resides in the sequencer.fixed package, in FixedSequencer.java. It uses a message receiver to accept incoming messages and broadcasts them with a multicast message sender. We have a test program in test.FixedTester.java.
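
The core of a fixed sequencer is little more than stamping and re-broadcasting, roughly as in the sketch below. It assumes the Message, MessageHandler, and SimpleMulticastSender types from the earlier sketches and a hypothetical SequencedMessage wrapper; FixedSequencer.java wires the same idea to the real receiver and sender classes.

    import java.util.concurrent.atomic.AtomicLong;

    // Hypothetical wrapper that attaches a sequence number to a message.
    class SequencedMessage extends Message {
        final long sequence;
        SequencedMessage(Message m, long sequence) {
            super(m.payload);
            this.sequence = sequence;
        }
    }

    // Fixed sequencer: every broadcast passes through this single process,
    // which imposes the total order by numbering messages as they arrive.
    class SimpleFixedSequencer implements MessageHandler {
        private final AtomicLong nextSequence = new AtomicLong(0);
        private final SimpleMulticastSender sender;

        SimpleFixedSequencer(SimpleMulticastSender sender) { this.sender = sender; }

        @Override
        public void handle(Message m) {
            long seq = nextSequence.getAndIncrement();   // one message at a time, one number each
            sender.send(new SequencedMessage(m, seq));   // relay to every receiver in the group
        }
    }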

Moving Sequencer

The Fixed Sequencer scheme relies on a single, fixed intermediary process to act as a relay between the group of senders (S) and the group of destinations (D). The main advantage of FS is that it is relatively simple, but it suffers from a single point of failure. If the FS process ever crashes, then the entire system stops processing messages: no message can be sent from S, and no member of D can receive a message. Instead of a single intermediary process, we can use a group of processes to relay messages. This is the Moving Sequencer (MS) scheme. It distributes the responsibility of relaying messages between S and D and balances the load of sending messages amongst its members.

As outlined in [2], the MS implementation relies on a token ring. A Token is an object that records a sequence number and keeps a list of messages that have been relayed between S and D. The algorithm proceeds as follows (a sketch appears after the list):

1. Each sender in S broadcasts to every member of MS.

2. Each sequencer waits for both a message and the token. As soon as it holds both of these objects, it attempts to add the message to the list of already relayed messages. If this operation fails, the sequencer knows that the message has already been relayed and should be dropped. Otherwise, it stamps the message with a sequence number and sends the message to D.
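
One way to express step 2 is sketched below, reusing the Message, SequencedMessage, and SimpleMulticastSender types from the earlier sketches. The token shown here is a simplified stand-in for the one in Token.java: it carries the next sequence number and the set of already-relayed message identifiers, and the set's add method doubles as the duplicate check.

    import java.util.HashSet;
    import java.util.Set;

    // Simplified token circulated around the ring of sequencers.
    class SimpleToken implements java.io.Serializable {
        long nextSequence = 0;
        final Set<String> relayedIds = new HashSet<>();   // ids of messages already relayed
    }

    class SimpleMovingSequencer {
        private final SimpleMulticastSender toDestinations;

        SimpleMovingSequencer(SimpleMulticastSender toDestinations) {
            this.toDestinations = toDestinations;
        }

        // Called once this sequencer holds both a pending message and the token.
        void relay(SimpleToken token, Message m, String messageId) {
            if (!token.relayedIds.add(messageId)) {
                return;                                   // already relayed by another sequencer: drop
            }
            long seq = token.nextSequence++;              // stamp with the next sequence number
            toDestinations.send(new SequencedMessage(m, seq));
            // The token, with its updated state, is then forwarded to the next ring member.
        }
    }
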
The token ring is implemented in the util.token package. The primary files are Token.java and NetworkTokenManager.java. A token manager is an object that is responsible for initializing the token, receiving it from the previous node, and sending it to the subsequent node in the ring when either a timeout occurs or it has been signaled to do so by another process.

The Moving Sequencer algorithm is implemented in the sequencer.moving package, in the file MovingSequencer.java. This class is tested by the file MovingTest.java, found in test.

The current implementation successfully constructs the token ring and relays only one copy of a message between S and D. There are no provisions for dealing with the possibility of a member of the ring failing. The current implementation could be extended towards this functionality by maintaining an online registry of token ring members. In the case of member failure, each member of the token ring would be notified by the registry, which would then calculate a new token ring by randomly assigning new previous and adjacent nodes to each member.

Privilege-Based

The Privilege-based scheme, like Moving Sequencer, is based on creating a token ring of processes. Instead of sequencers forming the ring, the members of S do so. The algorithm given in [2] is (see the sketch after this list):

1. Each member of S waits for both a message from the upper layer and the token.

2. When both of these objects are held, it assigns the message a sequence number and multicasts it to D.
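
The per-member logic is essentially the moving-sequencer relay with the sender itself holding the token, roughly as follows. The blocking queue standing in for messages from the upper layer is an assumption of this sketch, not the structure used in PrivilegedSender.java; the token and sender types are the ones from the earlier sketches.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Simplified privilege-based sender: a member of S that may only broadcast
    // while it holds the token.
    class SimplePrivilegedSender {
        private final BlockingQueue<Message> fromUpperLayer = new LinkedBlockingQueue<>();
        private final SimpleMulticastSender toDestinations;

        SimplePrivilegedSender(SimpleMulticastSender toDestinations) {
            this.toDestinations = toDestinations;
        }

        // The application layer hands messages to the sender through this method.
        void broadcast(Message m) { fromUpperLayer.offer(m); }

        // Called by the token manager whenever the token arrives at this node.
        void onToken(SimpleToken token) {
            Message m = fromUpperLayer.poll();            // message waiting from the upper layer?
            if (m != null) {
                long seq = token.nextSequence++;          // the token carries the shared sequence counter
                toDestinations.send(new SequencedMessage(m, seq));
            }
            // The token manager then passes the token to the next member of S,
            // whether or not a message was sent.
        }
    }
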
PB is a relatively simple scheme that distributes the load of sending messages from the upper layer amongst the members of S. The token ring is implemented in the same files mentioned in Moving Sequencer. The algorithm implementation is in PrivilegedSender.java, package privilege. It is tested in PrivilegeTest.java, found in the testing package. The purpose of the test is to create the token ring amongst S and to send messages from S to a destination.

Communications History

In contrast to the Fixed/Moving Sequencer and Privilege-based schemes, the Communications History class of algorithms does not differentiate between S and D. Instead, all processes in the system belong to the same group P. Of this class, we implemented the causal history algorithm, which proceeds as follows [2] (a sketch appears after the list):

1. Every member p of P keeps an array of logical clocks that records the maximum clock value of each process q of P, as known by p. Initially, all values are zero. An update occurs when p receives a message from q with a new clock value.

2. To broadcast, p updates its own logical clock, timestamps the message, and sends it across some FIFO channel (a TCP stream in our implementation).

3. When a process p receives a message from q, p updates its array of logical clocks to be the maximum of the current logical clock value for q and the value embedded in the message. p then creates a set of deliverable messages, defined as those that have been received but not yet delivered and whose timestamps are less than the minimum clock value across the logical clock array.

4. Each deliverable message is delivered to the layer above and added to the delivered set.
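
A compact version of the receive-and-deliver logic (steps 3 and 4) might look like the sketch below. It assumes single-threaded access, integer process ids, and a TimestampedMessage wrapper, none of which are taken from CausalProcess.java; the Message type is the one from the first sketch.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    // Hypothetical wrapper carrying the sender's id and logical timestamp.
    class TimestampedMessage extends Message {
        final int senderId;
        final long timestamp;
        TimestampedMessage(Message m, int senderId, long timestamp) {
            super(m.payload);
            this.senderId = senderId;
            this.timestamp = timestamp;
        }
    }

    // Simplified causal-history process: deliver a message only once every other
    // process is known to have advanced past its timestamp.
    class SimpleCausalProcess {
        private final long[] clocks;                       // clocks[q] = latest clock value seen from q
        private final List<TimestampedMessage> pending = new ArrayList<>();

        SimpleCausalProcess(int groupSize) { this.clocks = new long[groupSize]; }

        List<TimestampedMessage> receive(TimestampedMessage m) {
            clocks[m.senderId] = Math.max(clocks[m.senderId], m.timestamp);
            pending.add(m);

            long minClock = Arrays.stream(clocks).min().orElse(0);
            List<TimestampedMessage> deliverable = new ArrayList<>();
            for (TimestampedMessage p : new ArrayList<>(pending)) {
                if (p.timestamp < minClock) {              // every process has moved past this message
                    deliverable.add(p);
                    pending.remove(p);
                }
            }
            deliverable.sort((a, b) -> Long.compare(a.timestamp, b.timestamp));  // deliver in timestamp order
            return deliverable;                            // handed to the layer above by the caller
        }
    }
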
The algorithm imposes an order on the messages by restricting the conditions under which a message can be delivered to the layer above. As described above and detailed in [2], a message cannot be added to the deliverable set unless the process p has received a message from each other process q with a later timestamp than the message’s. A system-wide total order is thus imposed by forcing logically earlier messages to be delivered first.

The algorithm is implemented in the file CausalProcess.java, in package commhistory.

Destinations Agreement

The class of destinations agreement algorithms works on the principle that the senders broadcast to the destinations, which re-circulate enhanced messages amongst themselves to decide on the total order of message delivery.

Our implementation focuses on destinations agreement with agreed-upon sequence numbers [2]. The algorithm proceeds as follows (a sketch appears after the list):

1. Each sender in S broadcasts the message from the layer above in the usual manner.

2. Each destination in D records two message sets, stamped and received. It also initializes its logical clock to zero.

3. Upon receiving a non-timestamped message, the destination adds it to received, timestamps the message, and re-transmits the message to the other destinations.

4. Upon receiving a timestamped message, the destination checks whether it has a copy of the message from every other destination. If not, it records the sender id, keyed by the message. If so, it finds the globally maximum timestamp of the message, stamps a copy of the message with that global timestamp, and adds it to stamped.

5. For each message in stamped, the algorithm checks whether every message in received has a logically later timestamp. If this is true, the stamped message is delivered to the layer above.
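
Steps 4 and 5 amount to collecting one proposed timestamp per destination and agreeing on the maximum, roughly as in the sketch below. The per-message bookkeeping with a map from message id to proposed timestamps is an assumption of this sketch rather than the structure used in AgreementDest.java; the Message type comes from the first sketch.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.TreeMap;

    // Simplified destination-side agreement on sequence numbers.
    class SimpleAgreementDestination {
        private final int destinationCount;
        // message id -> timestamps proposed so far by the destinations
        private final Map<String, List<Long>> proposals = new HashMap<>();
        // agreed (global) timestamp -> message, waiting to be delivered in order
        private final TreeMap<Long, Message> stamped = new TreeMap<>();

        SimpleAgreementDestination(int destinationCount) { this.destinationCount = destinationCount; }

        // Step 4: record a proposed timestamp; once all destinations have proposed,
        // the maximum becomes the message's global timestamp.
        List<Message> onProposal(String messageId, long proposedTimestamp, Message m) {
            List<Long> seen = proposals.computeIfAbsent(messageId, id -> new ArrayList<>());
            seen.add(proposedTimestamp);
            if (seen.size() == destinationCount) {
                long global = seen.stream().mapToLong(Long::longValue).max().getAsLong();
                stamped.put(global, m);
                proposals.remove(messageId);
            }
            return deliverable();
        }

        // Step 5: a stamped message is deliverable once no still-undecided message
        // can end up with an earlier global timestamp.
        private List<Message> deliverable() {
            long pendingMin = proposals.values().stream()
                    .flatMap(List::stream)
                    .mapToLong(Long::longValue)
                    .min().orElse(Long.MAX_VALUE);
            List<Message> ready = new ArrayList<>();
            while (!stamped.isEmpty() && stamped.firstKey() < pendingMin) {
                ready.add(stamped.pollFirstEntry().getValue());
            }
            return ready;
        }
    }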

The algorithm enforces a total order on message delivery by finding the logically latest copy of each message and ensuring that it cannot be delivered unless all other received messages are logically later [2]. Our implementation is found in the file AgreementDest.java, package destination.agreement.

The algorithm assumes that there is a way to discover only the set D of destinations. We were not able to implement this functionality within the given time constraints. A possible extension to this project would be to use a different MulticastManager to negotiate the message channels amongst only D.

The RPC Layer

With a total order broadcast algorithm available, the actual RPC code becomes very simple. A program that wishes to call a remote method constructs a new RPC client instance, initializing it with a Sender implementation from one of the total order broadcast algorithms. The relevant code is in RPCClient.java. A client then invokes the call method, which accepts an object or Class instance, a method name, and a variable number of arguments. The call method packages these arguments into an RPCMessage and total order broadcasts it.
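
In outline, the client-side call path could be sketched as follows. The exact fields of RPCMessage and the sender interface shown here are assumptions; they stand in for the real types used by RPCClient.java.

    import java.io.Serializable;

    // Hypothetical wire format for a remote call: target class, method name, arguments.
    class RPCMessage implements Serializable {
        final String className;
        final String methodName;
        final Object[] arguments;
        RPCMessage(String className, String methodName, Object[] arguments) {
            this.className = className;
            this.methodName = methodName;
            this.arguments = arguments;
        }
    }

    // Minimal total-order sender interface; each broadcast algorithm would provide one.
    interface TotalOrderSender {
        void broadcast(Serializable payload);
    }

    class SimpleRPCClient {
        private final TotalOrderSender sender;

        SimpleRPCClient(TotalOrderSender sender) { this.sender = sender; }

        // Package the target, method name, and arguments and total order broadcast them.
        void call(Class<?> target, String methodName, Object... args) {
            sender.broadcast(new RPCMessage(target.getName(), methodName, args));
        }
    }
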
On the server side, an instance of RPCServer.java receives the message, unpacks its contents, and performs a dynamic class and method lookup using the Java reflection API. It then invokes the method using the provided arguments and returns.
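
The reflection step on the server amounts to a class lookup, a method lookup by name and argument types, and an invoke, roughly as below. Matching parameter types by the arguments' runtime classes is a simplification of this sketch; the dispatch in RPCServer.java may resolve methods differently. RPCMessage is the hypothetical type from the previous sketch.

    import java.lang.reflect.Method;

    // Server-side dispatch: look the class and method up by name and invoke it.
    class SimpleRPCServer {
        void handle(RPCMessage call) {
            try {
                Class<?> target = Class.forName(call.className);
                Class<?>[] parameterTypes = new Class<?>[call.arguments.length];
                for (int i = 0; i < call.arguments.length; i++) {
                    parameterTypes[i] = call.arguments[i].getClass();   // simplification: match by runtime class
                }
                Method method = target.getMethod(call.methodName, parameterTypes);
                Object receiver = target.getDeclaredConstructor().newInstance();
                method.invoke(receiver, call.arguments);                // the return value is discarded, as in the text
            } catch (Exception e) {
                System.err.println("RPC dispatch failed: " + e);
            }
        }
    }
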
Our implementation does not currently support return values. Any method may be invoked, but return values are discarded on the server side. We can think of two ways to support return values: either allow multicast messages to aggregate responses from the receivers and pass them back to the sender, or have the RPC client wait for responses from all the servers. An earlier version of the project tried the former solution, but it did not work well and complicated much of the total order broadcast code. Further work on this project might include having the RPC client wait for response messages from the servers and aggregating their return values.

Results

Our multicast RPC implementation works for our test cases. RPCClient.java contains a main method that starts a multicast manager, a fixed sequencer, a client, and several servers. It then instantiates an RPCTest class, which has an rpcTest method that prints its string argument to the standard output. The client calls this method with the string “Hello, world!”, then shuts down the servers, sequencer, and multicast manager. Running the program produces one “Hello, world!” on the console for each RPC server running.
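
Using the classes sketched earlier, a test in the same spirit could look like the fragment below. RPCTest and rpcTest are the names from the paper's test; the surrounding setup types are the hypothetical ones introduced in the earlier sketches, so the fragment illustrates the shape of the test rather than reproducing the main method of RPCClient.java.

    // A target class like the paper's RPCTest: one method that prints its argument.
    class RPCTest {
        public RPCTest() {}
        public void rpcTest(String text) {
            System.out.println(text);
        }
    }

    class MulticastRpcDemo {
        public static void main(String[] args) {
            // In the real test, a multicast manager, a fixed sequencer, and several
            // RPC servers are started first; here a single local server stands in
            // for the delivery path after total order broadcast.
            SimpleRPCServer server = new SimpleRPCServer();
            SimpleRPCClient client = new SimpleRPCClient(
                    payload -> server.handle((RPCMessage) payload));   // a trivial "broadcast" to one server

            client.call(RPCTest.class, "rpcTest", "Hello, world!");    // prints Hello, world! once per server
        }
    }
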
We are thus confident that we have a working RPC implementation.

Conclusion

In this paper we discussed the problem of making remote procedure calls to multiple destinations as a multicast operation. We outlined the architecture to achieve this goal in Java. The two primary components of the architecture are the multicast simulator and the total order algorithms. Together, these pieces create a working implementation of RPC semantics in a multicast environment. As discussed throughout the paper, there is future work to be done, starting with implementing a true multicast routing layer through which the message senders, receivers, and multicast managers communicate. Some of the total order broadcast modules, such as Moving Sequencer, can be extended with further functionality, and entirely new modules can be created. We envision performing experiments to obtain performance evaluation data for the different algorithms. We are confident that this work forms the basis for a modular, transparent testing and implementation suite for multicast remote procedure calls.

References

[1] Birrell, Andrew and Nelson, Bruce Jay. “Implementing Remote Procedure Calls.” ACM Trans. Computer Systems, Feb 1984.

[2] Défago, Xavier, Schiper, André, et al. “Total Order Broadcast and Multicast Algorithms: Taxonomy and Survey.” ACM Computing Surveys, Dec 2004.

[3] Tanenbaum, Andrew S. and van Renesse, Robbert. “A Critique of the Remote Procedure Call Paradigm.” Proc. EUTECO 88, Apr 1988.

[4] Waldo, Jim. “Remote procedure calls and Java Remote Method Invocation.” IEEE Concurrency, Jul 1999.
