
(a) Summarise the differences between the BSD Socket InterProcess Communications scheme and the ONC RPC scheme.

Interprocess communication in 4.4BSD is organized around communication domains. Domains currently supported include the local domain, for communication between processes executing on the same machine; the Internet domain, for communication between processes using the TCP/IP protocol suite; the ISO/OSI protocol family, for communication between sites required to run it; and the XNS domain, for communication between processes using the Xerox Network Systems (XNS) protocols.
Within a domain, communication takes place between endpoints known as sockets. As mentioned in Section 2.6, the socket system call creates a socket and returns a descriptor; the other IPC system calls are described in Chapter 11. Each socket has a type that specifies its communication semantics; these semantics include properties such as reliability and prevention of message duplication.
Each socket has an associated communication protocol, which provides the semantics required by the socket according to the latter's type. Applications may request a specific protocol when creating a socket, or may allow the system to select a protocol appropriate for the type of socket being created.
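As a minimal sketch of this interface, the following C fragment creates a stream socket in the Internet domain; passing 0 as the protocol argument lets the system select the default protocol (TCP) for the socket type:

    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
        /* AF_INET selects the Internet domain; SOCK_STREAM requests
           reliable, duplicate-free stream semantics.  Protocol 0 lets
           the system choose the default protocol for this type (TCP). */
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0) {
            perror("socket");
            return 1;
        }
        return 0;
    }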
Sockets may have addresses bound to them. The form and meaning of socket addresses depend on the communication domain in which the socket is created. Binding a name to a socket in the local domain causes a file to be created in the filesystem.
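The following sketch shows this for the local domain; the path /tmp/demo.sock is an arbitrary example, and a successful bind creates that file in the filesystem:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void)
    {
        int s = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr;

        memset(&addr, 0, sizeof addr);
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, "/tmp/demo.sock", sizeof addr.sun_path - 1);

        /* Binding this name creates the file /tmp/demo.sock. */
        if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0)
            perror("bind");
        return 0;
    }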
Normal data transmitted and received through sockets is untyped; data-representation issues are the responsibility of libraries built on top of the interprocess-communication facilities. In addition to transporting normal data, communication domains may support the transmission and reception of specially typed data termed access rights. For example, the local domain uses this facility to pass descriptors between processes.
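Descriptor passing is done with sendmsg and an SCM_RIGHTS control message. The helper below is an illustrative sketch (the name send_fd is ours, not a standard API):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send the descriptor fd_to_pass over the local-domain socket sock. */
    int send_fd(int sock, int fd_to_pass)
    {
        char dummy = '*';
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        union {
            struct cmsghdr align;   /* guarantees proper alignment */
            char buf[CMSG_SPACE(sizeof(int))];
        } ctrl;
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = ctrl.buf, .msg_controllen = sizeof ctrl.buf,
        };
        struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);

        cm->cmsg_level = SOL_SOCKET;
        cm->cmsg_type = SCM_RIGHTS;     /* marks the data as access rights */
        cm->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cm), &fd_to_pass, sizeof(int));

        return sendmsg(sock, &msg, 0);
    }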
Networking implementations on UNIX before 4.2BSD usually worked by overloading the character-device interfaces. One goal of the socket interface was for naive programs to work without change on stream-style connections; such programs can work only if the read and write system calls are unchanged. Consequently, the original interfaces were left intact and were made to work on stream-type sockets. A new interface was added for more complicated sockets, such as those used to send datagrams, with which a destination address must be presented on each send call.
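A minimal sketch of that newer interface: a datagram socket has no implied peer, so sendto must name the destination on every call (the port and loopback address here are arbitrary examples):

    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Send one datagram from socket s to 127.0.0.1:9999. */
    int send_datagram(int s, const char *text)
    {
        struct sockaddr_in dst;

        memset(&dst, 0, sizeof dst);
        dst.sin_family = AF_INET;
        dst.sin_port = htons(9999);                   /* example port */
        dst.sin_addr.s_addr = inet_addr("127.0.0.1");

        /* Unlike write(), sendto() carries the destination address. */
        return sendto(s, text, strlen(text), 0,
                      (struct sockaddr *)&dst, sizeof dst);
    }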
Another benefit is that the new interface is highly portable. After a test release was made available from Berkeley, the socket interface was ported to System III by a UNIX vendor (although AT&T did not support the socket interface until the release of System V Release 4, deciding instead to use the Eighth Edition stream mechanism). The socket interface was also ported to run on many Ethernet boards by vendors, such as Excelan, that were selling into the PC market, where the machines were too small to run networking in the main processor. More recently, the socket interface served as the basis for Microsoft's Winsock networking interface for Windows.
Remote Procedure Calls are a way of doing distributed computing in which a program can call functions on remote systems and the remote system returns the results over the network. The ONC RPC (Open Network Computing Remote Procedure Call) scheme was first implemented by Sun Microsystems; it differs from the other significant RPC standard, Courier, by Xerox.
The most significant and widely used application based on ONC RPC is the NFS (Network File System) standard. NFS is based on RPC, and RPC is, in turn, based on the XDR (External Data Representation) standard. Each standard is formalized in an Internet RFC. Because detailed standards exist for NFS, RPC, and XDR, they are ideal for use in a non-proprietary operating-system environment such as Linux: no non-disclosure agreement or dubious reverse engineering is required.
Sun has provided two major freely distributable implementations of ONC RPC that cover most of the RPC programming environment. There are two flavors of RPC programming. The original and most widely used is called SunOS 4-style RPC and is based on the BSD socket API. The other flavor is called TI-RPC (Transport-Independent RPC) and is based on System V Release 4. It might better have been called TLI-dependent RPC, because it is built on STREAMS and the Transport Layer Interface. Linux uses the BSD socket API version of RPC, which is compatible with SunOS version 4.x.
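A minimal sketch of a SunOS 4-style client call looks like this; the host name and the program, version, and procedure numbers are hypothetical placeholders, and xdr_int is the stock XDR filter for integers:

    #include <stdio.h>
    #include <sys/time.h>
    #include <rpc/rpc.h>

    /* Hypothetical program/version/procedure numbers for illustration. */
    #define DEMO_PROG 0x20000001
    #define DEMO_VERS 1
    #define DEMO_PROC 1

    int main(void)
    {
        CLIENT *clnt = clnt_create("server.example.com",
                                   DEMO_PROG, DEMO_VERS, "udp");
        int arg = 42, result = 0;
        struct timeval timeout = { 5, 0 };

        if (clnt == NULL) {
            clnt_pcreateerror("server.example.com");
            return 1;
        }
        /* clnt_call marshals arg with xdr_int, sends the request,
           and unmarshals the reply into result. */
        if (clnt_call(clnt, DEMO_PROC,
                      (xdrproc_t)xdr_int, (caddr_t)&arg,
                      (xdrproc_t)xdr_int, (caddr_t)&result,
                      timeout) != RPC_SUCCESS)
            clnt_perror(clnt, "call failed");
        clnt_destroy(clnt);
        return 0;
    }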
In 2009, Sun relicensed the ONC RPC code under the standard 3-clause BSD license [1], a decision reconfirmed by Oracle Corporation in 2010 following confusion about the scope of the relicensing.
ONC is considered "lean and mean", but it has limited appeal as a generalized RPC system for WANs or heterogeneous environments. Systems such as DCE, CORBA, and SOAP are generally used in this wider role.

Resources:
https://en.wikipedia.org/wiki/Open_Network_Computing_Remote_Procedure_Call

(b) Summarise the differences between the ONC RPC scheme and the CORBA scheme.
Open Network Computing (ONC) Remote Procedure Call (RPC) is a very widely deployed remote procedure call system. ONC is based on the calling conventions used in Unix and the C language. It serializes data using the External Data Representation (XDR), which is also used to encode and decode data in files that are to be accessed on more than one platform. ONC then delivers the XDR payload using either UDP or TCP. Access to RPC services on a machine is provided via a port mapper that listens for queries on a well-known port (number 111) over both UDP and TCP. ONC is considered lean and mean, but it has limited appeal as a generalized RPC system for WANs or heterogeneous environments; systems such as DCE, CORBA, and SOAP are used in this wider role.
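As a small sketch of consulting the port mapper from C, the fragment below asks which port the NFS program (program number 100003, version 3) is registered on over UDP; the loopback address is an arbitrary example:

    #include <stdio.h>
    #include <string.h>
    #include <rpc/rpc.h>
    #include <rpc/pmap_clnt.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        struct sockaddr_in addr;
        u_short port;

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(111);            /* the well-known port */
        addr.sin_addr.s_addr = inet_addr("127.0.0.1");

        /* pmap_getport queries the port mapper on the given host. */
        port = pmap_getport(&addr, 100003, 3, IPPROTO_UDP);
        if (port == 0)
            fprintf(stderr, "service not registered\n");
        else
            printf("NFS is on port %u\n", port);
        return 0;
    }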
Remote procedure calls allow programs on different, potentially remote machines to interact. A remote procedure call is, as the name implies, the invocation of a procedure of a program located on a remote host (the RPC server). Doing so requires the procedure arguments on the client side to be encoded, or marshalled, i.e., converted to a representation suitable for transfer over the network. On the server side, upon reception of the RPC, those arguments must be decoded, or unmarshalled, i.e., converted back to a form directly understandable by the server program: for instance, data using Scheme data types, should the server program be written in Scheme. The value returned by the RPC server must be encoded and decoded similarly.
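A minimal sketch of marshalling and unmarshalling with the ONC XDR routines, encoding an int into a memory buffer and decoding it back:

    #include <stdio.h>
    #include <rpc/rpc.h>   /* the XDR routines ship with the RPC library */

    int main(void)
    {
        char buf[64];
        XDR enc, dec;
        int arg = 1234, copy = 0;

        /* Marshal: encode the int into the buffer in XDR form. */
        xdrmem_create(&enc, buf, sizeof buf, XDR_ENCODE);
        if (!xdr_int(&enc, &arg))
            return 1;

        /* Unmarshal: decode it back from the same bytes. */
        xdrmem_create(&dec, buf, sizeof buf, XDR_DECODE);
        if (!xdr_int(&dec, &copy))
            return 1;

        printf("decoded %d\n", copy);   /* prints 1234 */
        return 0;
    }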
When the ONC RPC protocol is used, the way data items are encoded is dictated by the XDR standard. This encoding has the advantage of being particularly compact, allowing for relatively low bandwidth requirements and fast implementations, especially compared to more verbose RPC protocols such as XML-RPC and SOAP.
The XDR encoding is not self-describing: given an arbitrary XDR-encoded sequence, it is impossible to determine the XDR type of the encoded data. This is different from D-Bus, for example, which uses a compact and self-describing encoding. In practice, a non-self-describing encoding is sufficient for a wide range of applications.

The Common Object Request Broker Architecture (CORBA), a standard defined by the Object Management Group (OMG), is designed to facilitate the communication of systems deployed on diverse platforms. CORBA enables collaboration between systems on different operating systems, programming languages, and computing hardware. CORBA has many of the same design goals as object-oriented programming: encapsulation and reuse. CORBA uses an OO model, although the systems that utilize CORBA do not have to be OO. CORBA is an example of the distributed object paradigm.
The CORBA specification dictates that there shall be an ORB through which an application interacts with other objects. This is how it is implemented in practice:
1. The application initializes the ORB and accesses an internal Object Adapter, which maintains things like reference counting, object (and reference) instantiation policies, and object lifetime policies.
2. The Object Adapter is used to register instances of the generated code classes. Generated code classes are the result of compiling the user IDL code, which translates the high-level interface definition into an OS- and language-specific class base for use by the user application. This step is necessary in order to enforce CORBA semantics and provide a clean user process for interfacing with the CORBA infrastructure.
Some IDL mappings are more difficult to use than others. For example, due to the nature of Java, the IDL-to-Java mapping is rather straightforward and makes use of CORBA very simple in a Java application. This is also true of the IDL-to-Python mapping. The C++ mapping requires the programmer to learn datatypes that predate the C++ Standard Template Library (STL). By contrast, the C++11 mapping is easier to use but requires heavy use of the STL. Since the C language is not object-oriented, the IDL-to-C mapping requires a C programmer to manually emulate object-oriented features, as the sketch below illustrates.
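As a rough, illustrative sketch of that emulation, consider a hypothetical IDL interface Counter with a single increment operation. In the C mapping, each operation becomes a plain function taking the object reference and an exception-carrying environment as explicit parameters; all names and typedefs below are placeholders for declarations that a real ORB's IDL compiler would generate:

    /* Hypothetical IDL compiler output for:
     *   interface Counter { long increment(in long by); };
     * The typedefs stand in for ORB-provided declarations. */
    typedef void *Counter;          /* opaque object reference */
    typedef long  CORBA_long;
    typedef struct CORBA_Environment CORBA_Environment; /* exceptions */

    /* The "method on an object" becomes an ordinary function: the
     * object reference and the environment are passed explicitly. */
    CORBA_long Counter_increment(Counter obj,
                                 CORBA_long by,
                                 CORBA_Environment *ev);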
In order to build a system that uses or implements a CORBA-based distributed object interface, a developer must either obtain or write the IDL code that defines the object-oriented interface to the logic the system will use or implement. Typically, an ORB implementation includes a tool called an IDL compiler that translates the IDL interface into the target language for use in that part of the system. A traditional compiler then compiles the generated code to create the linkable object files for use in the application.

[Figure: how the generated code is used within the CORBA infrastructure, illustrating the high-level paradigm for remote interprocess communication]

The CORBA specification further addresses data typing, exceptions, network protocols, communication timeouts, etc. For example, the server side normally has the Portable Object Adapter (POA), which redirects calls either to the local servants or (to balance the load) to other servers. The CORBA specification (and thus the figure above) leaves various aspects of the distributed system to the application to define, including object lifetimes (although reference-counting semantics are available to applications), redundancy/fail-over, memory management, dynamic load balancing, and application-oriented models such as the separation between display/data/control semantics (e.g. see Model-View-Controller).
In addition to providing users with a language- and platform-neutral remote procedure call (RPC) specification, CORBA defines commonly needed services such as transactions and security, events, time, and other domain-specific interface models.

Resources:
http://www.gnu.org/software/guile-rpc/manual/guile-rpc.html
https://en.wikipedia.org/wiki/Common_Object_Request_Broker_Architecture

(c) Summarise the differences between the CORBA scheme and the Web Services scheme.
First, one of the most important differences between CORBA and Web Services is how an application is modeled in each case: while CORBA is a true OO component framework, Web Services have no notion of objects; they are centered around a message-passing paradigm. Moreover, in CORBA the interaction between client and server can be done directly, without any further intermediation (except for the ORB, of course). The client obtains a handle to a CORBA object and then applies a method to it; the result of the call is possibly another CORBA object to which it can further apply other methods. In Web Services everything is decoupled: the client sends a message and receives a message, and the response does not give immediate access to the next step.
CORBA applications can obtain the desired scalability and reliability by combining the Portable Object Adapter policies with the Fault-Tolerant CORBA features and the Load-Balancing CORBA service [3]. In Web Services these kinds of issues are left to the components. For instance, application servers such as Red Hat JBoss, IBM's WebSphere, and Apache Tomcat each implement their own mechanisms for handling scalability and reliability.

In CORBA, client applications invoke operations on an opaque object reference, disregarding whether the object itself is local or remote. Web Services client applications are referred to services by URLs, which implicitly encode the location of the service. However, the information about the service's location can be changed via DNS, exposing the whole infrastructure to security threats. On the other hand, encoding the network information in URLs makes it possible to write proxy services simply by manipulating the network address at the application level.
The CORBA security service supports a very wide variety of features such as authentication, delegation, auditing, etc. Web Services do not provide any standard security services, although some aspects of security can be dealt with at the transport-protocol level. In particular, SOAP does not specify any security features, but it makes it possible to exploit technologies such as XML Signatures or SSL to achieve as much interoperability as possible [13] [14].
Recently, many important Web services vendors have started providing proprietary security solutions. In most cases, these solutions have already been implemented in the CORBA security service or are derived from it, so they represent a largely superfluous development effort.
The tests were conducted on a LAN (Local Area Network) running at 100 Mb/s. Stress and performance testing used a desktop PC, a Dell OptiPlex 740 (2.3 GHz, 2 GB RAM) running GNU/Linux Ubuntu 10.04, acting as the client, and a MacBook Pro 5,5 (2.3 GHz, 8 GB RAM) running OS X Lion as the server. All measurements were performed in a lab, running the tests for each technology in isolation, so that the memory, latency, and processor measurements were not affected by processes other than this experiment.
The measurements to be performed:
- Memory used on the client.
- CPU usage on the client.
- CPU usage on the server.
- Latency for a single request.
- Latency for multiple requests.
- Total bytes transferred.
- Packet count.
The applications used in this work are:
- A simple arithmetic operation.
- The sending of a text string.

This section compares the CORBA and Web service technologies based on two different aspects.
First, we provide comparisons based on the computing model. Next, we compare them based on the
features supported by each technology.
Aspect                 | CORBA                                            | Web services
Data model             | Object model                                     | SOAP message exchange model
Client-server coupling | Tight                                            | Loose
Location transparency  | Object references                                | URL
Type system            | IDL; static + runtime checks                     | XML schemas; runtime checks only
Error handling         | IDL exception                                    | SOAP fault messages
Serialization          | Built into the ORB                               | Can be chosen by the user
Parameter passing      | By reference and by value (valuetype)            | By value (no notion of objects)
Transfer syntax        | CDR used on the wire (binary format)             | XML used on the wire (Unicode)
State                  | Stateful                                         | Stateless
Request semantics      | At-most-once                                     | Defined by SOAP
Runtime composition    | DII                                              | UDDI/WSDL
Registry               | Interface Repository, Implementation Repository  | UDDI/WSDL
Service discovery      | CORBA naming/trading service, RMI registry       | UDDI
Language support       | Any language with an IDL binding                 | Any language
Security               | CORBA security service                           | HTTP/SSL, XML Signature
Firewall traversal     | Work in progress                                 | Uses HTTP port 80
Events                 | CORBA event service                              | N/A

Table 2: Comparison between CORBA and Web services


One important observation concerning CORBA and Web services is that whatever can be accomplished with CORBA can be accomplished using Web service technologies, and vice versa, although the amount of effort required would be noticeably different. In particular, one can implement CORBA on top of SOAP, or SOAP on top of CORBA.
Table 2 provides an overview of comparisons between the two technologies along several architectural dimensions. Table 3, on the other hand, provides a high-level comparison of the technology stacks, comprising the layers that come into play when building a distributed service.
CORBA stack            | Web Services stack
IDL                    | WSDL
CORBA Services         | UDDI
CORBA Stubs/Skeletons  | SOAP Message
CDR binary encoding    | XML Unicode encoding
GIOP/IIOP              | HTTP
TCP/IP                 | TCP/IP

Table 3: CORBA and Web services technology stacks

References:
http://www.researchgate.net/publication/261074660_Evaluation_of_CORBA_and_Web_Services_in_distributed_applications
http://www2002.org/CDROM/alternate/395/

(d) Summarise the differences between the GRID Computing model and the Cloud Computing model.
Grid computing is, basically, more than one computer coordinating to solve a problem together. It is often used for problems that involve a lot of number crunching, which can be easily split up using parallel computing. In cloud computing, an application does not access the required resources directly; instead, it accesses them through a service. So instead of reading data from a particular hard drive or asking a specific CPU to compute something, it sends its requests to a service. The service usually has very large physical resources behind it, which can be allocated dynamically as and when they are needed.
In this way, if an application requires a small amount of some resource, say computation, the service will allocate only a small amount, on a single machine. If the application requires a large computation, the service may allocate a grid of CPUs. In this way the application can scale well. For example, a web site written "on the cloud" may share a server with many other web sites while it has a low amount of traffic, but may be moved to its own dedicated server, or grid of servers, if it ever attracts massive amounts of traffic. This is all handled by the cloud service, so the application should not have to be modified drastically to cope.
A cloud usually makes use of a grid; a grid, however, is not necessarily a cloud or part of a cloud.

Both cloud computing and grid computing are scalable. Scalability is achieved through load balancing of application instances that run separately on different operating systems and are connected through Web services. Network bandwidth and CPUs are allocated and de-allocated according to the demand of the client or application. The system's storage capacity goes up and down depending on the number of users, the number of instances, and the amount of data being transferred at a given time.
Both types of computing involve multitenancy and multitasking, meaning that many customers can perform different tasks by accessing one or more application instances. Sharing resources among a large number of users helps reduce peak-load capacity and infrastructure costs. Both provide service-level agreements (SLAs) for uptime availability: if the service falls below the guaranteed uptime level, the provider must pay service credits to the consumers.
Though storage in a computing grid is well suited to data-intensive workloads, it is not economical for storing objects as small as one byte. In a data grid, the amount of data distributed must be large for the benefit to be maximized.
A computational grid focuses on computationally intensive operations. In the field of cloud computing, Amazon Web Services offers two corresponding types of instances: standard and high-CPU.
The difference between grid computing and cloud computing is hard to grasp because they are not always mutually exclusive. In fact, both are used to economize computing by maximizing existing resources. Additionally, both architectures use abstraction extensively, and both have distinct elements that interact with each other. However, the difference between the two lies in the way tasks are computed in each respective environment. In a computational grid, one large job is divided into many small portions and executed on multiple machines. This characteristic is fundamental to a grid; not so in a cloud.
The computing cloud is intended to allow the user to make use of various services without investing in the underlying architecture. While grid computing also offers a similar facility for computing power, cloud computing is not restricted to just that: a cloud can offer many different services, from web hosting right down to word processing. In fact, a computing cloud can combine services to present the user with a homogeneous, optimized result.
There are many computing architectures that are often mistaken for one another because of certain shared characteristics. Again, these architectures are not mutually exclusive; they are, however, conceptually distinct.
A grid is a hardware and software infrastructure that clusters and integrates high-end computers, networks, databases, and scientific instruments from multiple sources to form a virtual supercomputer on which users can work collaboratively within virtual organisations. Grids are mostly free and are used mainly for academic research and similar work.
Clouds are a large pool of easily usable and accessible virtualized resources (such as hardware, development platforms, and/or services). These resources can be dynamically reconfigured to adjust to a variable load (scale), allowing also for optimum resource utilization. This pool of resources is typically exploited on a pay-per-use model in which guarantees are offered by the Infrastructure Provider through customized service-level agreements.
The cloud is not free: it is a service, provided by different service providers, and they charge according to the work done.

References:
http://www.researchgate.net/post/What_are_the_differences_between_grid_computing_and_cloud_computing
http://stackoverflow.com/questions/1067987/what-is-the-difference-between-cloud-computing-and-grid-computing
http://www.brighthub.com/environment/green-computing/articles/68785.aspx
