
1 Introduction

The most significant advantage of parallel simulation is shorter simulation run times. Simulations that require days to run on a sequential computer might be accomplished in hours if the simulation algorithm can identify and exploit parallelism in the model. Shorter run times translate into more experiments for fewer dollars. In the case of real-time simulations, such as trainers, the necessary time constraints might be achievable only with a distributed or parallel approach. Distributed simulation also opens the door to cooperation among geographically dispersed groups. For example, a weather model in Boston could be incorporated into a virtual battlefield in Washington. Fighter pilots in Tucson might then have training simulators connected to the virtual battlefield.

To construct effective distributed simulators, a host of issues must be addressed. Questions concerning problem partitioning, data representation, communication protocols, and other issues common to any distributed program must be contended with. Beyond these, a distributed simulator needs a synchronization protocol that preserves the ordering of cause and effect event pairs while allowing unrelated events to be processed in parallel. Several such synchronization protocols are currently in use. Many of these protocols make assumptions about the model under consideration that may not be generally applicable. Others make no such assumptions, but may not provide peak performance. Consequently, distributed simulators developed independently of one another tend to use different and seemingly incompatible synchronization protocols.


Recent initiatives to integrate these disparate simulators into a coherent whole have met with limited success. Past attempts at integration have modeled the distributed simulator as a collection of logical processes that interact via time stamped messages. This approach fails to recognize fundamental properties of the system that the distributed simulator is meant to simulate. In particular, it is inadequate to capture the system theoretic notions of time and causality, and the relationship between the two. A new model for distributed simulation systems is needed to make further progress toward a general purpose distributed simulation environment. Throughout this thesis, distributed computing refers to computation on a network of computers that communicate via message passing; no shared memory is assumed. To date, this has been the most popular paradigm for research in distributed discrete event simulation (Ferscha 95, Overeinder 91, Reed 87).

1.1 A taxonomy
It is useful to introduce a taxonomy for distributed simulation algorithms. The taxonomy provides a common language for discussing classes of simulation algorithms without focusing on the details of any particular algorithm. This taxonomy is shown in Figure 1.1.


Parallel and distributed simulation algorithms
    synchronous algorithms
        time driven
        event driven
    asynchronous algorithms
        conservative
        optimistic
            locally optimistic
            globally optimistic

Figure 1.1: A simulation algorithm taxonomy

Distributed and parallel algorithms encompass all simulation algorithms designed to exploit multi-processor computer architectures. Distributed algorithms are based on the message passing programming paradigm. Parallel algorithms exploit the shared memory paradigm (Hwang 93). Synchronous algorithms use a global clock to synchronize the forward progress of each process in the distributed system. Each process is allowed to advance its local clock to the time indicated by the global clock. The global clock, in turn, is advanced either to the next time step or to the smallest time of next event. Event driven algorithms advance time on the basis of the time stamp of the next unprocessed event. Only processes whose next event is to take place at that time are permitted to execute a transition. Examples of event driven algorithms include distributed event lists and the Parallel DEVS abstract simulator (Chow 94, Zeigler 00). Time driven algorithms advance time in fixed increments. Given a time step of h and initial time t0, inputs, outputs, and state changes are computed at t0, t0 + h, t0 + 2h, and so on. The Runge-Kutta method for numerical solutions to ordinary differential equations provides an example of a time driven algorithm (Yakowitz 89).

Asynchronous algorithms allow events to be processed beyond the global time of next event. The local clocks of each process are synchronized whenever process interactions occur. Conservative simulation algorithms allow a process to execute internal events so long as it is certain that the process will not receive an input with an earlier time stamp. The goal of the conservative algorithm is to prevent causality violations. Conservative simulators are often characterized by the use of a lookahead value that predicts the next output time of a process before the output generating state is reached. Examples of conservative simulation algorithms are the Chandy/Misra/Bryant algorithms (Ferscha 95) and the Conservative Parallel DEVS simulator (Zeigler 00).

Optimistic algorithms allow the processing of events well in advance of the global time of next event, possibly resulting in causality violations. When a violation is detected, the simulator restores the state of the offending process to some safe, checkpointed state, and then proceeds ahead once more. Locally optimistic simulators do not allow output generated at some time larger than the global time of next event to be released to other processes in the system. As a result, locally optimistic simulators only require a local rollback mechanism to recover from causality violations. Examples of locally optimistic algorithms are Breathing Time Buckets (Steinman 92) and the HORIZON and BASIC algorithms (Liao 93).
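The synchronous, event driven time advance described above can be sketched in a few lines of code. The sketch below is illustrative only; the class and method names are hypothetical and are not taken from any of the systems discussed in this chapter.

```python
# Illustrative sketch of a synchronous, event driven simulation loop.
# Each process is reduced to a time of next event (tN) and a transition
# function; all names here are hypothetical.

class Process:
    def __init__(self, name, first_event_time):
        self.name = name
        self.tN = first_event_time   # time of next internal event
        self.log = []                # record of event times, for inspection

    def transition(self, t):
        # Execute the event scheduled at time t and schedule the next one.
        self.log.append(t)
        self.tN = t + 1.0            # illustrative: a fixed event period

def run(processes, t_end):
    """Advance a global clock to the smallest time of next event; only
    processes whose next event occurs at that time may transition, and
    those imminent processes could do so in parallel."""
    while True:
        t = min(p.tN for p in processes)    # global time of next event
        if t > t_end:
            break
        imminent = [p for p in processes if p.tN == t]
        for p in imminent:                  # candidates for parallel execution
            p.transition(t)

procs = [Process("A", 0.0), Process("B", 0.5)]
run(procs, 2.0)
print([p.log for p in procs])   # [[0.0, 1.0, 2.0], [0.5, 1.5]]
```

Note that speedup is only available when the imminent set contains more than one process, which foreshadows the DEVS-C++ observation in Section 1.2.5.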


Globally optimistic algorithms allow each process to release output generated at some time after the global time of next event. In addition to the local rollbacks required by locally optimistic algorithms, so-called anti-messages are needed to undo outputs generated by a process in an invalid (i.e., causality-violating) state. Examples of globally optimistic algorithms are Time Warp and Breathing Time Warp (Steinman 93).
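The bookkeeping that a globally optimistic process needs can be made concrete with a small sketch: state saving before each event, rollback past a straggler, and anti-messages that cancel outputs sent from invalid states. This is a simplified illustration of the Time Warp idea, not code from any system named above, and all identifiers are hypothetical.

```python
# Minimal sketch of globally optimistic (Time Warp style) rollback
# machinery. Only the checkpoint/anti-message bookkeeping is shown.
import copy

class OptimisticProcess:
    def __init__(self, state):
        self.state = state
        self.lvt = 0.0      # local virtual time
        self.history = []   # (event time, state saved *before* the event)
        self.sent = []      # (send time, message) optimistically released outputs

    def process_event(self, t, update):
        # Checkpoint the state before every event so it can be restored later.
        self.history.append((t, copy.deepcopy(self.state)))
        self.lvt = t
        update(self.state)

    def send(self, t, msg):
        self.sent.append((t, msg))

    def rollback(self, straggler_t):
        """Undo every event with a time stamp later than the straggler,
        restore the matching checkpoint, and return the anti-messages
        needed to cancel outputs sent from the now-invalid states."""
        while self.history and self.history[-1][0] > straggler_t:
            _, saved = self.history.pop()
            self.state = saved
        anti = [(t, m) for (t, m) in self.sent if t > straggler_t]
        self.sent = [(t, m) for (t, m) in self.sent if t <= straggler_t]
        self.lvt = straggler_t      # ready to process the straggler
        return anti

p = OptimisticProcess({"count": 0})
for t in (1.0, 2.0, 3.0):
    p.process_event(t, lambda s: s.update(count=s["count"] + 1))
    p.send(t, "output@%g" % t)

# A straggler input arrives with time stamp 1.5: the events processed
# at 2.0 and 3.0 were premature and must be undone.
antimessages = p.rollback(1.5)
print(p.state)          # {'count': 1}
print(antimessages)     # anti-messages for the outputs at 2.0 and 3.0
```

A locally optimistic simulator would need only the checkpoint and rollback portions of this sketch, since it never releases outputs beyond the global time of next event and therefore never needs anti-messages.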

1.2 Existing Distributed Simulation Technologies

1.2.1 HLA


The High Level Architecture (HLA) is a Department of Defense initiative to support simulator interoperability. HLA provides a set of services that can be used to connect disparate simulators into a coherent whole (DMSO 98, Fujimoto 98). HLA supports simulations that use both synchronous and asynchronous algorithms. However, HLA requires that simulators produce sequences of output events with strictly increasing time stamps. This has the effect of prohibiting models that exhibit instantaneous (zero time) responses to input.
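The strictly increasing time stamp requirement, and the zero-time responses it rules out, can be illustrated with a trivial check (a generic illustration, not part of any HLA API):

```python
# Illustration of the strictly increasing time stamp constraint. A model
# that responds to an input in zero simulated time produces an output with
# the same time stamp as a preceding event, which violates the constraint.

def strictly_increasing(time_stamps):
    return all(a < b for a, b in zip(time_stamps, time_stamps[1:]))

print(strictly_increasing([1.0, 2.0, 3.5]))   # True: an acceptable output sequence
print(strictly_increasing([1.0, 2.0, 2.0]))   # False: a zero-time response at t = 2.0
```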

1.2.2 YADDES
YADDES supports a wide variety of distributed simulation algorithms, including distributed synchronized event lists, the conservative Chandy/Misra algorithm, and optimistic Time Warp simulation (Preiss 89). YADDES is focused on speedup and presents a model specification language that is consistent with this goal. A YADDES model consists of logical processes that communicate via time stamped events, with a state changing function attached to each event type. A model specification is written in the YADDES language and compiled into a C language realization of the model's simulator. The user can select a distributed simulation algorithm at link time. Similar to HLA, YADDES requires that logical processes produce sequences of output events with strictly increasing time stamps.

1.2.3 ADAPT
The ADAPT environment supports mixed conservative and optimistic simulation by abstracting key features of the distributed algorithm. The ADAPT environment consists of a global control mechanism that performs message passing and virtual time computations, local controllers that manage the local time at each node, and logical processes that are assigned one per node (Jha 94). Similar to YADDES, models in ADAPT are represented as logical processes that communicate via time stamped events. ADAPT differs, however, in that each logical process can have a different local control mechanism. For example, a simulation consisting of three logical processes might have two optimistic local controllers and one conservative local controller.

1.2.4 SPEEDES
SPEEDES is an optimistic simulation engine that supports the locally optimistic Breathing Time Buckets and globally optimistic Time Warp and Breathing Time Warp algorithms (Steinman 92, Steinman 93). SPEEDES focuses on speeding up models that can be expressed using the SPEEDES modeling framework (Steinman 92). A SPEEDES model consists of several logical processes that communicate via time stamped events. Associated with each event is a procedure that changes the state of a logical process. Events are processed one at a time by the receiving process. Similar to HLA and YADDES, SPEEDES requires that logical processes produce sequences of output events with strictly increasing time stamps. Also, SPEEDES models must be constructed in such a way that their state can be efficiently saved and recovered.

1.2.5 DEVS/HLA & DEVS-C++


DEVS-C++ is an implementation of the Parallel DEVS abstract simulator for DEVS models (Zeigler 96). During each simulation cycle, the global time is advanced only to the minimum of the next event times of all the models in the system. If several models have the same time of next event, those models can compute their next states in parallel. If simultaneous activity is rare, then very little speedup can be achieved by DEVS-C++. This situation can be mitigated by using a time granule, tg, and allowing events in the interval [t, t + tg] to be processed simultaneously, where t is the global time of next event. While substantial speedup can be achieved by this technique, it comes at the cost of a potential loss of fidelity (Fujimoto 99, Zeigler 97a). By adhering to the abstract simulator concept, DEVS-C++ is able to faithfully reproduce the behavior of any discrete event model (Zeigler 00).

DEVS/HLA is a distributed implementation of DEVS-C++ over HLA (Zeigler 99). While DEVS-C++ does not require that model outputs have strictly increasing time stamps, this flexibility is sacrificed in order to conform to the strictly increasing time stamp requirement imposed by HLA. Unfortunately, in the case of DEVS/HLA, this sacrifice does not necessarily translate into shorter run times. This is due to the general problem of identifying lookahead in discrete event models (Lake 00). Where YADDES and ADAPT make lookahead values an intrinsic part of their simulation environments, the DEVS abstract simulator admits models whose lookahead value may be unknowable.
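The time granule relaxation used by DEVS-C++ can be sketched as a one-line change to the selection of imminent models: instead of executing only the models whose next event falls exactly at the global time of next event t, all events in [t, t + tg] are treated as simultaneous. The function below is an illustration of the idea, not DEVS-C++ code.

```python
# Illustrative sketch of imminent-model selection with a time granule tg.
# A granule of zero recovers the exact event driven behavior; a larger
# granule exposes more parallelism at a potential loss of fidelity,
# since events the model considers ordered are treated as simultaneous.

def imminent_models(next_event_times, tg):
    """Return the indices of models whose next event lies within one
    granule of the global time of next event."""
    t = min(next_event_times)            # global time of next event
    return [i for i, tn in enumerate(next_event_times)
            if t <= tn <= t + tg]

times = [3.0, 3.1, 3.4, 7.0]
print(imminent_models(times, 0.0))   # [0]        only the true minimum
print(imminent_models(times, 0.5))   # [0, 1, 2]  the granule admits more
```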


A fundamental aspect of this thesis is the separation of models and their simulators. Distinct treatment of modeling and simulation provides the underlying conceptual basis for the approach to distributed simulation that is presented in this thesis. A comparison of DEVS and HLA highlights the importance of this distinction, especially for distributed simulation (SarZei 00). A key benefit of treating models and their simulators separately is the ability to characterize each in such a way that model synthesis and the selection of simulation protocols can be addressed, more or less, independently. This yields not only computational advantages but also a greater degree of model reuse.

1.2.6 Observations concerning distributed simulation systems


The systems discussed in the previous sections present tradeoffs between speedup and the range of models that can be simulated. As shown in Figure 1.2, SPEEDES and YADDES offer substantial speedup for a specific class of models. ADAPT allows for mixed protocol simulation, and so potentially supports a wider range of models. However, the mixed protocol simulator may exhibit less speedup than a single protocol simulator. HLA provides a wider variety of choices in the pursuit of speedup while allowing models that are not well suited to conservative or optimistic simulation. HLA, however, still places restrictions on the model in order to take advantage of faster synchronization algorithms. DEVS-C++ offers limited speedup, but it does not restrict the class of discrete event models that can be simulated.


Figure 1.2: Existing distributed discrete event simulation technologies, arranged along axes of speedup and expressiveness (ADAPT, YADDES, SPEEDES, HLA, and DEVS-C++)

In general, it seems to be the case that speedup requires a sacrifice either in terms of fidelity (e.g., the time granule approach of DEVS-C++) or flexibility (e.g., HLA). While the tradeoff itself is not surprising, it would be useful if we could make fine-grained decisions about how this tradeoff should be made. For example, a system might contain one part that is suitable for conservative simulation and another for a time granule approach. Using the systems described above, it would be necessary to select one approach and apply it unilaterally. It seems natural to ask how we can build systems that allow for a more fine-grained approach and, having built such a system, whether it is useful.

Beyond speedup/expressiveness tradeoffs, these simulation systems have more fundamental differences that can be understood, in part, by considering the theoretical background against which they were conceived. In the case of SPEEDES, YADDES, and ADAPT, the software is intended to demonstrate parallel programming issues. These environments present modeling frameworks that reflect the environment's design, rather than a design that reflects the class of models to be simulated.

Consider the SPEEDES modeling framework. It assumes that models can be expressed as lists of functions that are executed in order. Each function operates on the state of the system. New functions are added to the list when input is received. By sending input to itself, a system can change its own state. Each input results in a new function. Consequently, the simulation engine serializes simultaneous inputs before they are made available to the system being simulated. It is not at all clear what types of systems can and cannot be expressed in these terms. A danger presented by this approach is that the final product will not be suitable for simulating the system that the designer had in mind. Without an approach firmly grounded in systems theory, the simulation writer has no way of knowing if the software being constructed is really suitable for the problem at hand. In software engineering terms, the requirements are incomplete. Consequently, the design derived from them is likely to be faulty (Pfleeger 98). In the case of HLA, this is demonstrated by the difficulties encountered when integrating DEVS and HLA (Zeigler 99), as well as by the discussion surrounding zero lookahead in HLA federates (Fujimoto 97).
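The kind of modeling framework described above, and the serialization of simultaneous inputs it implies, can be sketched as follows. This is a generic illustration of the logical-process-with-event-list idea; the identifiers are hypothetical and are not taken from SPEEDES.

```python
# Illustrative sketch: a logical process as a list of time stamped
# functions executed in time stamp order, each mutating the process
# state. Two inputs with the same time stamp are executed one after the
# other (the tie-breaking sequence number serializes simultaneous inputs).

import heapq, itertools

class LogicalProcess:
    def __init__(self, state):
        self.state = state
        self._events = []                 # heap of (time, seq, function)
        self._seq = itertools.count()     # tie-breaker: insertion order

    def schedule(self, t, fn):
        heapq.heappush(self._events, (t, next(self._seq), fn))

    def run(self):
        while self._events:
            t, _, fn = heapq.heappop(self._events)
            fn(self.state, t)             # each event changes the state

lp = LogicalProcess({"trace": []})
# Two "simultaneous" inputs at t = 1.0 are nonetheless executed serially.
lp.schedule(1.0, lambda s, t: s["trace"].append(("a", t)))
lp.schedule(1.0, lambda s, t: s["trace"].append(("b", t)))
lp.schedule(0.5, lambda s, t: s["trace"].append(("c", t)))
lp.run()
print(lp.state["trace"])   # [('c', 0.5), ('a', 1.0), ('b', 1.0)]
```

The sketch makes the limitation concrete: a model in which the two events at t = 1.0 are genuinely simultaneous, rather than ordered, cannot express that fact in this framework.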

1.3 Thesis goals


This thesis will address the problem of computational efficiency versus flexibility by presenting a framework for exploring questions about simulator interoperability. The framework will be extensible to accommodate new simulation techniques as they are discovered. Once the framework has been developed, it will be used to design middleware that supports distributed simulation. The steps in this process are as follows:

1. Consider the effects of time and causality on distributed simulation from the viewpoints of systems theory and computer science.

2. Present an abstract framework for translating system theoretic model descriptions into software components that can be executed by a distributed algorithm. The framework will address issues related to time and causality in distributed simulation of discrete event models.

3. Using this framework, design and construct middleware that supports distributed simulation. The resulting software should support the needs of simulators at both ends of the speedup/expressiveness spectrum.

4. Assess what model properties can be used as guidelines for assigning models to simulation algorithms.

