

Verification is the process of ensuring that the model design (conceptual model) has been transformed into a computer model with sufficient accuracy. Validation, on the other hand, is the process of ensuring that the model is sufficiently accurate for the purpose at hand. Verification has quite a narrow definition and in many respects can be seen as a subset of the wider issue of validation.

There are two key concepts in validation: the idea of sufficient accuracy and the idea that models are built for a specific purpose. By now it should be clear that no model is ever 100% accurate; indeed, a model is not meant to be completely accurate, but a simplified means for understanding and exploring reality.

In verification and validation the aim is to ensure that the model is sufficiently accurate.
Further, this accuracy is with reference to the purpose for which the model is to be used. As a
consequence, the purpose, or objectives, of a model must be known before it can be validated.
This purpose may have been determined at the start of the simulation study, being expressed
through the objectives, or it may be an alternative use for an existing model. Under this definition of validation it is possible to think in terms of absolute validity: a model is either sufficiently accurate for its purpose or it is not. In other words, validity is a binary decision with a conclusion of "yes" or "no".

Validity and accuracy are related but separate concepts. While validity is a binary decision,
accuracy is measured on a scale of zero to 100%. The modeller should determine early in a simulation study the level of accuracy required from the model.

Various forms of validation are identified, which can be defined as follows:

a. Conceptual Model Validation: determining that the content, assumptions and simplifications of the proposed model are sufficiently accurate for the purpose at hand. The question being asked is: does the conceptual model contain all the necessary details to meet the objectives of the simulation study?

b. Data Validation: determining that the contextual data and the data required for model realization and validation are sufficiently accurate for the purpose at hand.

c. White-Box Validation: determining that the constituent parts of the computer model represent the corresponding real world elements with sufficient accuracy for the purpose at hand. This is a detailed, or micro, check of the model, in which the question is asked: does each part of the model represent the real world with sufficient accuracy to meet the objectives of the simulation study?

d. Black-Box Validation: determining that the overall model represents the real world with sufficient accuracy for the purpose at hand. This is an overall, or macro, check of the model's operation, in which the question is asked: does the overall model provide a sufficiently accurate representation of the real world system to meet the objectives of the simulation study?

e. Experimentation Validation: determining that the experimental procedures adopted are providing results that are sufficiently accurate for the purpose at hand. Key issues are the requirements for removing initialization bias, run-length, replications and sensitivity analysis to assure the accuracy of the results. Further to this, suitable methods should be adopted for searching the solution space to ensure that learning is maximized and appropriate solutions are identified.
f. Solution Validation: determining that the results obtained from the model of the proposed solution are sufficiently accurate for the purpose at hand. This is similar to black-box validation in that it entails a comparison with the real world. It differs in that it compares only the final model of the proposed solution to the implemented solution. Consequently, solution validation can only take place post-implementation and so, unlike the other forms of validation, it is not intrinsic to the simulation study itself. In this sense, it has no value in giving assurance to the client, but it does provide some feedback to the modeller.

What should be apparent is that verification and validation are not performed just once a complete model has been developed; rather, they form a continuous process carried out throughout the life-cycle of a simulation study. In the same way that modelling is an iterative process, so too is verification and validation.

At an early stage in a simulation project a conceptual model is developed. At this point this
model should be validated. However, as the project progresses the conceptual model is likely to
be revised as the understanding of the problem and the modelling requirements change. As a
consequence, the conceptual model also needs to be revalidated. While the conceptual model
is being transformed into a computer model, the constituent parts of the model (particularly
those recently coded) should be continuously verified. Similarly, the details of the model should
be checked against the real world throughout model coding (white-box validation). Black-box
validation requires a completed model, since it makes little sense to compare the overall model
against the real world until it is complete. This does not imply, however, that black-box
validation is only performed once. The identification of model errors and continued changes to the conceptual model necessitate model revisions and therefore further black-box validation.
In a similar way, the experimental procedures need to be validated for every revision of the
model, including the experimental scenarios. It cannot be assumed that the requirements for
experimentation are the same for every model version.

Although white-box validation and black-box validation are often lumped together under one heading, operational validity, a distinction is drawn between them here because they are performed as separate activities during a simulation study. White-box validation is intrinsic to model coding, while black-box validation can only be performed once the model code is complete.

Black-box validation

In black-box validation the overall behaviour of the model is considered. There are two broad approaches to performing this form of validation. The first is to compare the simulation model to the real world. The other is to make a comparison with another model. The latter approach is particularly useful when there are no real world data to compare against.

Comparison with the real system

If confidence is to be placed in a model then, when it is run under the same conditions (inputs) as the real world system, the outputs should be sufficiently similar.
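As an illustrative sketch only (the 5% tolerance and the use of a simple comparison of means are assumptions, not a method given in the text), a crude first check of this similarity might compare the mean outputs obtained under identical input conditions:

```python
from statistics import mean

def means_within_tolerance(model_out, real_out, tolerance=0.05):
    """Illustrative black-box check: report whether the mean of the
    model's output is within a relative `tolerance` (assumed here to
    be 5%) of the mean of the real system's output."""
    m, r = mean(model_out), mean(real_out)
    return abs(m - r) <= tolerance * abs(r)
```

A point comparison such as this ignores the variability in both sets of output data, which is why distributions should also be compared and formal statistical tests applied.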

This concept is expressed as the alternative hypothesis (H1) in the figure above, since the null hypothesis in validation is that the model is incorrect. The approximation sign shows that the model need only be sufficiently accurate. Where accurate real world data are not available, the comparison can be made against the expectations and intuition of those who have a detailed knowledge of the real system. Comparison against approximate real world data such as these may not give absolute confidence in the model, but it should help to increase confidence.

Historical (or expected) data collected from the real system, such as throughput and customer
service levels, can be compared with the results of the simulation when it is run under the same
conditions. This can be performed by judging how closely the averages from the model and the
real world match, and by visually comparing the distributions of the data. Various statistical tests also lend themselves to such comparisons. Assuming that the same quantity of output data is generated from the simulation model as is available from the real system, then a confidence interval for the difference in the means can be calculated as follows:
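The excerpt ends before the formula is given. A standard textbook form of this interval, for two independent samples of equal size n with sample standard deviations S_S and S_R and a critical value from the t distribution with 2n − 2 degrees of freedom, is X̄_S − X̄_R ± t · sqrt(S_S²/n + S_R²/n); this is offered as a sketch of the likely intent, not necessarily the source's exact formulation:

```python
import math
from statistics import mean, stdev

def diff_of_means_ci(model_out, real_out, t_crit):
    """Confidence interval for the difference in means between
    simulation output and real system output. Assumes two independent
    samples of equal size n; t_crit is the critical t value for
    2n - 2 degrees of freedom at the chosen significance level
    (e.g. 2.306 for n = 5 at 95%). Standard textbook interval,
    not necessarily the source's exact formulation."""
    n = len(model_out)
    diff = mean(model_out) - mean(real_out)
    half_width = t_crit * math.sqrt(stdev(model_out) ** 2 / n
                                    + stdev(real_out) ** 2 / n)
    return diff - half_width, diff + half_width
```

If the resulting interval contains zero, the data provide no evidence of a difference between the model and the real system at the chosen significance level.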