
4.2. BRIEF OVERVIEW OF NUMERICAL METHODS FOR PHASE CHANGE PROBLEMS
Consider the one-dimensional heat conduction problem ((1)-(4) of §4.1), but now regarding the material as a phase change material with a melt temperature T_m, and take T_init(x) < T_m and T_∞(t) > T_m. Then melting of the material will occur, commencing at x = 0 at some time t_init > 0, and we want to find t_init, T(x, t) and X(t) such that

    c_S T_t = (k_S T_x)_x    for 0 < x < l , 0 < t < t_init , and for X(t) < x < l , t > t_init ,

    c_L T_t = (k_L T_x)_x    for 0 < x < X(t) , t > t_init ,

    −k_S T_x(0, t) = h [ T_∞(t) − T(0, t) ]    for t < t_init ,

    −k_L T_x(0, t) = h [ T_∞(t) − T(0, t) ]    for t > t_init ,

    T_x(l, t) = 0    for t > 0 ,

    T(x, 0) = T_init(x) ,    X(t_init) = 0 ,

    T(X(t), t) = T_m ,    t > t_init ,

    L X′(t) = −k_L T_x(X(t)⁻, t) + k_S T_x(X(t)⁺, t) ,    t > t_init .
Clearly we have to separate the problem into two distinct problems: a pure heat conduction problem, until the face x = 0 reaches the melt temperature at time t = t_init, and a two-phase Stefan problem after time t_init. The numerical solution to the first problem was presented in §4.1, but that of the Stefan problem is much more difficult, due to the underlying geometric nonlinearity of the problem: the regions in which the two heat conduction equations are to hold change in time, and we have to compute the location of the interface x = X(t) concurrently. Several approaches have been devised with this aim, collectively referred to as front tracking schemes, because they attempt to explicitly track the interface using the Stefan condition.
One approach is to fix the spatial step, Δx, but allow the time step, Δt_n, to float in such a way that the front always passes through a node (x_j, t_n). An example of this approach is the method of [DOUGLAS-GALLIE].
Another approach is to fix the time step and allow the spatial step to float; in fact, one uses two distinct and time-varying space steps for the two phases. The isotherm migration method of J. Crank is of this type, [CRANK, 1981, 1984].

Yet another approach is based on the Landau transformation, which we have used in the perturbation method (see §3.3). By a change of variables (§3.3.B), the regions representing the phases become fixed, the underlying geometric nonlinearity showing up algebraically now in the transformed equations. Then one solves the resulting system of nonlinear equations by some numerical method.
All such approaches work well, more or less, for simple Stefan problems (that would arise, e.g., in laboratory settings) in which we know what to expect, namely a single sharp front separating the two phases. It is not difficult to realize, however, that such problems are not the rule in practice, particularly when time dependence of heat input/output or thermal cycling occur. For example, in Latent Heat Thermal Energy Storage, one must deal with cases of extreme thermal cycling, multiple fronts, disappearing phases and non-predictable behavior (§1.3, §5.3). Internal heating is another source of difficulties, first documented by [ATTHEY], in which extended mushy zones may appear instead of sharp fronts. Constitutional supercooling of binary alloys results in similar effects which may not be ignored (see [ALEXIADES-WILSON-SOLOMON, 1985]). Simultaneous mass transfer by diffusion and/or convection complicates the phase change process to the point that we cannot guess a priori the qualitative picture in enough detail to even be able to formulate the problem in the classical fashion of a Stefan type problem with a sharp front, etc. If so many complications can arise in 1-dimensional processes, what about 2- and 3-dimensional processes? Despite the difficulties, successful methods for 2-D hydrodynamic instability problems have been developed by Glimm and coworkers, [GLIMM]. Surveys of front tracking methods appear in [MEYER, 1978], [CRANK, 1981, 1984], [ALBRECHT-COLLATZ-HOFFMANN], etc.
Such reasons make front-tracking schemes unviable as general simulation tools for modeling realistic phase-change processes. The only viable general approach is the so-called enthalpy method, precisely because it bypasses the explicit tracking of the interface. In this approach the jump condition (Stefan condition) is not forced on the solution, but is obeyed automatically by it as a natural boundary condition (in the sense of the Calculus of Variations). Its theoretical basis is a formulation of the Stefan problem different from the classical one, the so-called weak or enthalpy formulation, described in §4.4. It is similar to the weak formulations commonly used in gas dynamics for shocks (see [HYMAN] for a brief overview). Another fixed-domain method (as opposed to front-tracking), based on a variational-inequality reformulation of the Stefan problem [DUVAUT], [ODEN-KIKUCHI] and finite elements, lacks the direct physical interpretation of the enthalpy method and has not lived up to its initial promise for Stefan-type problems.
It can be safely concluded today that the enthalpy method, to which we turn in the next section, discretized by (integrated) finite differences, is the most versatile, convenient, adaptable, and easily programmable numerical method available for phase change problems in 1, 2 or 3 space dimensions.
We hasten to add, however, that it does not solve all the problems. Excluded are problems which we do not know how to formulate weakly due to their special interface conditions. Such is the case with supercooling problems, where the instability of the interface must be studied. A very successful computational approach for such problems is another fixed-domain type formulation, the so-called phase-field approach, under intense development lately, [CAGINALP, 1989, 1991], [KOBAYASHI].
4.3. THE ENTHALPY METHOD IN ONE SPACE DIMENSION
4.3.A Introduction
The so-called enthalpy or weak solution approach is based on the fact that the energy conservation law, expressed in terms of energy (enthalpy) and temperature, together with the equation of state, contains all the physical information needed to determine the evolution of the phases. It turns out that, for the purpose of obtaining numerical schemes, the most appropriate and convenient way to state energy conservation is the primitive integral heat balance over arbitrary volumes and time-intervals, from which all other formulations can be obtained, namely

    ∫_t^{t+Δt} d/dt { ∫_V E dV } dt = − ∫_t^{t+Δt} ∫_{∂V} q · n dS dt ,    (1)
where E = ρe is the energy density (per unit volume), and −q · n is the heat flux into the volume V across its boundary ∂V, n being the outgoing unit normal to ∂V. The distinct advantage of this primitive form is that it is valid irrespective of phase, and even if E and q experience jumps, so it is actually more general than the localized differential form

    ∂E/∂t + div q = 0 .    (2)

The two forms are equivalent for smooth E, q, thanks to the Divergence Theorem (§1.2). In the presence of a phase-change, the partial differential equation (2) can
only be interpreted in the classical pointwise sense inside each phase separately,
and then conservation across the interface must be imposed explicitly as an
additional interface (Stefan) condition, making front-tracking necessary.
Alternatively, the PDE (2) may be interpreted in a generalized (weak) sense globally, as described in detail in §4.4. It turns out that the numerical solutions obtained via the enthalpy method approximate this weak solution, as we shall show in §4.5.
In this section, we describe the enthalpy method for Stefan problems in one
space dimension, and its numerical implementation via time-explicit or time-
implicit schemes.
4.3.B The enthalpy method
The idea of the enthalpy approach is very simple, direct, and physical. We partition the volume occupied by the phase-change material into a finite number of control volumes V_j and apply energy conservation, (1), to each control volume to obtain a discrete heat balance. Note that this is the same discrete heat balance as for plain heat conduction (§4.1.B), and we use it to update the enthalpy, E_j, of each control volume. From the equation of state we know that E_j ≤ 0 ⟹ V_j is solid, E_j ≥ L ⟹ V_j is liquid, and 0 < E_j < L ⟹ V_j is partially liquid and partially solid, so we call it "mushy". A mushy cell contains an interface, and the fraction of the cell occupied by liquid is naturally given by the value of the liquid fraction:

    λ_j = E_j / L .

Note that in this scheme the phases are determined by the enthalpy alone, with no mention of interface location(s). It is a volume-tracking scheme, as opposed to front-tracking. Since the front location may be recovered a posteriori from the values of the enthalpy, it may be characterized as a front-capturing scheme, similar in spirit to the shock-capturing schemes of gas dynamics (see [HYMAN] for an overview of various types of schemes).
Let us see how the method works in detail, by considering the heat conduction problem of §4.1, except that now we assume that our slab 0 ≤ x ≤ l is occupied by a material that changes phase at a melt temperature T_m. We assume that initially the material is solid with

    (3)    T(x, 0) = T_init(x) ≤ T_m ,    0 ≤ x ≤ l ,

the face x = 0 is heated convectively by T_∞(t) > T_m :

    (4)    q(0, t) = h [ T_∞(t) − T(0, t) ] ,    t > 0 ,

and the face x = l is insulated :

    (5)    q(l, t) = 0 ,    t > 0 .

The energy conservation law in its integrated form (1), applied to the present one-dimensional control volumes V_j = [ x_{j−1/2} , x_{j+1/2} ] × A (see §4.1.B), becomes

    ∫_{t_n}^{t_{n+1}} d/dt { A ∫_{x_{j−1/2}}^{x_{j+1/2}} E(x, t) dx } dt = − ∫_{t_n}^{t_{n+1}} A ∫_{x_{j−1/2}}^{x_{j+1/2}} q_x(x, t) dx dt .    (6)
We seek numerical approximations to the temperature, energy and flux obeying (3)-(6), with q = −k T_x of course.
This problem is formally identical to the heat conduction problem (1b,c),(8a) of §4.1. The two differ in that now the enthalpy E is the sum of sensible and latent heat in the liquid, so that, instead of (6) of §4.1.B, we have

    (7)    E(x, t) = ∫_{T_m}^{T(x,t)} c_S(τ) dτ ,          T(x, t) < T_m   (solid)
           E(x, t) = ∫_{T_m}^{T(x,t)} c_L(τ) dτ + L ,      T(x, t) > T_m   (liquid).

The phases are described by

    (8a)    E(x, t) ≤ 0        ⟺  solid at (x, t)
    (8b)    0 < E(x, t) < L    ⟺  interface at (x, t)
    (8c)    E(x, t) ≥ L        ⟺  liquid at (x, t) .

Thus, only the relation between E and T is now different than in §4.1, while the discretization of (6) is still (12) of §4.1.B. The flexibility and generality of this approach will be further illustrated in §4.3.H, where even a wall layer will be incorporated into the global scheme by adjusting the energy.
Consider the case in which

    (9)    c_S , c_L = constants .

Then (7) becomes

    (10)    E = c_S [ T − T_m ] ,          T < T_m
            E = c_L [ T − T_m ] + L ,      T > T_m

or, solving for T,

    (11)    T = T_m + E / c_S ,            E ≤ 0        ( solid )
            T = T_m ,                      0 < E < L    ( interface )
            T = T_m + ( E − L ) / c_L ,    E ≥ L        ( liquid ).
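To fix ideas, here is a minimal computational sketch of the constitutive relations (10) and (11) in Python; the function and parameter names (c_s, c_l, lat for the latent heat L, t_m) are ours, chosen purely for illustration, and the example values at the end are likewise hypothetical.

    # Illustrative sketch of the equation of state (10) and its inverse (11)
    # for constant heat capacities; not a definitive implementation.

    def enthalpy_from_temperature(t, c_s, c_l, lat, t_m):
        """Equation (10): E = c_S (T - T_m) below T_m, c_L (T - T_m) + L above it."""
        if t < t_m:
            return c_s * (t - t_m)
        elif t > t_m:
            return c_l * (t - t_m) + lat
        return 0.0  # at T = T_m the enthalpy can lie anywhere in [0, L]; 0 is one convention

    def temperature_from_enthalpy(e, c_s, c_l, lat, t_m):
        """Equation (11): invert (10); mushy cells (0 < E < L) sit at the melt temperature."""
        if e <= 0.0:
            return t_m + e / c_s            # solid
        elif e >= lat:
            return t_m + (e - lat) / c_l    # liquid
        return t_m                          # interface (mushy)

    # Round-trip check with made-up, water-like numbers:
    e = enthalpy_from_temperature(-5.0, 2.0, 4.2, 334.0, 0.0)
    print(temperature_from_enthalpy(e, 2.0, 4.2, 334.0, 0.0))   # -> -5.0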
Proceeding with the discretization of fluxes and boundary conditions as in §4.1, we arrive at the following discrete problem.

    (12a)    initial values:    T_j^0 = T_init(x_j) ,    j = 1, . . . , M ,

    (12b)    boundary condition at x = 0 :    q_{1/2}^{n+θ} = ( T_∞^{n+θ} − T_1^{n+θ} ) / ( 1/h + R_{1/2} ) ,    R_{1/2} = Δx_1 / (2 k_1) ,

    (12c)    boundary condition at x = l :    q_{M+1/2}^{n+θ} = 0 ,

    (12d)    interior values:    E_j^{n+1} = E_j^n + ( Δt_n / Δx_j ) [ q_{j−1/2}^{n+θ} − q_{j+1/2}^{n+θ} ] ,    j = 1, . . . , M ,

where

    (12e)    q_{j−1/2}^{n+θ} = ( T_{j−1}^{n+θ} − T_j^{n+θ} ) / R_{j−1/2}    with    R_{j−1/2} = Δx_{j−1} / (2 k_{j−1}) + Δx_j / (2 k_j) ,    j = 2, . . . , M ,

and

    (12f)    T_j^n = T_m + E_j^n / c_S ,              E_j^n ≤ 0        ( solid )
             T_j^n = T_m ,                            0 < E_j^n < L    ( interface )
             T_j^n = T_m + ( E_j^n − L ) / c_L ,      E_j^n ≥ L        ( liquid ).
The updating algorithm from any time t_n to the next, t_n + Δt_n, proceeds as follows: knowing the enthalpy, temperature and phase (see below) of each control volume, we compute the resistances and fluxes, which are then used to update the enthalpies, which in turn yield new temperatures and phase states.
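As a sketch of one such update cycle, the following Python routine implements (12b)-(12f) with θ = 0 on a uniform grid; the array layout, the uniform spacing, the use of NumPy and all names are our assumptions for illustration, not part of the text.

    import numpy as np

    def explicit_update(e, t, k, dx, dt, h, t_inf, c_s, c_l, lat, t_m):
        """One explicit pass of (12): resistances -> fluxes -> enthalpies -> temperatures.

        e, t, k : arrays of nodal enthalpies, temperatures and conductivities (length M)
        dx, dt  : uniform control-volume width and time step
        h, t_inf: convection coefficient and ambient temperature at x = 0
        """
        m = len(e)
        q = np.zeros(m + 1)                                   # fluxes at the M+1 faces
        q[0] = (t_inf - t[0]) / (1.0 / h + dx / (2.0 * k[0]))   # (12b)
        for j in range(1, m):                                 # interior faces
            r = dx / (2.0 * k[j - 1]) + dx / (2.0 * k[j])       # (12e)
            q[j] = (t[j - 1] - t[j]) / r
        q[m] = 0.0                                            # insulated back face (12c)

        e_new = e + (dt / dx) * (q[:-1] - q[1:])              # (12d)

        t_new = np.where(e_new <= 0.0, t_m + e_new / c_s,     # (12f)
                 np.where(e_new >= lat, t_m + (e_new - lat) / c_l, t_m))
        return e_new, t_new, q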
The most convenient phase-indicator is the liquid fraction of a control volume V_j, defined as

    (13)    λ_j^n = 0 ,            if E_j^n ≤ 0        ( solid )
            λ_j^n = E_j^n / L ,    if 0 < E_j^n < L    ( mushy )
            λ_j^n = 1 ,            if L ≤ E_j^n        ( liquid ).

If 0 < λ_j^n < 1 the control volume is said to be mushy, with liquid volume λ_j^n Δx_j and solid volume ( 1 − λ_j^n ) Δx_j (per unit cross-sectional area).

The definitions of resistances and fluxes between control volumes are identical to their definitions in §4.1, with the resistance at x_{j−1/2} expressed as

    R_{j−1/2} = Δx_{j−1} / (2 k_{j−1}) + Δx_j / (2 k_j) .
The effective conductivity k_j of a mushy control volume depends on the structure of the phase-change front, and it is not always clear how to choose it, especially in 2- or 3-dimensional situations. Some alternative choices are:

Sharp front(s): A control volume containing a sharp front consists of layers of solid and liquid in a "serial" arrangement, for which the effective resistivity is the sum of the resistivities of the layers. With the layer thicknesses determined from the solid and liquid fractions, we have

    (14a)    1 / k_j^n = λ_j^n / k_L(T_m) + ( 1 − λ_j^n ) / k_S(T_m) ,    j = 1, 2, . . . , M .

Columnar front: A front consisting of columns of solid and liquid constitutes a "parallel" arrangement, so the effective conductivity is the sum of the conductivities of the phases:

    (14b)    k_j^n = λ_j^n k_L(T_m) + ( 1 − λ_j^n ) k_S(T_m) .

Amorphous mixture of solid and liquid: The inter-phase region may be a random mixture of solid and liquid. In this case one may use the following formula, which interpolates the previous two cases [CHEMICAL ENGINEERING GUIDE, p. 242]:

    (14c)    k_j^n = k_S(T_m) [ 1 + λ^{2/3} ( κ − 1 ) ] / [ 1 + ( λ^{2/3} − λ ) ( κ − 1 ) ] ,    λ = λ_j^n ,    κ = k_L(T_m) / k_S(T_m) .
In most situations we do not know which of the above cases is relevant. In 2 or 3 space dimensions even a sharp front will generally not be moving in the direction of one of the axes, so the choice is not clear. A simple expedient is to take the average of the solid and liquid conductivities,

    (14d)    k_j^n = (1/2) ( k_S + k_L ) .
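For reference, the four prescriptions (14a)-(14d) amount to a few lines of code each; the following Python sketch assumes the liquid fraction lam of the control volume is already known, and the function names are ours.

    def k_serial(lam, k_s, k_l):       # (14a): sharp front, layers in series
        return 1.0 / (lam / k_l + (1.0 - lam) / k_s)

    def k_parallel(lam, k_s, k_l):     # (14b): columnar front, phases in parallel
        return lam * k_l + (1.0 - lam) * k_s

    def k_amorphous(lam, k_s, k_l):    # (14c): random mixture, interpolates (14a) and (14b)
        kappa = k_l / k_s
        return k_s * (1.0 + lam ** (2.0 / 3.0) * (kappa - 1.0)) \
                   / (1.0 + (lam ** (2.0 / 3.0) - lam) * (kappa - 1.0))

    def k_average(lam, k_s, k_l):      # (14d): simple average, independent of lam
        return 0.5 * (k_s + k_l)

A quick sanity check: each of the first three formulas returns k_S when lam = 0 and k_L when lam = 1.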
However, the best alternative, when applicable, is to employ the "Kirchhoff transformation" (see (7) of §4.4.D) to replace the temperature T by the "Kirchhoff temperature" u. This can be used when the conductivity is a function of temperature only. In particular, for constant k_S, k_L the "Kirchhoff temperature" is

    (15)    u = k_S [ T − T_m ]    if T < T_m
            u = 0                  if T = T_m
            u = k_L [ T − T_m ]    if T > T_m .
Then q = −k T_x = −u_x, so the discrete flux is simply q_{j−1/2} = ( u_{j−1} − u_j ) / Δx . To compare this with (12e), resubstitute u in terms of T : u_j = k_j [ T_j − T_m ], and write it as

    (16)    q_{j−1/2} = ( T_{j−1} − T_m ) / R′_{j−1} + ( T_m − T_j ) / R′_j ,    R′_j := Δx / k_j .
Thus the flux neatly splits into a sum of two terms, one for each of the two adjacent nodes. Note that if one of the nodes, say node j, is mushy, then T_j = T_m and the node does not contribute to conduction. We see that in this prescription each node has its own resistivity R′_j = Δx / k_j , which may be found conveniently from

    (17)    R′_j = Δx { λ_j / k_L + ( 1 − λ_j ) / k_S } ,

the value for mushy nodes being irrelevant since they do not contribute to the flux. No averaging of values is used, so this prescription results in the highest effective conductivity. It is the best choice for the enthalpy scheme since it is consistent with the mushy nodes being treated as isothermal.
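A small sketch of the prescription (16)-(17), with names of our choosing:

    def node_resistivity(lam, dx, k_s, k_l):
        """Equation (17): R'_j = dx (lam/k_L + (1 - lam)/k_S); the value at mushy nodes is immaterial."""
        return dx * (lam / k_l + (1.0 - lam) / k_s)

    def flux_split(t_left, t_right, r_left, r_right, t_m):
        """Equation (16): the flux splits into one term per adjacent node; a mushy node
        (held at T_m) contributes zero regardless of its resistivity."""
        return (t_left - t_m) / r_left + (t_m - t_right) / r_right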
Note that such issues arise only when k_L(T_m) and k_S(T_m) are substantially different, in which case the sensitivity of the solution to the choice of effective conductivity should be examined by comparing the results from the various choices. (14a) yields the lowest effective conductivity and (16)-(17) yields the highest, so these two pretty much bracket the system behavior.
We emphasize again that the interface location is not involved in the computation at all, this being an essential advantage of the enthalpy method. If the problem being modeled admits a sharp interface, then the enthalpy scheme ought to produce a single mushy node at each time step. If at time t_n the mushy node is the m-th node, then a good approximation to the interface location X(t_n) is given by

    (18)    X^n := x_{m−1/2} + λ_m^n Δx_m .
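As an illustration, the front recovery (18) might be coded as follows; the assumption that the left-face positions x_{j−1/2} are stored in an array named faces is ours.

    def front_location(e, faces, dx, lat):
        """Equation (18): X^n = x_{m-1/2} + lam_m * dx_m, m being the single mushy node."""
        mushy = [j for j, ej in enumerate(e) if 0.0 < ej < lat]
        if len(mushy) != 1:
            return None          # no single sharp front recovered at this step
        m = mushy[0]
        return faces[m] + (e[m] / lat) * dx[m]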
REMARK 1. Missing the phase-change
The latent heat effect is only felt in mushy nodes, so each control volume
should pass through the mushy state before changing phase. The algorithm
described here is "robust" in this respect, so skipping this transition indicates a
bug in the code, or too large a time-step. Some other implementations may not
be so robust and one must be careful not to miss the transition. Especially
prone are algorithms that track the temperature and account for the latent heat
via a source term.
REMARK 2. Temperature dependent heat capacities

If c_S = c_S(T), c_L = c_L(T), then the equation of state is the nonlinear relation (7), and finding T from E is not quite as simple as in the constant c_S, c_L case. If E ≤ 0 we need to find T from the equation E = ∫_{T_m}^{T} c_S(τ) dτ , for which a Newton-Raphson method may be employed. Alternatively, we may rewrite the equation as dE/dT = c_S(T), or dT/dE = 1 / c_S(T), which is an ODE with initial condition T = T_m for E = 0. Similarly, if E ≥ L, then we may solve the equation E = ∫_{T_m}^{T} c_L(τ) dτ + L via a Newton-Raphson method, or solve the ODE dT/dE = 1 / c_L(T) with initial condition T = T_m for E = L. Any ODE solver can be used for this purpose, e.g. forward Euler, backward Euler, Runge-Kutta, etc.

Actually, the temperature dependence of heat capacities is commonly expressed in the form

    (19)    c_i(T) = A_i + B_i T + C_i / T² ,    i = S, L ,

with T in degrees Kelvin and A_i, B_i, C_i given constants. Then the integrals expressing the sensible heat can be computed analytically, and the resulting algebraic equations may be solved very effectively via a Newton-Raphson method. Note that this needs to be done for each node at each time step, adding considerably to the expense of the computation. A reasonable starting value is the temperature at the previous time step, T_j^n.
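A minimal sketch of this Newton-Raphson inversion for the form (19), using the analytically integrated sensible heat, might look as follows; the coefficient names, tolerance and iteration cap are illustrative choices of ours.

    def sensible_heat(t, t_m, a, b, c):
        """Integral of c(tau) = a + b*tau + c/tau**2 from T_m to T (temperatures in kelvin)."""
        return a * (t - t_m) + 0.5 * b * (t * t - t_m * t_m) - c * (1.0 / t - 1.0 / t_m)

    def temperature_from_enthalpy_nr(e, t_m, a, b, c, lat=0.0, t_start=None,
                                     tol=1e-10, max_iter=50):
        """Solve E = integral_{T_m}^{T} c(tau) dtau (+ L in the liquid) for T by Newton-Raphson.

        Pass the solid coefficients with lat = 0 when E <= 0, the liquid ones with lat = L
        when E >= L.  A good starting value is the temperature at the previous time step.
        """
        t = t_m if t_start is None else t_start
        for _ in range(max_iter):
            f = sensible_heat(t, t_m, a, b, c) + lat - e
            t_new = t - f / (a + b * t + c / t ** 2)   # the derivative of the residual is c(T)
            if abs(t_new - t) < tol:
                return t_new
            t = t_new
        return t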
4.3.C A time-explicit scheme
Choosing θ = 0 in (12), the fluxes are evaluated at the old time t_n and we assume that up to time t_{n+1} the process is driven by these fluxes. The explicit scheme proceeds as follows.

Initially, the phase and temperature of each control volume are known, with

    (20)    T_j^0 = T_init(x_j) ,    j = 1, 2, . . . , M ,

which in turn determine the enthalpies E_j^0, j = 1, 2, . . . , M, via (10). Assume that we have found enthalpies, temperatures and phase-states (λ_j) through the n-th time step. From (13) we find the liquid fractions λ_j^n, hence the phase of each node, and from (12f) the (mean) temperatures. If only one node is mushy, the interface location at time t_n is given by (18).
Now we compute the conductivities from (14), the resistances and fluxes from (12b,c,e), with θ = 0, and then E_j^{n+1} is found from (12d), j = 1, 2, . . . , M. Note that for the boundary control volumes ( j = 1, M ) we may use the analog of the implicit relation (43) of §4.1, in order to guarantee the Maximum Principle and still have the CFL condition guarantee no growth of errors. Thus, the updating of enthalpies to time t_{n+1} is complete.
The stability condition is the same as in the pure conduction case, namely

    (21)    Δt_n ≤ (1/2) ( min Δx )² / ( max α^n ) ,

where min Δx = min_{j=1,2,...,M} Δx_j and

    max α^n = max { k_L(T_j^n) / c_j , k_S(T_j^n) / c_j ,  j = 1, 2, . . . , M } .
It is good practice to take a number slightly smaller than 1/2 in order to avoid stability problems arising from roundoff. The value of max α^n can be computed at each time step t_n, and then we can use Δt_n = (1/2) ( min Δx )² / max α^n as the next time step. As this quantity may become impractically small, it is good programming practice to halt the computation if Δt_n becomes smaller than a prescribed minimum Δt, and carefully examine what caused it to become so small.
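A sketch of this time-step logic (with NumPy, and with a safety factor of 0.45 standing in for "slightly smaller than 1/2"; all names are ours) could read:

    import numpy as np

    def stable_time_step(temps, dx, k_s, k_l, c, safety=0.45, dt_min=1e-12):
        """Time step from the stability condition (21): dt <= 0.5 (min dx)^2 / max(k/c).

        k_s and k_l may be plain numbers or callables of temperature; c holds the nodal
        heat capacities.  A dt below dt_min aborts the run for inspection.
        """
        ks = k_s(temps) if callable(k_s) else k_s
        kl = k_l(temps) if callable(k_l) else k_l
        alpha_max = max(np.max(ks / c), np.max(kl / c))
        dt = safety * np.min(dx) ** 2 / alpha_max
        if dt < dt_min:
            raise RuntimeError("time step fell below the prescribed minimum; examine the data")
        return dt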
The great advantage of the time-explicit scheme lies in its simplicity and the
ease with which it can be programmed. In situations where the time-step must be
small for physical reasons (to capture rapidly moving fronts or resolve rapid
changes in data, for example), the stability requirement may not impose undue
restrictions, and the explicit scheme may turn out to be as efficient as implicit
schemes. The extreme case of laser annealing with picosecond or nanosecond
pulses is such a situation [ALEXIADES et al, 1985a].
4.3.D Performance of the explicit scheme on a one-phase problem
To see how the scheme of §4.3.C performs, we test it on the simplest problem with a known exact solution, namely the one-phase Stefan Problem with constant imposed temperature at x = 0. The Neumann (similarity) solution in dimensionless variables appears in (14)-(16) of §2.1.

To retain direct physical meaning, we implement the enthalpy scheme on the original formulation (1)-(4) of §2.1, which can be made identical to the dimensionless formulation ((19)-(23), §2.1) by choosing

    (22)    T_m = 0 ,    c_L = k_L = 1 ,    T_L = 1 ,    L = 1 / St .
We simulate melting in the slab 0 ≤ x ≤ 1 for two extreme values of the Stefan number: St = 0.1 and St = 5; the corresponding transcendental roots are found to be λ ≈ 0.22 and λ ≈ 1.06.
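As a quick check of these values, the root λ of the usual one-phase Neumann transcendental equation, λ e^{λ²} erf(λ) = St / √π, can be computed numerically; the following sketch uses SciPy's root bracketing and is meant only as a verification aid, not as part of the scheme.

    import math
    from scipy.optimize import brentq

    def neumann_root(stefan):
        """Root of lambda * exp(lambda**2) * erf(lambda) = St / sqrt(pi) (one-phase Stefan problem)."""
        f = lambda lam: lam * math.exp(lam ** 2) * math.erf(lam) - stefan / math.sqrt(math.pi)
        return brentq(f, 1e-8, 5.0)

    for st in (0.1, 5.0):
        print(st, neumann_root(st))   # about 0.220 for St = 0.1 and about 1.06 for St = 5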
To exhibit the convergence of the algorithm, we discretize the slab 0 ≤ x ≤ 1 with M = 10, 20 and 40 uniform subintervals. Since k_L / c_L = 1 (from (22)), the corresponding time steps will be (see (21)) Δt = (1/2)(1/10)², (1/2)(1/20)² and (1/2)(1/40)², showing the severe limitation on the time step imposed by the stability criterion.
Table 4.3.1 shows temperatures at several locations x, at time t = 3 for the problem with St = 0.1, and at time t = 0.12 when St = 5. At these times the melt fronts have not yet reached x = 0.8, so there has been no backface influence. The second column shows the exact (Neumann) temperatures, found as in §2.1. The numerically computed temperatures with M = 10, 20 and 40 nodes are listed in the other columns (linearly interpolated from nodal values for M = 20 and M = 40). Observe the progressive convergence to the exact solution as M increases. It is also interesting to compare the code execution run times for the three mesh sizes: on an IBM-PC/XT, they were 12.5, 41.5 and 260 seconds for St = 0.1, and 7, 9 and 24 seconds for St = 5, respectively for M = 10, 20 and 40, illustrating the dramatic slowdown the finer mesh causes.

In Table 4.3.2 we compare the exact and numerical interface locations at a few sample times from the same runs as above.

In Figures 4.3.1 and 4.3.4, the computed temperature history (melting curve) at a fixed location is compared with the exact (Neumann) solution for St = 0.1 and St = 5, respectively.
Table 4.3.1: Exact and Computed Temperature Profiles

For St = 0.1 at time t = 3.0:

    x      T_exact   M = 10   M = 20   M = 40
    0      1.0000    1.0000   1.0000   1.0000
    .1     .8667     .8667    .8677    .8672
    .2     .7336     .7333    .7359    .7346
    .3     .6010     .6000    .6051    .6026
    .4     .4691     .4666    .4755    .4712
    .5     .3380     .3333    .3473    .3405
    .6     .2080     .2000    .2203    .2105
    .7     .0793     .0667    .0942    .0809
    .8     0.0       0.0      0.0      0.0
    .9     0.0       0.0      0.0      0.0
    1.0    0.0       0.0      0.0      0.0

For St = 5 at time t = 0.12:

    x      T_exact   M = 10   M = 20   M = 40
    0      1.000     1.000    1.000    1.000
    .1     .8132     .8144    .8135    .8133
    .2     .6341     .6360    .6346    .6342
    .3     .4692     .4711    .4698    .4693
    .4     .3236     .3254    .3243    .3238
    .5     .2003     .2035    .2015    .2004
    .6     .1001     .1076    .1017    .1002
    .7     .0220     .0331    .0193    .0241
    .8     0.0       0.0      0.0      0.0
    .9     0.0       0.0      0.0      0.0
    1.0    0.0       0.0      0.0      0.0
Table 4.3.2: Exact and Computed Interface Locations

For St = 0.1:

    time   X_exact   M = 10   M = 20   M = 40
    0      0.        0.       0.       0.
    .5     .3111     .3089    .3101    .3108
    1.0    .4400     .4375    .4401    .4399
    1.5    .5389     .5369    .5390    .5388
    2.0    .6223     .6207    .6218    .6225

For St = 5:

    time   X_exact   M = 10   M = 20   M = 40
    0      0.        0.       0.       0.
    .02    .2998     .2995    .3008    .3004
    .04    .4240     .4237    .4243    .4251
    .06    .5193     .5227    .5197    .5206
    .08    .5996     .6082    .6023    .6006
    .10    .6704     .6958    .6724    .6722
[Figure 4.3.1(a). Temperature history at x = 0.3, with M = 10 nodes, compared with the exact (Neumann) solution, St = 0.1.]
The staircase shape is characteristic of enthalpy methods, and it is much more pronounced for M = 10 nodes (Fig. 4.3.1(a), 4.3.4(a)) than for M = 40 nodes (Fig. 4.3.1(b), 4.3.4(b)). This is due to the fact that while the interface lies anywhere inside a particular mesh interval, the temperature of that interval is held at T_m, so the temperature in the rest of the slab relaxes to a steady state corresponding to a fixed isotherm through that node. When the interface moves to the next mesh interval, the temperature adjusts rapidly and then relaxes to a new steady state. It follows that the duration of each step is strictly a function of the time the interface remains in each mesh interval, and therefore, the finer the mesh the shorter the steps. Indeed, with M = 80 nodes the computed and exact solutions would be indistinguishable graphically. Figures 4.3.2 and 4.3.5 show that the interface location, computed with only M = 10 and M = 20 nodes for St = 0.1 and St = 5 respectively, agrees well with the exact interface. Finally, temperature profiles with M = 10 and M = 40 nodes are shown in Figures 4.3.3(a),(b) (at time t = 2, St = 0.1) and Figures 4.3.6(a),(b) (at time t = 0.1, St = 5).

These figures bear out the fact that while numerical methods for phase-change problems can easily capture interface locations and even temperature profiles, the errors show up vividly in temperature history plots.
[Figure 4.3.1(b). Temperature history at x = 0.3, with M = 40 nodes, compared with the exact solution, St = 0.1.]

[Figure 4.3.2. Melt front location with M = 10 nodes, compared with the exact solution, St = 0.1.]

[Figure 4.3.3(a). Temperature profile at time t = 2, with M = 10 nodes, compared with the exact solution, St = 0.1.]
[Figure 4.3.3(b). Temperature profile at time t = 2, with M = 40 nodes, compared with the exact solution, St = 0.1.]

[Figure 4.3.4(a). Temperature history at x = 0.3, with M = 10 nodes, compared with the exact solution, St = 5.]

[Figure 4.3.4(b). Temperature history at x = 0.3, with M = 40 nodes, compared with the exact solution, St = 5.]
[Figure 4.3.5. Melt front location with M = 20 nodes, compared with the exact solution, St = 5.]

[Figure 4.3.6(a). Temperature profile at time t = 0.1, with M = 10 nodes, compared with the exact solution, St = 5.]

[Figure 4.3.6(b). Temperature profile at time t = 0.1, with M = 40 nodes, compared with the exact solution, St = 5.]