
Impact of a filter-forcing turbulence model on the performance of the OVERFLOW compressible flow code

J. P. Strodtbeck^a, J. M. McDonough
University of Kentucky
^a jpstro2@uky.edu
Abstract
The effect of a new compressible turbulence model on the performance of OVERFLOW is investigated. This model uses explicit filtering to provide artificial dissipation and a chaotic forcing term to provide backscatter. It is shown that OVERFLOW exhibits highly efficient scaling on the cluster used for these experiments, and that the impact of the turbulence model on the code's performance is negligible, making this approach a good candidate for further study and development.
1. Introduction
Recently, massively parallel clusters incorporating hundreds of processors have become affordable for industrial CFD. As hardware vendors simplify their systems, high-performance computing (HPC) is becoming affordable even for smaller organizations. These machines typically have discrete memory and several multi-core processors on each compute node. Because of this, there is an increasing demand for CFD codes which scale well with large numbers of parallel processors and minimize computational overhead. Further, several popular commercial CFD software vendors charge license fees per parallel process. In an era when high-performance computers have hundreds or even thousands of hardware threads, this can make full utilization of the hardware prohibitively expensive if computation is done with commercial software. In addition, commercial vendors cannot keep pace with the increasingly specialized needs of individual CFD users, so access to source code is of increasing importance in modern CFD. Thus more organizations are turning to open-source, free, or internally developed CFD codes. Further, there is concern that any engineering code meet certain basic criteria: stability, ease of mesh generation, compatibility with standard post-processing tools, etc.
In the era of massively parallel computing, then, it is essential that a turbulence model have minimal impact not only on the overall efficiency of the core parallelism of the software, but on the computation time itself. At the extreme end of accuracy, a perfect turbulence model would introduce as many degrees of freedom into the solution process as needed for DNS, thus increasing computation time and storage overhead to unacceptable levels. Therefore, not only the accuracy, but also the real-time computational performance effects of a turbulence model are necessary information for the engineer.
In the current investigation, we test the parallel performance of a new filter-forcing approach by integrating it into a compressible flow code developed by NASA Ames, OVERFLOW. OVERFLOW is a three-dimensional, implicit, finite volume, structured overset grid CFD code employed widely within the NASA community and academia, and with limited use in industry. Because it is a mature code, it has good stability characteristics, offers a wide variety of boundary condition and turbulence model options, and features advanced shock-capturing routines and multispecies models. Decomposition and reassembly are fully automated processes, leaving the entire parallelization process invisible to the user. The version of the code used in this work, which uses a hybrid MPI/OpenMP implementation, is an ideal test bed for studying parallel performance, as it was shown by Djomehri and Jin to exhibit high scalability [1]. While the distribution of OVERFLOW is limited by United States export control regulations, the hybrid C/FORTRAN source code is distributed free of charge within the USA.
2. Governing equations and methods
OVERFLOW solves the dimensionless, compressible Navier–Stokes equations (CNSEs) in generalized coordinates (\xi, \eta, \zeta). The dimensionless CNSEs without body force or heat source terms are given in rectangular coordinates by

\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) = 0,  (1a)

\frac{\partial}{\partial t}(\rho\mathbf{u}) + \nabla\cdot(\rho\mathbf{u}\otimes\mathbf{u}) = -\nabla p + \frac{1}{Re}\nabla\cdot\boldsymbol{\tau},  (1b)

\frac{\partial}{\partial t}(\rho e) + \nabla\cdot(\rho e\mathbf{u}) = \frac{1}{Re\,Pr\,(\gamma-1)}\nabla\cdot(\mu\nabla T) - p\,\nabla\cdot\mathbf{u} + \frac{1}{Re}\,\boldsymbol{\tau}:\nabla\mathbf{u}.  (1c)

In these equations, \mathbf{u} is the velocity vector, \rho is density, T is temperature, e is internal energy, \gamma is the specific heat ratio, \mu is the dynamic viscosity, Re is the Reynolds number, Pr is the Prandtl number, \boldsymbol{\tau} = \mu(\nabla\mathbf{u} + \nabla\mathbf{u}^{T}) - \tfrac{2}{3}\mu(\nabla\cdot\mathbf{u})\mathbf{I}, and \mathbf{I} is the identity matrix. Details of the scaling can be found in the User's Manual for OVERFLOW 2.1. Transforming the CNSEs to generalized coordinates is done by selecting a differentiable coordinate transformation \boldsymbol{\xi} = (\xi(\mathbf{x}), \eta(\mathbf{x}), \zeta(\mathbf{x})) and applying the chain rule, i.e.,

\frac{\partial}{\partial x_i} = \frac{\partial\boldsymbol{\xi}}{\partial x_i}\cdot\nabla_{\boldsymbol{\xi}},  (2)

where \nabla_{\boldsymbol{\xi}} is the gradient operator with respect to \boldsymbol{\xi}. The transformed CNSEs can then be written in the general form

\frac{\partial q}{\partial t} + \frac{\partial E}{\partial \xi} + \frac{\partial F}{\partial \eta} + \frac{\partial G}{\partial \zeta} = 0,  (3)
where q = [\rho, \rho u_1, \rho u_2, \rho u_3, \rho e]^{T}, and e is internal energy. Here, E, F, and G contain the advection, pressure, and dissipation terms. The specific form of the discretization and splitting is specified by the user, and the code offers a wide range of shock-capturing schemes. In the present work, we use the 5th-order WENOM scheme of Henrick et al. [2], which is an accuracy-maintaining correction to the original WENO scheme of Liu et al. [3], which may drop to as low as 3rd-order accuracy near critical points.
3. The filter-forcing scheme
The turbulence model used in this work is motivated by the observation of Vasilyev et al. [4] that the traditional formulation of the LES equations is not fully resolved on computational meshes and therefore produces aliasing error. It was further observed that no discretization scheme is equivalent to spatially filtering the governing equations. Thus, rather than assuming that the numerical scheme and under-resolved mesh act as a filter, we perform explicit filtering at each time step. If the commutation error associated with the filter is of the same order as the discretization, this is equivalent to solving the dealiased filtered equations,

\partial_t\bar{\rho} + \partial_j\,\overline{\rho u_j} = 0,  (4a)

\partial_t\,\overline{\rho u_i} + \partial_j\left(\overline{\rho u_j}\,\bar{u}_i\right) = -\partial_i\bar{p} + \partial_j\bar{\tau}_{ij} + \mathrm{HFI}_i,  (4b)

\partial_t\bar{e}_0 + \partial_j\left[(\bar{e}_0 + \bar{p})\,\bar{u}_j\right] = \partial_j\left(\bar{\tau}_{ij}\bar{u}_i\right) - \partial_j\bar{q}_j + \mathrm{HFI}_e,  (4c)

where

\mathrm{HFI}_i = \partial_j\left(\overline{\rho u_j}\,\bar{u}_i - \overline{\rho u_j u_i}\right),  (5)

\mathrm{HFI}_e = \partial_j\left(\overline{\tau_{ij} u_i} - \bar{\tau}_{ij}\bar{u}_i - \overline{(e_0 + p)u_j} + (\bar{e}_0 + \bar{p})\,\bar{u}_j\right)  (6)
are high-frequency interaction (HFI) terms. Assuming that the filtering operation approximates a spectral cutoff with support on the mesh, Eqs. (4a)-(4c) contain exclusively resolved-scale terms, and the HFI terms contain exclusively subgrid-scale quantities. Numerical tests revealed that this formulation tends to be excessively dissipative, i.e., it suffers from insufficient backscatter depending on the filtering chosen. Thus, rather than modeling the missing HFI terms with additional dissipation, they are modeled with a forcing function. The overall structure of a time step, sketched in code after the list below, is as follows:
1. Compute a single time step as usual, including all Newton iterations.
2. Apply an explicit, low-pass filter.
3. Extract a high-pass velocity field, u_hi.
4. Perform local checks to determine whether the flow is locally turbulent.
5. If the check is positive, iterate a chaotic discrete map and construct a pointwise force, f.
6. Incorporate this force into the right-hand side vector to be used in the next time step.
7. Return to step 1.
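The following is a minimal, runnable sketch of this loop, written only to make the control flow concrete. Every routine in it (the explicit update, the crude three-point smoother standing in for the HAMR filter, the logistic map standing in for the CPMNS system) is a simplified stand-in, not an OVERFLOW routine.

```python
import numpy as np

rng = np.random.default_rng(0)

def advance(q, f, dt):
    # step 1 stand-in: trivial explicit update in place of the implicit solve
    return q + dt * f

def low_pass(q):
    # step 2 stand-in: crude periodic 3-point smoother in place of HAMR
    return 0.25 * np.roll(q, 1) + 0.5 * q + 0.25 * np.roll(q, -1)

def iterate_map(beta=3.99, iters=12):
    # step 5 stand-in: a single logistic map in place of the CPMNS system,
    # returning the final state and its average over the iterations
    a, total = rng.random(), 0.0
    for _ in range(iters):
        a = beta * a * (1.0 - a)
        total += a
    return a, total / iters

q = rng.standard_normal(64)      # toy "solution" field
f = np.zeros_like(q)             # forcing carried between steps
dt, C = 0.01, 1.0
for step in range(10):
    q = advance(q, f, dt)                           # step 1
    q = low_pass(q)                                 # step 2
    q_hi = q - low_pass(q)                          # step 3: high-pass field
    f[:] = 0.0
    for i in np.flatnonzero(np.abs(q_hi) > 1e-3):   # step 4: local check
        a, A = iterate_map()                        # step 5: chaotic map
        f[i] = C * q_hi[i] * (a - A)                # pointwise force
    # step 6: f enters the right-hand side on the next pass (step 7: repeat)
```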
Details of the filtering, chaotic map, and forcing are given below. As can be seen from this structure, there are no additional transport equations or other PDEs added to the global system. The model does involve significantly more operations per point than the Smagorinsky model, and because it has an explicit check for local turbulence, its global effect on performance should depend on the flow and the model parameters. In particular, the forcing function employed here has a single parameter that affects the magnitude of the forcing.
3.1. HAMR filtering
Explicit filtering was performed on the conserved variables at the end of each time step using an optimized high-accuracy and maximum-resolution (HAMR) scheme, an asymptotically stable Padé filter featuring low dispersion, introduced by Liu et al. [5]. The unique ability of Padé filtering to avoid attenuating low-wavenumber modes resulted in superior solution accuracy compared to a classical 10th-order filter, as demonstrated in numerical experiments performed by the same authors in a second paper [6]. In this work, we used filter coefficients from [6] corresponding to their designation O40.20 for the interior filter and 0.20 for the boundary filter.
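The actual HAMR coefficients are tabulated in [5, 6]; as an illustration of how any such Padé-type filter is applied, the sketch below implements the widely used tridiagonal 6th-order compact filter, a simpler relative of the pentadiagonal HAMR scheme, and leaves the near-boundary points unfiltered. The coefficients here are the standard ones for that simpler filter, not the HAMR values.

```python
import numpy as np

def compact_filter(u, alpha=0.45):
    """Tridiagonal 6th-order compact (Pade-type) low-pass filter:

        alpha*v[i-1] + v[i] + alpha*v[i+1]
            = a0*u[i] + sum_{n=1..3} (a_n/2) * (u[i+n] + u[i-n]).

    -0.5 < alpha < 0.5 moves the cutoff; values near 0.5 give the least
    attenuation of low wavenumbers. The three points nearest each boundary
    are left unfiltered here, whereas the HAMR scheme of [5, 6] applies
    dedicated one-sided boundary filters instead.
    """
    a0 = 11.0 / 16.0 + 5.0 * alpha / 8.0
    a1 = 15.0 / 32.0 + 17.0 * alpha / 16.0
    a2 = -3.0 / 16.0 + 3.0 * alpha / 8.0
    a3 = 1.0 / 32.0 - alpha / 16.0
    n = len(u)
    A = np.eye(n)
    b = np.asarray(u, dtype=float).copy()
    for i in range(3, n - 3):
        A[i, i - 1:i + 2] = (alpha, 1.0, alpha)
        b[i] = a0 * u[i] + sum(0.5 * an * (u[i + k] + u[i - k])
                               for k, an in ((1, a1), (2, a2), (3, a3)))
    return np.linalg.solve(A, b)

x = np.linspace(0.0, 2.0 * np.pi, 129)
u = np.sin(x) + 0.2 * np.sin(40.0 * x)  # smooth mode plus high-k contamination
v = compact_filter(u)                   # the high-k mode is strongly damped
```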
3.2. High-pass filtering
To calculate the high-pass velocity components for the backscatter model, we obtained filter coefficients by performing a least-squares fit to a sharp spectral cutoff at a scaled wavenumber of 0.3, as seen in Fig. 1. While this method sufficiently isolates high-wavenumber content for the purpose of structural turbulence modeling, the filter coefficients do not exactly satisfy the relations necessary for use as a low-pass filter for mollifying a PDE solution as part of a numerical procedure, so we caution readers against using these coefficients in that manner. The filter coefficients for the interior points are given by
\alpha = 0.6275872367
\beta = 0.2691885228
p_0 = 0.06314459948
p_1 = 0.1096411682
p_2 = 0.0701432898
p_3 = 0.02940528862,
and the same least-squares procedure was used to obtain coefficients for the boundary points, which are given by

a_2 = (0.3096256995, 1.0, 1.1380646293, 0.4106696169, 0.0)
b_2 = (0.3084688023, 1.0057844862, 1.1264956568, 0.4222385894, 0.0057844862, 0.0011568972)

and

a_3 = (0.1477868412, 1.0, 1.1264956568, 0.6357553622, 0.1477868412)
b_3 = (0.1470348738, 0.6395151994, 0.9924803256, 0.6532750366, 0.1440270040, 0.0007519674).
By using these coefficients in a high-pass filter, we obtain the small-scale conserved variable field q_hi. The high-pass primitive velocities are found from

u_{hi,i} = \frac{(\rho u_i)_{hi}}{\bar{\rho}}.  (7)
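In code, Eq. (7) amounts to high-pass filtering the conserved momentum and dividing by the resolved density. A minimal sketch, using a generic one-dimensional low-pass routine (here a crude periodic smoother, not the least-squares filter above) to define its high-pass complement:

```python
import numpy as np

def smooth(q):
    # generic periodic 3-point low-pass; stand-in for the Sec. 3.2 filter
    return 0.25 * np.roll(q, 1) + 0.5 * q + 0.25 * np.roll(q, -1)

def high_pass_velocity(rho, rho_u):
    # Eq. (7): u_hi = (rho*u)_hi / rho, built from conserved variables
    return (rho_u - smooth(rho_u)) / rho
```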
Figure 1: Real (solid) and imaginary (dashed) components of the transfer function for the 3rd point away from the boundary, versus scaled wavenumber.
3.3. Shock detection
Care must be taken regarding shocks. While a HAMR scheme attenuates high-k modes, it does not dissipate in a Gaussian-like manner. The smooth, sharp shocks created by shock-capturing schemes depend on high-k modes in order to locally eliminate the Gibbs phenomenon. By attenuating primarily these modes, the HAMR scheme actually counteracts the shock-capturing scheme and reintroduces near-shock oscillations. Additional filtering at each time step compounds the oscillations until they cause critical instability or nonphysical quantities, such as negative densities or pressures. In addition, construction of u_hi will create undesired velocities in the vicinity of the shock, which will cause spurious activation of the turbulence model.
The approach to shock treatment taken in this work is simply to avoid filtering near the shock. Given a discrete signal {u_i | 1 \le i \le n}, we represent the n \times n HAMR filtering matrix by H_n. If we have near-discontinuities at {u_{i_1}, \ldots, u_{i_m}}, we wish to apply the filtering operation only to the smooth sections of u and ignore the discontinuous regions, i.e.,

\bar{u} = \left[\left(H_{i_1-1}\,(u_1, \ldots, u_{i_1-1})^{T}\right)^{T},\; u_{i_1}, \ldots, u_{i_m},\; \left(H_{n-i_m}\,(u_{i_m+1}, \ldots, u_n)^{T}\right)^{T}\right].  (8)
Although this introduces the O(\Delta^5) commutation error associated with the boundary filter at places in the interior of the flow, this is preferable to the solution-stopping instabilities caused by allowing Gibbs phenomena to accumulate. Further, since the WENOM scheme is 5th-order, this will have no effect on global solution accuracy. A sketch of this segment-wise filtering appears below.
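The sketch assumes a boolean mask flags marking the near-discontinuous points and any one-dimensional filtering routine filt (both hypothetical names, e.g. the compact_filter sketch above):

```python
import numpy as np

def filter_smooth_segments(u, flags, filt):
    """Apply filt only on the smooth runs between flagged points, Eq. (8).

    Flagged points are passed through unfiltered; each smooth segment is
    filtered independently, so each segment sees its own 'boundary' rows.
    """
    v = np.asarray(u, dtype=float).copy()
    start = 0
    for stop in list(np.flatnonzero(flags)) + [len(u)]:
        if stop - start > 1:            # skip segments too short to filter
            v[start:stop] = filt(u[start:stop])
        start = stop + 1                # resume after the flagged point
    return v
```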
Shock detection is a nontrivial problem and currently an area of active research, although the main focus of the literature on this topic is visualization of shock waves rather than computing CFD solutions. Because of this, many of the methods are too slow to be incorporated into a CFD solver. The HAMR scheme requires the solution of pentadiagonal systems in each spatial direction, so additional computations should be kept to a minimum. Kanamori and Suzuki [7] identify two main classes of shock detection in current use: those based on the assumption that local gradients are perpendicular to the shock, and those based on solving the local Riemann problem, of which their method is an example. However, the latter class of methods tends to be too computationally expensive to be included as a method of shock detection in CFD.
In the present work, we employed a simple density smoothness indicator similar to the smoothness functions employed in WENO shock capturing. The formula is given by

F_{smooth} = \frac{\frac{13}{12}\left(\rho_{i-1} - 2\rho_i + \rho_{i+1}\right)^2 + \frac{3}{2}\,\rho_i\left(\rho_{i+1} - \rho_{i-1}\right)}{\rho_i^2}.  (9)
This is computationally inexpensive and does not require the creation of whole new variable arrays, since it can be computed only for the vector currently being filtered. There is, however, no reason to expect a universal threshold value of F_smooth. For the cases studied in the present work, excluding points where F_smooth > 0.2 proved adequate and returned results close to those of the pressure-gradient scheme of Lovely and Haimes [8]. A sketch of this flagging step follows.
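The sketch uses the indicator as reconstructed in Eq. (9) and the empirical threshold of 0.2; endpoints are left unflagged for simplicity:

```python
import numpy as np

def shock_flags(rho, threshold=0.2):
    """Flag points whose density smoothness indicator, Eq. (9), exceeds
    the threshold; flagged points are excluded from HAMR filtering."""
    F = np.zeros_like(rho, dtype=float)
    F[1:-1] = (13.0 / 12.0 * (rho[:-2] - 2.0 * rho[1:-1] + rho[2:]) ** 2
               + 1.5 * rho[1:-1] * (rho[2:] - rho[:-2])) / rho[1:-1] ** 2
    return F > threshold
```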
3.4. Chaotic forcing
Backscatter was induced via random forcing functions in the 1990s by Leith [9] and Chasnov [10] with reasonable success. Although this has not been a particularly popular method, the basic idea continues to be used and modified with some success. A more recent example that has exhibited fairly good agreement with DNS results is the LES-Langevin model of Laval and Dubrulle [11], although the dissipative component of this model was provided by a Smagorinsky-type eddy viscosity. However, all of these examples used a Gaussian random variable to provide the unpredictable component of the forcing. Because turbulence is non-Gaussian, we instead use a discrete dynamical system (DDS) we have called the compressible poor man's Navier–Stokes (CPMNS) equations. The idea of using a chaotic, discrete map in the context of turbulence modeling was proposed by Hylin and McDonough [12] and developed into the incompressible PMNS equations by McDonough [13]. The extension of this technique to the compressible equations was then presented and studied by Strodtbeck et al. [14]. This dynamical system takes the form
a_i^{(n+1)} = \beta_i\, a_i^{(n)}\left(1 - a_i^{(n)}\right) - a_i^{(n)}\sum_j \left(1 - \delta_{ij}\right) a_j^{(n)} - \sum_j \gamma_{ij}\, a_j^{(n)} - \sigma_i\, e^{(n)},  (10a)

e^{(n+1)} = \left[ e^{(n)}\left(1 - \sum_j a_j^{(n)}\right) + 2\left(\sum_{j,i}\gamma_{ji}\left(a_i^{(n)}\right)^2 + \sum_{i,j}\zeta_{ji}\, a_i^{(n)} a_j^{(n)}\right) + \sum_{i,j}\eta_{ij}\, a_i^{(n)} a_j^{(n)} \right] \Big/ \left(1 + \kappa_T\right),  (10b)
where summation is over j for the a_i equations and over both i and j for the e equation. Formulas for the bifurcation parameters are given by

\beta_i = 1 - \kappa_T,  (11a)
\kappa_T = \gamma/(M_{loc})^2,  (11b)
\gamma_{ij} = \tfrac{1}{3}\, k_i k_j \left(1 - \delta_{ij}\right),  (11c)
\zeta_{ij} = \tfrac{2}{3}\left[(\gamma - 1) M_{loc}^2 - \sum_j \frac{k_i k_j}{Re_{loc}}\right],  (11d)
\sigma_i = \kappa_T,  (11e)
\eta_{ij} = k_i \sqrt{(\gamma - 1) M_{loc}^2}\,/\,Re_{loc},  (11f)
Re_i = \Delta\sqrt{|\omega_i|/\nu},  (11g)
Re_{loc} = \tfrac{1}{3}\left(Re_1 + Re_2 + Re_3\right),  (11h)
where all ow variables have been appropriately scaled with local quantities as described in [14]. Here, M
loc
is local Mach number, k
i
is the wavenumber associated with the length of the grid cell in the i
th
direction,

i
is the i
th
component of vorticity, is the lter width (here taken to be the average length of a grid
cell), = 1/||, and is kinematic viscosity. Note that the formulation of Re
i
is similar to that of y
+
,
the dimensionless turbulent wall distance, although with an instantaneous vorticity component in place of
mean strain rate. With the exception of elements of , the formulas for the bifurcation parameters are taken
directly from the derivations. At each time step, the bifurcation parameters are constructed and, if the
DDS is in the quasiperiodic or chaotic regime, randomly initialized and iterated 12 times, which is generally
sucient for the system to exhibit its characteristic behavior. If the bifurcation parameters correspond to a
non-broadband regime, the map is not iterated and the SGS forcing at the node for the given time step is
set to zero.
The specific form of forcing employed in this work is inspired by the successful use of linear forcing, first proposed by Lundgren [15], implemented for stationary, incompressible turbulence by Rosales and Meneveau [16], and extended to the compressible case by Petersen and Livescu [17] to create a stationary state in homogeneous, isotropic turbulence. This sort of formulation is attractive due to its straightforward implementation and analysis. Because we are trying to model backscatter rather than achieve stationary turbulence, we apply the forcing only to q_hi and multiply by a_i in order to provide chaotic dispersion. Thus the formulas for our forcing terms are
f_i = C_{PMNS}\, u_{hi,i}\left(a_i - A_i\right),  (12a)

f_e = f_i u_i,  (12b)
where f_i is the forcing term for the momentum equation, f_e is the forcing term for the energy equation, C_PMNS is the PMNS model constant, and A_i is the average of a_i over the twelve iterations. This averaging is necessary to ensure that the average kinetic energy induced by the forcing term is zero; i.e., energy should be scattered rather than artificially injected. The forcing term in the energy equation is a simple formulation for maintaining consistency in the governing equations.
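Putting Eqs. (10) and (12) together, the per-node forcing construction can be sketched as follows. A single logistic map again stands in for the coupled CPMNS system, since only the structure (random initialization, twelve iterations, final state minus running average) matters here:

```python
import numpy as np

rng = np.random.default_rng()

def pointwise_force(u_hi, C_pmns, beta=3.99, iters=12):
    """Construct f_i = C_PMNS * u_hi,i * (a_i - A_i), Eq. (12a).

    u_hi : 3-vector of high-pass velocity components at the node
    One map per direction; subtracting the 12-iteration average A_i keeps
    the forcing centered so that, on average, it scatters rather than
    injects kinetic energy.
    """
    a = rng.random(3)                  # random initialization
    total = np.zeros(3)
    for _ in range(iters):
        a = beta * a * (1.0 - a)       # stand-in for the CPMNS iteration (10)
        total += a
    A = total / iters
    return C_pmns * np.asarray(u_hi) * (a - A)

def energy_force(f, u):
    # Eq. (12b): f_e = f_i u_i, keeping the energy equation consistent
    return float(np.dot(f, u))
```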
Figure 2: Close-up of overset grid features for a 24-degree shock ramp featuring a trip wire.
4. Test Problem and Computational Facilities
To test the eects of the turbulence model on the parallel performance of OVERFLOW, a test problem
was constructed based on the physical geometry of the 24-degree shock ramp used in the experiments of
Ringuette et al. [18]. In this case, we have modeled the rst 120 mm of the ramp. Free stream conditions
were imposed at the inlet with M

= 2.9, T

= 196.67

R, Re = 5909 based on a length scale of 1 mm,


and = 1.4 was held constant. Boundary conditions in the spanwise direction were periodic. The lower
boundary was a viscous, adiabatic wall with pressure extrapolation, and both the the upper boundary and
the outow used characteristic extrapolation based on Riemann invariants.
Parallel computations were performed on the University of Kentucky's DLX cluster, which has 376 nodes, each with two 2.66 GHz Xeon X5650 processors, for a total of twelve cores and 36 GB of RAM per node. The data presented in this paper were gathered over several months of utilization of this HPC facility as part of the development and testing of this turbulence model.
4.1. Validation
Validation of OVERFLOW's parallel performance had been done prior to the development of the filter-forcing model using a very early version of the mesh. Three different meshes were used for validation, referred to here as the coarse, medium, and fine meshes. The coarse mesh had 6.7 million nodes, the medium mesh had 11.8 million nodes, and the fine mesh had 22.4 million nodes. Each mesh had the same basic structure, as seen in Figure 2. The ARC3D diagonalized Beam–Warming scheme [19] was used for the implicit solves, and WENOM [2] was used for shock capturing. Simple second-order implicit time-stepping was used, with a dimensionless time step of 0.01 (scaled by the freestream velocity and a length scale of 1 mm), which was sufficient to achieve stability.
The average time per flow solution step is recorded in Table 1. OVERFLOW provides a detailed breakdown of the entire computation time, so we have recorded here only the time spent in the solution process, as time spent reading from and writing to files can be minimized by reducing the number of times save states are written. At the time of these numerical experiments, the DLX cluster was experiencing severe problems with its file system, resulting in an undue impact on overall time (as much as 80% of total run time could be consumed by file I/O). Figure 3 shows the data for computation speed with power-law curve fits. For the coarse grid, f(x) = 0.12x^0.91; for the medium grid, f(x) = 0.07x^0.87; and for the fine grid, f(x) = 0.04x^0.88, where x is the number of cores.

N. Cores   Coarse   Medium   Fine
1          8.53     14.3     26.7
6          1.50     2.74     4.99
12         0.826    1.58     2.97
24         0.441    0.843    1.48
36         0.312    0.641    1.02
48         0.266    0.476    0.776
60         --       --       0.673
72         --       --       0.565
144        --       --       0.357

Table 1: Time per step in seconds for the coarse, medium, and fine meshes.
As can be seen in Figure 3, computation time follows an approximate power law. Ideal behavior is a doubling of speed with a doubling of processors, which corresponds to an exponent of 1 in the power-law curve fit. The figure shows that this exponent approaches 1 as the grid size decreases, although it should be noted that even the coarse grid, at 6.7 million nodes, is not particularly small by current engineering CFD standards. Further, for all three grids the exponent remains close to 0.9; in fact, it is approximately the same for the medium and fine grids, despite the latter having about twice as many points as the former. This confirms that OVERFLOW maintains near-ideal scaling behavior on UK's DLX cluster. The fitting procedure is sketched below.
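The scaling exponents above come from a least-squares power-law fit, which reduces to a straight-line fit in log-log coordinates. A short sketch using the fine-mesh column of Table 1, with speed taken as the reciprocal of time per step:

```python
import numpy as np

cores = np.array([1, 6, 12, 24, 36, 48, 60, 72, 144])
t_per_step = np.array([26.7, 4.99, 2.97, 1.48, 1.02, 0.776, 0.673, 0.565, 0.357])
speed = 1.0 / t_per_step                           # steps per second

# fit speed = c * x**e  <=>  log(speed) = e*log(x) + log(c)
e, log_c = np.polyfit(np.log(cores), np.log(speed), 1)
print(f"f(x) = {np.exp(log_c):.2f} * x**{e:.2f}")  # should land near 0.04*x**0.88
```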
4.2. Turbulence model performance
Data on the performance impact of the model were gathered from numerical tests used to validate the model's effects on physical flow behavior using a high-quality LES mesh. This mesh was constructed as a single grid in order to avoid any errors associated with the non-conservative chimera interpolation scheme. In terms of the dimensionless wall distance y+, the dimensions near the wall were (\Delta_x, \Delta_y, \Delta_z) = (25, 1, 25) everywhere, with a vertical growth rate of \Delta_{y,2}/\Delta_{y,1} = 1.28. This is a typical resolution for an LES mesh; see, e.g., the experiments of Knight and Yan [20] on the same flow conditions, where they used a mesh with a resolution of (24, 1.9, 8.1). The mesh was 500 x 87 x 300, with 13.2 million grid points, and can be seen in Fig. 4. ARC3D Beam–Warming and WENOM were used as before. However, the second-order time step was enhanced using six Newton subiterations, which was sufficient to cause flux residuals to drop three orders of magnitude at each time step.
To generate boundary-layer turbulence, a copy-to/copy-from recycling condition was used, with a small trip placed near the inlet by using OVERFLOW's hole-cutting feature to remove a small, rectangular wire from the grid. Because OVERFLOW does not provide any means of rescaling the velocity in order to achieve a canonical, self-similar, flat-plate boundary layer, the results produced by this method are not strictly comparable to those of physical experiments.
As noted above, discretization was 2nd-order in time using a linearized implicit method with six Newton subiterations, which resulted in right-hand-side flux residuals dropping at least three orders of magnitude at each step. Spatial discretization used a 5th-order HLLC scheme with WENOM interpolation for shock capturing, and the left-hand-side matrix of the implicit solution procedure was constructed using the ARC3D Beam–Warming block tridiagonal scheme [19]. Experiments were run for C_PMNS = 0 (no model), 1, and 5. When C_PMNS = 0, the high-pass velocity field is not constructed, so that the full impact of the turbulence model can be measured.
The average computation time per step is recorded in Table 2. As can be seen from the data, the effect of the turbulence model on the run time is small. There is a 4% increase in time between C_PMNS = 0 and C_PMNS = 5, but this is within the natural variability of the experiment. Further, the numerical schlierens in Fig. 5 clearly show that the model has substantial effects on the qualitative behavior of the flow in a significant portion of the domain.

Figure 3: Processing speed versus number of cores.

Figure 4: Mesh for the 24-degree shock ramp. Every 4th point is displayed.
5. Conclusion
In this paper, we have presented the effects of a new filter-forcing turbulence model on the parallel performance of OVERFLOW. OVERFLOW exhibits high scalability and excellent overall performance, and we have confirmed that this turbulence model has negligible effects on the computation time. This is consistent with the fact that the backscatter term is computed entirely from local quantities without introducing any new PDEs. From a computational standpoint, then, this confirms that the filter-forcing approach is viable as an LES scheme.
C_PMNS   Average   Maximum
0        4.192     4.764
1        4.105     4.654
5        4.369     4.933

Table 2: Time per step in seconds.
Figure 5: Numerical schlierens for the Mach 2.9 compression ramp with, from top to bottom, C_PMNS = 0, 1, and 5.
References
[1] M. J. Djomehri, H. Jin, Hybrid MPI+OpenMP programming of an overset CFD solver and performance investigations, Tech. Rep. NAS-02-002, NASA Ames Research Center, Moffett Field, CA (2002).
[2] A. K. Henrick, T. D. Aslam, J. M. Powers, Mapped weighted essentially non-oscillatory schemes: Achieving optimal order near critical points, J. Comput. Phys. 207 (2005) 542-567.
[3] X.-D. Liu, S. Osher, T. Chan, Weighted essentially non-oscillatory schemes, J. Comput. Phys. 115 (1994) 200-212.
[4] O. V. Vasilyev, T. S. Lund, P. Moin, A general class of commutative filters for LES in complex geometries, J. Comput. Phys. 146 (1998) 82-104.
[5] Z. Liu, Q. Huang, Z. Zhao, J. Yuan, Optimized compact finite difference schemes with high accuracy and maximum resolution, Int. J. Aeroacoust. 7 (2) (2008) 123-146.
[6] L. Zhanxin, H. Qibai, H. Li, Y. Jixuan, Optimized compact filtering schemes for computational aeroacoustics, Int. J. Numer. Meth. Fluids 60 (8) (2009) 827-845. doi:10.1002/fld.1914.
[7] M. Kanamori, K. Suzuki, Shock wave detection in two-dimensional flow based on the theory of characteristics from CFD data, J. Comput. Phys. 230 (2011) 3085-3092.
[8] D. Lovely, R. Haimes, Shock detection from computational fluid dynamics results, in: 14th Computational Fluid Dynamics Conference, Norfolk, VA, 1999, AIAA-99-3285.
[9] C. E. Leith, Stochastic backscatter in a subgrid-scale model: Plane shear mixing layer, Phys. Fluids A 2 (1990) 297-299.
[10] J. R. Chasnov, Simulation of the Kolmogorov inertial subrange using an improved subgrid model, Phys. Fluids A 3 (1991) 188-200.
[11] J.-P. Laval, B. Dubrulle, A LES-Langevin model for turbulence, Eur. Phys. J. B 49 (2006) 471-481.
[12] E. C. Hylin, J. M. McDonough, Chaotic small-scale velocity fields as prospective models for unresolved turbulence in an additive decomposition of the Navier-Stokes equations, Int. J. Fluid Mech. Res. 26 (1999) 539-567.
[13] J. M. McDonough, Three-dimensional poor man's Navier-Stokes equation: A discrete dynamical system exhibiting k^{-5/3} inertial subrange energy scaling, Phys. Rev. E 79 (2009) 065302.
[14] J. P. Strodtbeck, J. M. McDonough, P. D. Hislop, Characterization of the dynamical behavior of the compressible poor man's Navier-Stokes equations, Int. J. Bifurcation Chaos 22 (2012) 1230004.
[15] T. S. Lundgren, Linearly forced isotropic turbulence, in: Annual Research Briefs, Center for Turbulence Research, Stanford, 2003, pp. 461-473.
[16] C. Rosales, C. Meneveau, Linear forcing in numerical simulations of isotropic turbulence: Physical space implementations and convergence properties, Phys. Fluids 17 (9) (2005) 095106. doi:10.1063/1.2047568.
[17] M. R. Petersen, D. Livescu, Forcing for statistically stationary compressible isotropic turbulence, Phys. Fluids 22 (2010) 116101.
[18] M. Ringuette, P. Bookey, C. Wyckham, A. Smits, Experimental study of a Mach 3 compression ramp interaction at Re_theta = 2400, AIAA J. 47 (2009) 373-385.
[19] T. H. Pulliam, Efficient solution methods for the Navier-Stokes equations, Lecture Notes, von Karman Institute Lecture Series: Numerical Techniques for Viscous Flow Computation in Turbomachinery Bladings, Brussels, Belgium, 1986.
[20] D. Knight, H. Yan, Large eddy simulation of supersonic compression corner, in: APS Division of Fluid Dynamics Meeting Abstracts, 2000, p. D3.