Production Optimization
Mehrdad Gharib Shirangi
Stanford University
Abstract

In well control optimization for an oil reservoir described by a set of geological models, the expectation of the net present value (NPV) is optimized. This approach, called robust optimization, entails running the reservoir simulator for all the reservoir models at each iteration of the optimization algorithm. Hence, robust optimization can be computationally demanding. One way to reduce the computational burden is to select a subset of models and perform the optimization on the reduced set. Another popular technique is to use fast proxy models rather than full-physics simulators. In this work, a kernel clustering technique is used to select a subset of reservoir models that captures the range of uncertainty in the response of the entire set.

In addition, an adaptive strategy is used to build fast proxy models for the NPV; the proxy model is then optimized using a pattern search algorithm. The proxy model is generated by training an artificial neural network (ANN) or a support vector regression (SVR) model on a set of training examples. The challenge in building a proxy model is finding good training examples. In this work, after optimizing the proxy model, new training examples are generated around the current optimal point, a new proxy model is built, and the procedure is repeated. An example is presented that shows the efficiency of kernel k-medoids clustering and of building proxy models for production optimization.

Introduction

Wang et al. (2012) applied retrospective optimization to the well placement problem. They applied k-means clustering for selecting the realizations at each subproblem. The focus of this work is on well control optimization. A response vector is introduced that well characterizes the dissimilarity between the responses of realizations. Here, kernel k-medoids clustering is applied for choosing a small set of statistically representative realizations.

Our objective is to find an optimal well control vector that maximizes the expectation of the life-cycle NPV over the ensemble of reservoir models. The well control vector $\mathbf{u}$ can be written as

$$\mathbf{u} = \left[ u_1^1, u_1^2, \ldots, u_1^{N_c}, \ldots, u_{N_w}^1, u_{N_w}^2, \ldots, u_{N_w}^{N_c} \right]^T, \qquad (1)$$

where each subscript denotes the well index and each superscript denotes the control index; $N_w$ denotes the number of wells and $N_c$ denotes the number of control steps for each well. $u_j^n$ can be the bottomhole pressure (BHP), the oil rate, or the total liquid rate of the $j$th well at the $n$th simulation time step. In this work, the well controls are the BHPs. We only consider simple bound constraints.

In robust production optimization, we want to maximize $E[J(\mathbf{u}, \mathbf{m})]$, where

$$E[J(\mathbf{u}, \mathbf{m})] = \frac{1}{N_e} \sum_{j=1}^{N_e} J(\mathbf{u}, \mathbf{m}_j), \qquad (2)$$

where $\mathbf{m}$ denotes a realization of the reservoir model, and $J(\mathbf{u}, \mathbf{m}_j)$ denotes the corresponding NPV.
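To make the robust objective concrete, Eq. (2) is a plain ensemble average of per-realization NPVs. A minimal Python sketch, where `npv` is a hypothetical stand-in for one full reservoir simulation per call:

```python
import numpy as np

def expected_npv(u, models, npv):
    """Robust objective of Eq. (2): the NPV of control vector u averaged
    over an ensemble of Ne reservoir realizations. Each call to npv(u, m)
    corresponds to one full reservoir simulation."""
    return sum(npv(u, m) for m in models) / len(models)

# Toy stand-in for the simulator-based NPV (illustration only).
models = [{"mult": k} for k in (1.0, 2.0, 3.0)]
toy_npv = lambda u, m: m["mult"] * float(np.sum(u))
print(expected_npv(np.ones(4), models, toy_npv))  # averages 4, 8, 12 -> 8.0
```

Because every iteration of the optimizer re-evaluates this sum, the cost grows linearly with the ensemble size $N_e$, which motivates the model selection and proxy techniques below.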
Model Selection for Reducing the Number of Realizations

A simple way to reduce the computational cost is to select a few statistically representative models in the ensemble; these representative models are selected based on ranking or clustering some attributes. A random selection of the realizations will not provide a representative set for the ensemble.

Wang et al. (2012) used k-means clustering for selecting a set of representative realizations from a large ensemble of models in field development optimization. The attributes they used for clustering are OOIP, normalized cumulative oil production, permeability distance, and oil-water contact depth. They applied k-means clustering to obtain a number of clusters, and then one realization was chosen randomly from each cluster. Scheidt and Caers (2009) developed an effective model selection method using kernel k-means clustering in metric space. In this method, after obtaining the clusters, the model nearest to the centroid of each cluster (the medoid) is chosen.

In this work, kernel k-medoids clustering is used: the models are divided into clusters, and the robust production optimization is applied on the reduced ensemble that contains the models corresponding to the cluster centroids. As the center of each cluster represents all the models in that cluster, a weight is assigned to each centroid. Finally, the weighted expectation of the NPV of the cluster centroids is maximized:

$$\max_{\mathbf{u}} \; \frac{1}{N_e} \sum_{j=1}^{n_k} \omega_j \, J(\mathbf{u}, \mathbf{m}_j, \mathbf{y}_j), \qquad (3)$$

where $n_k$ denotes the number of clusters. With $N_j$ representing the number of realizations in the $j$th cluster, $\omega_j = N_j$ is set as the weight of the $j$th centroid in the objective function of the subproblem.

In this work, the field cumulative oil and water production and the field cumulative water injection in time are used to construct a response vector for each realization. To obtain the response vectors, all realizations are run with the simulator using the same control strategy. At this stage, as we are only interested in calculating a dissimilarity distance, the responses can be obtained from a rank-preserving proxy-type simulator, e.g. a streamline simulator, and need not come from the exact simulator.

Building Fast Proxy Models

In this work, we use ANN and SVR to build fast proxy models. One challenge in building a fast proxy is finding good training examples. Note that the input space (well controls) is high-dimensional and continuous; hence, randomly generating a few training examples may not be very efficient.

Support Vector Regression as a Proxy

SVR is a regression model that can be trained to predict the output of a function for a given vector of input features. The general idea of SVR is to perform linear regression in a high-dimensional feature space by using the kernel trick (Scholkopf et al., 2005). In this work we use the LIBSVM software (Chang and Lin, 2011). It is worth mentioning that for using SVR, all the input features should be scaled to be between 0 and 1; the same applies to the output value. Hence, here all the well controls (input features) are scaled to be in [0, 1], and the output values (-NPV) are also scaled to be in [0, 1]. In ε-SVR, there are two important parameters that affect the efficiency of the algorithm: ε and C. In addition, the number of training examples used also affects the predictions.

Artificial Neural Network as a Proxy

An artificial neural network (ANN) is a popular machine learning algorithm inspired by biological neural networks. A neural network consists of input nodes, output nodes, and interconnected groups of artificial neurons. In this work, we use the neural network toolbox of MATLAB. The idea is to build an ANN proxy from some training examples, and then optimize the ANN model rather than the computationally demanding full-physics model.
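Since the paper's implementation uses MATLAB's neural network toolbox, the train-a-proxy step can only be sketched here with a stand-in; the following uses scikit-learn's `MLPRegressor` (an assumption: any small feed-forward regressor illustrates the same idea), with a toy target replacing the simulator-generated -NPV values:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_train, n_controls = 200, 18            # e.g. Nw = 9 wells x Nc = 2 steps
U = rng.random((n_train, n_controls))    # training controls, scaled to [0, 1]
y = -np.sum(U ** 2, axis=1)              # toy stand-in for simulator -NPV

# Train a small feed-forward network as the proxy of -NPV.
proxy = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                     random_state=0).fit(U, y)

# The fitted proxy is a cheap surrogate: the optimizer evaluates it
# instead of the full-physics simulator.
u_candidates = rng.random((5, n_controls))
print(proxy.predict(u_candidates))
```

Once trained, each proxy evaluation is essentially free compared with a full-physics simulation, which is what makes proxy-based optimization attractive.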
Figure 1: True porosity and log-permeability fields. (a) true porosity field; (b) true ln(k) field. Wells P1-P7, I1 and I2 are marked.

Figure 2: MDS plot of 100 realizations with 7 clusters and their centroids. (a) original space; (b) kernel space.
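The selection illustrated in Fig. 2 can be sketched with a plain (non-kernel) k-medoids on the response vectors; the kernel variant used in this work would compute the distance matrix in kernel space rather than the Euclidean distances assumed here, and the toy response vectors below stand in for the simulator-derived ones:

```python
import numpy as np

def k_medoids(X, k, n_iter=50, seed=0):
    """Plain k-medoids on the rows of X. Returns (medoid indices, labels).
    The kernel variant would replace D with kernel-space distances."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)   # assign to nearest medoid
        new_medoids = medoids.copy()
        for j in range(k):                          # re-center each cluster on
            members = np.where(labels == j)[0]      # its best-connected member
            if len(members) > 0:
                within = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, np.argmin(D[:, medoids], axis=1)

# Toy response vectors (e.g. cumulative-production curves) for 100 realizations.
rng = np.random.default_rng(1)
R = np.vstack([rng.normal(c, 0.1, size=(50, 5)) for c in (0.0, 1.0)])
medoids, labels = k_medoids(R, k=2)
weights = np.bincount(labels, minlength=2)   # omega_j = N_j in Eq. (3)
print(medoids, weights)
```

The medoid indices identify which realizations enter the reduced ensemble, and the cluster sizes supply the weights $\omega_j$ of Eq. (3).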
Example

This example pertains to a two-dimensional horizontal reservoir model with a 28 × 30 uniform grid. The true porosity and log-permeability fields with well locations are shown in Fig. 1. There are 7 producers and 2 injectors in this reservoir. $N_e = 100$ realizations of both porosity and log-permeability are generated using geostatistics and then conditioned to 300 days of production history using truncated SVD parameterization (Gharib Shirangi, 2011). Production optimization is performed for the time interval of 300 to 3000 days.

The first objective is to reduce the number of realizations in robust optimization, so as to significantly reduce the computational cost while obtaining nearly the same final expected NPV and characterization of uncertainty in the expected NPV. The second objective is to test the efficiency of using SVR and ANN to build fast proxy models for production optimization.

Kernel Clustering

Fig. 2 shows the MDS plot and the points for each of the 7 clusters. In uncertainty assessment of production optimization, capturing the P10, P50 and P90 of the uncertain NPV distribution is very important. As one can see in Fig. 4(a), we could achieve this goal by using only 7 clusters. Fig. 4(b) shows the P10, P50 and P90 of the optimal NPV obtained by maximizing the expectation of the NPV of the cluster centroids. The optimization was performed by first generating an ANN model and then optimizing the ANN model using the pattern search algorithm (discussed later). We ran all the models with the optimal solution and plotted the corresponding P10, P50 and P90. As one can see, the P10 and P50 values for the exhaustive set at the final time are in good agreement with those from the cluster centroids. This is of significant importance. Fig. 3 shows the unoptimized NPV (from the initial guess) of all realizations versus time, colored based on clusters.

Figure 3: NPV values in time for 100 realizations, colored by cluster; markers show the curves of the cluster centroids.

Building Fast Proxy Models

We first perform a sensitivity study on the parameters of SVR (C and ε). To do so, we generate 700 training examples by randomly perturbing the control vector around the mean. Out of these training examples, we set the last 200 points as the testing set. Fig. 5 shows the sensitivity of SVR to the parameters C, ε, and the number of training examples. As can be seen, SVR has a small testing error even for a small number of training examples. In addition, a larger value of C and a smaller ε provide a smaller testing error.
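This sensitivity study can be reproduced in miniature with scikit-learn's `SVR`, which wraps the LIBSVM library cited above; the toy data here stands in for the 700 simulator-generated (controls, -NPV) pairs:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
U = rng.random((700, 10))            # well controls, scaled to [0, 1]
y = U.sum(axis=1) / 10.0             # toy scaled "-NPV" target in [0, 1]
U_tr, y_tr = U[:500], y[:500]        # training set
U_te, y_te = U[500:], y[500:]        # last 200 points held out for testing

errors = {}
for C in (0.1, 1.0, 100.0):          # sweep the cost parameter C
    model = SVR(C=C, epsilon=0.01).fit(U_tr, y_tr)
    errors[C] = float(np.mean(np.abs(model.predict(U_te) - y_te)))
    print(f"C = {C:>5}: mean abs testing error = {errors[C]:.4f}")
```

On this smooth toy target the trend matches the observation in the text: increasing C (and keeping ε small) reduces the testing error.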
Figure 4: P10, P50 and P90 of all 100 realizations and those calculated using only the cluster centroids: (a) initial NPV; (b) optimal NPV.

Figure 5: Sensitivity of SVR (testing error) with respect to its parameters: (a) sensitivity to C; (b) sensitivity to ε; (c) sensitivity to the number of training examples.

The SVR proxy is then optimized using the pattern search algorithm. Depending on how many training examples we use in generating the SVR model, the solution from optimizing the SVR can be different. Fig. 6 shows how the optimal NPV changes as the number of training examples increases. Fig. 7 shows the results of directly optimizing the NPV using the pattern search algorithm. While directly optimizing the NPV takes about 800 function evaluations, we could obtain the same optimal NPV [...]
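The adaptive retraining loop behind these results can be sketched as follows; a toy quadratic stands in for the simulator's -NPV, a least-squares quadratic fit stands in for the ANN/SVR proxy, and the compass search is a minimal form of pattern search (all names here are illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
true_negnpv = lambda u: float(np.sum((u - 0.3) ** 2))  # toy stand-in for -NPV

def pattern_search(f, u0, step=0.25, tol=1e-3):
    """Compass-style pattern search: poll +/- step along each axis,
    accept any improvement, otherwise halve the step."""
    u, fu = u0.copy(), f(u0)
    while step > tol:
        improved = False
        for i in range(len(u)):
            for s in (step, -step):
                v = u.copy()
                v[i] = np.clip(v[i] + s, 0.0, 1.0)   # simple bound constraints
                if (fv := f(v)) < fu:
                    u, fu, improved = v, fv, True
        if not improved:
            step /= 2.0
    return u

def fit_proxy(U, y):
    """Least-squares quadratic proxy (the paper trains an ANN or SVR here)."""
    A = np.hstack([np.ones((len(U), 1)), U, U ** 2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda u: float(np.hstack([1.0, u, u ** 2]) @ coef)

u = np.full(4, 0.9)                               # initial control guess
U = rng.random((30, 4))                           # initial training controls
y = np.array([true_negnpv(x) for x in U])
for _ in range(5):                                # adaptive outer loop
    u = pattern_search(fit_proxy(U, y), u)        # optimize the cheap proxy
    U_new = np.clip(u + 0.05 * rng.standard_normal((10, 4)), 0.0, 1.0)
    U = np.vstack([U, U_new])                     # new samples near optimum
    y = np.append(y, [true_negnpv(x) for x in U_new])
print(true_negnpv(u))                             # close to the true minimum 0
```

The expensive function is only called to label training points, while all polling inside the pattern search hits the cheap proxy, which is why the function-evaluation count drops so sharply relative to direct optimization.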
[...] we generate a few training examples, and we optimize the ANN model. Having obtained the optimal point, we run the simulator at the new point and generate new training examples around the current point, either by randomly perturbing the current point or by polling in different directions around it. Fig. 8 shows the results of using ANN as a proxy. According to the results of this figure, with only 60 function evaluations the NPV increased to about the value obtained by directly optimizing the NPV.

Figure 8: Optimizing the ANN proxy of NPV using the pattern search algorithm. The left axis shows -NPV, while the right axis shows the number of function evaluations.

Conclusions

References

Chang, C.-C. and C.-J. Lin, 2011, LIBSVM: A library for support vector machines, ACM Transactions on Intelligent Systems and Technology, 2, 27:1-27:27.

Gharib Shirangi, M., 2011, History Matching Production Data With Truncated SVD Parameterization, Master's thesis, The University of Tulsa.

Scheidt, C. and J. Caers, 2009, Representing spatial uncertainty using distances and kernels, Mathematical Geosciences, 41(4), 397-419.

Scholkopf, B., A. J. Smola, R. C. Williamson, and P. L. Bartlett, 2005, New support vector algorithms, Neural Computation, 12, 1207-1245.

Wang, H., D. Echeverria-Ciaurri, L. Durlofsky, and A. Cominelli, 2012, Optimal well placement under uncertainty using a retrospective optimization framework, SPE Journal, 17(1), 112-121.