M&S: SCHLUMBERGER J. OF MODELING, DESIGN, AND SIMULATION, VOL 2, JUNE, 2011

Human-Centered Oilfield Automation


du Castel, Bertrand
Information Technology, Sugar Land, TX 77478, USA
ABSTRACT In this article I spell out basic principles of human-centered oilfield automation with a specific example in a part
of upstream processes that constitutes a dominant expenditure of the industry, namely, drilling.
Keywords Artificial Intelligence, Automation, Drilling, Human-Centered, Pattern Recognition, Stochastic Grammar.

I. FOREWORD

The history of oilfield technology has largely been one of
automation, albeit human-centered automation. Human
operators have always been the driving force behind progress
in the search for oil and gas, applying ingenuity, initiative,
procedures, and persistence and using tools that have evolved
to give them more precise control over events that happen, by
necessity, outside of human reach.
From the fields of Pechelbronn in eastern France, where the
Schlumberger brothers first recorded electric logs, to the
subsalt formations of offshore Brazil, where seismic
processing guides minute geologically driven drilling, every
step along the way has been marked by progress in both
technology and methods benefitting field and office personnel.
Whereas previously, measurements made painfully in the field
would be transported mechanically to the office, where maps
would be drawn by hand before being distributed to decision
makers, digital networks have now taken over and computers
assist those tasks all along the way.
This is only the beginning of the technological story of
oilfield automation. While data extraction, analysis,
interpretation, and evaluation have been refined to ever greater
lengths, we are, as an industry, in a poor state in terms of
automatically converting that information into actionable
positions. While there were efforts in the 1970s and 1980s to
achieve quantum progress in that domain, all that was attained
were spectacular failures that left a very bad taste in the mouth
of oilfield participants, to the point that it is only now, in the
2010s, that scientists and technicians can again bring to the
oilfield methods and apparatus offering a human-centered,
high-value, and forward-looking proposition to decision
making.
In this article I spell out basic principles of human-centered
automation, domains of application, overall concepts and
approach, and a specific example in a part of upstream
processes that constitutes a dominant expenditure of the
industry, namely, drilling.

II. HUMAN-CENTERED AUTOMATION

Automation in the oilfield must be human-centered. The
complexity, risks, environmental demands, and sheer size of
the ventures make it impossible to imagine computers
replacing in great extent the extraordinary people of our
industry in the near future. What we can do, though, is relieve
our personnel of tasks that are now reachable by computers in
this era in which rovers operate autonomously on Mars or cars
drive themselves automatically over the mountains of the
western USA.
Although quite a different challenge from that of
piloting an airplane from take-off to safe landing, oilfield
automation shares with that activity the idea that the human, in
this case the pilot, is best served by computers when possible
and best left alone when only the human brain is capable of
processing in time the necessary information while keeping
track of the goal to attain. For that reason, our industry can
largely adopt principles of human-centered automation spelled
out, for example, by Thomas Sheridan of MIT [1].
As detailed by Sheridan, it is possible to measure task
complexities, to evaluate the manual and supervisory control
involved, to identify capabilities of both humans and
computers in analyzing, planning, and learning, and to derive
models of where automation is appropriate. It is furthermore
possible to define a scale of automation, from the lower level
where the computer offers no assistance at all, to the mid
levels where the machine can present solutions that the
operator can veto, and on to an ultimate level where robots
operate independently. With these levels of automation in
mind, it is then possible to map implementation to the various
phases of our operations, along the lines of data acquisition,
analysis, decision making, and command. Consequently,
architectures, tools, and mediating technology can be put in
place over entire fields of action. The following sections of
this paper match this global view with the particulars of the oil
and gas industry.
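Sheridan's scale of automation can be pictured with a short sketch. His published scale has more gradations than the three the text singles out; the enum below names only those three, with illustrative numeric values, and the helper function is a hypothetical mapping, not part of Sheridan's framework.

```python
from enum import IntEnum

# Three points on a Sheridan-style scale of automation; the numeric
# values are illustrative placeholders, not Sheridan's exact numbering.
class AutomationLevel(IntEnum):
    NO_ASSISTANCE = 1       # the computer offers no help at all
    HUMAN_VETO = 5          # the machine presents solutions the operator can veto
    FULLY_AUTONOMOUS = 10   # robots operate independently

def requires_operator(level: AutomationLevel) -> bool:
    """Hypothetical mapping from a level to whether a human stays in the loop."""
    return level < AutomationLevel.FULLY_AUTONOMOUS

print(requires_operator(AutomationLevel.HUMAN_VETO))       # True
print(requires_operator(AutomationLevel.FULLY_AUTONOMOUS)) # False
```

Such a scale can then be mapped onto the phases of data acquisition, analysis, decision making, and command discussed above.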

Manuscript received October 24, 2010. Corresponding author: Bertrand du Castel (e-mail: ducastel@slb.com).

III. UPSTREAM GOALS

To define the business value brought by human-centered
automation to the upstream, it helps to partition the industry
according to geography and life cycle. Geographically, steady-state operations go from the subsurface to the surface, then to
the asset, then to the base and the office, and back. In time,
operations go from delineation to characterization to
development and then to production and abandonment. For
each of these activities in space and time, it is possible to
outline areas of automation. First, I will paint a picture of the
value brought by human-centered automation geographically,
projecting into the future.
Subsurface: Downhole information and control is available
through high-bandwidth communication channels throughout
well construction, completion, and production. Installation of
completion hardware is predictable and reliable. Flexible,
semi-autonomous, bandwidth-optimized, and context-aware
systems reduce the need for human intervention while making
it more effective when it is required.
Surface: Surface systems can be controlled locally and
remotely. Automation drives efficiency, safety, and
economics. The surface environment is safe and attractive.
Dangerous, unpleasant, and inefficient tasks, as well as tasks
prone to human error, are automated. Well-trained technicians
operate surface equipment with input from remote experts.
Asset: Multivendor asset equipment is fully networked from
downhole (to seabed) to surface. Information about the
performance of the asset and the levels of uncertainty in future
performance is constantly updated and displayed for
surveillance and alarms. Automation plays a key role in a
rolling simulation, uncertainty analysis, and optimization of
asset exploitation.
Base: Field locations have the benefits of local autonomy
with logistics and resources optimized across the company.
Local staff has access to expertise through automated systems,
particularly in case of breakdown. Work schedules are
attractive because of the elasticity of response created by
automation and access to multiple remote experts.
Office: Expertise is distributed around the world in and
between companies. It is enhanced by automation in data
management,
simulation,
uncertainty
management,
surveillance, and prognostics. Experts facilitate decisions and
are part of the continuous improvement process.
Administration is guided by logistics automated through
network services, computerized as well as assisted by humans.
Phases in time of the asset life-cycle can be similarly
brought to light.
Delineation: From basin modeling to gravimetry,
automation takes the form of software tools and practices as
well as equipment. In seismics, the logistics of surveys and
sensor positioning is dynamic, and the processing of results is
fully informed by the entire knowledge environment. The
process is driven by engineers relying on extensive networks
of experience.
Characterization: Visualization techniques assembling all
data available in composite pictures are routine, while the
underlying model supports simulation runs that allow
immediate corrections in the field. The introduction of
stochastic evaluations puts forward risk perspectives that
guide economic decisions. Experts drive the overall process
basing their judgment on global business considerations.
Development: Infill drilling is supported by prediction
techniques that simulate the outcome of changes in asset
configuration. Knowledge of the local geology, rock physics
and target formations allows controlling the drilling process
with a real-time assessment of progress and information
flowing back to the original characterization models for
possible actions. Operators rely on virtual assistance to guide
their decisions.
Production: Completion mechanisms are understood by
means of physical models supported by the manufacturing
process. Knowledge of equipment origin and projected
behavior allows monitoring and taking corrective actions
before the breaking point is reached. Encompassing analysis
of micro- to macro-scale events, including surface facilities
and oil and gas demand, guides allocation processes that
optimize fluid flows according to both production and
depletion targets.
Overall, expertise is distributed over the asset and the
logistics of local operations is guided, while back office
simulation makes predictive analysis for forward correction
and risk moderation. This view of the future allows outlining a
global model of upcoming operations.
IV. MODEL
While automation techniques apply across all domains of
intervention, it is possible to classify these techniques into the
rough categories of human-directed and material-directed implementations.
Modeling human behavior and thought process has
historically been a core inquiry of philosophy and science; its
conversion into automated actionable measures is, however, a
recent development. The conversion of manual operations to
low-level commands, followed by a gradation of high-level
commands, auto-pilot, and remote operations has seen an
accelerating technological accretion. At the time of this
writing, the state of the art is such that certain processes in
inaccessible regions such as space, deep sea, and otherwise
inhospitable environments can be conducted autonomously or
semi-autonomously, with humans intervening only at set
points in the process. The frontier of science is now that of
modeling the entire brain both physically and logically, with
anticipation that computers will assume part of the thinking
process. A good example is computer vision, in which scene
recognition is now sufficient for robotic driving. We can
expect more of human-derived technologies to be put into
action in the oilfield, as an example will show later in this
article.
On the material side, which concerns implementations
derived from the science of nature, we can map relevant
techniques, all still to be developed, that continuously
improve upstream automation prowess, thereby increasing the
efficiency, reliability, and repeatability of oilfield
processes and their capacity for continuous improvement. Again, I will
project into the future.
Formation: On the mathematical side, inversion techniques
provide predictive models maximizing the value of local
information gathering. Network services allow coupling
multiple formation measurements to focus on the parameters
of most interest, a process known as compressive sensing.
Orchestration of services is based on knowledge of local
conditions.
Well: Drilling and completion processes based on
evaluation and simulation allow deciding an optimal
positioning and installation while feeding global knowledge
repositories for environmental feedback. A formidable
challenge still is dynamic, or even static, automatic learning
which would allow correcting and developing future material
and processes.
Reservoir: This is the domain of ontologies, repositories of
knowledge covering all aspects of operations, allowing
acquisition, fusion, processing, analysis, interpretation, and
action to be melded in a comprehensive architecture. Modeling and
simulation are tools to that effect, and providing a consistent
environment to bring them to bear is one of the mountains to
climb, going beyond current shared earth modeling onto
human-level (non-monotonic) knowledge building.
Asset: The resolution of asset studies, from the finest
chemical constructs to regional sedimentary maps, presents a
fractal distribution that makes upscaling for decision making a
difficult challenge. Associated with the necessity to compute
uncertainties at all scales, and to assemble the results with that
of other assets, is the need to create the necessary federations
of knowledge accumulation within and without oil and gas
companies and institutions.
Finance: The cap to this hierarchy, as well as its fuel, is
made of governance principles, standards and regulations, and
business decision processes that also require automation.
Financial factors must be integrated increasingly finely all the
way into the spectrum of activities just enumerated.
The previous snapshots outline the potential extent of
human-centered automation. I will now provide an example
that prefigures these scenarios of the future.
V. ARCHITECTURE
We can architect human-centered oilfield automation by
considering on one hand the real-time environment of
applications, and, on the other hand, the overarching
knowledge organization that supports them. The real-time side
includes sensors feeding individual signal processing and data
fusion to stochastic evaluations and actuators guided by a
command and control infrastructure. The elements involved
can be represented in a single knowledge organization that
both integrates and merges them. The knowledge
representation is built on a consistent approach to the actual
description of signal processing, stochastic evaluation, and
command and control. This single description allows
encompassing the entirety of operations for reasoning and
transforming sensor information into actionable decisions in
full consideration of the context involved.
As I am now going to show, even early attempts at full-fledged human-centered automation involve all of these
elements in more-or-less advanced forms, depending in part
on the state of the art and in part on the capabilities of oilfield
organization to foster and propagate such innovations.

VI. DRILLING AUTOMATION

According to Steve Holditch, Schlumberger Fellow, past President of the SPE, and now Head of the Texas A&M
Petroleum Engineering department, drilling can be 60% of
upstream expenditures for a company [2]. Drilling is an
expensive and time-consuming operation marked by an
unforgiving environment, a science still in development, and
techniques that are necessarily conservative in the face of the
cost of failures, both human and material. Surface operations
of drilling are surprisingly at the
same time very manual and very efficient. The crews are
remarkably specialized and operate at levels of performance
that are tough challenges for automation. I refer the reader for
example to the multiple videos of drilling on the web; the
swiftness of rig operators is astonishing.
When researchers were faced with the different challenge of
building autonomous cars, an important part of the solution
was the development of better sensors, for example laser
rangefinders that take multiple measurements at various angles and
inclinations, providing, in some cases, better information than
can be obtained by eye since waves can penetrate otherwise
opaque surfaces. Unfortunately, in drilling, we must satisfy
ourselves primarily with four surface measurements as regards
the basic operation of drilling: position of the drilling block,
hook load, torque, and pressure. While it would seem easy to
add other sensors to the rig, the reality is that the industry is
extremely wary of changing existing practices at the critical
core of the wellsite. A business case must be made that
overwhelms cost, cultural, physical, safety, training, and other
considerations, while extending to a majority of rigs [3].
Any measurement beyond the four fundamentals is,
therefore, at the present, gravy or a substitute (for example,
rotation can add to or substitute for torque), but it is not
possible to count supplemental measurements as mandatory
requirements for processing. Even more challenging is the fact
that in some cases, some of the four surface measurements are
absent, and useful information needs to be computed from yet
a smaller input base, a subject that is of interest but that I will
not go into in this article. So, automation of surface operations
relies on these four measurements, not necessarily precise at
that, presenting a task that is so difficult that it is only recently
that progress has been made thanks to new mathematical
techniques allowing for contextual understanding, such as
those that I will discuss here. Now that I have answered the
question of why we are so limited in input information, the
obvious question is: Why try to analyze such a sparse input?
The reason is that we want to understand what drillers
actually do in a way that does not rely solely on people sitting
and observing them, at the drilling site or remotely. Drilling
jobs last weeks and even months, and only a computer is
patient enough to gather information arriving at a 3- to 5-second
sampling interval during this time. Only a computer can sift through
all these data to determine what are the productive phases
(e.g., making hole) and the unproductive ones (e.g., sitting
idle). And only a computer is capable of making the necessary
calculations to cross-reference many jobs to find patterns of
possible improvement and places where the driller could be
assisted in making more informed decisions for better business
results. Finally, to end on a very positive note, there are
patterns, for example of early kick-in detection, that a
computer is sometimes capable of finding earlier than humans,
leading to immediate prognostics that can improve the safety
and reliability of drilling. However, for those patterns to be
detected in the right context, it is necessary to understand from
the surface measurements at which state of drilling the rig
stands, so that the algorithms do not get distracted by artifacts
outside of their scope, a key requirement of human-centered
drilling automation.

Hook load is in dark blue. In green are typical patterns of in-slips situations (i.e., situations in which the drilling string is at
rest, hanging on slips at the rig floor). In pink are patterns
which are quite close but whose identification as in-slips
patterns would be erroneous, something that we know from
having a full record of the job.

VII. RIG STATES


The most immediate action that can be taken for
understanding surface drilling measurements is to analyze first
the individual patterns of each measurement (single signal
analysis), and then cross-patterns of several measurements
(signal fusion). For example, in order to understand what is
happening on the rig, one very important thing to know is
whether the driller is pumping or not; for making hole, the
driller needs to pump, therefore pumping is a contributing
indication that the hole may be getting deeper. A more subtle
example consists in the operation of weighing, a shorthand
that I will use for the common practice of obtaining drillstring
weights when picking up, slacking off, and rotating. For
instance, the driller may weigh the drillstring when a stand is
removed, as excessive drag provides an indication of abnormal
hole condition to take into consideration in proceeding to the
next stand removal.
Figure 1, in which pressure is displayed in purple, shows a
simple example of identification of pump-off patterns (green
circles). The identification is less straightforward than it looks;
for example, the small pink circle at minute 13 shows a pattern
in which the pump is in action since it started kicking in, but
the signal goes back to zero anyway. Tougher cases are
calibration changes, in which the driller, or some physical
event, changes the baseline of reference; I will show an
example of this later. However, computing whether pumps are
on or off exhibits a case in which single-signal analysis
provides some results that can be used as a basis for further
consideration.
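The single-signal pump on/off computation just described can be sketched as a threshold with hysteresis. The pressure values and thresholds below are hypothetical, and the sketch deliberately ignores the calibration-change cases mentioned above, which require the contextual techniques discussed later.

```python
# Minimal sketch of single-signal pump on/off detection by thresholding
# with hysteresis; thresholds and samples are assumed, not field values.
ON_PSI, OFF_PSI = 200.0, 50.0     # hysteresis band around the baseline

def pump_states(pressures):
    """Label each sample 'on'/'off'; hysteresis avoids flicker near zero."""
    state, out = "off", []
    for p in pressures:
        if state == "off" and p > ON_PSI:
            state = "on"
        elif state == "on" and p < OFF_PSI:
            state = "off"
        out.append(state)
    return out

samples = [0, 10, 300, 320, 40, 30, 250, 0]
print(pump_states(samples))
# ['off', 'off', 'on', 'on', 'off', 'off', 'on', 'off']
```

A fixed band like this is exactly what breaks when the driller, or a physical event, shifts the baseline of reference, which is why the result can only serve as a basis for further consideration.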

Figure 2: Hook load in-slips pattern (green).

To address this problem, we need to combine information
from the block position (red in Figure 3 below) and the hook
load. What we are looking for is a particular pattern of
decreasing block position combined with a minimum hook
load. What this says physically is that to be in slips, the
drillstring must first be lowered into them, hence the pattern.

Figure 3: Hook load in-slips pattern (green).

The patented technique [4] used by Schlumberger to
recognize this kind of fusion pattern is a sequential Bayesian
filter controlled by a hidden Markov model [5]. In more polite
English, this means that one defines a model of state
transitions which allows predicting for each sample in turn the
state of the next sample in a probabilistic manner; the data
found for the next sample are then matched with the prediction
to assign probabilities to the states associated to that sample
and then again for the next sample. The computation occurs
over the four channels and eventually assigns, for each time
sample, one of 18 global states called rig states (Figure 4).
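This predict-then-match loop can be illustrated with a toy forward filter. The numbers below are hypothetical: two rig states instead of 18, and a single binarized channel (pump on or off) instead of four, so the sketch shows the mechanics of the computation, not the patented model.

```python
# Toy sequential Bayesian filter controlled by a two-state Markov model.
# All probabilities are assumed values for illustration only.
states = ["InSlips", "MakingHole"]
T = [[0.9, 0.1],    # transition model: P(next state | current state)
     [0.2, 0.8]]
E = [[0.8, 0.2],    # emission model: P(pump off, pump on | state)
     [0.1, 0.9]]
belief = [0.5, 0.5]  # uniform prior over the two states

def step(belief, obs):
    # Predict the state of the next sample from the transition model...
    predicted = [sum(belief[i] * T[i][j] for i in range(2)) for j in range(2)]
    # ...then match the observed data against the prediction and renormalize.
    updated = [predicted[j] * E[j][obs] for j in range(2)]
    z = sum(updated)
    return [u / z for u in updated]

for obs in [1, 1, 0, 0, 0]:   # pump on, on, off, off, off
    belief = step(belief, obs)

best = states[belief.index(max(belief))]
print(best)  # the filter settles on "InSlips" after repeated pump-off samples
```

In the real system the same computation runs over the four channels jointly and over 18 states rather than two.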
Figure 1: Stand pipe pressure pump-off pattern (green).

Signal fusion analysis is well illustrated by hook load
measurements. Figure 2 shows several hours of job recording.


Figure 4: The 18 rig states.

In Figure 4, the first two columns give a name and a
numerical value to each rig state. The eight other columns are
discriminative traits that classify the rig states. For example,
when the rig state Back Ream is on, the block moves up, the
bit is off bottom, the pump is on, the drillstring is rotating,
and, of course, the drillstring is not in slips. The last three
states (Absent, Unclassified, and Data Gap) have particular
status as they are actually statements of existence. Data are
unclassified if no rig state has been found for them with a
high-enough probability among the list of the first 15 states,
data are absent if the acquisition system was occasionally
unable to produce data to feed the rig states algorithm, and
data gap is on when for a period of time the acquisition system
was not producing anything. So at this point, we have a
categorization of data with rig states, which tells us interesting
things, like whether we are making hole or not, in rotary or in
sliding mode. It also provides more generic, but yet useful,
information like whether we are running in or pulling up.
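The trait-based classification of Figure 4 can be sketched as a lookup. The table below covers only a hypothetical handful of the 18 states, with trait names taken from the text (block direction, on bottom, pump, rotation, slips); the actual detection is probabilistic, as described above, not a deterministic lookup.

```python
# Hypothetical subset of the rig-state table in Figure 4: each state is
# characterized by its discriminative traits. Trait rows other than
# Back Ream (described in the text) are illustrative assumptions.
STATES = {
    # (block, on_bottom, pump_on, rotating, in_slips): state name
    ("up",   False, True,  True,  False): "Back Ream",
    ("down", True,  True,  True,  False): "Rotary Drill",
    ("down", False, False, False, True ): "In Slips",
}

def classify(block, on_bottom, pump_on, rotating, in_slips):
    """Return the matching rig state, or 'Unclassified' if no trait row fits."""
    return STATES.get((block, on_bottom, pump_on, rotating, in_slips),
                      "Unclassified")

print(classify("up", False, True, True, False))   # Back Ream
print(classify("up", True, True, False, False))   # Unclassified
```

The 'Unclassified' fallback mirrors the real system's behavior when no state reaches a high-enough probability.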
But are we running in because we are tripping in (placing
the drilling string in the hole), or, say, pulling out because we
are tripping out (removing the drilling string), or are we
actually moving the drilling string up and down the hole
(wiping) to wash it?


VIII. INTELLIGENCE
Rig states provide a solid basis to build upon for extracting
more intelligence from the data. For example, if interpretation
of the rig states suggests that we are in a tripping out
environment, then some events, like weighing, are much more
likely to happen than in, say, a tripping in situation; knowing
this, weighing patterns may be given more importance during
analysis. Identifying activities such as weighing brings us a
step further towards providing that grail of drilling
automation, which is automatic job reporting; if we can
progress in that matter, then we can start not only better
understanding each job, but also benchmarking jobs against
each other with the goal of dramatically improving our
prognostic abilities.
To introduce the subject of gathering more knowledge from
the data once rig states have been established, I will show an
example of a situation in which the rig states found actually
lead to a wrong analysis if they are not complemented by a
broader understanding of the context of operation (Figure 5).


Figure 5: False identification of in slips pattern (pink).

In Figure 5, hook load data are in dark blue, and block
position data are in red. In green (pattern A) we find the
typical in-slips pattern, consisting of a drop in position
accompanied by a minimum in hook load values. What is
interesting here is the similar pattern found in pattern B; here
we also have a drop in position (marked B) and a minimum in
hook load values (marked B).
However, we should now look at blue patterns, which
illustrate slow decreases in block position typical of a hole-making situation; the drill bit penetrates the earth at a slow and
steady rate. The sequence of blue pattern 1 (making hole with
the block low) and of blue pattern 2 (making hole with the
block high) explains the in-slips pattern A. What happened
between blue patterns 1 and 2 is that the drilling crew put the
drillstring into slips to add one stand of pipes and then resume
making hole. Contrast this with the blue patterns 2, 3, and 4.
What we see here between 2 and 3 is a steady progression of
the bit into the earth. Then some events happen, and in 4 we
see the hole-making activity resuming at the same depth as we
left it in 3. Now, what happened between 3 and 4?
Well, strangely enough, while we know that pattern 3 is at
the bottom of the well, since the bit is penetrating the earth,
we see in B that the block seems to suddenly go below that
depth (i.e., somehow, the drill bit would be going below the
total depth of the well). Even more strangely, it would get
back from there and come back to the bottom of the well as
shown by pattern 4. This is obviously impossible.
So the question is: how can the bit seem to go, albeit
temporarily, below bottom depth, a physical impossibility?
This was a difficult problem that was examined in detail by
Walt Aldred, Jacques Bourque, Jonathan Dunlop, Maurice
Ringer, and Mike Sheppard at Schlumberger Cambridge
Research. From studying several similar cases, we
subsequently developed the following analysis with Richard
Meehan. Pressure is in purple, torque is in green (Figure 6).

Figure 6: Drillstring contraction explanation for the apparent position of the drill bit below the bottom of the hole.

In Figure 6, two new patterns, in gray, can be recognized.
These are labeled Max and Min. At blue pattern 3, the
driller switches off the rotary (torque goes to zero) and lifts the
block up. From the gray pattern Max to Min, the driller
brings the pressure to zero in steps. This causes a shortening
of the drillstring [6]. The driller lowers the block to move the
bit towards bottom. The drillstring further shortens as its
tension reduces. We can actually measure the extent of the
contraction, which is the difference between the block position
at pattern 3 (82 ft) and that at pattern B (71 ft). That 11-ft
contraction of the drillstring relates to the total depth, which is
17,150 ft. It represents somewhat less than 1/1000 of the total
length, well within known contraction capabilities of
drillstrings [7]. Additionally, these 11 feet explain why the
block apparently lowered the bit below total depth, which
answers our question. When pumping restarts at the gray
pattern Min, the driller lifts the block to accommodate the
expansion of the drillstring and then resumes making hole
(blue pattern 4).
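The contraction estimate above can be reproduced directly from the figures given in the text:

```python
# Reproduce the drillstring-contraction check from the text: block
# position at pattern 3 minus block position at pattern B, taken
# relative to the total depth of the well.
block_at_pattern_3_ft = 82.0
block_at_pattern_B_ft = 71.0
total_depth_ft = 17150.0

contraction_ft = block_at_pattern_3_ft - block_at_pattern_B_ft  # 11 ft
relative = contraction_ft / total_depth_ft                      # ~6.4e-4

# "Somewhat less than 1/1000 of the total length," as stated.
assert relative < 1e-3
print(f"{contraction_ft:.0f} ft over {total_depth_ft:.0f} ft = {relative:.2e}")
```

The check confirms that an 11-ft apparent dip is well within known contraction capabilities of a 17,150-ft drillstring.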
To complete the story, I need to mention that we looked for
confirmation at another channel, not shown here, that is the
angle of drilling. It was actually deviating at blue pattern 3,
which informed us that the driller was attempting to correct
the drilling direction. Actually, the driller switches from
rotating to sliding (for direction control) further down the job
with the same pattern of attack, which confirms the
interpretation.
In this example, we observe a phenomenon that an
automatic analysis needs to account for. So as far as this
article is concerned, the question is: How can complex
situations like this one be automatically understood by a
computer?
Before addressing the question, I offer a second example of
complex analysis, going back to the weighing situation
presented earlier (Figure 7).

Figure 7: Identification of drillstring weighing (gray 2, 4, and 6).

In Figure 7, the driller is making hole (blue pattern) before
lifting off three stands; that happens when gray patterns 1, 3,
and 5 associate with in slips green patterns A, C, and E and a
block drop typical of stand removal.
The interesting point here is that after each stand removal,
another in-slips pattern, shown by green patterns B, D, and F,
occurs. James Brannigan clarified this. To understand what is
happening during the second in-slips sessions B, D, and F, we
need to look at the patterns preceding them, namely gray
patterns 2, 4, and 6, respectively. Those are smaller, but very
similar to gray patterns 1, 3, and 5 that precede the first in-slips sessions A, C, and E. In other words, gray pattern 1 is to
green pattern A what gray pattern 2 is to green pattern B, and
similarly for 3 to C with 4 to D, and for 5 to E with 6 to F.
How do we explain this? Well, the physical interpretation
here is first that to remove a stand, we lift it up until it is above
the slips, then we put the slips in, and then we lower the stand
and disconnect it; we then lower the block to pick up the
drillstring at the rig floor. That accounts for patterns 1+A,
3+C, and 5+E. But now, watch. Once we have connected the
drillstring, we lift it again, but this time by a small amount, to
do some operations such as circulating that do not show here
(they appear if the display is enlarged, though, and are detailed
in the rules presented below), and then we put the drillstring
back into slips. That accounts for patterns 2+B, 4+D, and 6+F.
That whole sequence is called weighing, which was mentioned
earlier. The patterns of removing stand and weighing are very
close; the latter is almost a version of the former appearing at
a lower block position. Therefore, for the computer to
differentiate removing stand from weighing the drilling string,
it must somehow detect that these are operations that look
similar, except that one is typically done when the block is
high, while the other is done when the block is low.
IX. DRILLING STATES RULES
It is actually very difficult for the computer to recognize
when the block is high and when it is low with respect to the
current stand being handled, even though it looks trivial in this
example. The reason is that typical jobs have approximate and
changing calibrations of the block position sensor, that the
length of stand being removed is not always the maximal
length, and that sensor noise and reading errors must be taken
into account. Moreover, to make things more difficult, similar
patterns found elsewhere are almost like the one seen here
without being weighing, which forces the computer to be very
discriminative. All of those difficulties can be surmounted,
though, and I give some indication of how later in this paper.
So for the time being, assume that it is possible to discern the
difference between low and high position, so that rules can be
presented that handle the two cases discussed.
The situation in which the pump is cut and the apparent
depth is below the total depth is handled by the two rules of
Figure 8. The first rule (pump_cut) defines the context in
which the "dip" pattern occurs. The second rule (hook_zero)
recognizes the said pattern within that context. The rule is
probabilistic, in that it says that the pattern occurs in 40% of
cases in a high position of the block while in slips and in 60%
of cases in a low position. Of course, the combination of the
two rules applies in the broader context of other rules, which
are not shown here. In particular, the immediate rule above
pump_cut is drill_operation, and this situates the two rules
shown in yet a broader context, that of operations occurring
while making hole. The "*10" part of each rule name means
that the operation may last about 10 samples, although the
distribution around that can be quite large, a statement that
cannot be fully grasped without the introduction to stochastic
grammars that appears later in this article.

Figure 8: The two main rules detecting drillstring contraction.

The rules of weighing, shown in Figure 9, are more
involved, as the driller may do more varied operations. The
first rule (weigh) is the total sequence of actions. That rule
expresses a succession of two rules (weigh_ready and
weigh_complete) that themselves involve several actions in
alternative and sequence, as well as repetitions. In the top rule,
notice the presence of the trait "height low," which specifies
that this rule most probably happens only when the hook is
low in position, as discussed earlier.

Figure 9: Main rules detecting weighing activities
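To give a concrete feel for how rules combining sequence, alternative, and repetition can be scored, here is a toy Python sketch. The combinators and rule bodies below are my own illustration, not the actual Drilling States notation (which is the one shown in the figures), and names such as pump_off are invented placeholders.

```python
# Toy scoring of probabilistic rules. "alt" lists weighted choices, "seq"
# chains parts; score() multiplies the probabilities of the chosen branches.
# Rule names echo the text; structure and numbers are illustrative only.

def alt(*weighted):
    """An alternative: (probability, body) choices."""
    return ("alt", list(weighted))

def seq(*parts):
    """A sequence: parts occurring one after the other."""
    return ("seq", list(parts))

def score(rule, choices):
    """Probability of one derivation; `choices` picks a branch at each alternative."""
    if not isinstance(rule, tuple):  # a plain action contributes probability 1
        return 1.0
    kind, body = rule
    if kind == "alt":
        p, sub = body[choices.pop(0)]
        return p * score(sub, choices)
    prob = 1.0  # sequence: multiply the scores of its parts
    for part in body:
        prob *= score(part, choices)
    return prob

# hook_zero: the dip occurs 40% of the time with the block high, 60% low
hook_zero = alt((0.4, "in_slips_height_high"), (0.6, "in_slips_height_low"))
pump_cut = seq("pump_off", hook_zero, "pump_on")

print(score(pump_cut, [1]))  # derivation through the "low" branch: 0.6
```

A real stochastic-grammar engine sums over all derivations rather than scoring one chosen branch, but the bookkeeping is the same multiplication of branch probabilities.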

Before completing this story, I will return to the question of
automatic calibration of block position, a necessary prerequisite
to proper identification of high and low block position. The
job shown in Figure 10 illustrates the problem. The curve in
brown is the block position before automatic calibration. The
pink patterns 1 and 2 show how difficult it can be to assess
where the midpoint of the data is for evaluating low and high
positions of the block, since the baseline changes. Patterns A,
B, and C are blocks of data that contain regular add-stand
sequences while tripping in. In pink pattern 1, the block is
seemingly going below the rig floor, which is impossible. In
pink pattern 2, there is a sudden change in calibration that
offsets block C versus block B. James Brannigan has
interpreted pattern 1 as an instance of slip and cut, a
drilling operation extending the life of equipment; it involves
removing the drilling line from the drum, which means that
the block position sensor no longer measures the position of
the block, but rather the travel of the line while it is
worked on.

Figure 10: Automatic calibration of block position.

For the computer to properly assess the midpoint for blocks
A, B, and C, it needs to recognize them as separate entities,
which requires that it first recognize events 1 and 2.
Unfortunately, to properly understand events 1 and 2, it also
needs first to know that A, B, and C are blocks of regular
operations, and therefore that 1 and 2 are deviations from that.
That is a Catch-22 situation. To know A, B, and C, I need to
know 1 and 2, and to know 1 and 2, I need to know A, B, and
C. However, even though that seems intractable, it is the kind
of thing that the human brain handles very well.
Our brain is wired with feedback loops that constantly
match higher-level interpretations with lower-level
interpretations to assure overall consistency [8]. In this case,
the solution is to mimic the brain's functioning by first making
an interpretation attempt without consideration of midpoint
conditions and then operating the feedback loop by evaluating
whether this interpretation is compatible with the data. That
allows isolating and recognizing pink patterns 1 and 2 in order
to operate a second round of interpretation. Rather than going
further into this discussion, I will now explain how all
of this actually runs on a computer.
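The two-pass idea can be caricatured in a few lines of Python: a first pass segments the block-position series at large jumps (the signature of events such as 1 and 2), and a second pass computes a midpoint per segment (blocks A, B, C). This is purely illustrative; the numbers and threshold are made up, and the real system reasons over full drilling channels.

```python
# Pass 1: cut the series wherever consecutive samples differ by more than
# a threshold, exposing calibration events. Pass 2: compute one midpoint per
# resulting block, so "high" and "low" are judged per block, not globally.

def split_on_jumps(series, threshold):
    segments, start = [], 0
    for i in range(1, len(series)):
        if abs(series[i] - series[i - 1]) > threshold:
            segments.append(series[start:i])
            start = i
    segments.append(series[start:])
    return segments

def midpoints(segments):
    return [(min(s) + max(s)) / 2 for s in segments]

# A toy series: one block oscillating 0..10, then a sudden +50 offset
series = [0, 10, 0, 10, 50, 60, 50, 60]
print(midpoints(split_on_jumps(series, threshold=20)))  # [5.0, 55.0]
```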
X. TECHNOLOGY
Using the techniques outlined above, a system was designed
that takes as input the rig states as well as the original
channels from the rig. This system, informally called Drilling
States, recalibrates the block position channel and defines
additional rig states by means of sensor fusion, taking into
account features such as whether the rig state happens during a
high or low block position. At the end of 2010, that processing
produced one of 63 augmented rig states for each data sample.
In further processing, Drilling States uses a stochastic
grammar (explained in more detail later) of 68 rules, such as those
presented earlier, to define 167 drilling states, each with a
probability assigned for each sample. The states of highest
probability are then computed and displayed while statistics
are operated on them to produce the kind of interactive display
found in Figure 11 in a web browser of the user's choice.
The job of Figure 11 lasted about 40 days, with 4-second
sampling in steady state, producing 750,000 samples. The full
Drilling States processing takes somewhat less than 3 hours on
a Dell Latitude E6400 laptop. Notwithstanding the various
command buttons at the top, bottom, and right of the display,
the bottom part shows the full extent of the job, with the bit
depth recorded at the well site with driller input shown in light
blue for reference. The transparent light-gray area overlying the
bottom part at mid-right shows the interval of the job that is
displayed in more detail in the middle part. The middle part
displays that subset of the data, an interval of 8.5
days, on a 12-hour grid. The data are displayed tracking the bit
depth.
This allows immediate recognition that after pulling out
from the bottom (first hump on the left), the driller went back
close to bottom, then pulled out again (second hump) before
going down making hole until pulling back again (third
hump). Here I have elected to show block position (in red) and
pressure (in purple). Below the bit depth curve are colored
stripes which show the drilling states. For example, on the first
hump, we see a gray drilling state indicating that the rig is idle
for about 12 hours. The drilling states in the second trough
show that the driller has been circulating without the bit ever
penetrating the earth. In the third trough, the brown color
indicates places where the driller is actually making hole.


Figure 11: Drilling States view of 8.5 days of a 40-day job.

Figure 11, of course, is just a screen picture and lacks the
dynamic features of the actual system that allow looking at and
understanding the data at all levels of precision. However, a
better understanding of what the drilling states represent can be
obtained by looking at some of them in more detail (Figure
12).

Figure 12: Some of the drilling states in detail.

In Figure 12, I will focus on drilling states 13, 14, 15, and
16, which correspond to the rules discussed earlier regarding
the pattern of drillstring contraction. What state 15 indicates,
for example, is that during the activity well_operation, there
was a drill operation leading to a drill_a_section activity,
further decomposed into a drill_operation named

pump_cut, with a hook_zero pattern occurring while the
drilling pipe was in_slips with the block position low.
At the right of the display of Figure 11, each activity, such
as drill_a_section, pump_cut, and the others above, is
associated with a particular color. The ribbons of color under
the bit depth curve represent, for each state, the hierarchy of
colors corresponding to the hierarchy of activities that
constitute this state. This informs us not only of the large-scale
activities, like drill_a_section, but also of smaller-scale
activities, like hook_zero, in one swoop. In the third trough,
we clearly see, for example, the alternation of brown
(make_hole) and green (well_operation) colors that provide a
larger-scale understanding of what is happening. The user can
dig deeper into this understanding by looking at the colors below
that. For example, the orange color below the brown color
means that we are using rotary drilling.
On top of the display are statistics regarding the drilling
states, for both the entire job and the particular interval
displayed. Those statistics allow focusing on anomalies, such
as an inordinate amount of time idling, not enough time
circulating, inconsistencies in add/remove stand operations,
and so on.
XI. SCIENCE
The probabilistic rules that I have presented are but a few
of those in the stochastic grammar supporting the automatic
analyses explained above. This section offers a brief
description of stochastic grammars, the quintessential artificial
intelligence technique used in Drilling States. The term
"stochastic grammar" sounds like a high-school nightmare.
However, it is not as barbaric as it sounds. At school, grammar
is understood as the rules of language. In mathematics,
however, a grammar is just a tool (formally, a noncommutative
algebra) that allows organizing and interpreting data.
For example, the following is a grammar with six rules that
I will refer to as a traditional (deterministic) grammar [9]:
(1) Bird => BlueBird
(2) Bird => RedBird
(3) BlueBird => BlueBirdFeather*3
(4) RedBird => RedBirdFeather*3
(5) BlueBirdFeather => Blue
(6) RedBirdFeather => Red

In plain English, rules (1) and (2) say "a blue bird is a bird,
and so is a red bird"; rules (3) and (4) say that blue birds have
three feathers, as do red birds; rules (5) and (6) say that the
feathers of a blue bird are blue, while the feathers of a red bird
are red. If we apply all the rules of the grammar, we see that it
allows only for two combinations of feathers:
(i) Blue Blue Blue
(ii) Red Red Red

So if we find on the ground three blue feathers, we can use
the grammar to tell us that the feathers here are from a blue
bird, and, similarly, if we find three red feathers, the feathers
are from a red bird.
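The six rules can be run mechanically. The sketch below, in Python, enumerates every string of feathers the grammar generates; the dictionary encoding is mine, not a notation from the article.

```python
# The deterministic bird grammar: each nonterminal maps to its alternatives,
# each alternative being a list of symbols; symbols absent from the table
# (Blue, Red) are terminals.
GRAMMAR = {
    "Bird": [["BlueBird"], ["RedBird"]],
    "BlueBird": [["BlueBirdFeather"] * 3],   # rule (3): three feathers
    "RedBird": [["RedBirdFeather"] * 3],     # rule (4)
    "BlueBirdFeather": [["Blue"]],
    "RedBirdFeather": [["Red"]],
}

def expand(symbol):
    """Every terminal string derivable from `symbol`."""
    if symbol not in GRAMMAR:  # terminal
        return [[symbol]]
    results = []
    for alternative in GRAMMAR[symbol]:
        strings = [[]]  # expand each part and concatenate all combinations
        for part in alternative:
            strings = [s + tail for s in strings for tail in expand(part)]
        results.extend(strings)
    return results

print(expand("Bird"))  # [['Blue', 'Blue', 'Blue'], ['Red', 'Red', 'Red']]
```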
But what if we find three feathers of different colors? And
what if, for example, the blue birds are typically not entirely
blue and the red birds not entirely red? That leads us to
introduce a new kind of grammar that allows for probabilities,
nowadays called stochastic grammars [10].
Here are the rules of the stochastic grammar at hand:
(1) Bird => .5 BlueBird
(2) Bird => .5 RedBird
(3) BlueBird => BlueBirdFeather*3
(4) RedBird => RedBirdFeather*3
(5) BlueBirdFeather => .8 Blue
(6) BlueBirdFeather => .2 Red
(7) RedBirdFeather => .3 Blue
(8) RedBirdFeather => .7 Red

This grammar is far more detailed. Rules (1) and (2) say
that there are about the same number of blue and red birds.
Rules (3) and (4) say that those birds typically have three
feathers. Rules (5) and (6) say that the feathers of a blue bird
are blue in 80% of cases and red otherwise. Rules (7) and (8)
say that the feathers of a red bird are red in 70% of cases and
blue in 30%. So now, instead of just two possibilities, we have
eight. For each possibility, the grammar can derive from the
rules a top probability:
(i) Blue Blue Blue (BlueBird: .95)
(ii) Blue Blue Red (BlueBird: .67)
(iii) Blue Red Blue (BlueBird: .67)
(iv) Blue Red Red (RedBird: .82)
(v) Red Blue Blue (BlueBird: .67)
(vi) Red Blue Red (RedBird: .82)
(vii) Red Red Blue (RedBird: .82)
(viii) Red Red Red (RedBird: .98)

For example, if the feathers found are all blue, which is case
(i), the bird is most likely a blue bird (95%), since the
probability of a red bird having only blue feathers is low
indeed. If all the feathers are red, which is case (viii), the
probability is even stronger that it is a red bird (98%), because
the probability of a blue bird having a red feather is lower than
the probability of a red bird having a blue feather, so red is
less likely to be found on blue birds. All intermediate cases
can be similarly computed.
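Those eight posterior probabilities follow from Bayes' rule and can be checked in a few lines of Python (a sketch of the arithmetic only; the actual system derives them through the grammar machinery described below):

```python
# Posterior probability of each bird given three observed feathers,
# using the feather probabilities of rules (5)-(8) and the equal bird
# priors of rules (1)-(2).
FEATHER_PROB = {
    "BlueBird": {"Blue": 0.8, "Red": 0.2},
    "RedBird": {"Blue": 0.3, "Red": 0.7},
}

def posterior(feathers):
    """Return {bird: P(bird | feathers)} by Bayes' rule."""
    likelihood = {}
    for bird, probs in FEATHER_PROB.items():
        p = 0.5  # prior: about as many blue birds as red birds
        for feather in feathers:
            p *= probs[feather]
        likelihood[bird] = p
    total = sum(likelihood.values())
    return {bird: p / total for bird, p in likelihood.items()}

print(round(posterior(["Blue", "Blue", "Blue"])["BlueBird"], 2))  # 0.95
print(round(posterior(["Blue", "Red", "Red"])["RedBird"], 2))     # 0.82
```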
The computations are essentially matrix manipulations. In
short, first we list all the possible states of the grammar:
(1) BlueBird BlueBirdFeather Blue
(2) BlueBird BlueBirdFeather Red
(3) RedBird RedBirdFeather Blue
(4) RedBird RedBirdFeather Red

That just says that feathers can be blue or red, and can
belong to either a blue bird or a red bird. The matrix showing
this has coefficients expressing that, considering a blue feather
in the absence of context, I would not know whether it is the
feather of a blue bird or of a red bird, so I would make it a toss
between the two, and same for a red feather:

                                     Blue feather found   Red feather found
(1) BlueBird BlueBirdFeather Blue           .5                    0
(2) BlueBird BlueBirdFeather Red             0                   .5
(3) RedBird RedBirdFeather Blue             .5                    0
(4) RedBird RedBirdFeather Red               0                   .5

Now, in the context of looking for blue and red birds and
finding several feathers, I can examine each feather in turn. I
want to know, starting from the beginning, how much I learn
each time I find a new feather, until I have reviewed all the
feathers. We express this by means of a transition matrix
(formally, a hidden Markov model generated from the
stochastic grammar). We add the states Start and Finish to the
matrix, in order to get a complete representation of the
situation, from beginning to end:

                                     (1)   (2)   (3)   (4)   Finish
Start                                .4    .1    .15   .35     0
(1) BlueBird BlueBirdFeather Blue    .57   .14    0     0     .28
(2) BlueBird BlueBirdFeather Red     .57   .14    0     0     .28
(3) RedBird RedBirdFeather Blue       0     0    .21   .50    .28
(4) RedBird RedBirdFeather Red        0     0    .21   .50    .28

The rows add up to 1 if we discount the fact that here we
have rounded each number for presentation purposes. For
those who wonder about the .28 in the Finish column, it
expresses that the grammar also handles the cases in which we
find fewer or more than three feathers (I alluded to that earlier
in this paper when explaining the role of the "*" symbol), but
that is too detailed for this discussion. Well, now we are ready
to process data. The following matrix shows how the
processing progresses for the input "Blue Red Red":

                                     1st feather    2nd feather    3rd feather
                                     found is blue  found is red   found is red
(1) BlueBird BlueBirdFeather Blue        .73             0              0
(2) BlueBird BlueBirdFeather Red          0             .43            .18
(3) RedBird RedBirdFeather Blue          .27             0              0
(4) RedBird RedBirdFeather Red            0             .57            .82

The stochastic grammar processing assigns to each state, for
each sample, a probability. Then it decides which is the most
likely state based on those probabilities; that is how we assign
the probability 82% that the bird is red when the three feathers
found are "Blue Red Red," as that is the highest value found in
the third numerical column. For the mathematically inclined,
here is how we move from sample 1 (1st feather found is blue)
to sample 2 (2nd feather found is red):

a. Consider the state vector after sample 1, which is (.73, 0, .27, 0).
b. Multiply the state vector by the transition matrix to find the
prior state vector: (.73*.57, .73*.14, .27*.21, .27*.5) = (.4161,
.1022, .0567, .1350).
c. Find from the input matrix the possible states based on red
input: (0, .5, 0, .5).
d. Hadamard multiply the possible states by the priors (Bayesian
rule) to find the new states: (0, .0511, 0, .0675).
e. Normalize to 1 to get the new state vector: (0, .43, 0, .57).
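The whole forward pass can be reproduced in a few lines of Python. The transition probabilities below are my reconstruction: I assume a per-feather continuation probability of .72 (1 minus the .28 of the Finish column), split .8/.2 for the blue bird and .3/.7 for the red one; these round to the matrix values quoted in the text.

```python
# Forward (filtering) pass of the hidden Markov model for the input
# Blue Red Red. States: (1) BlueBird/Blue feather, (2) BlueBird/Red,
# (3) RedBird/Blue, (4) RedBird/Red.

START = [0.4, 0.1, 0.15, 0.35]           # .5 bird prior x feather probability
CONT = 0.72                               # assumed: 1 - .28 (Finish column)
TRANS = [
    [CONT * 0.8, CONT * 0.2, 0.0, 0.0],   # from BlueBird states (.576, .144)
    [CONT * 0.8, CONT * 0.2, 0.0, 0.0],
    [0.0, 0.0, CONT * 0.3, CONT * 0.7],   # from RedBird states (.216, .504)
    [0.0, 0.0, CONT * 0.3, CONT * 0.7],
]
EMIT = {"Blue": [0.5, 0.0, 0.5, 0.0], "Red": [0.0, 0.5, 0.0, 0.5]}

def forward(feathers):
    """Normalized state vector after each observed feather."""
    vectors, v = [], START
    for n, feather in enumerate(feathers):
        # prior: propagate the previous vector through the transition matrix
        prior = v if n == 0 else [
            sum(v[i] * TRANS[i][j] for i in range(4)) for j in range(4)
        ]
        # Hadamard product with the input matrix, then normalize (Bayes rule)
        unnormalized = [p * e for p, e in zip(prior, EMIT[feather])]
        total = sum(unnormalized)
        v = [p / total for p in unnormalized]
        vectors.append(v)
    return vectors

for v in forward(["Blue", "Red", "Red"]):
    print([round(x, 2) for x in v])
# [0.73, 0.0, 0.27, 0.0]
# [0.0, 0.43, 0.0, 0.57]
# [0.0, 0.18, 0.0, 0.82]
```

The 82% in the last vector is the probability assigned to the red bird once all three feathers have been seen.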

Well, enough of birds and feathers. It takes a little more
than that to build a stochastic grammar and a processing
system capable of managing drilling and producing results
such as those of Figure 11. At the end of 2010, the grammar of
Drilling States had 68 rules leading to 167 states processed in
parallel for each of the hundreds of thousands of samples of
typical jobs, with the processing lasting no more than a few
hours on a Dell Latitude E6400 laptop for the biggest jobs to
date. The Drilling States stochastic grammar is far more
involved than can be fully explained here, but the details can
be found in the two patents that Harry Barrow and I filed on
the subject [11] [12]. For the time being, I will content myself
with providing a snapshot of the stochastic grammar of Drilling
States (Figure 13).


Figure 13: Snapshot of the current Drilling States grammar.

In Figure 13, one of the elements I have not talked about is
the Confusion window at the bottom right of the picture. Even
though the driller, the drilling engineering software, and
Drilling States itself provide the best input to the stochastic
grammar processing, it is impossible to trust that the input is
devoid of errors of various forms. We call this confusion, and
it is one of the parameters entered into the processing. I hope
you'll appreciate that even confusion has been automated to
lessen the user's pain.

XII. FUTURE WORK

My purpose in this article was two-fold. First, I wanted to
show that human-centered oilfield automation could now be
tackled with a new array of technologies that were unavailable
when the industry first tried it in the 1970s. At that time, both
the practice and the theory of computer science were in their
infancy. More powerful computers and mathematical progress
on several fronts allow advances such as the one presented
here, which were unimaginable then. More will come with
new knowledge in robotics, control, computers, networks,
mathematics, and artificial intelligence.

Second, since I have presented stochastic grammars, I
would like to mention that this is but the beginning of
neurocomputation (i.e., computation based on an
understanding of neural processes) in the oilfield. Stochastic
grammars, originally developed to understand natural
language, reflect the probabilistic nature of synaptic and
neuronal computations [13]. While the 68 rules of Drilling
States are perhaps a weak approximation of the same number
of neurons, there are still about 10 billion more neurons and
100 trillion synapses in the mind of the driller, so a lot is still
to be done!

Drilling automation is one of the most difficult, but also one
of the most rewarding, challenges of human-centered oilfield
automation today. That we are only at the very beginning of
answering this challenge speaks to the enthusiasm that we
should all feel for the technology to come to our industry.

ACKNOWLEDGEMENT

The need for Drilling States was first expressed by Richard
Meehan, of Schlumberger Drilling and Measurements. Harry
Barrow, Fellow of the Association for the Advancement of
Artificial Intelligence and retired from Schlumberger
Cambridge Research, proposed using stochastic grammars for
the task and implemented the first version. I was put in contact
with Harry by Walt Aldred and Mike Sheppard of
Schlumberger Cambridge Research as I was working on
upstream automation, and I switched to building a complete
system of automation with Harry's work at the core. I was
then helped by Jonathan Dunlop and Maurice Ringer of
Schlumberger Cambridge Research, James Brannigan and
Blaine Dow of Schlumberger Drilling and Measurements,
Jacques Bourque and Chris Luppens of Schlumberger
Integrated Project Management, and Tom Zimmerman of
Schlumberger Ltd. Harry Barrow is the genius behind the
story. Walt Aldred and Mike Sheppard put it in

context through numerous discussions that we transformed
into internal and external presentations, which I used
extensively for the introductory part of this article. LeAnn
Rushing of Schlumberger Drilling and Measurements and Eric
Schoen of Schlumberger Information Solutions thoroughly
reviewed this paper, which is much better for it. Opinions and
errors are strictly mine. Data samples are from Schlumberger
Integrated Project Management and Schlumberger Drilling
and Measurements.

REFERENCES

[1] T. B. Sheridan, Humans and Automation: System Design and Research Issues, New York: John Wiley & Sons, 2002.
[2] S. Holditch, personal communication, October 5, 2009 and October 19, 2010.
[3] E. van Oort, E. Taylor, G. Thonhauser, and E. Maidla, "Real-time rig-activity detection helps identify and minimize invisible lost time," World Oil, Vol. 229, No. 4, April 2008.
[4] J. Dunlop, W. Lesso, W. Aldred, R. Meehan, M. R. Orton, and W. J. Fitzgerald, "System and method for rig state detection," PCT/GB2003/005586, December 22, 2003.
[5] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, Vol. 77, No. 2, February 1989.
[6] C. R. Chia, H. Laastad, A. Kostin, F. Hjortland, and G. Bordakov, "A new method for improving LWD logging depth," Proceedings - SPE Annual Technical Conference and Exhibition, 2, Society of Petroleum Engineers, 2006, pp. 1184-1192.
[7] M. A. Mian, Petroleum Engineering Handbook for the Practicing Engineer, Volume 2, Chapter 9: Production Technology, PennWell Books, 1992.
[8] S. M. Sherman and R. W. Guillery, "The role of the thalamus in the flow of information to the cortex," Philos. Trans. R. Soc. Lond. B Biol. Sci., Vol. 357, No. 1428, December 29, 2002.
[9] N. Chomsky, Syntactic Structures, The Hague/Paris: Mouton, 1957.
[10] T. Booth, "Probabilistic representation of formal languages," IEEE Conference Record of the 1969 Tenth Annual Symposium on Switching and Automata Theory, IEEE Computer Society Press, pp. 74-81, 1969.
[11] H. Barrow and B. du Castel, "System and method for determining drilling activity," PCT/IB2009/006346, July 23, 2009.
[12] B. du Castel and H. Barrow, "System and method for automating exploration and production of subterranean resources," PCT/UB2009/006350, July 23, 2009.
[13] T. Branco and K. Staras, "The probability of neurotransmitter release: variability and feedback control at single synapses," Nature Reviews Neuroscience, Vol. 10, pp. 373-383, May 2009.

Bertrand du Castel is a Schlumberger Fellow. Past director and
vice-chairman of the Petroleum Open Software Consortium (now
Energistics) and other organizations, Bertrand was awarded in 2005
the visionary award of Card Technology magazine for pioneering the
Java Card, by 2007 the most sold computer in the world. He is the
author, with Timothy M. Jurgensen, of Computer Theology (Midori
Press, 2008) and of publications in artificial intelligence, computer
security, linguistics, logic, software engineering, and oilfield and
geothermal technology. Bertrand has a PhD in Computer Science
from the University of Paris and an Engineer Diploma from École
Polytechnique, France. He joined Schlumberger in 1978 and is based
in Sugar Land, Texas.
