Making espresso, modeling reservoirs

Neil McNaughton returned from the Denver SPE ACTE with a new buzz—the use of refinery techniques to model and optimize oilfield production. He speculates on how this new approach will impact upstream modeling, and suggests that upstream modelers could smarten up their act with better visibility of input assumptions and by applying more science to the model-building process.

If you like coffee and are of an obsessive nature, I strongly recommend that you read no further. My new hobby is espresso, and I don’t mean popping into Starbucks occasionally. I mean using a vast range of appliances at home in a never-ending quest for the holy grail—the ‘perfect shot’. After my recent trip to the Denver SPE ACTE—where optimization is all the rage—it struck me that this was exactly what I was doing in my search for coffee nirvana.

Green beans

To give you an idea of the extent of my malady, let me walk you through the process. First, get your beans—‘variable’ number one. These have to be green—un-roasted—bought from obscure boutiques on the net after selecting a variety, blend and origin. Next, roast ’em—another variable, coupled to the first. The degree of roast must be tuned both to the type of coffee and to the desired result.

Hard grind

When the roast has cooled, the grind setting comes into play. Again, this is a variable that needs incessant tweaking—to match the upstream choices (beans and roast) and the downstream, desired result. A few more hurdles remain before the tasting—the tamp of the coffee in the espresso brew head and the timing of the pour. Some really serious home baristas add a PID controller to allow super-fine regulation of brew temperature.

Start-over

You can then savor your shot—or more likely throw it away if you are a beginner—and start over, tweaking one variable or another in the never-ending search for perfection. As you can imagine, with so many variables to play with it is frequently hard to know which ones to tweak to fix a particular deficiency. After a shot or two it can be hard to remember what you tweaked last time.

ACTE

Sitting through a demo of a modeling tool at the SPE ACTE I was reminded of my coffee quest. Reservoir modeling, like espresso making, is an optimization process in which multiple variables and scenarios are adjusted to achieve a desired result.

Forms, forms

What struck me at the SPE ACTE, as the nth data entry form popped up and the mth menu dropped down, was an undue emphasis on visualizing model results. Most of the input assumptions are hidden from subsequent users of the model. Maybe there is room for better visualization and management of input tweaks. This would facilitate ‘assumption-mining’ rather than data mining, although tools such as those on display from SpotFire would seem well suited to the exercise.

Virtual hyperspace

We have a ‘knowledge management’ issue here. By making all input accessible in some virtual hyperspace, model evaluators would get used to the ‘shapes’ that tried and tested parameter selections represent—and could quickly spot anomalous and unlikely choices. Perhaps future managers will massage the input with haptic devices—or maybe I had too much coffee this morning. Whatever.

Aspen

After Denver we were invited to attend the AspenTech user group meeting right here in Paris. More on this—and on the SPE—later. But this meeting tied up a few loose ends in our quest for illumination in simulation and optimization—the big buzz at the SPE. There is a headlong dash in the industry to implement the ‘e-field’, with technology transfer underway from refining to the upstream and an inevitable culture clash.

Culture clash

The first thing we had to grasp is the huge difference between process modeling at the refinery and modeling the reservoir. Refiners model processes they understand. Any discrepancy between the model and the facts can be fixed by consulting the plethora of measurements made on the process itself.

Poor relation

Reservoir modeling is the poor relation of process modeling. Here parameters are tweaked so that the model fits a limited data set. ‘History matching’ is widely used to establish the model, and here I bring you an interesting aside from a book review that appeared recently in EOS*. Reviewing Hugh Gauch’s book, Scientific Method in Practice**, Gerard Fryer of the University of Hawaii writes that Gauch ‘correctly argues that a model cannot be judged from its performance in predicting the data that were used to fit it in the first place’.

Objective?

That pretty well destroys most ‘history matching’ in one fell swoop. But according to Fryer, there are ways to judge model performance objectively—‘Statistical efficiency, Akaike Information Criterion and Schwarz’s Bayesian Criterion’ to name-drop but three. I’m not sure how widely these objective criteria are applied to oilfield models. Suddenly though, with sim opt, culture clash, data mining and statistical certainty (?) all looming on the horizon, reservoir management looks as though it is undergoing something of a renaissance. Like I said—more on these fascinating developments next month.
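For readers who want a feel for what these criteria trade off, here is a minimal sketch in Python. It assumes a Gaussian error model, so the log-likelihood reduces to a function of the residual sum of squares; the aic_bic helper and the stand-in residuals are invented for illustration and are not taken from any oilfield model.

import numpy as np

def aic_bic(residuals, n_params):
    """Return (AIC, BIC) for a least-squares fit with Gaussian errors."""
    res = np.asarray(residuals)
    n = len(res)
    rss = np.sum(res ** 2)
    # Maximized Gaussian log-likelihood, up to an additive constant
    log_l = -0.5 * n * np.log(rss / n)
    aic = 2 * n_params - 2 * log_l                 # Akaike Information Criterion
    bic = n_params * np.log(n) - 2 * log_l         # Schwarz's Bayesian Criterion
    return aic, bic

# Hypothetical usage: compare two history-matched models on the same data.
# Lower scores win; extra tuning parameters have to 'pay their way'.
rng = np.random.default_rng(0)
res_simple = rng.normal(0.0, 1.00, 100)    # stand-in residuals from a 3-parameter model
res_complex = rng.normal(0.0, 0.95, 100)   # stand-in residuals from a 12-parameter model
print(aic_bic(res_simple, 3))
print(aic_bic(res_complex, 12))

The point is simply that a marginally better fit does not justify a dozen extra knobs: the penalty terms make the comparison between models explicit rather than a matter of taste.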

PID?

By the way, some of you may be wondering what a ‘PID controller’ is. My researchers tell me that it is a ‘Proportional, Integral, Derivative’ device. Not sure what that means—but they are all over refineries!
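For what it’s worth, here is a minimal sketch in Python of the idea: a discrete-time proportional-integral-derivative update that nudges heater power toward a temperature setpoint. The gains, temperatures and the pid_step helper are invented for illustration; real espresso machines and refinery control systems are rather more sophisticated.

def pid_step(setpoint, measured, state, kp=2.0, ki=0.5, kd=1.0, dt=1.0):
    """One update of a Proportional-Integral-Derivative controller.

    state holds the running integral and the previous error.
    Returns (control_output, new_state), e.g. heater power for the boiler.
    """
    error = setpoint - measured
    integral = state["integral"] + error * dt          # accumulated past error
    derivative = (error - state["prev_error"]) / dt    # rate of change of error
    output = kp * error + ki * integral + kd * derivative
    return output, {"integral": integral, "prev_error": error}

# Hypothetical usage: hold the brew water at 93 degrees C.
state = {"integral": 0.0, "prev_error": 0.0}
power, state = pid_step(93.0, 88.5, state)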

*EOS, Transactions of the AGU, Vol. 84, No. 36, September 2003.

**Cambridge University Press. ISBN 0-521-01708-4.

This article originally appeared in Oil IT Journal 2003 Issue # 10.

