We were kindly supplied with a review copy of the latest edition of James Murtha’s book*, “Decisions involving uncertainty, an @Risk Tutorial for the Petroleum Industry.” First off, this is not really a book about uncertainty analysis. It is an oil industry-specific version of the @Risk manual. As such it does a credible job, walking the reader through forecasting, volumetrics, and porosity and saturation modeling with uncertainty (or perhaps that should be with clarity?). @Risk offers many bells and whistles, such as lognormal and triangular error distributions and user-assigned dependencies between variables.
All in all, a straightforward introduction to the subject of Monte Carlo analysis (MCA). Incidentally, for those who already have the book, there is little here that is new. In fact there is less in this edition than in previous ones: the Lotus 1-2-3 references have been eliminated. Otherwise the main changes concern the alignment of the text and illustrations with later versions of Excel and @Risk itself.
You may be wondering why this brief book review appears in this month’s editorial. The reason is that in chapter 1, Murtha asks “why do risk analysis?” and makes the case for stochastic (statistical, in other words) as opposed to deterministic analysis. It seems to me that in this simple introduction Murtha, like the whole risk analysis industry, overlooks an alternative way of analyzing risk. A much simpler way, in fact. Let me explain. Murtha offers the universal problem of computing volumetric reserves as a starting point for his argument.
Reserves (bbl) = A*h*R, where A is the area in acres, h is the net pay in feet (European readers will excuse the quaint units) and R is the recovery factor (bbl/acre ft.). The argument moves on quickly from the deterministic approach (which gives a single value), through the scenario approach (worst, most likely, best), to stochastics, the subject of the book. For those of you who have not come across MCA, the technique involves performing a large number of ‘simulations’ of the reserve calculation, using input values picked at random from within a probability distribution. It is the ‘casino’ aspect of the method that gives it its name.
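For the curious, the whole technique fits in a few lines of Python. This is a minimal sketch of an MCA run on the A*h*R equation; the triangular (worst, most likely, best) input ranges below are invented for illustration and are not taken from the book.

```python
import random
import statistics

random.seed(42)  # reproducible runs

def simulate_reserves(n=50_000):
    """Monte Carlo simulation of reserves = A * h * R.
    Input distributions are illustrative assumptions only."""
    samples = []
    for _ in range(n):
        A = random.triangular(800, 1500, 1000)   # area, acres (low, high, mode)
        h = random.triangular(40, 120, 80)       # net pay, feet
        R = random.triangular(100, 300, 200)     # recovery factor, bbl/acre-ft
        samples.append(A * h * R)
    samples.sort()
    return samples

s = simulate_reserves()
p10, p50, p90 = s[int(0.10 * len(s))], s[int(0.50 * len(s))], s[int(0.90 * len(s))]
print(f"P10 {p10:,.0f}  P50 {p50:,.0f}  P90 {p90:,.0f} bbl")
```

Each pass through the loop is one roll of the dice; the sorted cloud of results plays the role of the plotter’s cloud of crosses, from which the usual P10/P50/P90 summary falls out directly.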
MCA is above all great fun. I remember the first time I saw a colleague program an MCA simulation: we watched the old HP plotter whack, whack, whack the crosses onto the paper, building up a cloud of points showing the range of possible values of oil in place. MCA has the virtue of being simple to implement, while requiring considerable compute resources to run on all but the simplest models. A dream for the computer buff. But is it always necessary?
Certainly, oil companies today are extremely interested in computing uncertainty, but if you have a million-cell reservoir model and a few dozen input variables, a fully fledged MC simulation for every input would probably take longer than the field’s life-span to complete. While pondering Murtha, I remembered something from high-school days. Wasn’t there a way of doing the uncertainty calculation more deterministically? In the old days this was called “error analysis.” Well, I won’t get too far into this, the old memory isn’t what it was, but it involves combining absolute errors (in a “drunken walk”) for added terms and relative errors for multiplied terms. Error analysis says that if there is a 5% error (standard deviation) in each term of the A*h*R equation, the result carries at worst a 15% error, or around 8.7% if the errors are independent and add in quadrature. Easy!
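The old-school recipe can be written down in a handful of lines. For a pure product like A*h*R with independent inputs, first-order error propagation says the relative variances add, so three 5% terms give sqrt(3) × 5% ≈ 8.7% (summing them linearly, as a worst case, gives the 15% figure). A minimal sketch, with a quick Monte Carlo cross-check using normal errors as an illustrative assumption:

```python
import math
import random

# First-order error propagation for a product Y = A * h * R:
# for independent inputs, relative variances add:
#   (sigma_Y / Y)^2 ~= (sigma_A/A)^2 + (sigma_h/h)^2 + (sigma_R/R)^2
rel_errors = [0.05, 0.05, 0.05]  # 5% standard deviation on each term
rel_sigma = math.sqrt(sum(e ** 2 for e in rel_errors))
print(f"propagated relative error: {rel_sigma:.1%}")  # prints "8.7%"

# Monte Carlo cross-check: product of three unit-mean normals with 5% sd
random.seed(1)
vals = [random.gauss(1, 0.05) * random.gauss(1, 0.05) * random.gauss(1, 0.05)
        for _ in range(200_000)]
mean = sum(vals) / len(vals)
sd = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
print(f"Monte Carlo relative error: {sd / mean:.1%}")
```

No simulation loop is needed for the answer itself; the whole “analysis” is one square root, which is rather the point of the argument that follows.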
Why do the proponents of MCA not consider error analysis? Could it be because it is too simple? Because it deprives engineers of the pleasure they get from plowing through reams of MCA computations? Deprives them of their futzing, in fact.
In these days of empowerment, tools like MCA allow just about anyone to do science that is arguably outside their own “core competence”. In other industries, like drug testing, statistics are performed by... statisticians! Giving a tool like @Risk to a geologist, engineer or financial manager may not be the most straightforward way of analyzing risk. In fact, as a model’s complexity grows, the results will probably make for rather dubious science. Pulling triangular distributions out of thin air, and coupling and uncoupling variables at will, may be within reach of the software, but the result? It will probably be something of a drunken wander away from the truth, proving yet again the Churchillian epithet alluded to in the title.
Putting it another way, whatever they tell the shareholders, to launch a multi-billion dollar development offshore west Africa on the basis of a couple of exploration wells and a 3D seismic survey takes more balls and faith than statistics!
*Decisions involving uncertainty, an @Risk Tutorial for the Petroleum Industry, James A. Murtha, Palisade Corp., 2000. ISBN 1-893281-02-7.
This article originally appeared in Oil IT Journal 2000 Issue # 9.