Ask Any Question Any Time (AAQAT)

Oil IT Journal Editor, Neil McNaughton, reflects further on Wal-Mart's data management and compares it to the parlous state of the oil and gas industry's data effort. He suggests that, to be able to 'ask any question any time' (AAQAT), we need to abandon legacy, project-centric workflows and move to a new paradigm with the workstation as a 'window' on live data.

Wal-Mart CIO Nancy Stewart’s presentation to the 2006 PNEC Data Integration Conference (see the June 2006 editorial) has been turning around in my mind since then. As I hope you remember, Stewart described Wal-Mart as being ‘manic about data’. The mission of Wal-Mart’s Information Systems Division is to let its marketing specialists ‘ask any question any time’. My ruminations, as I said previously, have been along the lines of ‘why can’t upstream engineers and scientists ask any question any time?’ Well, I think that I have finally come up with the answer. But before revealing all, I’d like to look at why the ability to ‘ask any question any time’ (AAQAT) is important to our business.

Macroscopic reality check

Early in most people's careers there comes some kind of realization that your interpretation (of a structure's tectonics, a reservoir's size, or its recoverable reserves) is implausible. Some old timer, maybe your boss, looks over your shoulder, or lays into you in a partner meeting, to point out that your picture is 'out of line'. A billion-barrel field in a mature basin where the largest field to date is 50 mmbbl is unlikely.

Devil in detail

In the early days, a region's prospectivity was measured in tried and tested summary units like barrels per acre-foot, making it relatively easy to see when things were out of line. Today, the assumptions behind a reserve computation are manifold and may be rather resistant to review. How many Petrel dialog boxes can you be expected to check before you are comfortable with the end result? Just where do all these numbers come from? Are they guesses? There seems to be a general belief that a lot of guesses will somehow converge on the truth, something I find highly improbable.
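
To see what such a summary unit buys you, here is a minimal sketch of the old-style volumetric arithmetic in Python. Every input (area, net pay, porosity, water saturation, formation volume factor, recovery factor) is a hypothetical figure for illustration, not drawn from any real field.

    # Volumetric reserves estimate -- hypothetical inputs, for illustration only.
    # 7,758 barrels is the conversion for one acre-foot of rock volume.
    BBL_PER_ACRE_FT = 7758

    area_acres = 2000.0      # productive area (assumed)
    net_pay_ft = 50.0        # net pay thickness (assumed)
    porosity = 0.20          # fraction (assumed)
    water_saturation = 0.30  # fraction (assumed)
    oil_fvf = 1.2            # formation volume factor, rb/stb (assumed)
    recovery_factor = 0.25   # fraction of oil in place recovered (assumed)

    # Stock tank oil initially in place (STOIIP), in barrels
    stoiip = (BBL_PER_ACRE_FT * area_acres * net_pay_ft * porosity
              * (1.0 - water_saturation) / oil_fvf)
    recoverable = stoiip * recovery_factor
    yield_bbl_per_acre_ft = recoverable / (area_acres * net_pay_ft)

    print(f"STOIIP:      {stoiip / 1e6:.1f} mmbbl")
    print(f"Recoverable: {recoverable / 1e6:.1f} mmbbl")
    print(f"Yield:       {yield_bbl_per_acre_ft:.0f} bbl per acre-foot")

With these made-up inputs the answer comes out at around 226 bbl per acre-foot, a single figure a reviewer can check against basin experience at a glance, which is exactly the macroscopic check that a stack of dialog boxes obscures.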

Production Monitoring

A similar picture comes from the production monitoring space. Each action, whether shutting in a well, controlling a downhole valve, or performing a workover, used to be decided on a macroscopic analysis of a few parameters like water cut. Today, with real time data, decisions may depend on analysis of an increasingly large amount of information streaming in from the field. Drinking from the data 'fire hose' may make it harder to understand what's actually happening.
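
For a feel of what drinking from the fire hose involves, here is a minimal sketch of a rolling-average watch on a stream of water cut readings. The simulated feed, the 60-reading window and the 0.45 alarm level are all assumptions for illustration, not field-calibrated values.

    # Rolling screen over a stream of water cut readings -- illustrative only.
    from collections import deque
    from random import gauss

    def water_cut_stream(n=500):
        """Hypothetical feed: water cut creeping upwards, plus sensor noise."""
        for i in range(n):
            yield min(1.0, max(0.0, 0.30 + 0.0005 * i + gauss(0, 0.02)))

    WINDOW = 60        # readings to average over (assumed)
    THRESHOLD = 0.45   # alarm level (assumed)

    window = deque(maxlen=WINDOW)
    for minute, wc in enumerate(water_cut_stream()):
        window.append(wc)
        if len(window) == WINDOW and sum(window) / WINDOW > THRESHOLD:
            print(f"minute {minute}: rolling water cut above {THRESHOLD}")
            break

Even a screen this crude needs the whole stream to hand; multiply it by hundreds of tags per field and the case for analyzing, rather than eyeballing, the flow makes itself.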

Big picture

The question in both these scenarios (and in many others) is how do you get the most out of larger and larger data sets? How can you see the wood for the trees? One approach, a novelty in the production context, is the use of data mining to derive likely parameter values from the data itself. But to do this, you need to have access to the data, preferably all the data. And this is where the industry is really lagging behind Wal-Mart.
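
As a sketch of what deriving parameters from the data itself can mean, the following fits Arps hyperbolic decline parameters to a production history by least squares. The history here is synthetic and the starting guesses and bounds are assumptions; with real data, the technique only works if the history is accessible in the first place.

    # Fit Arps hyperbolic decline parameters from rate data -- synthetic sketch.
    import numpy as np
    from scipy.optimize import curve_fit

    def arps(t, qi, di, b):
        """Arps hyperbolic decline: q(t) = qi / (1 + b*di*t)**(1/b)."""
        return qi / (1.0 + b * di * t) ** (1.0 / b)

    t = np.arange(120.0)  # months of history
    rng = np.random.default_rng(0)
    # 'Measured' rates: a known decline plus 3% multiplicative noise
    q = arps(t, qi=1000.0, di=0.05, b=0.8) * rng.normal(1.0, 0.03, t.size)

    # Let the data, not a dialog box, supply the decline parameters
    (qi, di, b), _ = curve_fit(arps, t, q, p0=[800.0, 0.1, 0.5],
                               bounds=([1.0, 1e-4, 0.01], [1e5, 1.0, 2.0]))
    print(f"fitted qi={qi:.0f} bbl/d, Di={di:.3f}/month, b={b:.2f}")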

AAQAT

In both the upstream and the production monitoring context, we can't AAQAT because of what is increasingly looking like a major anachronism: the Project! Let me explain with an example from the web. You have just fired up Google Earth and are looking at Manhattan. Suddenly you fancy taking a trip over to Jersey for a virtual visit to Tony Soprano's birthplace. Imagine your surprise if, as you panned across the Hudson, Google Earth said: 'Sorry, you can't go there. We will have to take time out to build a Jersey "Project" from our database. This will be ready some time next week!'

Ridiculous

This is, of course, ridiculous, but it is exactly what the upstream has gotten used to. While you are interpreting one field, you are 'blind' to data in the adjoining area. But this is just one example of how our work practices have conditioned us to working with, at worst, guesses and, at best, subsets of the available data. Seismic processing parameters, geostatistical variogram selections, decline parameters, process control set points: are these always based on all the evidence?

Subset

History has put us in the position of managing a minimal subset of data, rather than taking an approach which says, 'let's capture everything and make it accessible to our knowledge workers'. Most data strategies involve successive filtering of data into subsets, divided along work practice-based demarcation lines: 'Field Data,' 'Processed Data,' 'Workstation Data,' 'Corporate Data.'

Hand to mouth

Vendors and consultants have come along and elaborated data 'strategies' that are retrofitted to mask what is really firefighting, living from hand to mouth. This is evidenced in publications which cut the data cake up into 'low value' data (presumably to be handled by 'low value' data managers!), through higher value 'knowledge' for the engineers, and ultimately 'wisdom' for the bosses.

Edifice

This edifice has grown up to hide our inability to provide answers based on all available data, because we are still living with so many antiquated demarcation lines. The data, knowledge, wisdom spectrum and the data filtering and loading gymnastics it involves are really just lipstick on the pig.

Evidence-based

Data mining, evidence-based reasoning, is surely the way forward. But to mine data, you have to have access to it! To do this we need to lose the Project and move to data infrastructures designed for AAQAT, with interfaces that, like Google Earth, are dynamic windows onto the big picture. How are we going to do this? Unfortunately, lack of space precludes my answering that question!
