Editorial - Data Management Conferences Proliferate. (October 1996)

PDM's editor, Neil McNaughton, surveys the state of play of data management conferences and offers a personal view of data management tools.

In September, in the space of a week, there were 4 conferences covering E&P data and information management. The PESGB held their Geoscience Data Management Seminar simultaneously with PSTI's Knowledge Working in the Oil and Gas Industry. A couple of days later, Stephenson and Associates followed with their two-day E&P Data Management '96 event, back to back with their E&P Data Model Seminar.

scary

I find this sudden interest in data management conferencing a bit scary for two reasons. Firstly, last year there were no conferences; this year, if we include the US, there have been a total of 7 to date, which to my mind makes the data management scene look like a bubble about to burst. Secondly, you would imagine that with this level of activity just about everything that could be said on data management would have been said, rather like the monkeys playing away on their typewriters and coming up with Shakespeare's sonnets. Well, the bubble may burst. I don't think that 7-plus conferences a year is sustainable, but we have certainly neither heard the last word on data management, nor for that matter heard an awful lot about real-world solutions to the problems that beset the upstream sector. We are, however, getting down to describing our problems in detail, an essential first step to understanding and then hopefully solving them.

all fall down

But as for solutions, Barrie Wells (XRG) describes the state of the software industry today as the equivalent of architecture in the 13th century. In those days people knew how to build cathedrals, but they often fell down several times before they got them just right! Today many of the "solutions" to data management are yesterday's applications dressed up with some new jargon. On the other hand, there is a groundswell of opinion that yes, it would be a good idea to devote resources to data management, that yes, we should all try to use the same names for wells and seismic lines, and that yes, we have made mistakes in the past and should learn from them.

pyramids

Many speakers have attempted to classify the various levels within the IT/data hierarchy, and have usually come up with a pyramid-like structure (do pyramids abound because everyone uses the same clip art, or do they have some mystic significance?). These can be used to represent just about anything, from the data-information-knowledge-wisdom spectrum to the n-tiered architecture of a distributed database. All these representations have their place, but I would like to offer my own. Because we are still not up to speed with our graphic printing, I'll forgo the pyramid and present this as a table.

A data-centric view of data management

Level | Management tools | Data | Data types | O/S habitat
Executive Information | Office automation, personal productivity, Notes, full text, document management systems | Reports, memos, composite graphics, maps, sections | .doc, .xls, TIFF, CGM etc. | PC-Intel-Windows
Vertical Applications | Workstation applications | Project data, logs, seismics, maps etc. | SEG, LIS, proprietary binaries, data models | UNIX
Data Stores | Data delivery systems | Raw field data, seismic and well logs | SEG, LIS, legacy native formats | UNIX, mainframe

Please note that this is not how things should be, it is just a representation of the way things are. Many of our problems stem from the difficulty of transferring data vertically through this matrix; others stem from the fact that our brains and the information we manipulate most definitely do not fall into this type of categorization. If the exploration manager of your company is a geochemist, he will probably, like all his peers, use an Excel spreadsheet to evaluate the likely return on investment from a variety of scenarios. But unlike his peers, he will have a very special interest in the maturity of source rocks in the vicinity of the prospect. A geophysically-bent (and they can be...) manager may well check the interpreter's pick over the prospect, and query how statics were applied and what migration velocities were used. Geology- and reservoir-engineering-based bosses will likewise have their own foibles.

non-competitive?

The point of this is that top-level decision making in a company does not draw only on the top level of the data matrix, and attempts to separate "low value", non-competitive "raw data" from the topmost level of a company's thinking are probably doomed to failure because of this. Data, especially in E&P, does not separate easily into hierarchies, and the evaluation of data's worth does so even less. The next mega play in a basin could just come from the realization that migration paths were longer or different from those previously assumed, or, as I have seen twice in my brief career, that the predominant dip below the unconformity was counter-regional, and not down to basin, as the multiple-plagued seismics would have it.

pernicious

Our present technology for interpretation can have a pernicious effect on our ability to jump around the data matrix. We used to have well log and seismic section headers which allowed for verification of processing parameters on the fly, and which avoided many pitfalls due to processing artifacts etc. On the other hand, a staff member may come back from a data room or scout meeting with a piece of highly important, but more or less unclassifiable, information that refuses to fit into the database, such as the fact that "a nearby well produced oil at a great rate".

Barrie Wells spoke of the particular difficulty encountered in populating the rich data structures of Epicentre, where it seems that wherever you start, there is always some data you needed to enter before that point to ensure referential integrity. Wells also spoke of the need, in some circumstances, to record the fact that no data was recorded, citing the case of a missing core plug (why was it not taken?) as a problem handled by the now defunct Spooler project, an attempt to standardize core descriptions which catered for just this eventuality. Such richness is absent from many of our data stores today. In general, it is unlikely that the heavy-duty data model will ever match the domain-specific detail available within a more focused commercial product. Isobel Emslie (Conoco), again at the Stephenson conference, cited PetroVision as a particularly rich tool for handling scouting information and entitlements - thereby hangs a tale, see our article.
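To make the referential integrity point concrete, here is a minimal sketch in Python with SQLite, using hypothetical well/log/plug tables rather than Epicentre's actual schema. It shows why load ordering matters (the parent row must exist before the child can be entered) and how "no data was recorded" can itself be stored as a fact.

    # Minimal sketch, not Epicentre: hypothetical tables illustrating load ordering
    # imposed by referential integrity, and recording the absence of data.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("PRAGMA foreign_keys = ON")  # enforce foreign keys in SQLite
    con.execute("CREATE TABLE well (well_id TEXT PRIMARY KEY, name TEXT)")
    con.execute("""CREATE TABLE log_curve (
        curve_id TEXT PRIMARY KEY,
        well_id  TEXT NOT NULL REFERENCES well(well_id),
        mnemonic TEXT)""")

    # Inserting the curve before its parent well is rejected: wherever you start,
    # there was always something you needed to enter first.
    try:
        con.execute("INSERT INTO log_curve VALUES ('c1', 'w1', 'GR')")
    except sqlite3.IntegrityError as e:
        print("rejected:", e)

    # Load the parent first, then the child row is accepted.
    con.execute("INSERT INTO well VALUES ('w1', '30/6-1')")
    con.execute("INSERT INTO log_curve VALUES ('c1', 'w1', 'GR')")

    # The 'record that nothing was recorded' case: an explicit row stating why a
    # core plug is absent, rather than silently missing data.
    con.execute("""CREATE TABLE core_plug (
        plug_id TEXT PRIMARY KEY,
        well_id TEXT NOT NULL REFERENCES well(well_id),
        taken   INTEGER NOT NULL,          -- 0 = not taken
        reason_not_taken TEXT)""")
    con.execute("INSERT INTO core_plug VALUES ('p1', 'w1', 0, 'unconsolidated sand')")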

transparent

What an E&P shop needs is transparent access to any and all of its data at the drop of a hat. The multi-tiered data management solutions on offer today are a long way from supplying this. The philosophy behind many of them, that data can be filtered and processed so that the next person in the data food chain only sees what he needs to see, is not how things have worked in the past, and it is not how things work today, when the members of an asset team all beaver away on data from all over the place. Similarly, the separation of tasks between "low value, non-competitive" data management and high added-value "wisdom-based" decision making is equally hard to justify. Good data management will give a company a supreme competitive advantage which will probably increase with time. The data itself will get harder to manage, the use of data will intensify with on-the-fly processing, and the return on today's investment in data management will be high indeed. Let's get cracking!


© Oil IT Journal - all rights reserved.