Report from the data management frontline

Oil IT Journal Editor Neil McNaughton reports from the 8th PNEC Data Integration Conference. A quick glance at the program might lead one to think that little has changed since the PNEC started in 1996. But things are changing as the supermajors put serious money into big data cleanup projects.

There are two kinds of data managers: those who are ‘just doing it’ and those who are still waiting for a silver bullet to ‘do it’ for them. This really is my take-home from the excellent PNEC Petroleum Data Integration* conference held in Houston this month. Our complete report on the 8th PNEC will appear in next month’s Oil IT Journal—and of course as part of The Data Room’s extended Technology Watch Report service. But I thought that you might like a preview in the form of some thoughts on where data management is today.

De Gaulle

I believe it was the good old Général De Gaulle who said ‘plus ça change, plus c’est la même chose’ (the more things change, the more they stay the same). Indeed it is easy for regular attendees at the PNEC, observing a certain sameness in the debates, to conclude that nothing has changed, that we are confronted with the same old problems of expanding data volumes, poorly applied rules and procedures for naming and capturing data, and lack of funding. A couple of years back, a variety of ‘solutions’ were suggested—usually combining outsourcing with re-engineering the workflow. Such solutions tended towards a ‘production line’ approach: ‘Taylorism’ applied to managing the upstream workflow.

Taylorism

Frederick Taylor—the original management guru—wrote his ‘Principles of Scientific Management’ in 1911. Taylor advocated** developing a ‘science’ for every job, including ‘rules, motion, standardized work implements, and proper working conditions’. With great prescience, Taylor also advised ‘selecting workers with the right abilities for the job, training them and offering proper incentives and support’.

Re-engineering

Such notions were central to industry for the best part of the last century—from Henry Ford’s production lines to W. Edwards Deming’s quality management and maybe even to our upstream workflow re-engineers. But the ‘production line’ approach implies a considerable degree of stability in work processes. There is no point in retooling and training everyone unless you are going to be manufacturing some product for a considerable time. Likewise, there is no point in establishing a set of data management procedures if your data sources are going to change—or if new technology is going to come along and change the way you work.

Evolving workflow

This is the problem of applying Taylorism to a moving target. And upstream ‘targets’ have shifted considerably in the last few years—with much more post-stack data online, horizontal wells with ghastly data management issues, multi-z and image logs—and with time-lapse and four-component data on the horizon. All of this is set against a backdrop of exponential growth in data volumes.

Red herring

An illuminating discussion followed Yogi Schultz’s talk at the PNEC, when Ian Morison of the Information Store questioned the notion that data volumes are the problem. Morison argued that if it were just a matter of increasing volumes, then our IT solutions would be more than capable of keeping up (thanks to Moore’s Law and growing disk capacity). Morison put his finger on what is undoubtedly the real issue in data management: the increasing complexity of upstream data and workflows.

Domain knowledge

Data complexity defies the Taylorist approach. If you are trying to collate GIS data from multiple coordinate reference systems, then you really need a good understanding of geodesy. You are also unlikely to apply exactly the same skill sets two days running. Modern logging tools defy the simple depth-value pair paradigm and require serious domain knowledge for their management. The management of multiple pre- and post-stack seismic datasets likewise requires a goodly degree of geophysical knowledge.
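
To give a flavour of why the GIS case needs geodetic know-how, here is a minimal sketch, using the open-source pyproj library (my own illustrative choice, not something presented at the PNEC), of how a single undocumented coordinate reference system assumption can shift a ‘well location’ by hundreds of kilometres.

```python
from pyproj import Transformer

# Hypothetical well-header position stored only as an easting/northing pair,
# with the coordinate reference system left undocumented.
easting, northing = 500000.0, 3100000.0

# Interpret the same numbers in two plausible CRSs:
# WGS84 / UTM zone 15N (EPSG:32615) and WGS84 / UTM zone 16N (EPSG:32616).
utm15_to_ll = Transformer.from_crs("EPSG:32615", "EPSG:4326", always_xy=True)
utm16_to_ll = Transformer.from_crs("EPSG:32616", "EPSG:4326", always_xy=True)

lon15, lat15 = utm15_to_ll.transform(easting, northing)
lon16, lat16 = utm16_to_ll.transform(easting, northing)

print(f"Read as UTM 15N: lon {lon15:.4f}, lat {lat15:.4f}")
print(f"Read as UTM 16N: lon {lon16:.4f}, lat {lat16:.4f}")
# The two readings differ by six degrees of longitude - hundreds of kilometres
# on the ground - from one undocumented assumption about the reference system.
```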

Mergers

But it can be done! What makes for good data management is a well-funded project, and here demonstrable progress is being made. Both ExxonMobil and ChevronTexaco presented major data clean-up projects at the PNEC. These centered on the merger of well data from ‘heritage’ companies and are great examples of what can be achieved when adequate resources are applied to such problems. The mergers have been a shot in the arm for data management. They appear to be succeeding where years of pontificating and theorizing have failed.

The point?

The cleanup of the majors’ heritage data sets is arguably the big driver in data management today. These projects are spinning off a new breed of software tools and contractor know-how as a new micro-industry is born. Above all, I think the majors’ approach shows that spending fairly substantial amounts of money on data clean-up is really part of the cost of doing business.

Fabric of management

As we map the processes developed for well header data across to the more complex parts of the workflow, the move away from Taylorism will be even more pronounced. We are no longer looking at a ‘sausage machine’ approach to data management—but at the incorporation of domain knowledge into the fabric of data management.

Just do it

It is the combined requirement for domain knowledge and grunt work that makes it hard to get traction for data management—but the majors are showing the way. So my advice to you all is—just do it!

* Petroleum Network Education Conferences—Philip C. Crouse & Associates.

** Source www.cornell.edu. Google ‘Taylorism’.

© Oil IT Journal - all rights reserved.