PNEC Data Integration 2005, Houston

Phil Crouse’s PNEC Data Integration Conference broke all records with around 250 registrants. Although the same topics preoccupy the data management community, there has been a shift from noting problems to fixing them. Underlying the sameness is a ‘phenomenal change.’ Data quality has become part of the workflow, with companies deploying what Shell refers to as ‘gold standard’ reference metadata. Outsourced seismic data management is a reality. And IT solutions are leveraging middleware, portals and W3C standards like XPath to link to disparate data sources.

Randy Petit (Shell) and Dale Blue (Landmark) reported on Shell’s subsurface data management environment. Shell’s New Orleans and Houston data has been migrated to a combination of in-house storage in Landmark’s corporate data store (CDS) and its outsourced solution, the PetroBank master data store (MDS). A combination of PowerExplorer and Landmark’s advanced data transfer (ADT) package is used for data browsing and OpenWorks project building. ADT leverages a corporate ‘gold standard’ of reference values. Merging data from multiple external sources involved reference data clean-up and data blending according to business rules. ADT uses XML to move data between applications. PowerExplorer and the CDS are now being rolled out in the EU by Shell International.
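By way of illustration, cleaning source values against a ‘gold standard’ of reference values might look like the following minimal Python sketch. The canonical values and aliases shown are invented for illustration, not Shell’s actual reference data.

```python
# Hypothetical 'gold standard' of canonical reference values, each with
# the aliases seen in external data sources during blending.
GOLD_STANDARD = {
    "SANDSTONE": {"SST", "SAND", "SANDSTONE"},
    "LIMESTONE": {"LST", "LIME", "LIMESTONE"},
}

# Invert to an alias -> canonical lookup for fast cleanup.
CANONICAL = {alias: canon for canon, aliases in GOLD_STANDARD.items()
             for alias in aliases}

def clean(value: str) -> str:
    """Replace a source value with its gold-standard equivalent."""
    try:
        return CANONICAL[value.strip().upper()]
    except KeyError:
        raise ValueError(f"Unrecognized reference value: {value!r}")

print(clean("sst"))   # SANDSTONE
print(clean("Lime"))  # LIMESTONE
```

Values that match no alias are rejected rather than loaded, which is the point of the business-rule approach: bad reference data is caught at the blending stage, not after it reaches a project database.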

Nexen

Pat Ryan described Calgary-based Nexen’s data management framework which supports its 280 ‘best of breed’ G&G and engineering applications. Nexen’s Data Application Separation Layer (DASL) leverages Tibco middleware to link well data objects with OpenWorks, GeoFrame and an in-house developed PPDM repository. This allows common UWI, header information and survey data to be shared between applications. The system provides audit triggers for data loading and manages coordinate reference systems and other reference data through ‘well defined business rules’. The model is being extended to pipeline and facilities data. DASL captures user ‘context’ at login. A ‘reference clearing house’ provides consistent metadata. ‘Discretionary access control’ is provided through Oracle Roles.

IHS Energy

Bart Torbert described IHS Energy’s use of an XML-based metadata schema – a ‘virtual unified data model’ to leverage legacy data models like IRIS21, PIDM, PPDM and a plethora of smaller databases. A high level logical model uses a ‘meta schema’ approach and the W3C XPath protocol to route requests to the appropriate data source. Applications work with ‘objects and ideas used,’ not database tables.
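To illustrate how XPath can route requests over a metadata document, here is a minimal Python sketch using the standard library’s limited XPath support. The schema layout, element names and source names are hypothetical, not IHS Energy’s actual meta schema.

```python
# Sketch of routing a logical object request to a physical data source
# via XPath over a 'meta schema' document. All names are illustrative.
import xml.etree.ElementTree as ET

META_SCHEMA = """
<metaschema>
  <object name="WellHeader">
    <source db="IRIS21" table="WELL_MASTER"/>
  </object>
  <object name="ProductionVolume">
    <source db="PIDM" table="PDEN_VOL"/>
  </object>
</metaschema>
"""

def route(object_name: str) -> str:
    """Resolve a logical object name to its physical source via XPath."""
    root = ET.fromstring(META_SCHEMA)
    src = root.find(f"./object[@name='{object_name}']/source")
    if src is None:
        raise KeyError(f"No source registered for {object_name!r}")
    return f"{src.get('db')}.{src.get('table')}"

print(route("WellHeader"))        # IRIS21.WELL_MASTER
print(route("ProductionVolume"))  # PIDM.PDEN_VOL
```

The application asks for ‘WellHeader’ and never sees the underlying table, which is the essence of the ‘objects, not database tables’ approach described above.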

Paras

Hamish Wilson reported on a Paras study which found ‘no correlation’ between data management/IT spend and business performance. Today’s data management is about low-level ‘drain cleaning.’ Wilson wants to get onto a ‘higher plane’ and see how data management could make a difference to the business. Reporting and compliance imply a direct link to technical data management. How does a company justify booked reserves for a particular field to the SEC? This is a ‘profound question’ that requires reservoir characterization and simulation results, seismic data etc. Interpretation results should be captured in real time, with versions tracked and kept up to date. Paras offers metrics to track performance including a ‘scorpion plot’ combining ERP and technical information.

A2D and Volant

David Hicks (A2D) and Scott Schneider (Volant Solutions) presented the ‘log data procurement challenge’ of squaring ease of availability with secure access control. Companies should enforce a ‘single point of procurement’. Volant’s ‘order once, route anywhere’ system deploys web services-based ‘resource adaptors’ (for OpenWorks, A2D and Recall). The system provides transformation services between LAS, WellLogML, SIF and RasterML. A business logic layer can be extended to integrate ‘best places’ for data storage.

Data quality

Mike Underwood updated last year’s presentation on ChevronTexaco’s data quality effort. Chevron’s workflow has been improved and automated. Data cleanup is performed with Innerlogix tools QCLogix, QCAnalyst and DataLogix. These can check data types including well metadata, deviation surveys, horizon picks, production data etc. QCLogix measures differences between objects in data stores in terms of consistency, non-null values, validity and uniqueness. QCAnalyst gives a map view of wells by activity, color-coded by percentage error. The system is currently being rolled out throughout North America following a successful GOM pilot. CRS consistency is also tested: in one case, data with the wrong CRS was about to be loaded into OpenWorks, but the system picked up the error from the metadata.
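The four quality measures mentioned above (non-null values, validity, uniqueness and cross-store consistency) can be sketched in a few lines of Python. The field names, records and thresholds below are invented for illustration and bear no relation to the Innerlogix implementation.

```python
# Hypothetical well-header records in one data store, plus a second
# store's total depths keyed by UWI, used to illustrate quality checks.
wells_store_a = [
    {"uwi": "42-501-20001", "lat": 31.9, "lon": -102.1, "td_ft": 9800},
    {"uwi": "42-501-20002", "lat": None, "lon": -102.3, "td_ft": 10200},
    {"uwi": "42-501-20002", "lat": 31.7, "lon": -102.3, "td_ft": 10200},
]
wells_store_b = {"42-501-20001": 9800, "42-501-20002": 10150}

def non_null(records, field):
    """Fraction of records with a populated value for `field`."""
    return sum(r[field] is not None for r in records) / len(records)

def valid_lat(records):
    """Fraction of populated latitudes that fall in a plausible range."""
    vals = [r["lat"] for r in records if r["lat"] is not None]
    return sum(-90 <= v <= 90 for v in vals) / len(vals)

def unique(records, field):
    """True if no duplicate values of `field` exist."""
    vals = [r[field] for r in records]
    return len(vals) == len(set(vals))

def consistent(records, reference, field):
    """UWIs whose `field` disagrees between the two stores."""
    return sorted({r["uwi"] for r in records
                   if r["uwi"] in reference and r[field] != reference[r["uwi"]]})

print(unique(wells_store_a, "uwi"))  # False (duplicate UWI)
print(consistent(wells_store_a, wells_store_b, "td_ft"))
```

Scored per object and aggregated per well, measures like these are what drive a color-coded map view of percentage error.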

Pemex

José Luis Figueroa Correa described Pemex’s @ditep corporate data portal, which has centralized regional E&P data into a single database. Oil production data is managed through a link between Pemex’s SNIP production database and Finder (Schlumberger is Pemex’s corporate-wide IM partner). Pemex has tidied up its document warehouse with barcodes and a new filing system, replacing the ‘anarchy’ of the past. The future will include ‘smart’ data loading services, data quality metrics, visualization and the development of better DM/IM skills within the company.

Panel Discussion

Madelyn Bell (ExxonMobil) noted relatively little change from year to year at the PNEC. But this hides a ‘phenomenal change.’ The focus has moved from data models and middleware (now mature) to the value and completeness of data. Exxon is making it easier to retrieve, refresh and reuse old studies. Bell queried why E&P companies are not more proactive in mandating data quality standards for vendors. Pat Ryan (Nexen) agreed that terminology is crucial. Document management is hard to link with geoscience systems, but the need now is to access G&G and corporate documents and contracts. Paul Haines (Kerr McGee) suggested information management was ‘more about people and process than about the data’. Kerr McGee encourages users to ask ‘WIIFM’, what’s in it for me? Kerr McGee’s unstructured information is being migrated to a semi-structured, indexed database. Haines advocates the ‘Goldilocks’ approach: ‘not too much, not too little, just the right amount’ of data and process. Katya Casey (BHP) emphasized that data preparation is ‘essential for good interpretation’. Companies understand the importance of metadata, but ‘vendors don’t supply it!’ A plea echoed from the floor, particularly for geodetic information, where a ‘standard data model/exchange format’ is needed.

Halliburton

David Zappa spoke to the thorny issue of knowledge management (KM) for Halliburton’s top brass. Zappa has been working on getting top management to ‘take the pulse’ of the organization, identify technology gaps, commercialize new products and even assist succession planning. Halliburton’s KM system uses Plumtree with links to other DMS, collaboration and project tools. KM offers managers a ‘window into operations,’ addressing recurring issues efficiently. Communities of practice are used to spot rising stars and ‘thought leaders’.

StoneBond

Pam Szabo outlined StoneBond’s Integration Integrity Process (IIP). The upstream typically deploys hundreds of applications with manifold dependencies, making it hard to manage change. IIP models complete information systems and their dependencies; when a change is proposed, the system reports its impact on interconnected applications. IIP relies on metadata and ‘inference technologies.’ One example showed the impact of changing well names in a production monitoring application as data is transferred to a production optimizer. Recursive algorithms are used to manage ‘trees’ of cascading impact.
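A recursive walk over a dependency graph is the core idea behind such cascading-impact trees. The following Python sketch is a minimal illustration; the application names and dependency graph are invented, and StoneBond’s actual inference technology is certainly richer.

```python
# Hypothetical map of which applications consume data from which,
# i.e. the downstream dependencies of each application.
DEPENDS_ON = {
    "production_monitor": ["production_optimizer"],
    "production_optimizer": ["reserves_report", "dashboard"],
    "reserves_report": [],
    "dashboard": [],
}

def impact(app, graph, seen=None):
    """Return all downstream applications affected by a change to `app`."""
    if seen is None:
        seen = set()
    for downstream in graph.get(app, []):
        if downstream not in seen:
            seen.add(downstream)
            impact(downstream, graph, seen)  # recurse down the tree
    return seen

# Changing well names in the production monitor cascades to:
print(sorted(impact("production_monitor", DEPENDS_ON)))
# ['dashboard', 'production_optimizer', 'reserves_report']
```

Tracking visited nodes (`seen`) keeps the recursion safe even if the dependency graph contains cycles, a common situation when applications exchange data in both directions.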

BHP Billiton

In designing BHP Billiton’s Petroleum Enterprise Portal, Katya Casey took inspiration from eBay and Amazon. The Portal embeds Schlumberger’s Decision Point, SAP Business Warehouse, Microsoft Exchange, Documentum and other tools. The Portal streamlines the workday, reduces clutter and offers effective access to information by aggregating content from multiple systems (without knowing passwords) and serving this to applications. ‘Everything I need today is right in front of me.’

Petris

Jeff Pferd (Petris) believes that we need knowledge ‘waypoints,’ a ‘semantic GPS’ to lead us to the meaning of information contained in documents, data, earth models etc. Petris is working on dictionary-based systems to bridge documents and structured data. Sources include the PIDX Petroleum Industry Data Dictionary, PPDM’s Glossary and the Schlumberger Oilfield Glossary. Books such as the Oxford Earth Sciences Dictionary help too. These have been leveraged to map a corporate core sample library’s terminology across to GeoFrame and OpenWorks.
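At its simplest, such dictionary-based terminology bridging is a lookup from a local term to each target system’s vocabulary. The Python sketch below illustrates the idea; every term and mapping in it is invented, not drawn from the actual glossaries cited above.

```python
# Hypothetical glossary mapping a core sample library's local terms
# onto target application vocabularies.
GLOSSARY = {
    "core gamma": {"GeoFrame": "GR_CORE", "OpenWorks": "CORE_GAMMA_RAY"},
    "perm (air)": {"GeoFrame": "KAIR", "OpenWorks": "PERM_AIR"},
}

def translate(term: str, target: str) -> str:
    """Map a source term to the target system's vocabulary, if known."""
    entry = GLOSSARY.get(term.lower())
    if entry is None or target not in entry:
        raise KeyError(f"No mapping for {term!r} into {target}")
    return entry[target]

print(translate("Core Gamma", "OpenWorks"))  # CORE_GAMMA_RAY
print(translate("perm (air)", "GeoFrame"))   # KAIR
```

The ‘semantic GPS’ idea is then to chain such lookups across multiple dictionaries so a term found in a document can be resolved to its counterpart in any connected data store.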

This article was taken from a longer report from the PNEC produced as part of The Data Room’s Technology Watch Report service. For information on subscribing to the Technology Watch Report service, please email or visit and follow the Technology Watch link.



© Oil IT Journal - all rights reserved.