ECIM 2012, Haugesund

NDB on application selection for Cairn. Shell’s enterprise architecture. Multi-domain data management for Statoil’s injection/disposal well monitoring and BP’s seabed survey database. ExxonMobil’s ESRI/VoyagerGIS spatial data framework. ConocoPhillips’ Petrel project management. WesternGeco’s seismic data management. ISO ‘semiotic framework’ for data. Shell’s data quality guru.

Cairn Energy was confronted with a common operator’s problem—how to select an appropriate application portfolio for its exploration unit. NDB’s Jonathan Jenkins described a novel process, ‘application speed dating,’ developed to circumvent ‘long-winded and partisan vendor presentations’ and to put Cairn in a position to choose, if not Mr. Right, at least Mr. Goodenough. An earlier attempt at application selection failed as testers got bored, scope crept and vendors provided ‘mini demos’ rather than tests. Data flow was overlooked and multiple Oracle instances were a nightmare.

Cairn started over with a more structured approach, pre-screening vendors with a checklist and establishing key workflows for testing. Even then the going was not easy—Jenkins described geoscientists as ‘artistes’ who ‘combine art and science in a way that most of us find annoying!’ Enter the speed dating paradigm, with NDB as matchmaker. NDB provided a controlled environment for presenting results—and a facilitator (Jenkins) who was there to calm the passions and listen to the quieter voices. Scoring included data management and workflow tests. The outcome was that speed dating worked fast and excluded one vendor whose data management capability was lacking. Jenkins also cited the Aupec benchmark study as useful in determining which vendors’ tools are in the ascendant and which are on the wane.

Shell’s Lars Gaseby cited an Accenture study which found that ‘the cost of poor data is hidden in business processes and data maintenance and integration costs.’ Currently, reactive ‘band aid’ fixes are often applied to link reporting, financial, SCADA and geoscience systems. Moreover, data cleansing is often done in reporting systems—leaving the data source dirty. A significant part of people’s jobs remains reformatting and accessing data from foreign systems. Hence Shell’s interest in an enterprise architecture (EA). EA means defining a data architecture in a way that supports the business as a whole. Shell is building on a previous DAMA-derived data framework which defines data and data value owners. The new ‘data-centric approach’ derives architecture from data and function—a different approach from the previous ‘systems-driven’ architecture. EA requires strong business involvement—IM/IT ‘should be a follower.’

A presentation by Statoil’s Frode Uriansrud showed how monitoring of injection and disposal activities cuts across a wide range of disciplines. Injection of produced water, slop, cuttings, H2S water and CO2 into suitable geologic horizons is a cost-effective and accepted technique. Developing a disposal well involves all the usual data sets, from high-resolution seismic through logs to well tests. Injection likewise involves a plethora of measurements and data. Monitoring has led to the identification of issues such as direct hydraulic fracturing through the caprock to the seabed, leakage along faults and well integrity problems. Data collected includes pressure and flow, visual inspection by ROV, bathymetry and environmental monitoring for hydrocarbons. Data is collected for each batch pumped and pumping stops if a pressure drop is observed. Data collected for activities such as pipeline and cable surveys is repurposed for Uriansrud’s team. All this needs good data management and cooperation across disciplines.
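
To make the batch-by-batch rule concrete, here is a minimal Python sketch of the kind of check involved. The field names and the five-bar threshold are our assumptions for illustration, not Statoil’s actual monitoring system.

from dataclasses import dataclass

@dataclass
class BatchReading:
    batch_id: str
    pressure_bar: float    # injection pressure recorded for the batch
    rate_m3_per_hr: float  # injection rate recorded for the batch

def should_stop_pumping(previous: BatchReading, current: BatchReading,
                        drop_threshold_bar: float = 5.0) -> bool:
    """Flag a stop when pressure falls by more than the (assumed) threshold
    while injection is still running, a possible sign of leakage or fracturing."""
    pressure_drop = previous.pressure_bar - current.pressure_bar
    return current.rate_m3_per_hr > 0 and pressure_drop > drop_threshold_bar

# Example: a 12 bar drop between consecutive batches triggers a stop.
prev = BatchReading("batch-041", pressure_bar=185.0, rate_m3_per_hr=120.0)
curr = BatchReading("batch-042", pressure_bar=173.0, rate_m3_per_hr=118.0)
assert should_stop_pumping(prev, curr)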

Max Gray’s presentation focused on ExxonMobil’s brand new enterprise spatial data framework. Over recent years, ESRI’s ArcGIS Desktop has seen significant take-up in the upstream, with up to 500 users, many of them occasional. But there is a huge gap between the richness of ArcGIS Desktop and familiar tools like Bing Maps and Google Earth. ExxonMobil set out to build a GIS infrastructure for both sets of users. This leverages ArcGIS Server alongside VoyagerGIS. VoyagerGIS is key for data management and discovery—cataloging and exposing data to users. The ArcGIS Silverlight API was used to develop web apps with some geoprocessing capability. Exxon distinguishes ‘foundational’ from non-foundational data. Foundational data has a wide audience and is used across the enterprise. Non-foundational data is reserved for local and/or specialist use—users need a very good reason to classify data as such. The majority of GIS data is foundational. Geospatial metadata standards are used to classify data according to 14 themes (addresses, basemaps, cadastral, etc.). Exxon’s central GIS database is being used to wean users off Google Earth and on to a combination of ESRI data, Bing Maps and Exxon proprietary data.
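
The foundational/non-foundational split and the theme-based metadata could be captured in something as simple as the following Python catalog sketch. Only three of the 14 themes are named in the presentation; the layer names, schema and query are our invention, not ExxonMobil’s actual design.

from dataclasses import dataclass

@dataclass
class LayerRecord:
    name: str
    theme: str          # one of the 14 metadata themes, e.g. 'addresses', 'basemaps', 'cadastral'
    foundational: bool  # True = wide, enterprise audience; False = local/specialist use

catalog = [
    LayerRecord("world_basemap", "basemaps", foundational=True),
    LayerRecord("lease_blocks", "cadastral", foundational=True),
    LayerRecord("site_sketches", "addresses", foundational=False),
]

# The sort of discovery query a VoyagerGIS-style index would answer:
foundational_basemaps = [r.name for r in catalog
                         if r.foundational and r.theme == "basemaps"]
print(foundational_basemaps)  # ['world_basemap']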

Stein Sigbjoerensen explained how ConocoPhillips manages Petrel projects. CP began using Schlumberger’s Petrel some ten years ago as a stand-alone tool with no data management as such. As usage grew, the company encountered problems with data sharing and with users knowing what was already available. Project size was growing and projects were very slow to open. CP, with help from Blueback Reservoir and Schlumberger, embarked on a project to create a new Petrel data environment to support its Ekofisk team—along with best practices and procedures for data population and a retrofit of current projects to the new environment. The result is that CP has now integrated Petrel into its data infrastructure, which centers on Landmark’s OpenWorks/R5000 data store. OpenSpirit is used to get data into a Petrel reference project. The data management group maintains an asset master project for Ekofisk. Users’ projects can be created empty from a template or cloned from the master project. When work is done, results can be fed back to the asset master and the user’s project deleted.
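
The template/master/user project lifecycle can be summarized in a few lines of Python. This is a hedged sketch with invented names; the real workflow runs inside Petrel with OpenSpirit and Blueback tooling, not a dictionary.

import copy

projects = {"Ekofisk_asset_master": {"top_ekofisk": "surface-v12"}}  # curated by the data management group

def new_from_template(name):
    projects[name] = {}  # empty project, carrying only template setup
    return projects[name]

def clone_from_master(name):
    projects[name] = copy.deepcopy(projects["Ekofisk_asset_master"])
    return projects[name]

def promote_and_delete(name):
    """Feed the user's results back to the asset master, then drop the user project."""
    projects["Ekofisk_asset_master"].update(projects[name])
    del projects[name]

work = clone_from_master("Ekofisk_jsmith_2012Q3")
work["new_fault_model"] = "faults-v1"          # interpretation work happens here
promote_and_delete("Ekofisk_jsmith_2012Q3")
print(list(projects))                          # only the (updated) asset master remains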

CP’s data environment also includes Schlumberger’s ProSource, GeoFrame and Techlog. The Petrel data environment is maintained with Blueback’s Project Tracker, which updates the results, template and master databases at regular intervals. Care is required when using OpenSpirit to move data into Petrel—Sigbjoerensen recommends keeping the number of attributes transferred to a minimum. After a refresh, users are given a month’s grace before projects are deleted. To date CP has deleted 70 projects—with, so far, no complaints. While the system captures snapshot datasets at bid rounds and other stage gates, ‘full circle’ back-population of the OpenWorks database appears to be work in progress.
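
The month’s grace rule is easy to picture as a housekeeping check, sketched below with illustrative dates and flags (CP’s actual deletion criteria are not described in the talk).

from datetime import date, timedelta

GRACE = timedelta(days=30)  # assumed one-month grace after a refresh

def projects_to_delete(projects, refresh_date: date, today: date):
    """Return projects eligible for deletion: superseded by a refresh and past the grace period."""
    if today - refresh_date < GRACE:
        return []
    return [p["name"] for p in projects if p["superseded_by_refresh"]]

candidates = [{"name": "Ekofisk_old_v7", "superseded_by_refresh": True},
              {"name": "Ekofisk_bid_snapshot", "superseded_by_refresh": False}]
print(projects_to_delete(candidates, date(2012, 8, 1), date(2012, 9, 10)))  # ['Ekofisk_old_v7']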

Rick Johnston offered some insights into how WesternGeco manages its huge seismic data library. This includes culling of old 2D data as surveys are reshot in 3D. But generally, old 3D data is kept since different recording geometries make for a variety of target illumination. WG has five processing hubs and moves data around the world for workload balancing. WG’s big data is getting bigger, with a 3x hike for IsoMetrix data and a 12x increase for IsoGrid data. WG uses the new SEG-D Rev 3 tape standard. Cataloging data begins in the field before data delivery. A recent remastering/cleanup program has unified WG’s media and is now saving the company $4 million per year, a two-year payback. WG has destroyed four million tapes in the last ten years. The company has 85 petabytes of ‘active’ data in its Houston hub, which would take around 20,000 days to read! The library will grow by almost five petabytes this year. WG has some 15 petaflops of HPC capacity worldwide (at the hubs and on its vessels). But the library is getting smaller with new high-capacity media. Houston used to have a 12,000 sq. m tape store. This is down to 200 sq. m after the last ‘crunch.’ Interestingly, the largest prestack survey was a 150,000-channel onshore campaign for Saudi Aramco. WG is now offering clients ‘virtual’ data delivery—they get a delivery copy. WG maintains the original and manages entitlements—a kind of iTunes for seismics.
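
A quick back-of-the-envelope check, in Python, shows what the 20,000-day figure implies: reading 85 petabytes in that time corresponds to a sustained single-stream rate of roughly 50 MB/s.

PB = 1e15                                   # decimal petabytes
library_bytes = 85 * PB
read_time_s = 20_000 * 86_400               # the quoted 20,000 days, in seconds

implied_rate_mb_per_s = library_bytes / read_time_s / 1e6
print(f"{implied_rate_mb_per_s:.0f} MB/s")  # ~49 MB/s, single stream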

Shell’s global data quality guru Kishore Yedapalli observed that typical workflows involve checking and fixing errors after delivery. Folks do not in general reach out to the data supplier to get things fixed up front. Contract owners need to provide better requirements to suppliers and keep the communications channel open after delivery. Shell’s goal is a single version of the truth for data within and across the upstream business—and to avoid recourse to massive Excel spreadsheets/macros as a ‘solution.’ Yedapalli gave an enthusiastic endorsement to Exprodat’s data quality toolset. The big picture of a single, summary data quality KPI per organizational unit gets management attention. Exprodat also acts as a data quality dashboard for Shell’s 200 major OpenWorks projects—showing which are improving and which are getting worse.
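
The two reporting levels, one headline KPI per organizational unit and a per-project trend, might be rolled up along the lines of the Python sketch below. The rule scores are invented and this is not Exprodat’s actual toolset API.

from statistics import mean

def unit_kpi(rule_pass_rates):
    """Roll individual rule pass rates (0-1) up into one headline KPI per org unit."""
    return round(100 * mean(rule_pass_rates), 1)

def project_trend(previous_score, current_score):
    if current_score > previous_score:
        return "improving"
    if current_score < previous_score:
        return "getting worse"
    return "flat"

print(unit_kpi([0.92, 0.81, 0.74]))  # 82.3, the single number management sees
print(project_trend(78.0, 74.5))     # 'getting worse', flagged on the OpenWorks dashboard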

Those interested in data quality will want to learn of the emerging ISO 8000-8 data quality standard. Tor Arne Irgens (Norwegian Defence Logistics) and colleagues Trine Hansen and Atle Kvalheim from DNV explained that the value of such standards for the military lies, inter alia, in avoiding ‘friendly fire’ incidents and targeting errors. Irgens cited the 1996 IFIP/Frisco report as the foundation for the analysis. Frisco provides a terminological foundation for information technology, a.k.a. a ‘semiotic framework for information and data quality.’ This comprises three layers: ‘syntactic,’ ‘semantic’ and ‘pragmatic.’ The intent is to build ISO 8000-8 into data acquisition contracts and thereby achieve ‘trusted data.’ The ISO standard is to be formalized by year-end 2013.
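
Applied to a single data item, the three layers stack naturally as successive checks: is the record well formed (syntactic), does it mean something valid (semantic), and is it fit for the intended use (pragmatic)? The Python checks below are our own invented examples, not part of ISO 8000-8.

def syntactic_ok(record: dict) -> bool:
    """Well-formedness: the expected fields are present and correctly typed."""
    return isinstance(record.get("latitude"), float) and isinstance(record.get("longitude"), float)

def semantic_ok(record: dict) -> bool:
    """Meaning: the values denote a real position."""
    return -90.0 <= record["latitude"] <= 90.0 and -180.0 <= record["longitude"] <= 180.0

def pragmatic_ok(record: dict, required_accuracy_m: float) -> bool:
    """Fitness for use: accurate enough for, say, targeting or navigation."""
    return record.get("position_accuracy_m", float("inf")) <= required_accuracy_m

record = {"latitude": 59.41, "longitude": 5.27, "position_accuracy_m": 3.0}
trusted = syntactic_ok(record) and semantic_ok(record) and pragmatic_ok(record, 10.0)
print(trusted)  # True: passes all three layers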

Ole Christian Meldahl (Schlumberger Water Services) observed that where water is concerned, things quickly get emotional, to the point where it ‘may be hard to have a rational discussion.’ Water is key to shale gas operations, coal bed methane and heavy oil. Sourcing, spills and flow-back water all need managing and inventorying. This challenges traditional data management as many different specializations are involved—biology, wells, injection, water quality, etc. Moreover there is ‘a complete lack of standards’ in water management, apart from ‘roll your own’ in Excel*! In the meantime, Meldahl suggests you make your own standards. Water management is deceptively similar to petroleum engineering. But it has a different history and culture. With water injection taking place near habitation, the ‘risks are increasing tremendously.’

Exprodat’s Ian Milligan and Walter Jardine (BP) described a real-world trial of the OGP’s new seabed survey data model (SSDM). The SSDM was published in April 2011 and is used for site and route surveys and to ‘de-risk’ drilling the tophole. These activities leverage high-resolution seismic as the key data set to identify shallow geohazards (gas, boulders, faults to surface). Other unexpected finds have included a 100-year-old telecoms cable that got wrapped around a drill bit and an unexploded WWII bomb within 40 m of the Forties pipeline. Site surveys include sparker surveys, sonar, high-resolution seismic, coring and environmental sampling. Lots of different equipment and sensors are involved. The data is valuable but it can be hard to access legacy information. GIS is an excellent medium for collating all of the above—hence the BP pilot of the SSDM format in the North Sea ETAP area.

The SSDM is a simple ESRI geodatabase with subtypes and attribute domains for data validation and symbology. BP’s implementation was extended with an interface to the Pipeline Open Data Standard (PODS) spatial data model. Around 10 GB of ETAP legacy data acquired by several contractors had to be ingested. This included data as delivered from contractors, survey reports, charts, bathymetry, geotechnical logs in Excel and one GIS file. Each data type was converted to a geodatabase before consolidation into the master ETAP SSDM repository. The toolkit included ArcMap, ArcCatalog and ArcToolbox along with some Exprodat custom tools. ET GeoWizards and PetroGIS were also used, and NitroPDF proved useful for extracting tables. Loading ETAP’s 70-plus surveys was not without its problems—from both a software and a data quality standpoint. It was a ‘fiddle’ to get all the information together and much legacy source data is ‘not really amenable to GIS.’ All in all the project was a success and is now being deployed on the Clair field. Visit the ECIM home page here.
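
The per-data-type staging pattern maps naturally onto a short ArcPy script. The sketch below assumes each legacy data type has already been converted into its own staging file geodatabase and that feature class names and schemas match the SSDM target; paths and names are illustrative and BP’s actual scripts are not described at this level of detail.

import os
import arcpy

MASTER_GDB = r"C:\data\ETAP_SSDM_master.gdb"           # assumed location of the master SSDM geodatabase
STAGING_GDBS = [r"C:\data\staging\bathymetry.gdb",
                r"C:\data\staging\geotech_logs.gdb"]   # one staging geodatabase per legacy data type

for staging in STAGING_GDBS:
    arcpy.env.workspace = staging
    for fc in arcpy.ListFeatureClasses():
        target = os.path.join(MASTER_GDB, fc)
        if arcpy.Exists(target):
            # Append into the matching SSDM feature class; schemas must already align.
            arcpy.management.Append(fc, target, "TEST")
        else:
            # First survey of this type: copy the feature class into the master geodatabase.
            arcpy.conversion.FeatureClassToGeodatabase(fc, MASTER_GDB)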

* Although the OGC’s WaterML may be of interest here.

© Oil IT Journal - all rights reserved.