Expert Center for Information Management 2009, Haugesund

Norway’s ECIM upstream data conference, with 300 attendees, is billed as the largest in the world. We report from a sample of the eight parallel ‘workstreams’ on moves in production data management, Petrel’s database extension and an upstream cloud computing trial.

The 2009 Expert Center for Information Management (ECIM) conference saw some 300 information architects and data managers from over 30 companies gather in Norway. The head count led the organizers to claim that ECIM is now ‘the biggest oil and gas data management conference in the world.’ Certainly it is the data management show with the largest number of workstreams—eight!

Knut Mauseth (Shell) provided the keynote address—a limpid résumé of Shell’s analysis of the future of the industry, mankind and the planet. Shell’s analysis sees world population growth as inevitable—and looking out to 2050, has the world population rising by 50%. The analysis further considers that ‘energy raises folks from poverty’ and that oil and gas will continue to play a major role. This inevitably leads to concerns about CO2 and global warming. Shell sees two polar scenarios—a ‘scramble’ (everyone for themselves) and a ‘blueprint’ for sustainability. The latter may be hard to achieve in a democracy—but will entail a combination of carbon tax, regulation, cap and trade, sequestration, energy savings and renewables. In case you think this is pie in the sky, Mauseth introduced the large-scale carbon capture and sequestration experiment at Mongstad*, Norway, where StatoilHydro, along with partners including Shell, is testing different flue gas capture technologies from Alstom and Aker.

Ian Barron (RoQC) described how the Statoil-Hydro merger has involved a ‘reduction in truth en route to higher data quality.’ The title is slightly misleading as the data merge targeted a reduced number of ‘versions of the truth’ rather than a decrease in truth per se. StatoilHydro’s data merge started by identifying all versions of the truth for each data item, determining the validity of each and finding and flagging the best. The result was a single data set of much higher quality and value than any of the input data sets. En route, data sets with uncertain provenance and lacking in metadata were discarded. The process was partially automated with lookups of approved Landmark project and stratigraphic names. A technical evaluation of the ‘best’ data sets was set off against what users wanted to keep. This involved a major data mapping exercise across GeoFrame, OpenWorks and Petrel projects—all collated to a staging database before filtering, standardization and final QC prior to capture in the corporate master OpenWorks database.
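
By way of illustration, here is a minimal sketch of the kind of ‘best version’ selection such a merge implies. The scoring rules, field names and approved-name lists are invented for the example and are not RoQC’s or StatoilHydro’s actual logic.

```python
# Illustrative only: rank candidate versions of a data item by simple
# provenance and metadata checks, then flag the best one for the merged set.

APPROVED_PROJECTS = {"NORTH_SEA_MASTER", "TAMPEN_2008"}      # hypothetical approved Landmark projects
APPROVED_STRAT_NAMES = {"Brent Gp", "Statfjord Fm"}          # hypothetical approved stratigraphic names

def score(version):
    """Higher score = more trustworthy version of the data item."""
    s = 0
    if version.get("project") in APPROVED_PROJECTS:
        s += 2                                    # loaded from an approved project
    if version.get("strat_name") in APPROVED_STRAT_NAMES:
        s += 2                                    # uses an approved stratigraphic name
    if version.get("provenance"):
        s += 1                                    # provenance is documented
    return s + len(version.get("metadata", {}))   # richer metadata wins ties

def pick_best(versions):
    """Discard versions lacking provenance and flag the highest-scoring survivor."""
    candidates = [v for v in versions if v.get("provenance")]
    if not candidates:
        return None                               # nothing trustworthy: leave for manual review
    best = max(candidates, key=score)
    best["flag"] = "best_version"
    return best

versions = [
    {"project": "TAMPEN_2008", "strat_name": "Brent Gp", "provenance": "vendor load 2006",
     "metadata": {"crs": "ED50/UTM31"}},
    {"project": "SCRATCH_01", "strat_name": "brentgrp", "provenance": None, "metadata": {}},
]
print(pick_best(versions)["project"])             # TAMPEN_2008
```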

Jan Erik Martinsen (KPMG) and Morten Mønster Jensen (Abbon) investigated mature field data management. Setting the scene, Martinsen revealed that a 2009 KPMG survey of oil and gas CFOs determined that the financial crisis is indeed impacting the upstream, with companies in ‘wait and see’ mode, focused on cost cutting. A third expect no profit/loss in the next three years.

Jensen estimated that in the North Sea, a typical brownfield stands to gain a few percent of production with better analysis of production data. A holistic decision support system is needed to span production and pipeline data management, terminal management and FPSO accounting. A better common understanding across departments is needed to optimize production. The goal is to align production data management with gain/loss management and to be able to perform holistic analysis and recommend loss mitigation strategies. This implies a more integrated asset model—with tuned and verified network models. Enter Abbon’s productized solution, ‘Optimum Online.’

Hafsteinn Agustsson’s presentation covered StatoilHydro’s PetrelRE (reservoir engineering) data management. PetrelRE inhabits a workflow that includes OpenWorks, Statoil’s ‘Prosty’ application and various reports and ‘unofficial’ databases. PetrelRE sees a move from a workflow controlled by multiple ASCII files to a cleaner hierarchy of data objects manipulated through a single interface. This automates job ordering and ‘forces users to structure data.’ On the downside, there are compatibility issues with RMS, with RESCUE transfer and legacy ASCII flat files. Discussions are underway with Schlumberger to add more data functionality to Petrel. The issues are of considerable import as ‘official’ models need archiving for stock exchange reporting compliance. Schlumberger’s Ocean infrastructure is being leveraged to add in other tools such as MEPO—used to control PetrelRE for history matching. Agustsson concluded that ‘for the first time we have a single application that does the whole job—from pre-processing through simulation and post-processing—replacing multiple legacy applications.’ This has led to a more holistic model treatment at the expense of some data management issues currently being addressed.
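
For flavor, a hypothetical sketch of the shift Agustsson described, from loose ASCII decks to a hierarchy of typed data objects whose job order can be derived automatically. The object names and structure are assumptions for illustration only, not PetrelRE’s data model.

```python
# Illustrative only: a tiny object hierarchy with dependency-driven job ordering,
# standing in for the 'single interface over structured data objects' idea.
from dataclasses import dataclass, field

@dataclass
class DataObject:
    name: str
    depends_on: list = field(default_factory=list)   # upstream objects this one is built from

def run_order(objects):
    """Topologically sort objects so each job runs after the objects it depends on."""
    ordered, seen = [], set()
    def visit(obj):
        if obj.name in seen:
            return
        for dep in obj.depends_on:
            visit(dep)
        seen.add(obj.name)
        ordered.append(obj.name)
    for obj in objects:
        visit(obj)
    return ordered

grid = DataObject("structural_grid")
props = DataObject("property_model", depends_on=[grid])
case = DataObject("simulation_case", depends_on=[props])
print(run_order([case, props, grid]))   # ['structural_grid', 'property_model', 'simulation_case']
```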

Todd Olsen presented the Petrel ‘DBX’ database extension, Schlumberger’s answer to Petrel data management issues. Olsen acknowledged that ‘data managers see Petrel differently from users.’ Data management will be different from OpenWorks or GeoFrame and Schlumberger plans to support users with best practices and education on Petrel and its data. But as users already know, Petrel tends to create a multiplicity of projects. Users may be ‘successful’ while the organization may struggle. The key to Petrel data management is the Windows globally unique identifier (GUID), a machine-generated identifier assigned to each and every version of every object in Petrel. When an object (well, horizon) is loaded, it gets a GUID that is persistent throughout Petrel. Petrel uses GUIDs to ‘remember’ the objects and workflows that were used to build other objects, providing an audit trail of activity. Olsen presented a typical workflow involving multiple interpretations of the same seismic data set combined into reservoir models. The example made it clear that even with the GUID, managing even a relatively simple data use case is not for the faint of heart. The upside, though, according to the oft-repeated Schlumberger mantra, is that ‘it’s not just data, it’s Petrel data!’ In other words, the GUID is a window into a slug of potentially informative metadata about every component of an interpretation.
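
A minimal sketch of GUID-based provenance tracking of the sort Olsen described follows; the object model is a generic assumption for illustration, not the Petrel or Ocean API.

```python
# Illustrative only: every object version gets a GUID; derived objects record the
# GUIDs of their parents, so the chain from reservoir model back to seismic can be replayed.
import uuid

class TrackedObject:
    def __init__(self, kind, name, parents=()):
        self.guid = str(uuid.uuid4())        # machine-generated, version-specific identifier
        self.kind, self.name = kind, name
        self.parent_guids = [p.guid for p in parents]

def audit_trail(obj, registry):
    """Walk parent GUIDs to reconstruct how an object was built."""
    trail, stack = [], [obj.guid]
    while stack:
        guid = stack.pop()
        item = registry[guid]
        trail.append((item.kind, item.name, guid))
        stack.extend(item.parent_guids)
    return trail

seismic = TrackedObject("seismic", "survey_A")
horizon = TrackedObject("horizon", "top_reservoir", parents=[seismic])
model = TrackedObject("model", "reservoir_v1", parents=[horizon])
registry = {o.guid: o for o in (seismic, horizon, model)}
for kind, name, guid in audit_trail(model, registry):
    print(kind, name, guid)
```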

The other facet of Petrel DBX is of course the database, a new Seabed-based Oracle database that is used to mirror whole Petrel projects. This currently covers a subset of Petrel objects, but over time the idea is to expand the footprint and support full-bandwidth data management of everything seen in a Petrel project. Petrel DBX will be available with the 2010.1 release (December 2009) with limited data type support.

In the other corner, or rather ‘workstream,’ Landmark’s Susan Hutchinson was presenting the OpenWorks R5000 data management strategy. Prior to R5000, OpenWorks data duplication was widespread. R5000 introduces an underlying Oracle database and, from now on, OpenWorks projects become views into the data. This hides complex reference system and other data issues from end users. Interpreters ‘see through’ the database to seismic data files. R5000 also expands coverage—especially in seismic data management—and adds GeoTIFF, interwell attributes, fracture job monitoring and basin modeling from a joint development with StatoilHydro.
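
A hypothetical illustration of the ‘project as a view’ idea, here using an in-memory SQLite database; the tables and columns are invented for the example and bear no relation to the actual OpenWorks schema.

```python
# Illustrative only: one master well table, with a 'project' exposed as a filtered view
# so users read (rather than duplicate) the underlying corporate data.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE wells (uwi TEXT PRIMARY KEY, name TEXT, area TEXT)")
db.executemany("INSERT INTO wells VALUES (?, ?, ?)", [
    ("NO-34/10-1", "A-1", "Gullfaks"),
    ("NO-15/9-19", "SR-X", "Sleipner"),
])
# The 'project' is just a view over the master data, not a copy of it.
db.execute("CREATE VIEW project_gullfaks AS SELECT uwi, name FROM wells WHERE area = 'Gullfaks'")
print(db.execute("SELECT * FROM project_gullfaks").fetchall())   # [('NO-34/10-1', 'A-1')]
```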

Jan Åge Pedersen (Tieto) described StatoilHydro’s production data management effort, which began in earnest in 2007. Production data management has proved harder to achieve than geoscience data management. This is because production assets are independent and people are not incentivized. Production data is heterogeneous, people are entrenched and resistant to new tools, and Excel and PowerPoint are the tools of choice. Production data comes from many incompatible sources and is combined in an ad hoc way to suit engineers’ requirements. Real-time data will be even more ‘messy.’

StatoilHydro is now rolling out a ‘vendor and asset neutral’ production data model, leveraging Energistics’ ProdML. Data from SCADA systems, the historian and production accounting are normalized and pushed to the enterprise service bus. Initial management has been supplied by the central data management group, but ultimately the plan is to involve asset personnel in localizing the production models and to hand over to a new breed of project production data managers (PPDMs). Pedersen noted that ‘it is proving hard to find PPDMs with the appropriate skill set, you can’t just extend the exploration data management function.’ Statoil already has a network of project data managers for exploration data.
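
A minimal sketch of the normalization step, assuming a simple tag convention and a stubbed service bus; real ProdML messages are XML documents governed by the Energistics schemas, not the dictionary shown here.

```python
# Illustrative only: normalize heterogeneous production readings to one neutral
# shape before publishing them to a (stubbed) enterprise service bus.
from datetime import datetime, timezone

def normalize_scada(tag, value, unit):
    """Map a raw SCADA point to a common, vendor-neutral structure."""
    well, quantity = tag.split(".")          # assumes tags like 'A-1.oil_rate'
    return {
        "source": "scada",
        "well": well,
        "quantity": quantity,
        "value": value,
        "unit": unit,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def publish(bus, message):
    """Stand-in for pushing the normalized message onto the service bus."""
    bus.append(message)

bus = []   # placeholder for the real enterprise service bus
publish(bus, normalize_scada("A-1.oil_rate", 1250.0, "Sm3/d"))
print(bus[0]["well"], bus[0]["value"], bus[0]["unit"])
```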

David Holmes and Jamie Cruise (Fuse Information Management) wound up the proceedings with an enthusiastic presentation of potential uses for cloud computing in the upstream. In a sense, cloud computing is not really new to the upstream, as users of Norway’s Diskos data set know. But Fuse has been pushing the envelope of cloud computing with a test-bed deployment of a seismic data set on Amazon’s Elastic Compute Cloud. This offers compute resources for 10 cents per hour and similarly economical data storage. The Fuse test resulted in a monthly bill of $14! A more serious proposition, storing a petabyte in the cloud, would come out at something in the range of $2 million over three years. ‘Traditional’ data hosting would cost 50% more. Amazon also offers bulk data loading and unloading services at $40/TB, avoiding potential ‘lock-in’ costs.
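
The storage figures imply a rough unit cost, worked through below from the numbers quoted in the talk; the derived rates are back-of-envelope, not Amazon list prices.

```python
# Back-of-envelope check of the figures quoted above.
petabyte_tb = 1000            # 1 PB expressed in TB (decimal convention)
cloud_3yr_cost = 2_000_000    # roughly $2 million over three years, as quoted
months = 36

per_tb_month = cloud_3yr_cost / (petabyte_tb * months)
traditional_3yr = cloud_3yr_cost * 1.5            # 'traditional' hosting quoted at 50% more
bulk_load_cost = 40 * petabyte_tb                 # $40/TB to ship a full petabyte in or out

print(f"Implied cloud storage rate: ${per_tb_month:.0f}/TB/month")    # ~$56/TB/month
print(f"Traditional hosting over 3 years: ${traditional_3yr:,.0f}")   # $3,000,000
print(f"One-off bulk load of 1 PB: ${bulk_load_cost:,.0f}")           # $40,000
```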

* www.oilit.com/links/0909_1.

This article is an abstract from The Data Room’s Technology Watch from the 2009 ECIM. More information and samples from www.oilit.com/tech.

© Oil IT Journal - all rights reserved.