Andrew Marks (Tullow Oil) recalled the days when Tullow was ‘sick and tired’ of local businesses operating on their own and initiated an IM program (before Marks joined) called ‘One Tullow IM’ with the aim of ‘unified knowledge sharing via the Tullow Intranet.’ At the start of the project, Tullow had dispersed teams and multiple reporting lines. Now, 18 months later, Tullow has ‘standards, policies, processes and procedures’ (SP3) under development and has defined roles and responsibilities.
Marks, who left Lasmo 8 years ago, ‘when Finder was new,’ asks, ‘How far have we moved since then? Have core interpretation applications changed significantly?’ Marks appears to think not—although ‘other technologies’ have come along to help. One such technology is the GIS Portal, now ‘well established,’ such that you can ‘see anything anywhere.’ Desktop GIS should complement the traditional G&G lifecycle—so that you can grab a piece of data—interpret—and move on.
Tullow’s intranet portal is built atop the LiveLink DMS, OpenWorks, Kingdom and Petrel. A GIS front end allows selection of basic well information and download to Excel. Logs can be viewed in application viewers, although much project technical data remains local to interpreters. In the past, teams were burdened by monthly reporting and senior managers had to read some ten pages per day, much of it duplicate information. Tullow now publishes ‘journal type’ information instead. Traffic lights show how production is going, allowing real time decisions rather than a wait for a monthly report that arrives six weeks late.
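The ‘traffic light’ idea can be sketched in a few lines. The following is an illustration only, not Tullow code; the thresholds and well data are invented for the example.

```python
# Classify daily production against target so managers see status at a
# glance rather than waiting for a monthly report. Thresholds invented.

def traffic_light(actual: float, target: float,
                  amber_pct: float = 0.9) -> str:
    """Return 'green', 'amber' or 'red' for production vs. target."""
    if target <= 0:
        raise ValueError("target must be positive")
    ratio = actual / target
    if ratio >= 1.0:
        return "green"
    if ratio >= amber_pct:
        return "amber"
    return "red"

# Hypothetical wells: (actual bopd, target bopd)
wells = {"W-1": (1050, 1000), "W-2": (920, 1000), "W-3": (700, 1000)}
status = {name: traffic_light(a, t) for name, (a, t) in wells.items()}
print(status)
```

A dashboard would render these statuses as colored indicators, refreshed as new production figures arrive.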
Alan Smith (Paras), who was interim OMV CIO last year, presented a paper authored by OMV’s Franz Schmidt on the IM aspects of OMV’s 2004 takeover of Rumanian state oil company, Petrom. In Rumania ‘nobody really knows how much oil is produced.’ Petrom has tens of thousands of producers with no detailed information, no SCADA, no networks. Tank levels and phone calls are ‘all you’ve got.’ OMV is now working on a global program to address data ownership, standards, quality and ‘anarchic unauthorized updating’ of error-prone systems. The aim by 2010 is to recognize data/information as assets and ensure data correctness and storage in the right place. The vast majority of Petrom personnel had never even seen a PC before, so training, language localization and just ‘keeping things simple’ are important. Petrom is moving from its legacy in-house software to TietoEnator’s production reporting system. Pipelines are being mapped and incorporated into network diagrams for roll-out in 2008. GIS is used as an integrator and SAP is now a major component of Petrom’s IM—used to match production information with financials. Applications management in OMV is also to be addressed, with a move to a ‘true data,’ single version of the truth paradigm.
Al Kok works in Saudi Aramco’s Exploration Data Management division providing quality assured data services and knowledge-based data management to exploration. The division collaborates with data producers for data capture, edit and QC and currently manages over 900 exploration and delineation, 9,500 development and 2,000 water wells. In 2007 Saudi Aramco drilled 600 wells and 1,070 wellbores using 128 rigs (up from around 50 in 2001). Keeping pace with the activity increase has been a ‘significant challenge’ for Kok’s department. Aramco’s well data environment is an Oracle corporate database (CDB) with a large data footprint. A separate database holds well log data. There are multiple data acquirers, owners and loaders, and each owner does their own data loading. Drilling engineering and wellsite geology track loading and check for data completeness. The output from the CDB is quality assured, project ready data for Aramco’s interpreters. Well defined processes ‘prevent errors rather than fix problems,’ eliminating cross reference conflicts and ensuring that ‘employees understand what’s going on.’ Processes are documented, errors targeted and data is ‘continuously improved.’
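The ‘prevent errors rather than fix problems’ approach implies validating records before they reach the corporate database. Here is a minimal sketch of such a completeness check; the field names and rules are assumptions, not Aramco’s actual schema.

```python
# Validate a well record for completeness before loading to the
# corporate database. Required fields and ranges are illustrative.

REQUIRED = {"uwi", "well_name", "spud_date", "surface_lat", "surface_lon"}

def completeness_check(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record may load."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED - record.keys())]
    lat = record.get("surface_lat")
    if lat is not None and not -90 <= lat <= 90:
        problems.append("surface_lat out of range")
    lon = record.get("surface_lon")
    if lon is not None and not -180 <= lon <= 180:
        problems.append("surface_lon out of range")
    return problems
```

Rejecting incomplete records at load time, with a report back to the data owner, is what keeps cross-reference conflicts out of the database in the first place.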
Robert Best presented Neuralog’s work getting a handle on PDVSA’s million logs and thousands of seismic sections that represent 70 years of activity. In 2004, PDVSA kicked off a legacy log data management project with Neuralog. PDVSA wanted open standards and ‘open GIS.’ A Gerencia de Operaciones de Datos Departamente (GODD) project team was formed from data management and IT. Today, cleansed and QC’d digital data goes to the PPDM 3.7-based NeuraDB relational database. This is synchronized with PDVSA’s Finder. Oracle BLOB storage is used for bulk data and content and ESRI’s ArcGIS provides a GIS front end. The result adds value to the PDVSA dataset by enhanced physical and logical data security, improved data access and usability and better interoperability with other repositories and applications. The system is now being stress tested as many companies are giving back fields (and data) to PDVSA as they do not consider the new government terms acceptable.
Agustin Diz described Repsol-YPF’s IM effort, particularly in support of performing reserve estimation and portfolio analysis, often on a tight schedule. In the past the company often started studies over, without realizing that reviews of what was done before were available. Transferring such knowledge can avoid ‘costly mistakes.’ G&G data is generally stored satisfactorily. But reservoir data, like pressure build-up tests and interpretations, is stored (or not) ‘all over the place.’ Drilling data management is ‘so so.’ Production data is OK at the macro level but poor at detailed allocations. Document management is improving (especially engineering documents for facilities). But it is hard to deploy data and document management systems that support the workflow—‘There is no easy answer, data management has to be a part of everyday work.’ The current solution leverages Microsoft SharePoint pending deployment of a workflow tool—candidates under evaluation include Orchestra and PointCross.
Han de Min introduced Aspentech’s ‘Operations Domain Model’ that orchestrates E&P ‘real time and right time’ processes and captures ‘enterprise configuration data.’ According to de Min, Aspentech’s Hysys flagship is used to design 75% of oil and gas facilities globally. De Min is unsure if real time reservoir modeling is achievable. What is required is a ‘sustainable scalable asset register over the full life cycle of a facility.’ ‘Handover is the problem.’ Today’s service-oriented architecture is the ‘most sustainable’ way forward. De Min envisages a standards-based publish/subscribe bus with the data historians beneath and visualization and applications above. StatoilHydro is already using the system—’Norway is ahead of the game here.’
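The publish/subscribe bus de Min envisages can be illustrated with a minimal sketch: historians publish tagged values onto topics, while visualization and applications subscribe by topic. Everything here (topic names, payloads) is invented for illustration.

```python
# Minimal publish/subscribe bus: historians publish beneath,
# visualization and applications subscribe above. All names invented.

from collections import defaultdict
from typing import Any, Callable

class Bus:
    def __init__(self) -> None:
        # topic -> list of subscriber callbacks
        self._subs: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, payload: Any) -> None:
        # Deliver the payload to every subscriber on the topic.
        for handler in self._subs[topic]:
            handler(payload)

bus = Bus()
readings: list[dict] = []
bus.subscribe("historian/pressure", readings.append)  # an 'application'
bus.publish("historian/pressure", {"tag": "P-101", "value": 212.5})
```

In a production setting the bus would be a standards-based middleware rather than an in-process dictionary, but the decoupling of publishers from subscribers is the same.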
Katya Casey thinks that BHP Billiton’s subsurface computing strategy has proved fruitful. BHP’s new president (from Exxon) is promoting ‘functional excellence’ in subsurface computing, which translates into global standards and a cross-discipline team charter. BHPB is consolidating corporate databases in headquarters, while maintaining local data ownership. BHPB ‘believes in metadata’: POSC’s process taxonomy and PPDM’s discipline taxonomy have been leveraged in a Verity-based E&P metadata catalog. A GIS portal was built with Google Earth atop ESRI SDE and Oracle Spatial. ‘ESRI doesn’t own GIS!’ BHPB is now sharing data management practices developed in exploration with its production engineers.
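A metadata catalog of the kind described tags each document with controlled taxonomy terms so that search can filter by process and discipline. The sketch below is purely illustrative; the taxonomy values are invented and do not come from POSC or PPDM.

```python
# Sketch of a catalog entry validated against controlled taxonomies,
# in the spirit of a Verity-based E&P metadata catalog. Terms invented.

from dataclasses import dataclass, field

PROCESS_TAXONOMY = {"prospect-evaluation", "well-planning",
                    "production-surveillance"}
DISCIPLINE_TAXONOMY = {"geology", "geophysics", "reservoir-engineering"}

@dataclass
class CatalogEntry:
    title: str
    process: str
    discipline: str
    keywords: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        # Reject free-text values: only controlled terms are allowed.
        if self.process not in PROCESS_TAXONOMY:
            raise ValueError(f"unknown process term: {self.process}")
        if self.discipline not in DISCIPLINE_TAXONOMY:
            raise ValueError(f"unknown discipline term: {self.discipline}")
```

Enforcing controlled vocabularies at entry time is what makes cross-discipline search and ‘functional excellence’ metrics possible later.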
Casey calls for an open discussion of the state of vendor data management, including datums, record quality and ISO standards metadata on each record. Casey deprecates the ‘inefficiency of bulk vendor data subscription updates.’ The solution also uses Schlumberger’s Ocean API, considered an ‘open’ development platform for E&P.
Marco Piantanida presented ENI’s web portal that acts as a front end and launcher for ENI’s many applications. The Portal uses PowerHub and DecisionPoint XML services. Landmark’s Team Workspace portal solution was extended with ENI’s own technical and scientific portal. ‘A database only gets noticed when it becomes a part of the portal.’ Is it easy? No. Piantanida was amused by the talk of ‘web services.’ This project relies on direct Oracle connections. Some tools can be invoked in context. But some monolithic applications don’t allow this and may require a high-end PC, making them unsuitable for most users. Contrary to the marketing pitch, web services and middleware do not simplify interoperability: the same dependencies mean that you have to update everything on upgrade.
Hans Tetteroo described how Shell uses ESRI to analyze, manipulate, collect and display data. But final maps (often PDF files) are stored in the Map Management System (MMS). Previously, map management was done on the desktop with an in-house developed tool, ‘Mercator.’ But support proved ‘unsustainable,’ performance inadequate and there was no global reference data system. In Shell, the majority of users have locked-down PCs, and most applications need to be GID-scripted (a major operation!). A ‘next generation’ map management study was undertaken in 2005. This came out in favor of a web-based solution.
Flare’s E&P catalog was identified as a potential solution requiring additional development. LiveLink is used for publishing and to provide an audit trail and version management. ‘Users don’t want to see GIS systems,’ so the aim is for users to be able to generate (for instance) an emergency response plan for a facility and have the system populate as much information as possible automatically before building the map. But the real grey hairs come when legacy data is included. The system has been successfully piloted in EP Europe and is now rolling out around the world. Global support is assured by Shell’s ‘GRASP’ global rollout applications and support program.
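The ‘users don’t want to see GIS’ goal, applied to the emergency response plan example, amounts to gathering the required map layers from corporate sources automatically, given only a facility identifier. The sketch below is hypothetical: the layer names and source systems are invented, not Shell’s.

```python
# Given a facility id, assemble the layers an emergency response map
# needs before any GIS work starts. Layers and sources are invented.

LAYER_SOURCES = {
    "pipelines": "corporate GIS",
    "access_roads": "corporate GIS",
    "muster_points": "HSE register",
    "nearby_wells": "well database",
}

def gather_map_inputs(facility_id: str) -> dict[str, str]:
    """Return layer -> data source reference for one facility's map."""
    return {layer: f"{source} / {facility_id}"
            for layer, source in LAYER_SOURCES.items()}

print(gather_map_inputs("FAC-01"))
```

The user asks for ‘an emergency response plan for FAC-01’; the system resolves the layers, fetches the data, and only then builds the map. Legacy data, as the talk noted, is where this automation gets hard.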
This article is taken from a 12 page report produced as part of The Data Room’s subscription-based Technology Watch Service. More from www.oilit.com/tech.
© Oil IT Journal - all rights reserved.