The 15th edition of SMi’s E&P Data Management London conference offered a good snapshot of the state of the art, underscoring the trend to maturity of recent years. A case in point was Jeanette Yuile’s presentation of Shell UK’s approach to data management in support of emergency response. Moving the data discipline into such a mission-critical area involved changing the culture. Data management used not to be perceived as a good career path compared with those of geoscientists and other ‘fancy people.’ Users were in general ‘too tolerant of bad practices.’ So Shell has now elevated the profession with better career paths and used a Kaizen-inspired approach to address data ‘waste,’ effect small changes and measure the results. Yuile’s vision is of DM as a ‘formal garden’ with clear boundaries and patterns, and of the data managers as gardeners working to ‘evergreen’ the data. The result is quality data and documents served from a central file plan and data atlas, backed up with information quality metrics and a business rule repository. It was then feasible to leverage this ordered dataset in an emergency response portal covering 20 key data types, giving immediate access to a standard set of verified subsurface and facility information in case of a well emergency. The data improvement project and ER portal were initiated by Shell’s drilling and development teams and are to be deployed globally. Yuile attributed the success to Shell’s new information management culture, now ‘just the way we do things.’
David Lloyd and Malcolm Bryce-Borthwick updated GDF Suez UK’s 2012 presentation on the use of information management frameworks, in particular the novel use of an ITILv3-based ‘partnership project management development framework,’ a set of Lego-like components to be built into different work streams. An assessment of GDF Suez’ data quality by Venture Information Management found the ‘usual things.’ Data was loaded directly into projects, bypassing the data team and leading to ‘skepticism and lack of trust.’ There was ‘severe Petrel project infestation,’ with over 1,000 projects on disk making it hard to know where an interpretation actually was. Developing the Cygnus field mandated improvement and got strong support from the CEO. GDF Suez is now working on a data quality framework and on project rationalization. This involves a migration from OpenWorks to a ‘scalable long-term replacement.’ Following the lead from Paris HQ, the UK unit opted for Schlumberger’s ProSource, with InnerLogix for QC. Work stream rationalization leveraged Blueback Reservoir’s Project Tracker, ‘an absolute lifesaver,’ which catalogued and rationalized 1,100 projects down to under 200 active on disk. New projects are now monitored at creation to ensure that regional reference projects of QC’d data are used correctly, with a ‘friendly conversation’ offered to users if needed. Along with its QC role, InnerLogix is used to transfer data from ProSource to Petrel and Kingdom (OpenSpirit also ran). Approved tops and horizons go back to the reference project via InnerLogix too. ProSource had, to an extent, been ‘oversold,’ and work by third parties Venture and DataCo was required to debug it. Bryce-Borthwick advised, ‘Be wary of vendor claims.’
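Blueback’s Project Tracker is a commercial product and its internals were not presented; by way of illustration only, the following Python sketch shows the kind of disk inventory such a rationalization exercise starts from, crawling a project share for Petrel project files and flagging those untouched for a year. The share path, the one-year threshold and the reporting fields are all assumptions, not a description of the actual tool.

```python
import time
from pathlib import Path

STALE_DAYS = 365  # assumption: projects untouched for a year are archive candidates

def inventory_petrel_projects(root: str):
    """Walk a project share and report Petrel project files by age and size."""
    now = time.time()
    report = []
    for pet in Path(root).rglob("*.pet"):  # Petrel project files
        stat = pet.stat()
        age_days = (now - stat.st_mtime) / 86400
        report.append({
            "project": str(pet),
            "size_mb": round(stat.st_size / 2**20, 1),
            "age_days": round(age_days),
            "archive_candidate": age_days > STALE_DAYS,
        })
    return sorted(report, key=lambda r: r["age_days"], reverse=True)

if __name__ == "__main__":
    for row in inventory_petrel_projects("/data/petrel_projects"):  # hypothetical share
        print(row)
```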
Mario Fiorani reported that, a couple of years back, ENI’s users were having a hard time accessing basic geoscience data and were asking for access à la Google Maps. Enter ENI’s ‘InfoShop Maps,’ offering Google-style search across ‘official’ data and text sources. Some 3.6 million items have been associated with an XY location and indexed with Microsoft FAST. The solution was considered more flexible than using MetaCarta. Geo-indexes are stored in a 1TB geodatabase. Fine-tuning the geo-index took 60% of the eight-month project. InfoShop Maps was developed by Venice, Italy-based OverIT.
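ENI’s geo-index is built on Microsoft FAST and a geodatabase; the toy Python sketch below merely illustrates the underlying idea of associating documents with an XY location and answering a combined bounding-box and keyword query. The document identifiers and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class GeoDoc:
    doc_id: str
    title: str
    x: float  # e.g. longitude
    y: float  # e.g. latitude

class GeoIndex:
    """Toy in-memory geo-index; a production system would use a search engine plus a spatial database."""
    def __init__(self):
        self.docs = []

    def add(self, doc: GeoDoc):
        self.docs.append(doc)

    def search(self, xmin, ymin, xmax, ymax, keyword=None):
        """Return documents whose location falls in the box, optionally filtered by keyword."""
        hits = [d for d in self.docs if xmin <= d.x <= xmax and ymin <= d.y <= ymax]
        if keyword:
            hits = [d for d in hits if keyword.lower() in d.title.lower()]
        return hits

index = GeoIndex()
index.add(GeoDoc("RPT-001", "Well completion report, Block 15/06", 8.2, 44.4))  # hypothetical item
print(index.search(7.0, 43.0, 9.0, 45.0, keyword="completion"))
```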
IPL’s Chris Bradley and Trevor Hodges outlined how Composite Software’s data virtualization technology has been deployed by, inter alia, BP to hide the ugliness and complexity of SAP or Oracle LIMS. Virtualization ‘wraps’ such sources with a PPDM-based data model that feeds reporting and other apps. Hodges observed that ‘if you have LGC apps, they are moving towards data virtualization’ and/or a ‘data access layer.’ BP has around 1,000 applications in its exploration portfolio. Prior to the virtualization project, BP’s decision makers were creating their own data collections and storage systems, leading to ‘inconsistent reporting’ and low confidence. All of which was accepted as normal. Composite’s solution has brought about a ‘90% gain in productivity and a 40% reduction in development costs.’ BP has implemented a high-quality data model** addressing the MRO*** space and is now working on real-time drilling with Witsml. Bradley concluded that data virtualization is only one component of enterprise information management, referring to a suite of papers he presented at another data conference recently, which we will be reporting on next month.
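Composite’s technology itself is commercial; the following sketch, using Python’s built-in sqlite3 as a stand-in for the source system, only illustrates the general principle of a virtual view that exposes a simplified, PPDM-flavoured well shape over a source table without copying the data. Table and column names on the source side are invented for the example.

```python
import sqlite3

# Toy stand-in for a source system; in reality this would be SAP, a LIMS, OpenWorks, etc.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE src_wells (wid TEXT, wname TEXT, surf_lat REAL, surf_lon REAL)")
con.execute("INSERT INTO src_wells VALUES ('W001', 'Alpha-1', 57.1, 1.9)")

# A 'virtual' view over the source: consumers query a canonical, PPDM-flavoured shape
# (UWI, WELL_NAME, SURFACE_LATITUDE, SURFACE_LONGITUDE) and the mapping runs at query time.
con.execute("""
    CREATE VIEW well AS
    SELECT wid      AS uwi,
           wname    AS well_name,
           surf_lat AS surface_latitude,
           surf_lon AS surface_longitude
    FROM src_wells
""")

print(con.execute("SELECT uwi, well_name FROM well").fetchall())
```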
Samit Sengupta (Geologix) related a tale of ‘how Witsml saved the day’ for a deepwater West African operator. Rig sites still usually supply only binary Wits, which makes the data harder to use and error-prone. Geologix managed to add metadata and translate the real-time Wits feeds to Witsml, creating curve data and mud log objects on the fly. A ‘cloud’ infrastructure, WellStore On-line, feeds conditioned Witsml on to users, with some log processing for gas, pore pressure and net pay en route. The system remains exposed to errors in the inbound Wits feeds, and care is required at crew change, when the channels may get swapped.
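Geologix’ translator is proprietary; the sketch below shows the principle only, parsing a much-simplified Wits-style block (four-digit record/item codes followed by values), attaching channel metadata and emitting a minimal Witsml-flavoured log fragment. The codes, mnemonics and element names are illustrative assumptions, not the published Wits or Witsml specifications.

```python
import xml.etree.ElementTree as ET

# Hypothetical channel metadata keyed on record+item code (illustrative, not the real Wits tables).
CHANNEL_META = {
    "0108": {"mnemonic": "DEPT", "unit": "m",   "description": "bit depth"},
    "0113": {"mnemonic": "ROP",  "unit": "m/h", "description": "rate of penetration"},
}

def parse_wits_block(block: str) -> dict:
    """Parse a simplified Wits-style block: a four-digit code followed by a value on each line."""
    values = {}
    for line in block.splitlines():
        line = line.strip()
        if not line or line in ("&&", "!!"):  # block delimiters
            continue
        code, value = line[:4], line[4:]
        values[code] = float(value)
    return values

def to_witsml_like(values: dict) -> str:
    """Emit a minimal Witsml-flavoured <log> fragment (illustrative, not schema-valid)."""
    log = ET.Element("log")
    for code, value in values.items():
        meta = CHANNEL_META.get(code, {"mnemonic": code, "unit": "", "description": "unknown"})
        curve = ET.SubElement(log, "logCurveInfo", uom=meta["unit"])
        ET.SubElement(curve, "mnemonic").text = meta["mnemonic"]
        ET.SubElement(curve, "value").text = str(value)
    return ET.tostring(log, encoding="unicode")

block = "&&\n01081234.5\n011355.0\n!!"
print(to_witsml_like(parse_wits_block(block)))
```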
Jess Kozman (Westheimer and Adnoc unit Mubadala Petroleum) described a ‘standardized data platform for an ambitious company.’ Previous case studies have failed to establish a correlation between data management and financial performance. Kozman believes he has discovered why: he has added a ‘complexity’ metric to the analysis, encompassing ‘technology, company size, geographic diversity and focus.’ For instance, the complexity of a pure-play domestic exploration shop will be less than that of an international outfit.
Armed with his findings, Kozman showed management how they could increase production with better data management. The study also pinpointed a lack of resources (not technology) as Mubadala’s true problem. A change management process à la Harvard Business School was enacted. This focused on three ‘quick win’ projects, all people/process related rather than software or technology: ‘Folks expected me to recommend a new data model.’ While there was no technology spend in the first six months of the project, a company-wide rollout of SharePoint 2010 was hijacked and used to plot operated licenses and track progress with graphics and Google Earth.
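The presentation did not go into implementation detail; a minimal sketch of one piece of such a workflow is given below, generating a KML file of operated licenses (names, coordinates and status notes invented) that Google Earth can display directly.

```python
import xml.etree.ElementTree as ET

# Hypothetical operated licenses: name, longitude, latitude and a progress note.
LICENSES = [
    ("Block A", 101.5, 6.2, "Seismic reprocessing 80% complete"),
    ("Block B", 114.1, 4.6, "Well data audit in progress"),
]

def licenses_to_kml(licenses) -> str:
    """Build a KML document with one placemark per license."""
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    for name, lon, lat, status in licenses:
        pm = ET.SubElement(doc, "Placemark")
        ET.SubElement(pm, "name").text = name
        ET.SubElement(pm, "description").text = status
        point = ET.SubElement(pm, "Point")
        ET.SubElement(point, "coordinates").text = f"{lon},{lat},0"  # lon,lat,altitude
    return ET.tostring(kml, encoding="unicode")

with open("licenses.kml", "w") as f:
    f.write(licenses_to_kml(LICENSES))
```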
Jim Whelan took a very top-down approach to ExxonMobil’s data management, drafting a letter to be signed by CEO Rex Tillerson that established a strong data ownership and governance model. This has resulted in a standard, global environment of processes and tools and an ongoing effort to continually enhance data in Exxon. Incoming data goes to the ‘data room’ for QC and legal ownership verification. Raw data goes to Exxon’s ‘InfoStore’ environment before loading to Petrel. InfoStore comprises the Exxon subsurface master database, a log database (Recall), a seismic database (PetroWeb) and a standardized LAN. Interpreted and cleansed data goes to the ‘cross-function database’ (XFDB), which is used in exploration, production and development and by the asset. Chunks of XFDB data can be carved out for sale as required. The system has proved its worth in the non-conventional arena (see this month’s lead). A good dataset can add tens to hundreds of millions of dollars to the sale price in a major transaction. Front-end tools such as the SharePoint-based ‘ShowMe’ offer a GIS interface to Exxon’s ‘prize jewels.’ Blueback’s project management package is being tested on Exxon’s plethoric Petrel projects (5,000 at the last count). ‘There is so much to do and so little money!’
Kishore Yedlapalli revealed that Shell today has around 50 petabytes of data online, a figure set to grow tenfold in the next three years. In general, the industry’s data is in poor shape. Often, quality is not even measured and folks shy away from corporate data sources. Many are too busy to reach out to data suppliers to explain requirements and fix errors up front. The push for improvement is coming from the top: CEO Peter Voser said recently that Shell needs to ‘improve processes and use data better.’ Shell is implementing a single top-down, business-oriented KPI per organizational unit, along with bottom-up reports of errors and remedial hints. These are rolled up into global, regional, asset and data-source traffic lights/KPIs. Data management is an ‘eternal’ issue and should not be treated as a ‘project’ but as continuous improvement. Despite Shell’s KPI fixation, ‘You should not believe in green traffic lights; often only a small data sample is actually checked.’
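Shell’s KPI machinery was not described in detail; purely as an illustration, the sketch below rolls hypothetical rule-level data-quality results up into one traffic-light KPI per organizational unit. The rules, counts and thresholds are arbitrary assumptions, and, as Yedlapalli warned, a green light says nothing about how much data was actually sampled.

```python
# Hypothetical rule results: (organizational unit, rule, records checked, records passing).
RESULTS = [
    ("Asset North", "well header has UWI",      1200, 1188),
    ("Asset North", "checkshot within TD",        300,  240),
    ("Asset South", "log curve units populated",  900,  899),
]

THRESHOLDS = (0.95, 0.85)  # arbitrary: >=95% green, >=85% amber, otherwise red

def traffic_light(pass_rate: float) -> str:
    green, amber = THRESHOLDS
    return "green" if pass_rate >= green else "amber" if pass_rate >= amber else "red"

def roll_up(results):
    """Aggregate rule-level pass counts into one KPI per organizational unit."""
    totals = {}
    for unit, _rule, checked, passed in results:
        c, p = totals.get(unit, (0, 0))
        totals[unit] = (c + checked, p + passed)
    return {unit: (p / c, traffic_light(p / c)) for unit, (c, p) in totals.items()}

for unit, (rate, light) in roll_up(RESULTS).items():
    print(f"{unit}: {rate:.1%} -> {light}")
```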
Neuralog’s Robert Best reported that support for Finder will stop at the end of the year, although no one’s sure which year! In any event, the time is ripe for a shift from an ‘end-of-life’ legacy system like Finder to a PPDM-based solution (read NeuraDB). Such a modern solution allows reference value management, business rules and versioning to promote operated wells over public data sources. Upstream IM presents a broad problem requiring a customizable solution. Mapping from Finder to PPDM can leverage NeuraDB, Informatica or other ETL tooling. After migration, workflows can be tested to SAP, the CDB and on to Petrel. A straw poll showed that maybe three attendees still use Finder, although there are still users in the States and lots in the Middle East.
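Neither the Finder nor the PPDM schema is reproduced here; the sketch below only illustrates the kind of column mapping an ETL step performs when moving a well header from a legacy source into PPDM-style naming. The source column names are invented, and the targets follow the general PPDM WELL table convention rather than any particular release.

```python
# Illustrative column map from a legacy (Finder-like) well header to PPDM-style names.
COLUMN_MAP = {
    "well_id":   "UWI",
    "well_name": "WELL_NAME",
    "spud_date": "SPUD_DATE",
    "surf_lat":  "SURFACE_LATITUDE",
    "surf_long": "SURFACE_LONGITUDE",
}

def migrate_row(legacy_row: dict) -> dict:
    """Map one legacy row to the target shape, flagging any unmapped source columns."""
    target = {ppdm: legacy_row.get(src) for src, ppdm in COLUMN_MAP.items()}
    target["REMARK"] = "migrated from legacy source"
    unmapped = sorted(set(legacy_row) - set(COLUMN_MAP))
    if unmapped:
        target["REMARK"] += f"; unmapped source columns: {unmapped}"
    return target

print(migrate_row({"well_id": "W-42", "well_name": "Thistle-1", "spud_date": "2011-03-14",
                   "surf_lat": 58.2, "surf_long": 1.1, "rig_name": "Ocean Star"}))
```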
Dave Wallis (OFS Portal) traced the history of e-business standards in the upstream, ending up with PIDX, the global forum for IT standards for oil and gas e-business. PIDX was originally based on EDI but is now ‘all XML.’ It has been successful: one major’s annual PIDX invoices are worth $1bn, and Chevron uses the protocol for 98% of its business. PIDX documents exist for purchase orders, invoices and field tickets. For smaller suppliers, connectivity extends to QuickBooks and Excel. PIDX also manages units of measure, currency and non-repudiation. The free, technology-neutral standard works with SAP and Oracle Financials. PIDX 2.0 has just launched, with mobile transactional capability and the inclusion of UNSPSC codes for ‘more granular’ asset tracking. Downstream, PIDX maintains the refined products code list for the industry, covering ‘100% of the US market.’ The next PIDX meeting, in April, will be held at the prestigious RAC club in London.
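The PIDX schemas themselves are maintained by the organization and are not reproduced here; the following is only a schematic stand-in showing the shape of an XML e-invoice with line items carrying UNSPSC-style codes. All element names and the sample classification code are illustrative assumptions, not the published PIDX standard.

```python
import xml.etree.ElementTree as ET

def build_invoice(number: str, currency: str, lines) -> str:
    """Build a schematic e-invoice (element names illustrative, not the PIDX schema)."""
    inv = ET.Element("Invoice", number=number, currency=currency)
    for description, unspsc, qty, unit_price in lines:
        li = ET.SubElement(inv, "LineItem", unspsc=unspsc)  # UNSPSC-style classification code (placeholder)
        ET.SubElement(li, "Description").text = description
        ET.SubElement(li, "Quantity").text = str(qty)
        ET.SubElement(li, "UnitPrice").text = f"{unit_price:.2f}"
        ET.SubElement(li, "LineTotal").text = f"{qty * unit_price:.2f}"
    return ET.tostring(inv, encoding="unicode")

print(build_invoice("INV-2014-0042", "USD",
                    [("Drill bit, 8.5in PDC", "00000000", 2, 14500.00)]))
```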
* special interest group.
** a reference to Matthew West’s book?
*** maintenance, repair and operations.
© Oil IT Journal - all rights reserved.