PNEC Data Integration, Houston

Hadoop and ‘big’ oil and gas data. Factory data management for shale gas. Talisman’s data decade. W&T Offshore and 3GiG’s Prospect Director. Qatar Shell and the upstream taxonomy. BP’s new data organization. Schlumberger manages the managers. Shell’s Petrel reference projects. Exxon’s production data framework. Aramco improves BAD! Petris, Neuralog and Petrosys strut data stuff.

Some 500 delegates from 30 countries attended Phil Crouse’s PNEC Data Integration conference in Houston last month. What’s new this year in data management? There were quite a few companies climbing aboard the Hadoop bandwagon—but we are holding off on our Hadoop reporting until we can see beyond the buzzwords. The shale gas bonanza has caused a rapid evolution in data management to adapt to the new ‘factory drilling’ paradigm. Data management is itself maturing. The evaluation and execution of complex projects is now mainstream.

Talisman’s Lonnie Chin provided the keynote with a look back over ten years of information management. The data management landscape of a decade ago covered much of what is of concern today—master data management, centralized stewardship, federated data integration and spatial. Since then, data intensity has risen with novel data types and more sophisticated requirements—including shale gas, crucial to Talisman’s international development. These trends, along with mobile devices and more demanding users, call for better information quality. Or as Chin puts it, ‘Stop giving smart users dumb apps!’ On the technology front, Chin highlighted ESRI’s move to Microsoft Silverlight, ETL technology for tying disparate geochemical data sources together, and Spotfire and INT for data analysis and visualization. Software usage is evaluated with CrazyEgg’s ‘heat mapping’ technology. For Chin, today is an ‘exciting and evolutionary time to be in E&P data management!’

Brian Richardson presented (on behalf of Gerald Edwards) Qatar Petroleum’s data management effort on the North Field—the largest non-associated gas field in the world. North Field has several major joint venture stakeholders each with their own databases and special requirements. A multidisciplinary E&P database (MDDB) has been developed to allow individual JVs to use any application or database. QP itself focuses on operational and archive data for project oversight. Partners Qatargas and RasGas need more detail and are ‘mining the MDDB constantly.’ Sharing data means more scrutiny and a need for diligence. The North Field is a $40 billion capital project and the funds are there to do the MDDB right. Alongside G&G data the project includes well test, laboratory information systems and more. Data entry includes electronic sign-off.

Carol Hartupee presented W&T Offshore’s prospect information management system, developed with help from co-presenter Kandy Lukats’ company, 3GiG. Small companies like W&T need prospects to survive, but best practices for prospect management can be hard to identify and capture. Enter ‘software-led design,’ a new way to build and customize information management systems. W&T has leveraged 3GiG’s Prospect Director to replace the spreadsheets in which key decision-support information was locked away. A ‘parking lot’ concept evaluates why a prospect did not work and determines its fate. This could be ‘straight to the graveyard’ or into storage for ‘resuscitation’ if economics change and trigger renewed interest. Lukats says it is important to keep things simple and build an 80% solution that is usable by all—especially the CEO.

Andrew Lennon presented Qatar Shell’s use of Flare Consultants’ upstream taxonomy to improve subsurface and wells document and records management. An internal study found that Qatar Shell’s document management suffered from uncertainty as to which versions were final and where they were located. QS appointed dedicated technical data and document managers and brought in Houston-based Certis to do a top-down analysis and sell the project. Lennon observed that much information management is simple, it just needs to be done well, as in a hard copy library. Document management processes have been designed to be technology independent, a good thing as the current LiveLink repository is being replaced by SharePoint. Folder structures have been simplified and made more consistent. Publishing means preparing documents for ‘consumption’ by thousands of potential users. Titles, authors and other tags are added and the granularity of publishing established as file, folder or a pointer to a hard copy location. The whole system runs under the control of a ‘semantic map,’ an ontology and knowledge representation derived from Flare’s Catalog. This works on input terminology, with automatic classification, and for search. For instance, a search for ‘Eocene foraminifera’ will find ‘Priabonian.’ ‘Sand prediction’ is recognized as a production technique and ‘failure’ as a problem. Certis is involved with publication of QS’ legacy data. Version control, rarely used before, is now standard, along with e-signatures to speed up the process. As one reservoir engineer remarked, ‘We are finding stuff we never knew we had.’ Tag clouds and tree maps have proved useful—for instance to spot missing documents in a colored QC matrix.
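
To illustrate the idea, here is a minimal sketch of ontology-driven query expansion, assuming a small hypothetical synonym and narrower-term table; the terms and code are illustrative only and are not drawn from Flare’s actual Catalog.

```python
# Minimal sketch of ontology-driven query expansion for document search.
# The term hierarchy below is hypothetical, not Flare's actual Catalog.
SEMANTIC_MAP = {
    "eocene": ["ypresian", "lutetian", "bartonian", "priabonian"],
    "sand prediction": ["sanding", "sand production"],   # production technique
    "failure": ["breakdown", "malfunction"],              # problem category
}

def expand_query(query: str) -> set[str]:
    """Return the query plus any narrower or synonymous terms."""
    terms = {query.lower()}
    for concept, narrower in SEMANTIC_MAP.items():
        if concept in query.lower():
            terms.update(narrower)
    return terms

def search(documents: dict[str, str], query: str) -> list[str]:
    """Return titles of documents whose text matches any expanded term."""
    terms = expand_query(query)
    return [title for title, text in documents.items()
            if any(t in text.lower() for t in terms)]

docs = {
    "Well A-1 biostrat report": "Priabonian foraminifera assemblages ...",
    "Well B-2 completion review": "sanding observed during early production ...",
}
print(search(docs, "Eocene foraminifera"))   # finds the Priabonian report
```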

Jean Trosclair revealed how Shell manages Petrel reference projects. The popular Petrel has had reference projects since 2007—Shell was an early adopter and has been pushing for unit of measure and cartographic awareness. As new features are added, awkwardness and complexity also rise. Cartography in Petrel still requires oversight. Interoperability with OpenWorks requires OpenSpirit links. On the other hand, the reference project provides excellent audit features—‘CSI Petrel’ is great for data forensics. A reference project has clean and final data only, and one owner. Assets such as complex fields, salt bodies and the like are especially amenable to the reference project approach. Upstream of the reference project, data is reviewed intensively to establish best well paths, and curves are checked for ‘final’ status, loaded to Recall and populated ‘safely’ with OpenSpirit. Seismic attributes and perforation data likewise undergo a thorough review prior to load. Drilling proved to be ‘an unexpected source of accurate gyro surveys,’ as the pre-sidetrack survey is likely to be much more accurate than the initial survey. Trosclair related an interesting Gulf of Mexico geodetic anecdote. Prior to 2000, many wells were drilled using the ‘Molodensky transform,’ which gave bad coordinates. Shell resurveyed its platforms and changed the data in its internal databases. The situation regarding vendor data was ‘a huge mess!’ The Macondo incident afforded a 14-month breather for Shell to fix the problem. Issues remain in reconciling engineering and geoscience databases. ‘Consistent, enforced well header data is critical. Verified auto synch is the ultimate goal.’

Nadia Ansari (ExxonMobil) presented the results of her University of Houston thesis on a ‘conceptual data framework for production surveillance.’ Forecasting needs data, but work practices and a myriad of tools deny a simple approach. Systems provide overlapping and mutually exclusive functionality. Most focus on presenting data to engineers for decision support and are ‘high maintenance’ solutions. Ansari’s data integration framework includes a ‘forecast’ metadata type and a single shared repository. The system will be in production this summer. The project is looking to confirm that the act of forecasting can be codified, and to develop ‘organizational data mining’ and rollup to the corporate level. ‘Reservoir engineers spend too much time doing data management in today’s high maintenance environment—we need to speed up the process.’
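
As an illustration only, here is a minimal sketch of what a ‘forecast’ metadata record and shared repository might look like; the field names are assumptions made for the example, not ExxonMobil’s actual data model.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical 'forecast' metadata record for a shared surveillance repository.
# Field names are illustrative only, not ExxonMobil's actual schema.
@dataclass
class ForecastMetadata:
    well_id: str                 # link to the master well record
    forecast_date: date          # when the forecast was made
    method: str                  # e.g. decline curve, material balance
    author: str                  # engineer responsible
    source_datasets: list[str] = field(default_factory=list)  # input data used
    remarks: str = ""

repo: list[ForecastMetadata] = []   # stand-in for the single shared repository
repo.append(ForecastMetadata(
    well_id="W-001",
    forecast_date=date(2012, 6, 1),
    method="decline curve",
    author="n.ansari",
    source_datasets=["prod_monthly", "well_tests"],
))
```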

ExxonMobil’s John Ossege and Scott Robinson have proposed a methodology for quantifying the cost of poor quality data and hence evaluating the economics of data cleanup. Data cleanup frequently unearths lost proprietary data. Enter the data metric ‘X,’ the amount of retrieved lost data whose value equals the cost of the cleanup, in other words the break-even point for a cleanup project. X is calculated for various projects and those with a low X are prioritized. The method has proved successful in identifying projects with large amounts of boxed paper records where, frequently, data is lost ‘in situ.’
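
A minimal sketch of the break-even arithmetic this implies, using purely illustrative cleanup costs and an assumed value per recovered data item (neither figure comes from the presentation):

```python
# Break-even sketch for data cleanup economics (illustrative numbers only).
def breakeven_items(cleanup_cost: float, value_per_recovered_item: float) -> float:
    """Items of lost data that must be recovered to pay back the cleanup ('X')."""
    return cleanup_cost / value_per_recovered_item

projects = {
    # project name: (estimated cleanup cost $, assumed value per recovered item $)
    "boxed paper records, warehouse A": (120_000, 2_000),
    "legacy tape transcription":        (300_000, 1_500),
}

# Rank projects by X: the lower the break-even, the sooner cleanup pays off.
for name, (cost, value) in sorted(projects.items(),
                                  key=lambda kv: breakeven_items(*kv[1])):
    print(f"{name}: X = {breakeven_items(cost, value):.0f} items")
```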

Rusty Foreman and Maja Stone presented BP’s new data management strategy ‘for the long haul.’ A 2009 study convinced BP that it had to ‘do something about data management.’ Too many good projects had wilted and died from unsustainable data issues. The business case at its most simple is that BP spends around $1 billion per year buying and acquiring data. If data were a physical asset, nobody would think twice about spending the money. But data does not ‘corrode’ visibly. Earlier work by Gavid Goodland had established a data management operational model for upstream technical data. This is now being extended with governance, professional discipline and performance management. Change management specialist Stone outlined BP’s three-year execution plan. BP’s renewed interest in data management has sparked a major recruiting drive, a newly defined career path and a training program. BP has also upped its PNEC attendance from ‘around one’ a couple of years back to the current 33!

Day two started out with Jim Pritchett’s (Petris) view from the vendor trenches. Data management has come a long way since PNEC began. It has ‘taken longer than we thought,’ but now ‘we have commercial products and interest in data management is at a peak.’ Integration and workflow management reduce errors and inefficiencies and are real money makers. But even today, projects are challenged by the lack of standard naming conventions. Against this is the need for speed, especially in shale plays where data quality issues and constant infrastructure changes abound. But the reality is that most failures are due to lack of budget support for the true cost of a project. Complex operations, the changing roles of stakeholders and systems, and the use of untested APIs lead to a situation where the data manager is both a ‘negotiator and technical policeman.’ In the face of which, management may tire and pull the plug.

John Berkin (Schlumberger) observed that ‘data managers don’t leave because of the pay, but because their job does not evolve and they have no prospects.’ Schlumberger now has a career development path for its data managers. The ‘business impact through local development and integrated training’ (Build-IT) program includes course work, on the job training and self study. The program gives IT folk a basic introduction to geosciences even though ‘explaining a deviation survey to an IT guy’ is a tough call. Schlumberger’s ‘Eureka’ technical careers program and the competency management initiative do the rest. Nirvana is the ‘by invitation only’ status of a Schlumberger Fellow.

Rodney Brown presented ExxonMobil’s open source library of Microsoft .NET components for handling Energistics’ WITSML and PRODML data standards. These are available under an Apache 2 license on SourceForge.

Fast and furious is not only the preserve of shale gas players. Shell/Exxon joint venture Aera Energy drills and logs upwards of 50 wells per month. Robert Fairman advocates a simple solution to a complex situation: ‘get it right first time!’ Aera thinks of the field as a shop floor/factory. The POSC/Epicentre data model is still in use, now a data warehouse with 20 billion facts! Around 2 km of log curves and 1.3 million facts are added per month. Aera has evolved an elaborate data quality process with definitions of ‘quality’ and assigned roles. In fact, data quality and data management are used interchangeably. In the Q&A, Fairman described an Aera data manager as ‘part petrophysicist, part geologist, part driller and part IT.’ The discipline itself sits in its own ‘center for process excellence,’ outside of both IT and the business.

Muhammad Readean presented Saudi Aramco’s data quality improvement effort, including the use of Jaro-Winkler distance estimates of metadata string similarity, data mining to predict missing values and rule-based classification of curve mnemonics.
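
For readers unfamiliar with the technique, here is a minimal, self-contained sketch of Jaro-Winkler similarity applied to curve mnemonics; the example mnemonics and the 0.85 threshold are illustrative, not Aramco’s actual rules.

```python
# Minimal Jaro-Winkler similarity for fuzzy matching of metadata strings
# such as curve mnemonics. Example data and threshold are illustrative only.
def jaro(s1: str, s2: str) -> float:
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if not len1 or not len2:
        return 0.0
    window = max(max(len1, len2) // 2 - 1, 0)
    m1, m2 = [False] * len1, [False] * len2
    matches = 0
    for i, c in enumerate(s1):                      # count matching characters
        for j in range(max(0, i - window), min(i + window + 1, len2)):
            if not m2[j] and s2[j] == c:
                m1[i], m2[j] = True, True
                matches += 1
                break
    if not matches:
        return 0.0
    k = transpositions = 0
    for i in range(len1):                           # count transpositions
        if m1[i]:
            while not m2[k]:
                k += 1
            if s1[i] != s2[k]:
                transpositions += 1
            k += 1
    transpositions //= 2
    return (matches / len1 + matches / len2 +
            (matches - transpositions) / matches) / 3.0

def jaro_winkler(s1: str, s2: str, p: float = 0.1) -> float:
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1[:4], s2[:4]):                # common prefix, max 4 chars
        if a != b:
            break
        prefix += 1
    return j + prefix * p * (1.0 - j)

# Flag candidate duplicates among curve mnemonics.
mnemonics = ["DEPT", "DEPTH", "GR", "GRD", "RHOB"]
for i, a in enumerate(mnemonics):
    for b in mnemonics[i + 1:]:
        score = jaro_winkler(a, b)
        if score >= 0.85:
            print(f"{a} ~ {b}: {score:.2f}")   # DEPT ~ DEPTH, GR ~ GRD
```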

Volker Hirsinger (Petrosys) outlined new ‘FracDB’ hydraulic fracturing data workflows developed for a major operator. Fracking involves a vast range of techniques and details to capture, and there are ‘a lot of eyes on the fracking process.’ It is better to stay ahead of pressure from the regulator and optimize capture today.

Tarun Chandrasekhar presented Neuralog’s work for Continental Resources on data management for shale gas exploration. NeuraDB has proved ‘robust and easy to use.’ The project also saw a port from Oracle to SQL Server. PPDM’s versioning capability is used to cherry-pick data from different sources according to Continental’s business rules. TechHit’s ‘EZDetach’ application is used to extract emailed partner data into staging folders, organized by region and well name, before loading to NeuraDB. The process was designed to be ‘non disruptive’ to existing workflows. LogiXML was used to develop custom web forms.
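
A minimal sketch of this kind of staging step, assuming a hypothetical region_well_description filename convention for the extracted attachments (not Continental’s actual rules):

```python
# Sort extracted e-mail attachments into staging folders by region and well
# name prior to database load. The region_well_* filename convention and the
# folder layout are illustrative assumptions.
from pathlib import Path
import shutil

INBOX = Path("extracted_attachments")   # where the extraction step drops files
STAGING = Path("staging")

def stage(filename: Path) -> Path:
    """Move one file into staging/<region>/<well>/ based on its name."""
    region, well, *_ = filename.stem.split("_")
    target_dir = STAGING / region / well
    target_dir.mkdir(parents=True, exist_ok=True)
    return Path(shutil.move(str(filename), str(target_dir / filename.name)))

if __name__ == "__main__":
    for f in INBOX.glob("*_*_*.*"):     # e.g. BAKKEN_SMITH-1_dirsurvey.csv
        print("staged:", stage(f))
```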

Jawad Al-Khalaf outlined Saudi Aramco’s ‘BAD’ approach of Business process, Applications and Data to mitigate data growth. Aramco is adding 12 petabytes of seismic data this year, and disk space is very expensive when the whole workflow is considered. Storage virtualization is being used, along with de-duplication and data compression. Making users pay for what they store proved effective—on one file system, 46 terabytes of disk was unused for over a year. Cleanup saved 54% and simple Linux compression another 14% for a total saving of 59%.

Finally, a quote from a happy PNEC 2012 attendee: ‘Best conference yet—they keep getting better. Really well run. Relevant, useful topics.’ Scott Robinson, ExxonMobil. We couldn’t agree more! More from PNEC.

© Oil IT Journal - all rights reserved.