Our best paper award for the 2013 PNEC goes to Martha Gardill for her presentation of Pioneer’s four-year journey to data virtualization. Pioneer’s activity in unconventional exploration has seen the deployment of multiple ‘best of breed’ applications leveraging different architectures and data integration mechanisms. The downside of best of breed is that the onus falls on the operating company to bring it all together. This can be tricky as the authoritative source of data may not be evident and there may be overlap in application capability. At the start of the program in 2009, Pioneer evaluated three options—direct access to data stores, build a data warehouse, or develop a ‘self service’ data system. The latter was chosen partly because of application scalability issues and partly over compliance concerns with multi-user access to data. The solution, Pioneer’s ‘self serve data system’ (SSDS), leverages Composite Software’s data virtualization engine alongside Tibco’s ActiveMatrix BusinessWorks business automation platform. SSDS has democratized data access for Pioneer’s users. Power users can develop views, access applications and use reporting tools. Most use cases are for read-only access, but the Tibco messaging bus also allows for updates. In the Q&A, Gardill revealed that the biggest challenge was finding the right data virtualization tool.
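The internals of Pioneer’s SSDS were not described in detail, but the underlying data-virtualization pattern can be sketched as follows: a read-only virtual view federates several live sources at query time rather than copying rows into a warehouse. All names and sources here are invented for illustration.

```python
# Minimal sketch of the data-virtualization pattern: a virtual view pulls
# from each authoritative source at request time, so no data is duplicated
# into a central store. Names and mock sources are illustrative only.

from dataclasses import dataclass
from typing import Callable, Dict, List

Row = Dict[str, object]


@dataclass
class VirtualView:
    """Read-only view composed from live source callables (no data copied)."""
    sources: List[Callable[[], List[Row]]]

    def query(self, **filters) -> List[Row]:
        # Fetch from every source at call time, then apply the caller's
        # equality filters to the combined result.
        rows = [r for fetch in self.sources for r in fetch()]
        return [r for r in rows
                if all(r.get(k) == v for k, v in filters.items())]


# Two mock sources standing in for, e.g., a G&G store and a drilling database.
gg_store = lambda: [{"well": "A-1", "td_m": 3200}]
drill_db = lambda: [{"well": "A-1", "status": "producing"},
                    {"well": "B-2", "status": "drilling"}]

wells = VirtualView(sources=[gg_store, drill_db])
print(wells.query(well="A-1"))  # rows from both sources, fetched live
```

Because sources are consulted on every call, a change in the drilling database is visible on the next query with no reload step—the trade-off against a warehouse is query-time latency rather than staleness.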
Ian Barron (RoQC Data Management) asked, ‘Is Petrel data management still an oxymoron?’ Schlumberger’s Petrel interpretation flagship has been ‘well received by users but reviled by the data management community.’ The problem is that with Petrel, it is too easy to get data in and copy it around. Soon, nobody knows which is the correct version of the truth. The current dogma is that all Petrel items need quality flags and metadata, but no user does this.
However, things are changing as Schlumberger gears up for ‘serious’ data management. Current ‘straw man’ functionality allows standards to be embedded in Petrel deployments, helping to find and fix non-compliant data. In Petrel Studio, standards can be broadcast to local projects. Third party tools such as Blueback’s project tracker can help de-dupe project data and check coordinate reference systems. RoQC’s eponymous toolset further enhances data audit, looking for unlikely or missing values. Barron concluded that Petrel data management has come a long way in the past year and foresaw further improvement in the very near future.
The rapid ‘factory drilling’ approach to unconventional development in the US is impacting data management as Tiandi Energy’s Richard Ward outlined. Working for Hess Corporation, Tiandi has developed a streamlined approach (a.k.a. the data factory) to gathering and consolidating legacy data, much of which has been given a new lease of life as source rocks are re-evaluated for their reservoir potential. Unconventional legacy data is a ‘back to the future’ problem—the Bakken shale was first drilled in the 1920s. Core data is of particular interest to unconventional operators—for total organic carbon evaluation and rock mechanical studies. Rather than seeking to capture all this diverse information in ‘the mother of all databases,’ Tiandi has developed a simple transmission standard so that teams working on core data records can capture the required information, which can then be consolidated and loaded to Perigon’s iPoint. The workflow allows Tiandi to process ‘hundreds or thousands’ of wells per month. The actual standard transmission format may be as simple as an Excel spreadsheet.
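Tiandi’s actual transmission standard was not published; the sketch below shows the general idea under stated assumptions: each team submits a flat, spreadsheet-like file (CSV here) against an agreed header, and a loader validates and consolidates submissions before hand-off to a master store. Column names are invented for illustration.

```python
# Sketch of a simple 'transmission standard' workflow: teams capture core
# data in a flat file with an agreed header; a loader rejects non-compliant
# submissions and de-duplicates records before loading to a master store.
# Field names and sample data are illustrative only.

import csv
import io

TEMPLATE_FIELDS = ["uwi", "depth_top_m", "depth_base_m", "toc_pct", "analyst"]


def load_transmission(text: str):
    """Validate one team's submission against the agreed header and parse it."""
    reader = csv.DictReader(io.StringIO(text))
    if reader.fieldnames != TEMPLATE_FIELDS:
        raise ValueError(f"non-compliant header: {reader.fieldnames}")
    return list(reader)


def consolidate(*submissions):
    """Merge submissions from many teams, de-duplicating on (uwi, depth_top_m)."""
    seen, merged = set(), []
    for sub in submissions:
        for row in sub:
            key = (row["uwi"], row["depth_top_m"])
            if key not in seen:
                seen.add(key)
                merged.append(row)
    return merged


team_a = ("uwi,depth_top_m,depth_base_m,toc_pct,analyst\n"
          "100010001000,2950,2960,4.2,JB\n")
team_b = ("uwi,depth_top_m,depth_base_m,toc_pct,analyst\n"
          "100010001000,2950,2960,4.2,JB\n"
          "100020002000,3100,3110,3.1,KL\n")

records = consolidate(load_transmission(team_a), load_transmission(team_b))
print(len(records))  # duplicates collapsed before loading to the master store
```

Keeping the interchange format this simple is what makes the ‘hundreds or thousands of wells per month’ throughput plausible: validation is a header check, and consolidation is a key-based merge rather than a full database integration.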
Fred Schwering described how Talisman Energy was ‘breaking down the silos’ between geosciences and engineering with Landmark’s DecisionSpace collaborative well planning application. DecisionSpace needs feeding with data—surface data from ArcGIS, subsurface G&G data and drilling specs for well path planning. A rather complex workflow involves combining GIS data into a ‘feasibility layer’ showing possible pad locations. Geoscience interpretation from Petrel and drilling specs are imported via OpenSpirit, using OpenWorks as a staging database. Once everything is in the same place, DecisionSpace generates ‘really accurate’ drilling-ready well plans. The approach supports expensive, complex unconventional operations. DecisionSpace ‘knowledge nuggets,’ ad hoc well bore annotations, are used to flag anticipated drilling hazards.
Ernie Ostic gave a thinly veiled commercial for IBM’s InfoSphere metadata workbench as a means of tracking information ‘lineage.’ In general, the value of information degrades with time as crucial details of its provenance may be lost. This is a critical situation in the event of a safety incident—an HSE report may be available, but is it up to date and authoritative? Data lineage needs to be ‘ingrained’ in the enterprise culture. InfoSphere empowers users to capture lineage and reduce information latency.
Petrosys’ Volker Hirsinger reported on a thorough test drive of various cloud-based data storage options available to smaller multinationals. While the regular internet is OK for smaller documents, sharing large SEG-Y files or large databases is harder to do without an IT-supported WAN. Cloud-based storage is an attractive proposition. But not all clouds are equal. They may not behave as advertised and some lack an intuitive interface. SharePoint repositories can be hard to set up and have proved unstable. The ‘cloud’ may be a complex ecosystem of multiple stakeholders making for multiple potential points of failure. Public clouds like Dropbox are much faster than private clouds but may have file size restrictions. Legal issues and security are further concerns. In the end, Petrosys uses multiple cloud-based workflows in different contexts—along with FTP and USB/disk data transfer. More from PNEC.
© Oil IT Journal - all rights reserved.