Equinor has open sourced Seismic-ZFP, its seismic compression algorithm and Python library. David Wade was enthusiastic about the benefits of machine learning and big data but warned of the cost of ‘seeing all the benefits ending up in the pockets of cloud vendors’. While cloud data upload may be free, storage is costly, as is data egress. The trick is to compress data and cut storage costs. Enter LLNL’s ZFP algorithm for lossy compression of arrays of up to 4 dimensions. Equinor has adapted seismic cube volumes to suit disk access and enable extraction of arbitrary lines. The Seismic-ZFP header holds the SEGY header plus SGZ metadata. Compressed data quality shows ‘no meaningful difference’ at 8:1 compression. ‘Even 16:1 is OK’. Compressed data quality is ‘fine’ for machine learning applications. It may even be a prerequisite for the efficient implementation of some. Download with ‘pip install seismic-zfp’ or from GitHub.
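To see where the 8:1 figure comes from: fixed-rate compression stores a target number of bits per voxel, so 32-bit float samples at a nominal 4 bits/voxel give 8:1. The sketch below illustrates the arithmetic with a naive uniform quantizer; it is a stand-in for illustration only, not ZFP itself, whose block-transform coding is far more sophisticated.

```python
import random

def fixed_rate_quantize(samples, bits_per_voxel=4):
    """Naive fixed-rate lossy compression via uniform scalar quantization.
    Illustrative stand-in only -- not ZFP's actual block-transform coding."""
    lo, hi = min(samples), max(samples)
    levels = 2 ** bits_per_voxel - 1
    scale = levels / (hi - lo)
    codes = [round((s - lo) * scale) for s in samples]
    return codes, lo, hi, levels

def dequantize(codes, lo, hi, levels):
    """Reconstruct approximate amplitudes from the small integer codes."""
    return [c / levels * (hi - lo) + lo for c in codes]

random.seed(0)
trace = [random.gauss(0.0, 1.0) for _ in range(4096)]  # stand-in seismic trace
codes, lo, hi, levels = fixed_rate_quantize(trace, bits_per_voxel=4)

# 32-bit floats in, a nominal 4 bits per voxel out -> 8:1 compression
ratio = 32 / 4
max_error = max(abs(r - s) for r, s in zip(dequantize(codes, lo, hi, levels), trace))
```

The reconstruction error of such a quantizer is bounded by half a quantization step, `(hi - lo) / (2 * levels)`, which is the sense in which fixed-rate lossy compression trades a known bit budget for a known error bound.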
Vidar Furuholt (Aker BP) advocates data liberation and the free flow of information between upstream applications. Aker BP’s SWAP is a proof of concept of such ‘industry 4.0’ principles, with data organized in a data layer and decoupled access via an API. ‘Real data liberation in E&P can only be achieved with standards’ (like OSDU – see below). SWAP promises a vendor-agnostic workflow framework linking apps to data with a GUI/Python scripting endpoint. The system behaves as a workflow pipeline, but with added vendor neutrality. SWAP does not touch either services or data, issuing requests on a message queue, triggering one service after another and presenting a composite output to the user. The system can be built with open source libraries or by assembling cloud services such as Kubernetes/Terraform/Docker. SWAP runs under a Python/Jupyter Notebook and leverages a Google cloud pub/sub mechanism with support for Google cloud storage buckets. Use cases include seismic volume transforms and filtering with Aker BP’s own frequency match algorithm. Currently, seismic volumes reside in Cognite Data Fusion as a CDF data set. Furuholt acknowledged Baringa’s help in developing the solution.
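SWAP’s decoupling can be pictured as follows. This is a minimal sketch, assuming an in-process queue in place of Google cloud pub/sub, and hypothetical service names and payloads: the orchestrator never touches the data layer directly, it posts requests to a message bus and each service in turn consumes a request and returns its result.

```python
import queue

# Stand-in for the message queue: the orchestrator posts requests that
# trigger one (vendor-agnostic) service after another.
bus = queue.Queue()

def bandpass_service(request):
    """Toy 'seismic filter' service: keeps amplitudes inside a band."""
    lo, hi = request["band"]
    return [a for a in request["data"] if lo <= a <= hi]

def scale_service(request):
    """Toy 'transform' service: scales amplitudes by a factor."""
    return [a * request["factor"] for a in request["data"]]

SERVICES = {"bandpass": bandpass_service, "scale": scale_service}

def run_pipeline(data, steps):
    """Chain services via the bus and return the composite output."""
    for name, params in steps:
        bus.put({"service": name, "data": data, **params})
        msg = bus.get()                       # next service picks up the request
        data = SERVICES[msg["service"]](msg)  # ...and posts back its result
    return data

result = run_pipeline([1, 5, 12, 7],
                      [("bandpass", {"band": (2, 10)}), ("scale", {"factor": 2})])
print(result)  # [10, 14]
```

The point of the design, as described, is that adding a new vendor service only means registering another consumer on the queue; neither the orchestrator nor the other services change.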
Carlo Caso (Cognite AS) described a test of the SWAP concepts performed with Aker BP. Using a test data set of 2,000 SEGY files and 3D surveys, Cognite reproduced the DISKOS dataset in the cloud. Performant data access is provided with protocol buffers and gRPC, a high-performance remote procedure call framework. End users can quickly discover and run multiple vendor services. Cloud cost models are different and impact data flow: data egress can exceed storage cost, and the cloud is optimized for web services, not HPC. Not all legacy apps are compatible with remote storage. Legacy apps often have their own data store and their developers may have a vested interest in the status quo. But industry is asking for open APIs, open standards for data formats and models, and data adaptors for legacy apps. Aker BP and Cognite are making their open API public. Cognite is also an ‘active member’ of OSDU and has contributed its API to the initiative, where it is presumably seen as an alternative/competing data component to Schlumberger’s OpenVDS.
ENI’s Marco Piantanida warned that use of Energistics’ RESQML reservoir model data format can lead to an unmanageable mess of RESQML files. A better solution is to use the Energistics transfer protocol (ETP), allowing apps to listen to each other without file-based data exchange. ENI has productized the approach as ‘Geo-Apps’, offering data exchange and tracking for RESQML. Binary and other log and fault data go into the official ENI data repository via a RESQML disaggregation layer. An ‘e-RESQML’ GUI controls the workflow with a neat exploded-graph representation of RESQML contents. The system adds a model-tracking database to RESQML. TechEdge, Kwantis and Oracle were involved in the project.
Caroline Le Turdu gave an unashamedly commercial presentation of Schlumberger’s ExplorePlan. ExplorePlan, a joint Equinor/Schlumberger AI program, was developed to combat over-optimistic pre-drill reserves estimates. The solution combines Schlumberger’s Delfi ‘cognitive’ E&P environment with apps including Petrel and GeoX. It shares prospect data with team members for peer review along the exploration ‘funnel’. Results are aggregated as dashboards in Spotfire. The system helps operators build on existing knowledge, remove bias and get better at estimating.
Gael Joffre (OMV) presented a straightforward application of machine learning which sets out to use analytics to replace well tests*. Gas fields in New Zealand are automated and flow is commingled. Wellhead gas meters experience time delays and water encroachment, and the situation is complicated by the pipeline retention time between platform and shore. OMV uses PI historian data and a ‘hybrid’ interpretation platform built with scripts and solvers from Cognite CDF, Power BI, Grafana, Azure and Google to distinguish between flow disturbances and stable periods.
* A similar approach is proselytized by current SPE Distinguished Lecturer Roland Horne in his talk ‘Big Data and Machine Learning in Reservoir Analysis’.
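The stable-period detection step that OMV describes can be sketched generically: flag time windows whose standard deviation stays below a threshold as stable flow, and treat the rest as disturbances. This is a stand-in sketch, not OMV’s actual CDF implementation; window length and threshold below are hypothetical.

```python
from statistics import pstdev

def stable_periods(series, window=5, threshold=0.3):
    """True for each sliding window whose standard deviation is below
    the threshold -- a crude stable-flow vs. disturbance classifier."""
    return [pstdev(series[i:i + window]) < threshold
            for i in range(len(series) - window + 1)]

# synthetic flow signal: stable plateau, a disturbance, then stable again
signal = [10.0] * 7 + [12.5, 8.0, 11.0] + [10.0] * 6
flags = stable_periods(signal, window=5, threshold=0.3)
```

In practice the threshold would be tuned per meter, and the pipeline retention time mentioned above would shift the surface signal relative to downhole events.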
Jonah Poort (TNO), speaking for a consortium of Wintershall, Shell, Total and others also presented on data-driven detection of well events in mature gas fields. The work is part of a larger design and maintenance program for ‘geo-energy assets’. Gas production is plagued by undesirable ‘off-normal’ events - salt, asphalt, gas/water coning. These are usually identified by human inspection. Salt precipitation is observable at the surface as decreasing flow and wellhead pressure. Historical data was scanned to locate target patterns. The algorithm picks likely matches and an operator gives a yea/nay for the match. A sliding window compares snapshots of the data with the target pattern, leveraging ‘dynamic time warping’. After tuning the system found all known salt events in the data, even with 10% added noise. The test also identified 10 hitherto undetected events, 8 of which were confirmed by the operator.
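The sliding-window matching that TNO describes can be sketched as follows: a textbook dynamic time warping (DTW) distance compares each window against the target pattern, and windows under a distance threshold are flagged for the operator’s yea/nay. The pattern, series and threshold below are made up for illustration.

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: insertion, deletion, match
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def find_events(series, pattern, threshold):
    """Slide a window over the series and flag likely pattern matches,
    leaving the final confirmation to a human operator."""
    w = len(pattern)
    return [i for i in range(len(series) - w + 1)
            if dtw_distance(series[i:i + w], pattern) < threshold]

# target: a steady decline in flow, as seen during salt precipitation
pattern = [5, 4, 3, 2, 1]
series = [5, 5, 5, 5, 4, 3, 2, 1, 5, 5, 5]
hits = find_events(series, pattern, threshold=2.0)
print(hits)  # [2, 3]
```

DTW’s appeal here is that it tolerates events that unfold faster or slower than the reference pattern, which a plain Euclidean window comparison would penalize.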
Ralf Schulze-Riegert (Schlumberger) presented on work performed for Norwegian Petoro* on expert-guided machine-learning for well location under uncertainty. The work was performed under the ISAPP** Consortium’s Olympus Challenge. Production and injection scenarios for the North Sea Olympus development were modeled as 50 equiprobable subsurface realizations giving production forecasts from 2016 to 2036. An expert interpreter identified one promising location for a drill site, and ML was used to find lookalikes. The system includes information on economic demand, well costs, costs of shared facilities and more. An interactive workflow was used to answer questions such as ‘find 90% of wells that meet economic criteria at over 80% probability’ in what was described as an ‘expert-guided ML approach’.
* Petoro manages the State’s financial interest in Norwegian North Sea joint ventures.
** Integrated systems approach to petroleum production.
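The ‘expert-guided’ query style over equiprobable realizations can be sketched like this. Site names, NPV figures and the economic hurdle are hypothetical, and only five realizations are used for brevity where the real study ran 50 per scenario.

```python
# Hypothetical per-site NPV forecasts ($MM) across equiprobable realizations.
forecasts = {
    "site_A": [150, 170, 145, 160, 110],
    "site_B": [105, 90, 140, 80, 70],
    "site_C": [60, 55, 120, 40, 45],
}

NPV_HURDLE = 100.0  # hypothetical economic criterion

def success_probability(npvs, hurdle=NPV_HURDLE):
    """Fraction of equiprobable realizations in which the site clears
    the economic hurdle."""
    return sum(npv > hurdle for npv in npvs) / len(npvs)

# the query: candidate sites economic in over 80% of realizations
selected = [site for site, npvs in forecasts.items()
            if success_probability(npvs) > 0.8]
print(selected)  # ['site_A']
```

The expert’s role in the described workflow is upstream of this filter: picking the promising location whose lookalikes populate the candidate list in the first place.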
A panel discussion titled ‘Digital Ecosystems – Quo Vadis?’ homed in on graph technology. Andreas Blumauer from the Semantic Web Company was asked to provide ‘concrete examples of automated metadata extraction from unstructured subsurface data’. Blumauer intimated that this was ‘a little bit under NDA’ but provided a pointer to SWC’s large lithology knowledge graph developed for the Austrian Geological Survey. There was also a lot of interest in OSDU. Schlumberger’s Jamie Cruise opined that these new platforms are very flexible as there are no schemas ‘baked into code’. OSDU started with seismics and wells and is now extending to production and ‘thinking about new energy, methane emissions’. ‘There is some mistrust in OSDU’ with regard to Schlumberger’s dominant position. But ‘there is room for competition’. Target’s Ali Al Mujaini stated that ‘OSDU needs to be much more open and not just for the supermajors. People are still struggling to deploy on the cloud and benefit from AI’.
Finally, James Elgenes (Equinor) gave a showman’s view of digitalization in exploration. Digital technology at the Johan Sverdrup field has ‘boosted earnings’ by over $200 million since startup. Equinor’s formula for such success is quality data, data science and analytics, discipline experts, new ways of working (fail fast) and external collaboration, notably with Microsoft, provider of Equinor’s Omnia cloud database. Equinor’s enthusiasm for AI and digital transformation has led to new investments in companies such as KoBold Metals and Earth Science Analytics. OSDU is a ‘key component’. Equinor believes in data sharing*, viz. Northern Lights, Sleipner CCS and Volve. Reworking old data and reports has led to new discoveries. Data needs to be churned systematically. Agile teams involve 7 personae (computer geoscientists, analytical geoscientists, data scientists and, last but hopefully not least, the subject matter expert). The latter is ‘an engaged specialist open to new ways of working’. Examples of Equinor’s work are ‘WELP**’ (well extraction log processing), i.e. the automatic creation of composite logs, well analytics with (and without) ML for poroperm determination, and seismic data analytics with deep learning on post-stack data. Another theme is the value of information, to ‘spend money wisely’ and to keep tabs on oil product quality in different geographies. ‘You need to get people competencies right such that everyone has a place’. Data literacy is now a key competence.
* For an analysis of Equinor’s data openness read Matt Hall’s blog.
** See also this work by Erlend Viggen.
© Oil IT Journal - all rights reserved.