2020 Energy Conference Networks Machine Learning in Oil and Gas

Quantum Reservoir, ‘oilfield data is convoluted’. Shell Tech Ventures’ cash for innovators. Walmart’s NexTech unit minimizes vendor dependence. Riverford on Bureau of Economic Geology’s TORA, ‘big data for small companies’. Warwick, Neo4J Graph Technology for leasehold analysis. LANL’s ‘fat neurons’, physics-informed neural nets. Texas A&M, drones, AI and oil spills.

David Castiñeira (Quantum Reservoir Impact*) gave the keynote on ‘practical and value-additive ML and AI for oil and gas’. AI/ML terminology is confusing and definitions are elusive. Following a long (1980-2010) ‘winter’, AI has come back to life thanks to more data and better computers, to the extent that there is now talk of an ‘AI bubble’. Castiñeira walked through the different approaches to machine learning and potential applications in oil and gas, suggesting that a virtual assistant for reservoir management might be possible. One issue is the fact that oil patch data is not just ‘big’, it is also ‘convoluted’ i.e. complex. It can be hard to settle on exactly which oilfield KPIs to model and optimize. QRI’s ‘augmented AI’ embeds intelligent workflows, data-driven models and automation. Where components of the workflow are amenable to first-principles analysis, conventional reservoir engineering is recommended. Elsewhere, in more poorly conditioned areas, statistics and full-blown ML can usefully be added. The latter raises the issue of model ‘explainability’, addressed by chaining smaller component models that are easier to comprehend. Data processing involves a lot of moving parts. Extracting data from a wellbore diagram involved PyWin, the Google Cloud Vision API, Poppler for PDF-to-HTML conversion and JSON. Castiñeira presented the results of a well spacing optimization study for a Permian basin operator. Here, a sparse, convoluted data set meant that classical reservoir modeling was impossible. Unsupervised ML was used to perform decline curve analysis on public and operator data. Ensemble model aggregation produced an optimized well spacing. The QRI method is the subject of a US patent application. Other examples of QRI’s work included an OCR/NLP analysis of PDF drilling reports to drive drilling efficiency.
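To give a flavor of the unsupervised decline curve analysis Castiñeira described, here is a minimal Python sketch, assuming the common approach of fitting an Arps hyperbolic decline to each well and clustering the fitted parameters with k-means. This illustrates the generic technique only, not QRI’s patented method; the function names and data layout are our own.

# Minimal sketch, assuming Arps hyperbolic decline and k-means clustering.
# Not QRI's patented method; function names and data layout are invented.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.cluster import KMeans

def arps_hyperbolic(t, qi, di, b):
    # Arps hyperbolic decline: production rate as a function of time
    return qi / np.power(1.0 + b * di * t, 1.0 / b)

def fit_decline(t, q):
    # Fit (qi, di, b) to one well's production history
    p0 = [q[0], 0.1, 0.5]  # rough initial guess
    bounds = ([0.0, 1e-4, 0.01], [np.inf, 2.0, 2.0])
    params, _ = curve_fit(arps_hyperbolic, t, q, p0=p0, bounds=bounds)
    return params

def cluster_wells(wells, n_clusters=4):
    # wells: list of (time, rate) arrays from public and operator data
    features = np.array([fit_decline(t, q) for t, q in wells])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    return features, labels

In a workflow like the one described, cluster-level type curves could then feed the spacing optimization.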

* Last year, QRI teamed with Emerson to provide ‘AI-based analytics and decision-making tools for exploration and production’.

Andrea Course explained why Shell is investing in machine learning and how interested parties can apply for some of Shell’s largesse. Shell sees start-ups as where the action is. They can move fast, take more risks and (maybe) disrupt whole industries. Investment from Shell’s Technology Ventures unit (STV) can provide venture capital for innovative companies across the energy sector. The Shell TechWorks program applies technologies from outside oil and gas to ‘solve today’s energy challenges’. STV currently holds participations in some fifty companies around the world working in oil and gas, new energies and other sectors. In the AI/ML space, Course cited Bluware (geoscience), Akselos (digital twins), Innowatts (electric load forecasting), Cumulus (leak mitigation) and Veros (rotating equipment monitoring).

Anna Jarman (Walmart Global Business Services) provided an outsider’s view of innovations in emerging energy technology. Walmart established its NexTech unit to ‘ensure that Walmart associates were actively engaged at the front end of the technology wave to minimize our dependence on vendors for thought leadership and innovation’. WalmartTechATX is a satellite of NexTech that is responsible for the enterprise technology functions that keep Walmart running. The focus is a mix of data science and emerging technology, with solutions that leverage natural language processing, machine learning, cloud computing and AR/VR. Solutions rolled out to date include conversational chatbots for user interaction. These are built with Microsoft LUIS to determine user intent and offer a friendly user experience. XR Tech is to provide ‘immersive augmented analytics’ that will allow users to view huge datasets and ‘explore big data across many dimensions at once’. Anticipated usage includes spotting zero-day cyber threats using graph datasets. Jarman warned that ML ‘is not a magic bullet and cannot solve every business problem’. ML excels in areas where rules are difficult to apply or where data sets are large, mixed (convoluted, as Castiñeira might say) and where outcomes cannot be obtained ‘by applying a set of explicit rules’. SVMs* are the workhorse of Walmart’s AI, used to classify and intelligently explore its vast inventory. SVMs can be trained via linked data relationships, reducing cost and increasing the accuracy of predictions.

A talk from Walmart is always a coup for a conference organizer. When we last reported on Walmart, at the 2006 PNEC, Nancy Stewart described the company’s extensive use of a humongous Teradata warehouse to analyze what was not yet called ‘big data’.

* Support vector machine
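For readers unfamiliar with the technique, here is a minimal SVM sketch in Python using scikit-learn, classifying short product descriptions into categories. The data and categories are invented for illustration and bear no relation to Walmart’s actual systems.

# Minimal SVM sketch: TF-IDF features plus a linear SVM classifier.
# Product data and categories are invented, not Walmart's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

descriptions = ["10w-30 motor oil 5 qt", "led light bulb 60w",
                "synthetic engine oil 1 qt", "halogen headlight bulb"]
categories = ["automotive", "hardware", "automotive", "hardware"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(descriptions, categories)
print(clf.predict(["5w-20 motor oil 4 qt"]))  # expected: ['automotive']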

Bill Fairhurst (Riverford Exploration) observed that some big data approaches may be beyond the capabilities of smaller oil and gas companies. There are however some that are well suited to their needs and that can be applied to enhance technical interpretations and economic outcomes. Most domain experts are familiar with statistics and other analytical tools, and have been using them for decades. Today, ‘analytics’ is heralded as a needed ‘transformational event’, even though early adopters have seen a 70-90% failure rate! Fairhurst presented the Texas Bureau of Economic Geology’s TORA (Tight oil resource assessment) consortium, a major industry-sponsored initiative to investigate past, present and future recovery from unconventionals. This extensive study combines the previous year’s drilling outcomes with forecasts of prices and costs to derive a resource portfolio. A profitability map suggests optimum drilling locations. Unsurprisingly, high probability locations are found in ‘proximity to recently drilled areas with past experience [ and at ] locations most attractive from economies of scale point of view’. A similar approach is applicable to small company portfolios using relatively straightforward multilinear regression models. A study of the Rodessa sandstone investigated poroperm variability as a function of the date of first production, depth and other variables, and found geologic and reservoir engineering relationships that were not expected from the geological interpretation, and that explained the differences in production. Fairhurst concluded that while domain expertise is key, statistics, machine learning and analytical models can assist in understanding and communicating variable relationships. ‘Smaller, independent oil and gas firms can perform similar analyses [ as the majors ] for successful long-term outcomes’.
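As an illustration of the kind of relatively straightforward multilinear regression Fairhurst advocates, here is a hedged Python sketch regressing (log) permeability against first-production date, depth and porosity. The data is toy data and the variable names are ours, not those of the Rodessa study.

# Hedged sketch of a multilinear regression on (toy) well data; column
# names and values are ours, not those of the Rodessa study.
import numpy as np
import pandas as pd
import statsmodels.api as sm

wells = pd.DataFrame({
    "log_perm":   np.log([12.0, 3.5, 8.1, 1.2, 20.4, 5.6, 9.8, 2.3]),  # md
    "first_prod": [1978, 1985, 1981, 1992, 1975, 1988, 1979, 1990],    # year
    "depth":      [6200, 7400, 6800, 8100, 5900, 7700, 6400, 7900],    # ft
    "porosity":   [0.18, 0.11, 0.15, 0.08, 0.21, 0.12, 0.17, 0.09],
})

X = sm.add_constant(wells[["first_prod", "depth", "porosity"]])
model = sm.OLS(wells["log_perm"], X).fit()
print(model.summary())  # coefficients, p-values, R-squared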

Conrad Hess teamed with Chris Buie (both from the private equity firm Warwick Group) to present an analysis of leasing behavior using graph theory and network analysis. Warwick’s activity involves the consolidation of operated and non-operated working interests in premier low-cost oil basins in the continental US. Data science is central to Warwick’s ‘micro-aggregation machine’. Key to the analysis is an understanding of who the competition is and where it is leasing. This involves large (convoluted?) datasets that are not easily processed by conventional landman software. Enter the graph database, in particular Neo4J* graph analysis. Neo4J’s networked nodes and relationships are amenable to modeling lease holdings and corporate connections. The techniques applicable to data in a graph structure ‘can reveal insights not visible to tabular data’. Jaccard similarity is used to search and disambiguate thousands of entities and return similar names. In practical terms, when Warwick is offered a deal it can quickly find out about other potential offers and similar available leases. Natural language processing and Spotfire also ran.

* For a backgrounder on Neo4J, read our report from the 2016 GraphConnect conference. Also of interest in the context of oil and gas lease management is software developer Michael Porter’s blog on Grandstack.
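Jaccard similarity itself is simple enough to sketch in a few lines of Python. The company names below are invented; in a Neo4J deployment the computation would more likely run inside the database via its graph algorithms library.

# Token-set Jaccard similarity for entity disambiguation. Names invented.
import re

def tokens(name):
    # lower-case, strip punctuation, split into a set of tokens
    return set(re.sub(r"[^a-z0-9 ]", "", name.lower()).split())

def jaccard(a, b):
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

entities = ["Acme Energy LLC", "Acme Energy, L.L.C.", "Apex Operating Co"]
query = "ACME ENERGY LLC"
for e in entities:
    print(f"{jaccard(query, e):.2f}  {e}")
# 1.00 for both Acme variants, 0.00 for Apex: the two spellings collapse
# to the same entity once names are normalized.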

Monty Vesselinov presented the results of Los Alamos National Laboratory’s work comparing unsupervised and physics-informed machine learning analysis of unconventional oil and gas production. LANL has developed ‘patented, open-source’ unsupervised ML methods for tensor factorization, coupled with custom k-means clustering. The approach is said to be computationally efficient and adapted to terabyte datasets running on GPUs, TPUs and FPGAs. Vesselinov’s preference is for ‘physics-informed’ neural nets that include prior knowledge of a problem. Physics-informed layers (aka ‘fat neurons’) capture important processes such as flow, stress and displacement. The technique mandates ‘differentiable programming’ in Julia. Since unconventional production forecasting is challenging, and physical processes such as fracking are poorly understood, the approach uses large public datasets and data science to predict system behavior from observed oil and gas production. More from the project’s Git repository.
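LANL’s Julia implementations are not reproduced here, but the general idea of factorizing a wells-by-time production matrix and clustering the resulting factor weights can be sketched in Python. This stand-in uses scikit-learn’s non-negative matrix factorization on toy data; it is not LANL’s patented method.

# Stand-in sketch: non-negative matrix factorization of a wells-by-time
# production matrix, with k-means grouping wells by factor weights.
# Toy data; not LANL's patented Julia implementation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
production = rng.random((200, 48))  # 200 wells x 48 months, toy data

nmf = NMF(n_components=3, init="nndsvda", max_iter=500)
W = nmf.fit_transform(production)  # per-well weights on latent signals
H = nmf.components_                # latent production signatures
labels = KMeans(n_clusters=3, n_init=10).fit_predict(W)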

Zahra Ghorbani and Amir Behzadan (Texas A&M University) showed how AI can be applied to drone-collected RGB imagery to automate oil spill detection. A VGG16 convolutional neural network was trained on a dataset of some 1,300 images. The spill classification model achieved an accuracy of 92%. The approach was presented at the 5th World Congress on Civil, Structural, and Environmental Engineering (CSEE'20).
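A hedged sketch of what such a classifier might look like follows, using VGG16 transfer learning in Keras. The input size, classification head and training details are our assumptions, not the authors’ published configuration.

# Hedged sketch of VGG16 transfer learning for spill/no-spill
# classification; input size and head layers are assumptions.
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained convolutional features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # spill probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10, validation_split=0.2)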

More from Energy Conference Networks.


© Oil IT Journal - all rights reserved.