More from the 2018 IFPen DataSciEnergy event

Paraview - Total’s big viewer for big data. DonnéesBrutes applies game theory and data science to the energy transition. Total’s random forest classifier bests the industry-standard approach to distillation column flood mitigation. IFPen uses a simple response surface for rapid evaluation of proposed well locations.

Mélanie Plainchault and Bruno Conche (Total) observed that as data sets get ‘bigger,’ users may overlook key information. This has made visualization and rendering a big research topic across seismics, core (digital rock), reservoir grids and LIDAR, all of which can involve multi-gigabyte to terabyte data volumes. Total’s visualization effort lies at the intersection of its computing and data science activities. The tool of choice is Paraview, a parallel visualization application developed by Sandia National Labs. Paraview has been used to interact with a 500GB post-stack seismic dataset and with NTNU’s 90 million cell Johansen grid, an open data set from a North Sea CCS project. Paraview uses the HDF5 grid standard. Eclipse data files are converted using ResInsight’s GRDECL utility. The INT Viewer also ran. Computed tomography core scans are analyzed with Paraview’s scripting capability and TTK, the open source Topology Toolkit.
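For readers unfamiliar with Paraview scripting, batch interaction with large datasets of this kind typically goes through the paraview.simple Python API (run with pvpython). The following is a minimal sketch only, not Total’s pipeline; the file name and the property name are hypothetical and the property is assumed to be point data.

```python
# Minimal sketch of scripted visualization with ParaView's Python API (pvpython).
# File name and property name are hypothetical; Total's actual pipelines
# (seismic, GRDECL-converted grids, TTK filters) are not reproduced here.
from paraview.simple import OpenDataFile, Contour, Show, Render, SaveScreenshot

# Load a (converted) dataset, e.g. a reservoir grid exported via ResInsight
grid = OpenDataFile('johansen_grid.vtu')   # hypothetical file name

# Extract an iso-surface of an assumed point-data porosity array at 0.2
iso = Contour(Input=grid)
iso.ContourBy = ['POINTS', 'PORO']         # assumed property name
iso.Isosurfaces = [0.2]

Show(iso)
Render()
SaveScreenshot('porosity_isosurface.png')
```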

Mathieu Anderhalt’s ‘DonnéesBrutes’ (raw data) startup is applying game theory to energy (renewables) asset management. Game theory applies to situations with imperfect information and many actors, such as utilities arbitrating between different electricity sources, buyers and storage possibilities. The technique studies a large, repetitive ‘game,’ with learning updates at each step until a ‘Cournot-Nash’ equilibrium is reached. Anderhalt is also working on a ‘counterfactual regret minimization’ algorithm to track regrets (buyer’s remorse) from past plays and nudge future play away from regret-generating plays. Tools of the trade include Kafka, Apache Storm for computation and Hive/Hadoop/HDFS for storage. All is rolled up in a new ‘Green Like You’ solution that promises ‘data science for the energy transition.’
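To make the equilibrium idea concrete, the toy sketch below iterates a two-producer Cournot game with a damped (‘learning rate’) update until play settles at the Cournot-Nash quantities. The linear demand model and all numbers are invented for illustration; this is not DonnéesBrutes’ algorithm.

```python
# Illustrative sketch: repeated play with learning updates converging to a
# Cournot-Nash equilibrium. Demand/cost numbers are invented for illustration.
a, b = 100.0, 1.0          # inverse demand: price = a - b * total quantity
c1, c2 = 10.0, 10.0        # marginal costs of the two producers
q1, q2 = 1.0, 1.0          # initial quantities offered
lr = 0.5                   # learning rate: partial move toward best response

for step in range(200):
    # Each player's best response to the other's last play
    br1 = max(0.0, (a - c1 - b * q2) / (2 * b))
    br2 = max(0.0, (a - c2 - b * q1) / (2 * b))
    # Learning update: nudge current play toward the best response
    q1 += lr * (br1 - q1)
    q2 += lr * (br2 - q2)

print(q1, q2)  # both converge to (a - c) / (3b) = 30 for these numbers
```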

Nathalie Behara (Total) has used a Random Forest classification algorithm to successfully predict flooding in a refinery distillation column from small changes in flow regime. Flooding is a serious problem; once it reaches a runaway state, it can take several hours to return to normal operations. Some 77 flood events were studied over a seven-month period. Previous empirical/first-principle methods predicted floods but generated an unacceptably high rate of false alerts. Behara developed a data-driven model using SciKitLearn’s ensemble Python module that outperforms the ‘FCAP’ empirical model. The system is now being fine-tuned for on-site deployment.
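As an indication of the shape of such a workflow (not Behara’s model, whose process variables and labelled flood events are not public), a random forest flood classifier built with scikit-learn’s ensemble module looks roughly like the sketch below, here trained on synthetic stand-in data with invented feature names.

```python
# Sketch of a flood classifier using scikit-learn's ensemble module.
# Feature names and the synthetic data are assumptions for illustration only.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical feature table: one row per time window of column operating data
rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({
    'feed_rate': rng.normal(100, 10, n),
    'reflux_ratio': rng.normal(3.0, 0.3, n),
    'delta_pressure': rng.normal(0.4, 0.05, n),
    'reboiler_duty': rng.normal(5.0, 0.5, n),
})
# Synthetic label: flooding flagged when differential pressure runs high
y = (X['delta_pressure'] + 0.1 * rng.normal(size=n) > 0.48).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, class_weight='balanced',
                             random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```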

Delphine Sinoquet (IFPen) presented the optimization toolset deployed in the IFP’s Cougar JIP. The tools can replace an expensive fluid flow simulator with a simple response surface model, allowing what would otherwise be too compute-intensive approaches to sensitivity analysis. The approach is used in well location selection: a candidate well is moved around the response surface and the change in production forecast observed. An IFPen tool, ‘HubOpt,’ uses SQA (sequential quadratic approximation), a technique that Sinoquet has previously applied in a reservoir characterization context.
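As a generic illustration of the response-surface idea (not HubOpt or SQA), the sketch below fits a Gaussian-process surrogate to a handful of ‘simulator’ runs at sampled well locations and then scans candidate locations on the cheap surrogate. The stand-in simulator function and all numbers are invented.

```python
# Generic response-surface sketch: replace an expensive simulator with a
# surrogate fitted to a few sample runs, then evaluate candidate well
# locations cheaply. The 'simulator' is a stand-in function, not a flow model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_simulator(xy):
    """Stand-in for a fluid-flow run: forecast production for a well at (x, y)."""
    x, y = xy
    return np.exp(-((x - 0.3) ** 2 + (y - 0.7) ** 2) / 0.1)

# A handful of 'simulation runs' at sampled well locations
rng = np.random.default_rng(1)
X_train = rng.uniform(0, 1, size=(25, 2))
y_train = np.array([expensive_simulator(p) for p in X_train])

# Fit the response surface (a Gaussian-process, kriging-style surrogate)
surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
surrogate.fit(X_train, y_train)

# Move a candidate well around the surface and read off the forecast cheaply
grid = np.array([[x, y] for x in np.linspace(0, 1, 50)
                 for y in np.linspace(0, 1, 50)])
forecast = surrogate.predict(grid)
best = grid[np.argmax(forecast)]
print('best candidate well location:', best)
```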

© Oil IT Journal - all rights reserved.