Jeremy Graybill of Anadarko’s advanced analytics and emerging technologies unit reported data science successes in US onshore basin screening, in drilling cycle time reduction and in developing surveillance logic to reduce offshore production losses. Anadarko deploys workstations with Nvidia P6000 cards along with high-end DGX computers, each with 8 Volta GPUs. Google’s dedicated 180 teraflop TensorFlow processing units (TPUs) in the cloud have also been used. Graybill has used deep learning networks to propagate formation tops across a basin and to QC large volumes of well logs.
Mauricio Araya Polo presented Shell’s work on ‘deep learning-driven’ geophysics, performing feature detection directly from the data to ‘avoid the laborious processing/interpretation/modeling loop.’ Shell uses neural nets to reveal features such as faults and stratigraphy in raw, unprocessed seismic traces. Here, Araya Polo invoked ‘Hornik’s universal approximation theorem,’ which holds that neural nets ‘can compute any function.’ The technology is embedded in Shell’s GeoDNN in-house interpretation workflow. The technique was co-developed with MIT’s Chiyuan Zhang using synthetic data, since labeled examples were not available. Deep learning can also be used as a pre-conditioner for full waveform migration. Araya Polo views the techniques as ‘disruptive,’ heralding a ‘major labor force change.’ The first GeoDNN results were presented back in 2014. Real data testing and extension to 3D are still underway.
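Hornik’s result can be demonstrated in a few lines. The sketch below is illustrative only and is unrelated to Shell’s GeoDNN code: it trains a single-hidden-layer tanh network, by plain gradient descent, to approximate sin(x) on [-π, π] — the smallest architecture covered by the universal approximation theorem.

```python
# Minimal sketch of Hornik's universal approximation theorem: a one-hidden-
# layer tanh network fitted to sin(x) by full-batch gradient descent.
# Illustrative only -- not Shell's GeoDNN.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

H = 16                               # hidden units; more units -> closer fit
W1 = rng.normal(0.0, 1.0, (1, H))
b1 = np.zeros(H)
W2 = rng.normal(0.0, 1.0, (H, 1))
b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)         # hidden layer
    return h, h @ W2 + b2            # linear output layer

_, pred0 = forward(x)
mse0 = float(np.mean((pred0 - y) ** 2))   # error before training

lr = 0.05
for step in range(5000):
    h, pred = forward(x)
    err = pred - y                   # dLoss/dpred (MSE, up to a factor of 2)
    grad_W2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)   # backprop through tanh
    grad_W1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0)
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

_, pred = forward(x)
mse = float(np.mean((pred - y) ** 2))
print(f"MSE before training: {mse0:.3f}, after: {mse:.4f}")
```

The fit improves by orders of magnitude over training, which is the theorem’s practical content: given enough hidden units, the approximation error can be driven arbitrarily low.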
Curt Smith presented Microsoft’s apparently plethoric hardware and software offerings that make up the Azure cloud. These span from entry-level virtual machines for everyday workloads, through GPU- or FPGA-enabled boxes providing microservices for ‘AI/Edge interfacing,’ right up to ‘a real Cray computer configured to your own spec.’ Deployment can be pure cloud, on-premises, or one of a variety of hybrid offerings, including a time-variant ‘hybrid burst’ solution. Deployment is managed with Microsoft’s CycleCloud templates for Hadoop, TensorFlow and Spark - almost anything except Windows!
Jonathan Mitchell described Park Energy’s use of neural nets and a ‘long short-term memory network’ (LSTM) to analyze pumping data. The LSTM was trained on plunger lift data from 4,000 wells to ride roughshod over the noise and predict production 90 days out. The LSTM’s forecasts compared well with conventional decline curve analysis, especially when averaged over the whole field.
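For reference, the conventional baseline mentioned above is typically an Arps decline curve. The sketch below shows a hyperbolic Arps forecast over the same 90-day horizon; the parameter values are illustrative placeholders, not Park Energy’s.

```python
# Hedged sketch of conventional decline curve analysis (Arps hyperbolic),
# the baseline the LSTM forecasts were compared against.
# Parameter values are illustrative, not Park Energy's.
import numpy as np

def arps_hyperbolic(qi, Di, b, t):
    """Arps hyperbolic decline rate q(t).
    qi: initial rate, Di: initial decline (1/day), b: hyperbolic exponent,
    t:  time in days (array)."""
    return qi / (1.0 + b * Di * t) ** (1.0 / b)

t = np.arange(0, 91)                         # forecast 90 days out, as in the talk
q = arps_hyperbolic(qi=500.0, Di=0.01, b=0.8, t=t)

print(f"rate on day 0: {q[0]:.1f}, rate on day 90: {q[-1]:.1f}")
```

In practice qi, Di and b are fitted to each well’s production history, and the fitted curve is extrapolated forward; the LSTM instead learns the forward mapping directly from the noisy plunger lift data.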
Valery Polyakov (Schlumberger) observed that, paradoxically, high performance computing is not the primary focus of the massive resources of the cloud. Google, Microsoft and Amazon instead offer high-availability microservices. Leveraging the cloud for oil and gas requires a different approach. This is now facilitated by the Kubernetes framework of clusters and containerized, Docker-based deployment, which provides a higher level of abstraction than individual virtual machines. Key to the adoption of Kubernetes for HPC is a queue manager. Schlumberger has one, although it is not clear that this will be open sourced.
For its part, Chevron uses Altair’s PBS Pro in a similar context, as Philip Crawford revealed in another presentation. More from the event home page.
© Oil IT Journal - all rights reserved.