HPC in O&G at Rice

Imperial College on Devito Python framework for HPC. Chevron trials seismic imaging in the cloud. Halliburton’s ‘generic and holistic’ distributed HPC. Nvidia on Rapids.ai, ‘open’ data science for GPUs. ExxonMobil moves multi-petabyte dataset across Houston. Shell trials latest AMD chips.

Gerard Gorman (Imperial College London) presented ‘Devito’, a ‘high-productivity, high-performance’ Python framework for finite differences that is used by DownUnder Geosolutions and Shell. Devito is an abstraction layer for processing kernels that avoids ‘impenetrable code with crazy performance optimizations’. Devito generates optimized parallel C code and supports multiple architectures including Xeon and Xeon Phi, ARM64 and GPUs (real soon now). Devito was used by Georgia Tech’s Seismic Laboratory for Imaging and Modeling (SLIM) to develop its JUDI seismic migration code base. Shell, DUG and Nvidia are to kick off a consortium to support open source development. More from Devito.
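To give a flavor of what this abstraction looks like, here is a minimal sketch of a Devito specification, based on the framework’s public Python API. The 2D grid, constant velocity and acoustic wave equation are illustrative choices, not taken from the talk; Devito turns the symbolic update into optimized, parallel C at runtime.

```python
# Minimal Devito sketch: a 2D acoustic wave stencil expressed symbolically.
# Devito compiles the symbolic update into optimized, parallel C code at runtime.
from devito import Grid, TimeFunction, Eq, Operator, solve

grid = Grid(shape=(201, 201), extent=(2000., 2000.))   # 2 km x 2 km toy model
u = TimeFunction(name='u', grid=grid, time_order=2, space_order=8)

c = 1500.0                                             # constant velocity, m/s
pde = Eq(u.dt2, c**2 * u.laplace)                      # acoustic wave equation
stencil = Eq(u.forward, solve(pde, u.forward))         # explicit update for u(t+dt)

op = Operator([stencil])                               # C code generation happens here
op.apply(time=500, dt=0.001)                           # run 500 time steps
```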

Ling Zhuo and Tom McDonald presented Chevron’s trials of running seismic imaging applications in the cloud. The application currently runs on Chevron’s on-premises HPC clusters in a master/worker mode, with the master initiating multiple seismic migrations during a workflow, an ‘ideal’, embarrassingly parallel candidate for the cloud. The test compared a 900-node public cloud deployment with a 128-node/24-core on-premises machine. Performance speedup was an underwhelming 2x, due to the bottlenecks of sending large data sets to multiple nodes simultaneously and of a single-threaded task scheduler. However, ‘HPC in the cloud is feasible if you know your application well’ and can manage communication patterns, I/O and computing requirements. Speedup is limited without changes in application design. Chevron is now revisiting its application architecture and design assumptions and performing a cost analysis to find the most effective solution.
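Chevron’s code was not shown, but the master/worker pattern it describes is easy to sketch. The snippet below uses Python’s standard multiprocessing pool to fan out independent migration tasks; `migrate_shot` and the shot list are placeholders, not Chevron’s software.

```python
# Illustrative master/worker fan-out of independent ('embarrassingly parallel') tasks.
# migrate_shot() stands in for one seismic migration; in practice each task reads its
# own shot gather and writes a partial image -- the data movement is the real bottleneck.
from multiprocessing import Pool

def migrate_shot(shot_id):
    # Placeholder for a single-shot migration kernel.
    return f"partial image for shot {shot_id}"

if __name__ == "__main__":
    shots = range(900)                      # one task per shot
    with Pool(processes=8) as pool:         # the 'master' hands tasks to workers
        partial_images = pool.map(migrate_shot, shots)
    # A final reduction would stack the partial images into the output volume.
    print(len(partial_images), "partial images to stack")
```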

Halliburton’s Lu Wang and Kristie Chang presented a ‘generic and holistic’ high performance distributed computing and storage system for large oil and gas datasets. The cloud provider-agnostic solution leverages a generic processing microservice running on a stable distributed cluster computing framework. The stack includes Apache Spark, Hadoop, Cassandra, MongoDB and Kafka to name but a few components. Users interact via TensorFlow, scikit-learn and Keras. Seismic processing runs as RabbitMQ-based microservices. All of the above can be deployed either on premises or in the cloud.
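The talk did not detail message formats or queue topology, but a seismic-processing microservice of this kind typically looks like the generic RabbitMQ consumer sketched below (using the pika client); the queue name and `process()` function are illustrative stand-ins.

```python
# Generic RabbitMQ worker sketch using the pika client; the queue name and process()
# are illustrative stand-ins for a seismic-processing microservice.
import pika

def process(body: bytes) -> None:
    # Placeholder for the actual processing step (filter, migrate, QC, ...).
    print("processing task:", body[:40])

def on_message(channel, method, properties, body):
    process(body)
    channel.basic_ack(delivery_tag=method.delivery_tag)   # acknowledge only after success

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="seismic-tasks", durable=True)
channel.basic_qos(prefetch_count=1)                       # one task at a time per worker
channel.basic_consume(queue="seismic-tasks", on_message_callback=on_message)
channel.start_consuming()
```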

Ty McKercher presented Nvidia’s ‘open’ data science offering, a.k.a. Rapids, which exposes CUDA-based tools for analytics (cuDF), machine learning (cuML) and graphs (cuGraph). Alongside these sit PyTorch and Chainer for deep learning and Kepler.gl for visualization. Apache Arrow in GPU memory and the Dask scheduler also ran.
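Rapids mirrors the familiar pandas and scikit-learn APIs on the GPU. A minimal sketch, with made-up well-log style data and column names, might look like this:

```python
# Minimal Rapids sketch: a pandas-like DataFrame (cuDF) feeding a scikit-learn-like
# estimator (cuML), with the data resident in GPU memory.
import cudf
from cuml.cluster import KMeans

df = cudf.DataFrame({
    "porosity":     [0.12, 0.18, 0.22, 0.09, 0.25, 0.15],
    "permeability": [1.5, 30.0, 120.0, 0.8, 300.0, 10.0],
})
df["perm_scaled"] = df["permeability"] / df["permeability"].max()   # GPU column op

model = KMeans(n_clusters=2)
labels = model.fit_predict(df[["porosity", "perm_scaled"]])
print(labels)
```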

Ken Sheldon (Schlumberger) enumerated some of the challenges facing reservoir modeling for unconventionals. User workstations have limited capacity and simulation times are ‘hours to days’. Also, engineers are not software developers or system administrators. Fracture simulations are best performed in a quasi-batch mode, a.k.a. ‘simulation as a service’, offering parametric sweeps for sensitivity analysis. Schlumberger provides access to remote HPC services, tightly integrated with the fracture design workflow in a hybrid cloud. Intriguingly, Sheldon reported that ‘different hardware can yield different results’ (and this is not unique to cloud solutions) and that QA/QC can be challenging. And ‘Peter Deutsch’s Fallacies of Distributed Computing still apply’.
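Schlumberger’s service itself is proprietary, but the ‘simulation as a service’ parametric sweep pattern is easy to illustrate: each parameter combination becomes one batch job submitted to a remote endpoint. The `submit_job` helper and the parameter names below are hypothetical, not Schlumberger’s API.

```python
# Illustrative parametric sweep for 'simulation as a service': every parameter
# combination becomes one remote batch job. submit_job() is a hypothetical placeholder.
import itertools

def submit_job(params: dict) -> str:
    # Placeholder: would POST the case to the remote simulation service and return a job id.
    return f"job-{hash(frozenset(params.items())) & 0xffff:04x}"

proppant_mass   = [100e3, 200e3, 300e3]     # kg
injection_rate  = [0.08, 0.12, 0.16]        # m3/s
fluid_viscosity = [1e-3, 5e-3]              # Pa.s

sweep = [dict(zip(("proppant_mass", "injection_rate", "fluid_viscosity"), combo))
         for combo in itertools.product(proppant_mass, injection_rate, fluid_viscosity)]

job_ids = [submit_job(case) for case in sweep]   # 18 sensitivity cases fan out to the service
print(f"submitted {len(job_ids)} cases")
```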

Mike Townsley explained how ExxonMobil moved its multi-petabyte dataset across Houston and into its new Spring Campus data center. XOM’s total HPC capacity is currently in the 50 petaflop range, which would put it around system #10 in the TOP500. The new machine is a Cray Discovery 3. The move involved the creation of an 800Gb/s network between data centers (24 LNET routers) and an in-house-developed high-performance Lustre-aware copy tool. The transfer ran at 1-2 PB/day, slowed by metadata operations and small files.
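ExxonMobil’s Lustre-aware tool is not public, but the basic idea of fanning file copies out over a pool of workers, and why small files hurt, can be sketched in a few lines. Paths and worker counts below are illustrative; a real tool would also preserve Lustre striping and parallelize the metadata walk.

```python
# Generic parallel copy sketch (not ExxonMobil's tool): fan the file list out over workers.
# Each file pays a fixed metadata cost (stat/create/close), which is why millions of small
# files can throttle an otherwise very fast network.
import shutil
from pathlib import Path
from multiprocessing import Pool

SRC = Path("/lustre/old_site/project")     # illustrative paths
DST = Path("/lustre/new_site/project")

def copy_one(src: Path) -> int:
    dst = DST / src.relative_to(SRC)
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)                 # copies data and timestamps; striping is not preserved
    return src.stat().st_size

if __name__ == "__main__":
    files = [p for p in SRC.rglob("*") if p.is_file()]
    with Pool(processes=32) as pool:
        copied = pool.map(copy_one, files)
    print(f"copied {sum(copied) / 1e12:.2f} TB in {len(copied)} files")
```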

Ron Cogswell presented early results from Shell’s trials of the latest AMD HPC chips, warning that ‘all projects have minor hiccups at the start and this one was not different in that regard’. The background is the observation that the addition of cores to the Intel platform over the years has moved many algorithms from being compute-bound to being memory-bound. The AMD platform promises greater memory bandwidth. Tests performed on the first-generation ‘Naples’ architecture showed a reduction in flop performance for seismic imaging, mitigated by the higher core count and more memory. But ‘current pricing allows us to get more cores on an AMD node to make up for it’. ‘For small jobs that could live in cache, Intel is the way to go, but for our seismic code we need the higher memory bandwidth’.
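The compute-bound versus memory-bound distinction comes down to a roofline-style comparison of a kernel’s arithmetic intensity with the machine’s flops-to-bandwidth ratio. The back-of-envelope check below uses illustrative stencil and hardware numbers, not Shell’s benchmark figures.

```python
# Back-of-envelope roofline check: is a high-order 3D stencil compute- or memory-bound?
# All counts and machine numbers are illustrative, not Shell's measurements.
flops_per_point = 3 * (2 * 8) + 5    # ~16 FLOPs per axis for an 8th-order stencil, plus update terms
bytes_per_point = 4 * 8              # a handful of 4-byte loads/stores, assuming little cache reuse
intensity = flops_per_point / bytes_per_point          # FLOP/byte delivered by the kernel

peak_flops = 3.0e12                  # assumed ~3 TFLOP/s per socket
mem_bw     = 200e9                   # assumed ~200 GB/s memory bandwidth per socket
machine_balance = peak_flops / mem_bw                  # FLOP/byte the machine needs to stay busy

print(f"kernel intensity ~{intensity:.1f} FLOP/byte")
print(f"machine balance  ~{machine_balance:.1f} FLOP/byte")
print("memory-bound" if intensity < machine_balance else "compute-bound")
```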

More from the Rice Oil & Gas HPC home page.

© Oil IT Journal - all rights reserved.