2020 Rice High Performance Computing in Oil and Gas conference

The 2020 Rice University (Houston) HPC in Oil and Gas conference was held as a real, in-person event just before lockdown. Chevron presented an HPC facet of OSDU, the open subsurface data universe. Petrobras presented seismic trials of the Atrio Composable Cloud. Imperial College (London) gave an update on Devito, the BP/Shell-backed HPC Python framework, stressing the ‘growing importance of open source software in industry’.

Christine Rhodes and James Clark (both with Chevron) presented on the Open Subsurface Data Universe’s venture into high performance computing. The OSDU HPC Project sets out to ensure that future OSDU releases are aligned with the specific requirements of upstream workflows, running either on premise or in the cloud. The OSDU/HPC reference architecture should support future OSDU goals such as on-demand computing, AI and analytics, and HPC ‘edge’ computing.

The idea is to ‘ensure that traditional HPC workflows are not hindered by emerging OSDU technology or data standards’ and also (rather enigmatically) that ‘emerging HPC technologies are not strictly bound by OSDU standards which may hinder innovation.’ During 2020, OSDU R2 will see the light of day with a common code release spanning OpenVDS (Bluware’s SEG-Y substitute format – see this edition’s lead) and Schlumberger’s OpenDES API. Before year-end 2020, R3 will see a multi-cloud ‘deployment-ready’ edition. The intersection between the interpretation world of vanilla OSDU and HPC is work in progress. The current OSDU data store is object-based, accessed through domain-specific APIs and data management services, and may have limited overlap with oil and gas HPC. Storing pre-stack data is considered a ‘stretch goal’ for OSDU, so current HPC workflows might instead be connected as external data sources. Most majors are on board with the OSDU HPC Project, as is Schlumberger (but not Halliburton). Amazon and Azure are on the list (but not Google).

Paulo Souza Filho (Atrio) and Luiz Felipe (Petrobras) presented tests of large-scale seismic processing in the public cloud. The Atrio Composable Cloud (ACC) exposes a unified way to launch and manage computational workloads on multiple target machines – on-premises or in the cloud. Number crunching can be farmed out to AWS, Azure, Google Cloud, OpenStack and other services. A ‘pop-up’, disposable cluster can be launched from the Atrio app store. The system evaluates the time and cost of running, say, a reverse time migration in a selected container prior to run time. When all the parameters look right, the job is run. In the Petrobras trial, a 5 petaflop cluster was assembled in the public cloud, made up of 320 NVIDIA V100 GPUs and a 100TB Lustre file system, achieving 99% of the on-premises performance. But whereas the pop-up cluster was created in one hour, ‘procuring a 5PF system would take months’.

Fabio Luporini and Gerard Gorman (both from Imperial College, London) provided an update on Devito, now in v4.1. Devito is an abstraction layer that hides HPC code complexity by automatically generating GPU code ‘without the excruciating pain’. The open source Devito consortium has financial support from BP, Down Under Geophysics, Microsoft and Shell. The high-performance Python framework is driven by commercial and research seismic imaging demands. The authors observed that ‘open source is still a novel idea in this industry despite clear evidence from the tech industry that it is a critical business strategy, please engage’. More from the Devito Project.

Download these and other presentations from the Rice O&G HPC home page.

© Oil IT Journal - all rights reserved.