Nvidia GPU Technology Conference

Multiple high performance computing presentations from oil and gas majors and researchers in reservoir engineering and seismic imaging show speed-ups of 10x and more over conventional CPUs.

Nvidia’s GPU technology conference, held early this year in San Jose, California, is a showcase for high performance computing (as opposed to graphics) using Nvidia’s parallel processing technology. The subtext of almost all presentations is that the GPU is a practical route to HPC, delivering more flops per dollar, and more outright speed, than the conventional CPU of the Intel variety. While one should not expect balance from this gathering of enthusiasts, the conference hosted an impressive line-up of technologists from a wide range of industries. Here are some highlights from the oil and gas track.

Chris Leader’s (Stanford) PowerPoint spectacular showed how GPUs can ‘greatly accelerate’ seismic imaging, with the potential for an order of magnitude speed-up.
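Seismic imaging codes spend the bulk of their time in wave-equation stencils, a pattern that maps naturally onto the GPU’s thousands of threads. By way of illustration only (this is our generic sketch, not Leader’s code), here is a minimal Cuda kernel for one time step of 2D acoustic wave propagation, fourth-order in space:

    // One leapfrog time step of the 2D acoustic wave equation (sketch only).
    // p0 holds the wavefield at t-dt on entry and t+dt on exit; p1 holds t.
    __global__ void wave_step(float *p0, const float *p1, const float *vel2,
                              int nx, int ny, float dt2_over_h2)
    {
        int ix = blockIdx.x * blockDim.x + threadIdx.x;
        int iy = blockIdx.y * blockDim.y + threadIdx.y;
        if (ix < 2 || iy < 2 || ix >= nx - 2 || iy >= ny - 2) return; // halo

        int i = iy * nx + ix;
        // fourth-order central-difference Laplacian
        float lap = -5.0f * p1[i]
                  + (4.0f/3.0f)  * (p1[i-1] + p1[i+1] + p1[i-nx] + p1[i+nx])
                  - (1.0f/12.0f) * (p1[i-2] + p1[i+2] + p1[i-2*nx] + p1[i+2*nx]);

        // p(t+dt) = 2 p(t) - p(t-dt) + v^2 dt^2/h^2 * Laplacian
        p0[i] = 2.0f * p1[i] - p0[i] + vel2[i] * dt2_over_h2 * lap;
    }

One such launch per time step, with the p0/p1 pointers swapped between steps, is the inner loop of most GPU propagators.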

Thor Johnsen and Alex Loddoch (Chevron) showed that running high resolution (120Hz) computations on very large synthetic seismic models such as the SEG SEAM II implies an eye-watering amount of RAM (768GB!). Smart data management across host and disk memory makes this feasible. Sixteen Kepler GPUs achieved 20-30x better throughput than highly optimized CPU code running on a dual-socket Sandy Bridge machine. The approach is amenable to cloud-deployed GPU nodes.
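The standard remedy when a model outgrows GPU memory is to stream it through the card in slabs, overlapping PCIe transfers with compute. The double-buffering pattern below (our generic sketch with hypothetical names, not Chevron’s data-management layer) is the usual starting point:

    // Stream a too-big model through the GPU in slabs, double-buffered so
    // slab k+1 uploads while slab k computes. h_model should be allocated
    // with cudaMallocHost (pinned) for the async copies to overlap.
    __global__ void process_slab(float *slab, size_t n) {
        size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
        if (i < n) slab[i] *= 2.0f;              // stand-in for real physics
    }

    void stream_slabs(const float *h_model, size_t slab_elems, int n_slabs) {
        cudaStream_t stream[2];
        float *d_slab[2];
        for (int b = 0; b < 2; ++b) {
            cudaStreamCreate(&stream[b]);
            cudaMalloc(&d_slab[b], slab_elems * sizeof(float));
        }
        for (int k = 0; k < n_slabs; ++k) {
            int b = k & 1;                       // ping-pong buffer/stream
            cudaMemcpyAsync(d_slab[b], h_model + (size_t)k * slab_elems,
                            slab_elems * sizeof(float),
                            cudaMemcpyHostToDevice, stream[b]);
            process_slab<<<(int)((slab_elems + 255) / 256), 256, 0, stream[b]>>>(
                d_slab[b], slab_elems);
        }
        cudaDeviceSynchronize();
        for (int b = 0; b < 2; ++b) {
            cudaFree(d_slab[b]);
            cudaStreamDestroy(stream[b]);
        }
    }

Spilling the next tier out to disk follows the same pattern one level up.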

FEI’s Nicolas Combaret described a Stokes equations solver for absolute permeability (see also our article on BP’s digital rocks on page 12) as deployed in Avizo Fire from FEI’s Visualization Sciences Group. The solver, coded in Cuda (Nvidia’s GPU programming language) and running on a Quadro K6000, showed a 10x speed-up over the same calculation on a dual four-core CPU machine.
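FEI has not published the solver internals, but the heart of any voxel-based flow solve on scanned rock is an iterative stencil sweep over the segmented pore space, exactly the memory-bound workload GPUs excel at. A sketch of one Jacobi-style relaxation sweep (our illustration, with invented names):

    // One Jacobi relaxation sweep on a 3D field over a voxelized pore space.
    // mask[i] = 1 in pore voxels, 0 in solid grain (illustrative only).
    __global__ void jacobi_sweep(float *p_new, const float *p_old,
                                 const unsigned char *mask,
                                 int nx, int ny, int nz)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        int z = blockIdx.z * blockDim.z + threadIdx.z;
        if (x < 1 || y < 1 || z < 1 || x >= nx-1 || y >= ny-1 || z >= nz-1)
            return;

        size_t i = ((size_t)z * ny + y) * nx + x;
        if (!mask[i]) return;                    // solid voxel: no flow

        size_t sy = nx, sz = (size_t)nx * ny;    // neighbor strides
        p_new[i] = (p_old[i-1]  + p_old[i+1] +
                    p_old[i-sy] + p_old[i+sy] +
                    p_old[i-sz] + p_old[i+sz]) / 6.0f;
    }

Thousands of such sweeps over a large sample are where the K6000’s memory bandwidth pays off.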

Massimo Bernaschi of the National Research Council of Italy has used Cuda Fortran90 and C kernels to model hydrocarbon generation and expulsion. A novel data structure holds all 200 variables used in the calculations and is accessible from both C and Fortran. The authors conclude that using a cluster of GPUs as a farm of serial processors offers both high performance and strict compatibility with legacy codes.
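The talk did not detail the data structure, but a common way to share a large variable set between C and Fortran is a flat structure-of-arrays in one contiguous device allocation, with variable v of cell i at an offset both languages compute identically. A sketch (our own layout, not necessarily Bernaschi’s):

    #define NVARS 200    // one slot per model variable, per the talk

    // Flat structure-of-arrays: variable v of cell i sits at vars[v*ncells+i].
    // Fortran sees the same block 1-based as vars((v-1)*ncells + i) via
    // iso_c_binding, so C and Cuda Fortran kernels share one allocation.
    __host__ __device__ inline size_t idx(size_t v, size_t i, size_t ncells) {
        return v * ncells + i;
    }

    __global__ void generation_step(float *vars, size_t ncells) {
        size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
        if (i >= ncells) return;
        // placeholder update coupling two of the NVARS variables
        vars[idx(1, i, ncells)] += 0.1f * vars[idx(0, i, ncells)];
    }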

David Wade presented Statoil’s ‘end-to-end’ implementation of reverse time migration running on the latest generation of Kepler GPUs.
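Reverse time migration forward-propagates the source wavefield, back-propagates the recorded data, and cross-correlates the two at each time step to form the image. Statoil’s implementation was not shown in code, but the zero-lag imaging condition at the heart of any RTM is a one-liner on the GPU:

    // Zero-lag cross-correlation imaging condition (generic RTM core):
    // accumulate the source-by-receiver wavefield product each time step.
    __global__ void imaging_condition(float *image, const float *src_wf,
                                      const float *rcv_wf, size_t n)
    {
        size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
        if (i < n) image[i] += src_wf[i] * rcv_wf[i];
    }

The expensive part is the two wave propagations either side of it, which is where the Kepler hardware earns its keep.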

Phillip Jong revealed that Shell has been working with Nvidia on Shell’s in-house interpretation system, GeoSigns. Nvidia IndeX, a parallel rendering and computation framework, is a key component of the toolset.

Hicham Lahlou (Xcelerit) is modeling reverse time migration applications as ‘dataflow graphs’ to expose parallelism, memory locality and optimization opportunities. The code generated is portable across different hardware platforms.
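Xcelerit’s framework is proprietary, but the dataflow idea itself can be sketched with plain Cuda streams and events: each node is a kernel, each edge a dependency, and independent nodes run concurrently. Our minimal illustration (kernel and buffer names are placeholders, not Xcelerit’s API):

    // Dataflow graph with three nodes: A and B are independent, C needs both.
    //     A --+
    //         +--> C
    //     B --+
    // grid, block and the bufA/bufB/bufC device buffers are assumed set up.
    cudaStream_t sA, sB;
    cudaEvent_t doneA;
    cudaStreamCreate(&sA);
    cudaStreamCreate(&sB);
    cudaEventCreate(&doneA);

    kernelA<<<grid, block, 0, sA>>>(bufA);             // node A
    cudaEventRecord(doneA, sA);

    kernelB<<<grid, block, 0, sB>>>(bufB);             // node B, alongside A
    cudaStreamWaitEvent(sB, doneA, 0);                 // edge A -> C
    kernelC<<<grid, block, 0, sB>>>(bufA, bufB, bufC); // node C after A and B

A code generator working from such a graph can retarget the same description to GPUs, multi-core CPUs or a cluster.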

Garfield Bowen (Ridgeway Kite Software) showed how GPU memory limitations in large scale reservoir simulations can be overcome with a simple scale-out strategy, demonstrated on a 32 million cell case running across 32 Tesla GPUs.
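The solver internals were not disclosed, but the generic scale-out recipe is domain decomposition: each GPU owns a block of cells (here around a million apiece) and swaps halo cells with its neighbors every iteration. A sketch (ours) for several GPUs in one node:

    #define HALO 2   // halo width; depends on the stencil (our assumption)

    // Split ncells across ngpu devices. Layout per device:
    // [left halo | local owned cells | right halo].
    void decompose(size_t ncells, int ngpu, float **d_block, size_t *local) {
        *local = ncells / ngpu;                  // ~1M cells/GPU for 32M on 32
        for (int g = 0; g < ngpu; ++g) {
            cudaSetDevice(g);
            cudaMalloc(&d_block[g], (*local + 2 * HALO) * sizeof(float));
        }
    }

    // Per iteration: swap halos with neighbors, then run the local solve.
    // cudaMemcpyPeer goes direct where peer-to-peer access is available,
    // and is staged through the host otherwise.
    void exchange_halos(float **d_block, size_t local, int ngpu) {
        for (int g = 0; g + 1 < ngpu; ++g) {
            // my last owned cells -> right neighbor's left halo
            cudaMemcpyPeer(d_block[g + 1], g + 1,
                           d_block[g] + local, g, HALO * sizeof(float));
            // neighbor's first owned cells -> my right halo
            cudaMemcpyPeer(d_block[g] + HALO + local, g,
                           d_block[g + 1] + HALO, g + 1, HALO * sizeof(float));
        }
    }

More, much more on the GPU Technology Conference homepage.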

© Oil IT Journal - all rights reserved.