Fatmir Hoxha—Our current R&D focus is on depth imaging and pre-stack migration. We started using GPUs last year, learning how to program them, and got our prototype running earlier this year. The 'killer app' is finite difference modeling, the key to reverse time migration (RTM), where we were surprised to find that GPUs gave an immediate 30-fold improvement over the CPU.
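The interview shows no code, but the finite-difference time stepping at the heart of RTM can be sketched in a few lines. This is a purely illustrative 1D constant-velocity acoustic example by the editor (all names and parameters are assumptions, not Hoxha's code); it is exactly this kind of regular stencil loop that maps well onto GPU cores.

```python
# Illustrative 1D acoustic finite-difference time stepping: one second-order
# update of p_tt = c^2 * p_xx with fixed (zero) boundaries. Editor's sketch.

def fd_step(p_prev, p_curr, c, dt, dx):
    """Advance the pressure field by one time step."""
    r2 = (c * dt / dx) ** 2          # squared Courant number; must be <= 1 for stability
    p_next = [0.0] * len(p_curr)
    for i in range(1, len(p_curr) - 1):
        lap = p_curr[i - 1] - 2.0 * p_curr[i] + p_curr[i + 1]   # discrete Laplacian
        p_next[i] = 2.0 * p_curr[i] - p_prev[i] + r2 * lap
    return p_next

# Propagate a small point disturbance for a few steps.
n = 101
p_prev = [0.0] * n
p_curr = [0.0] * n
p_curr[n // 2] = 1.0
for _ in range(50):
    p_prev, p_curr = p_curr, fd_step(p_prev, p_curr, c=1500.0, dt=0.0005, dx=1.0)
```

On a GPU the inner loop over `i` disappears: each grid point becomes one thread, which is why this kernel parallelizes so naturally.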
What exactly is compared here?
A single core of a 64-bit AMD CPU against a single NVIDIA Tesla C870 GPU card with 128 cores.
Is that fair?
That's not the point; it is consistent. We were interested in testing code across the two architectures. The NVIDIA card does its own parallelization; parallelizing across multi-core CPU architectures is a different story! In a production environment, taking account of the GPU cards' restricted memory (limited to 1.4 GB), we still found a 10-fold speedup, with no tweaking of how the code runs across the 128 cores.
So are GPUs to replace the thousands of clusters in seismic processing shops around the world?
Probably not. GPUs are great for some tasks, such as RTM. But even this requires a lot of GPU-specific development with NVIDIA's CUDA API for memory management and scheduling. For companies with masses of legacy code, it is unrealistic to imagine that this can be effortlessly ported to CUDA. Apart from anything else, there is a skills shortage, and companies don't want to become dependent on a few CUDA developers.
But won't there be a parallelizing Fortran or C compiler for CUDA?
Probably, but this will never remove the need to tune code to the GPU architecture. It's unlikely that the benefits will come that easily.
What about floating point and double precision math?
CUDA provides single precision floating point math, which we find sufficient for RTM.
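For context, IEEE-754 single precision carries roughly seven decimal digits, versus about sixteen for double precision. A quick pure-Python check of that gap (the `struct` round-trip is the editor's illustration, not anything from the interview):

```python
import struct

def to_float32(x):
    """Round a Python float (double precision) to IEEE-754 single precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

value = 1.0 + 1e-8            # a double resolves this tiny increment
print(value > 1.0)            # True
print(to_float32(value) == 1.0)  # True: single precision rounds it away
```

For wavefield amplitudes in RTM, as the interview notes, this reduced precision is typically acceptable; it matters more for accumulations over very many terms.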
© Oil IT Journal - all rights reserved.