At the 2013 Nvidia GPU Technology Conference in San Jose, California, Halliburton's Joe Winston showed how Nvidia's Index, a subsurface data graphics accelerator unveiled at last year's SEG, has been integrated with Landmark's DecisionSpace Desktop. Landmark is moving to GPU-based processing to speed rendering of large data sets and to prepare for a move to the cloud. At issue is the visualization of hundred-gigabyte data sets on megapixel-rated displays. The use of Index is part of a trend to complex, multi-core, heterogeneous computing. In fact, two Nvidia software tools are used: Dice (see below) and Index. The latter implements volume visualization through graphics primitives for object rendering and lighting control. Winston's 50-slide presentation shows how Index has been integrated with the DecisionSpace scene graph and the various tricks and transformations this entailed.
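Under the hood, this style of volume visualization boils down to ray-marching an emission/absorption model through the voxel grid. The following is a minimal CUDA sketch of the idea, assuming a toy 64^3 volume, an invented exponential transfer function and axis-aligned rays; it is a generic illustration of GPU volume rendering, not the Index API.

// Minimal front-to-back volume ray-casting sketch in CUDA.
// Generic illustration of GPU volume rendering, *not* the Index API;
// volume size and transfer function are invented for this example.
#include <cstdio>
#include <cuda_runtime.h>

#define NX 64
#define NY 64
#define NZ 64

// March a ray straight down the z axis for each output pixel,
// compositing emission/absorption front to back.
__global__ void raycast(const float* vol, float* image)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= NX || y >= NY) return;

    float color = 0.0f, transmittance = 1.0f;
    for (int z = 0; z < NZ && transmittance > 0.01f; ++z) {
        float density = vol[((size_t)z * NY + y) * NX + x];
        float alpha = 1.0f - expf(-density);   // toy transfer function
        color += transmittance * alpha;        // emission weighted by visibility
        transmittance *= (1.0f - alpha);       // absorb along the ray
    }
    image[y * NX + x] = color;
}

int main()
{
    size_t nvox = (size_t)NX * NY * NZ;
    float *vol, *img;
    cudaMallocManaged(&vol, nvox * sizeof(float));
    cudaMallocManaged(&img, (size_t)NX * NY * sizeof(float));
    for (size_t i = 0; i < nvox; ++i) vol[i] = 0.02f;  // dummy constant volume

    dim3 block(16, 16), grid((NX + 15) / 16, (NY + 15) / 16);
    raycast<<<grid, block>>>(vol, img);
    cudaDeviceSynchronize();
    printf("pixel(0,0) = %f\n", img[0]);
    cudaFree(vol); cudaFree(img);
    return 0;
}

A production renderer such as Index adds arbitrary camera rays, lighting control and multi-GPU distribution on top of this basic compositing loop.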
Jon Marbach presented TerraSpark's work on implementing seismic attribute computation on the GPU. TerraSpark is using GPU acceleration because 3D seismic is essentially 'image' data, amenable to image-processing-inspired, data-parallel algorithms that map well to the GPU. Computing seismic attribute volumes such as horizon tracks, curvature or coherency can be very compute-intensive; fault extraction can take hours of CPU time. Targeting Nvidia Fermi and Kepler GPUs with modest video RAM, TerraSpark has shown that GPU acceleration works. Most tasks achieve a 3-5x speedup (curvature computation does much better at 32x). The porting exercise also brought significant code quality enhancements: 'porting forces a hard look at the existing code base,' and the results are improved algorithm accuracy and a better product. 'Seismic attributes are a no-brainer for GPU acceleration.'
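To make the 'image data' point concrete, here is a minimal CUDA sketch of a data-parallel semblance-style coherence attribute, with one thread per output sample working over a 3x3 trace neighborhood and a short vertical window. The volume dimensions and window length are invented, and this is a generic illustration rather than TerraSpark's code.

// Semblance-style coherence attribute: one thread per output sample,
// each computing semblance over a 3x3 trace neighborhood and a short
// vertical window. Generic sketch, not TerraSpark's implementation.
#include <cstdio>
#include <cuda_runtime.h>

#define NX 32     // inlines
#define NY 32     // crosslines
#define NZ 256    // samples per trace
#define HALF_WIN 4

__device__ __forceinline__ float sample(const float* v, int x, int y, int z)
{
    return v[((size_t)z * NY + y) * NX + x];
}

__global__ void coherence(const float* seis, float* attr)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    int z = blockIdx.z * blockDim.z + threadIdx.z;
    if (x < 1 || y < 1 || z < HALF_WIN ||
        x >= NX - 1 || y >= NY - 1 || z >= NZ - HALF_WIN) return;

    float num = 0.0f, den = 0.0f;
    for (int k = -HALF_WIN; k <= HALF_WIN; ++k) {
        float s = 0.0f, ss = 0.0f;
        for (int j = -1; j <= 1; ++j)
            for (int i = -1; i <= 1; ++i) {
                float a = sample(seis, x + i, y + j, z + k);
                s += a; ss += a * a;
            }
        num += s * s;      // energy of the stacked traces
        den += 9.0f * ss;  // 9 traces in the neighborhood
    }
    attr[((size_t)z * NY + y) * NX + x] = den > 0.0f ? num / den : 0.0f;
}

int main()
{
    size_t nvox = (size_t)NX * NY * NZ;
    float *seis, *attr;
    cudaMallocManaged(&seis, nvox * sizeof(float));
    cudaMallocManaged(&attr, nvox * sizeof(float));
    for (size_t i = 0; i < nvox; ++i) { seis[i] = 1.0f; attr[i] = 0.0f; }

    dim3 block(8, 8, 8), grid((NX + 7) / 8, (NY + 7) / 8, (NZ + 7) / 8);
    coherence<<<grid, block>>>(seis, attr);
    cudaDeviceSynchronize();
    // A perfectly coherent (constant) volume should score 1.0.
    printf("coherence at center = %f\n",
           attr[((size_t)(NZ / 2) * NY + NY / 2) * NX + NX / 2]);
    cudaFree(seis); cudaFree(attr);
    return 0;
}

Because every output sample is independent, the kernel saturates the GPU with work, which is exactly what makes such attributes 'a no-brainer' for acceleration.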
Nvidia’s Stefan Radig showed how to port generic number crunching to large CPU/GPU heterogeneous clusters with the Dice library. The Dice API is claimed to let domain experts develop scalable software for GPU clusters without having to manage low-level parallelization or network topologies. Dice is presented as an alternative to other cluster frameworks such as Open MPI. It leverages an in-memory NoSQL database that provides resource allocation and scheduling and supports ACID transactions in multi-user environments.
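For contrast, the explicit bookkeeping of an Open MPI port (ranks, hand-rolled domain decomposition, halo exchange) is the kind of low-level detail Dice claims to hide. Below is a minimal host-side sketch of that bookkeeping; the problem size and one-dimensional split are invented, and this is plain MPI, not the Dice API.

// Minimal MPI sketch of the explicit bookkeeping (ranks, manual domain
// decomposition, halo exchange) that a framework like Dice claims to
// abstract away. Generic illustration only, not the Dice API.
#include <mpi.h>
#include <vector>
#include <cstdio>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n_global = 1 << 20;                // invented problem size
    int n_local = n_global / size;               // static, hand-rolled partition
    std::vector<float> slab(n_local + 2, 1.0f);  // +2 halo cells

    // Exchange one halo cell with each neighbor -- the kind of low-level
    // communication a domain expert would rather not write by hand.
    int up = (rank + 1) % size, down = (rank - 1 + size) % size;
    MPI_Sendrecv(&slab[n_local], 1, MPI_FLOAT, up,   0,
                 &slab[0],       1, MPI_FLOAT, down, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&slab[1],           1, MPI_FLOAT, down, 1,
                 &slab[n_local + 1], 1, MPI_FLOAT, up,   1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    if (rank == 0) printf("halo exchange done on %d ranks\n", size);
    MPI_Finalize();
    return 0;
}

Dice's pitch is to replace this per-rank plumbing with database-mediated scheduling, so the domain code sees work items rather than ranks and messages.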
Max Grossman outlined Repsol’s hybrid implementation of a 3D Kirchhoff seismic migration algorithm on a heterogeneous GPU/CPU cluster. Migration is deemed a good target for hybrid execution as CPU-based systems can take weeks to process the massive data sets. Legacy implementations involve ‘pointer chasing,’ compute-intensive kernels and multiple I/O bottlenecks. Repsol adopted an incremental approach to the port, starting with a CPU-only development before moving to a hybrid CPU+GPU deployment with dynamic work distribution. Tests showed good migration kernel speedups (up to 35x), but overall performance was hampered by the significant portion of non-parallelizable code in the application.
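The limit here is Amdahl’s law: overall speedup is 1 / ((1 - p) + p/s), where p is the fraction of runtime that was parallelized and s is the speedup of that fraction. Assuming, purely for illustration, that the migration kernel accounts for 80% of runtime, the reported 35x kernel speedup yields only 1 / (0.2 + 0.8/35), about 4.5x overall, and even an infinitely fast kernel would cap out at 5x. Shrinking the serial remainder, not the kernel, becomes the next target.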
Ahmad Abdelfettah (Stanford) described Saudi Aramco-sponsored work performed at the King Abdullah University of Science and Technology (Kaust) on numerical techniques for reservoir simulation on GPUs. The project is part of Kaust’s ‘strategic initiative in extreme computing’ as well as Aramco’s ‘giga-cell’ reservoir modeling project. Fluid flow modeling comprises a ‘physics’ phase and a ‘solver’ phase; as core counts increase, the latter comes to dominate compute bandwidth. Today’s applications have yet to ‘feel’ this effect, but as core counts rise, they certainly will. Kaust has developed a library of basic linear algebra subprograms (KBLAS) to prepare for the new massively parallel, heterogeneous environments. More from the GPU Technology Conference.
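The solver phase reduces to sparse linear algebra, notably the sparse matrix-vector product at the heart of Krylov iterations. Below is a minimal CUDA sketch (CSR storage, one thread per row, with a 2x2 identity matrix as a smoke test); it is a generic illustration of the kind of kernel such a library provides, not the KBLAS API.

// Sparse matrix-vector product (CSR, one thread per row) -- the building
// block of the Krylov solvers that dominate reservoir simulation runtime.
// Generic illustration, not the KBLAS API.
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

__global__ void spmv_csr(int n_rows, const int* row_ptr, const int* col_idx,
                         const float* vals, const float* x, float* y)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= n_rows) return;
    float sum = 0.0f;
    for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
        sum += vals[j] * x[col_idx[j]];
    y[row] = sum;
}

int main()
{
    // 2x2 identity matrix in CSR as a smoke test.
    int h_row_ptr[] = {0, 1, 2}, h_col_idx[] = {0, 1};
    float h_vals[] = {1.0f, 1.0f}, h_x[] = {3.0f, 4.0f};

    int *row_ptr, *col_idx; float *vals, *x, *y;
    cudaMallocManaged(&row_ptr, sizeof(h_row_ptr));
    cudaMallocManaged(&col_idx, sizeof(h_col_idx));
    cudaMallocManaged(&vals, sizeof(h_vals));
    cudaMallocManaged(&x, sizeof(h_x));
    cudaMallocManaged(&y, 2 * sizeof(float));
    memcpy(row_ptr, h_row_ptr, sizeof(h_row_ptr));
    memcpy(col_idx, h_col_idx, sizeof(h_col_idx));
    memcpy(vals, h_vals, sizeof(h_vals));
    memcpy(x, h_x, sizeof(h_x));

    spmv_csr<<<1, 32>>>(2, row_ptr, col_idx, vals, x, y);
    cudaDeviceSynchronize();
    printf("y = [%f, %f]\n", y[0], y[1]);  // expect [3, 4]
    cudaFree(row_ptr); cudaFree(col_idx);
    cudaFree(vals); cudaFree(x); cudaFree(y);
    return 0;
}

Such kernels are memory-bandwidth bound, which is why solver time, not 'physics', is expected to dominate as giga-cell models meet ever higher core counts.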
© Oil IT Journal - all rights reserved.