High Performance Computing Session

Ebb Pye’s Visualization Theatre ‘Academic Session’ investigated high-end machinery and algorithms.

According to Paul Sava (Colorado School of Mines), the high performance computing industry is moving to petaflop machines, putting huge compute power in the hands of industry and academia and making elastic and anisotropic reverse time migration and inversion possible. Cross-correlating two 4D wavefields produces a very large multi-dimensional field. Conventional imaging discards most of this information; using more of it is the way forward.
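Sava is not quoted on implementation detail; the following NumPy sketch only illustrates the point, contrasting the conventional zero-lag cross-correlation imaging condition, which collapses the correlation to one value per image point, with an extended version that retains a range of time lags. The wavefield arrays src and rcv are hypothetical stand-ins for extrapolated source and receiver wavefields.

```python
import numpy as np

# Hypothetical source and receiver wavefields, shape (nt, nz, nx),
# as produced by forward and reverse-time wavefield extrapolation.
nt, nz, nx = 256, 64, 64
rng = np.random.default_rng(0)
src = rng.standard_normal((nt, nz, nx))
rcv = rng.standard_normal((nt, nz, nx))

# Conventional imaging condition: zero-lag temporal cross-correlation.
# The full multi-dimensional correlation collapses to one image value per point.
image = np.sum(src * rcv, axis=0)            # shape (nz, nx)

# Extended imaging condition (sketch): retain a range of time lags tau,
# preserving part of the information conventional imaging dumps.
lags = range(-8, 9)
ext_image = np.zeros((len(lags), nz, nx))
for i, tau in enumerate(lags):
    s = src[max(0, -tau):nt - max(0, tau)]   # src(t)
    r = rcv[max(0, tau):nt - max(0, -tau)]   # rcv(t + tau)
    ext_image[i] = np.sum(s * r, axis=0)     # shape (nlags, nz, nx)
```

The extended output is an order of magnitude larger per image point, which is why Sava sees petaflop machines as the enabler.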

Weglein

Art Weglein of the University of Houston’s Mission-Oriented Seismic Research Program (M-OSRP) is likewise leveraging HPC in ‘responding to pressing seismic E&P challenges.’ In the context of wider azimuths and finer sampling, Weglein cautioned that ‘no current migration algorithm will correctly model a flat bed beneath a sphere.’ Weglein was most enthusiastic about the possibility of seismic inversion without velocities, suggesting a method involving seismic events ‘talking’ to each other, a form of ‘seismic group therapy.’ Here, a ‘closed form’ processing technique using Fang Liu’s multi-agent genetic algorithm goes straight from recorded data to the depth model.
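No detail of Fang Liu’s algorithm is given; the toy sketch below only illustrates the generic genetic-algorithm idea of evolving candidate depth models against a data-misfit objective. The misfit() function and the layered-model parameterization are hypothetical stand-ins for the real physics.

```python
import numpy as np

rng = np.random.default_rng(1)
NLAYERS, POP, GENS = 10, 40, 200

def misfit(model, data):
    # Hypothetical stand-in for the real objective: how well a candidate
    # depth model explains the recorded seismic data.
    return np.sum((model - data) ** 2)

def evolve(data):
    pop = rng.uniform(0.0, 5.0, (POP, NLAYERS))       # random depth models (km)
    for _ in range(GENS):
        fit = np.array([misfit(m, data) for m in pop])
        parents = pop[np.argsort(fit)[:POP // 2]]     # selection: keep best half
        # Crossover: mix pairs of parents layer by layer.
        mates = parents[rng.permutation(len(parents))]
        mask = rng.random(parents.shape) < 0.5
        children = np.where(mask, parents, mates)
        # Mutation: small random perturbations.
        children += rng.normal(0.0, 0.05, children.shape)
        pop = np.vstack([parents, children])
    return pop[np.argmin([misfit(m, data) for m in pop])]

toy_data = rng.uniform(0.0, 5.0, NLAYERS)             # toy 'recorded data'
best_model = evolve(toy_data)
```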

IBM

Earl Dodd described IBM’s move to ‘petascale computing.’ Commodity Linux clusters have ‘repealed’ Moore’s Law and now dominate the HPC landscape. Oil and gas is the second largest HPC market after government (spy satellites etc.). Data is currently growing at 100% per year, making for huge storage requirements. The need for speed is illustrated by the 45 petaflops required for the M-OSRP seismic experiment above, while the ‘intelligent oilfield’ will require 1.7 × 10²¹ flops, i.e. beyond petascale. Matching algorithms to fast-evolving hardware like multi-core CPUs and GPUs is where it’s at.
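A back-of-envelope check on these figures (the 1 PB starting point is a hypothetical example):

```python
# 100% annual growth means storage doubles every year.
storage_now_pb = 1.0                     # hypothetical starting point, 1 PB
years = 5
storage_future_pb = storage_now_pb * 2 ** years   # 32 PB after 5 years

# Comparing the two compute figures quoted above.
mosrp_flops = 45e15                      # 45 petaflops for the M-OSRP experiment
oilfield_flops = 1.7e21                  # 'intelligent oilfield' estimate
print(oilfield_flops / mosrp_flops)      # ~38,000x the M-OSRP requirement
```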

BP

John Etgen, BP’s HPC guru, described the design and operation of BP’s innovative Wide Azimuth Towed Streamer (WATS) seismic survey in the Gulf of Mexico. The WATS technique was conceived by BP scientists working on BP’s in-house HPC cluster (OITJ Vol. 8 N° 4). WATS meant that BP had spent $100 million ‘on a scientist’s hunch’ and needed rapid feedback from the survey to check that everything was working. Etgen likes big-memory machines and regrets the demise of the ‘traditional’ supercomputer. WATS processing involves ‘buying whatever it takes.’ Etgen sees ‘tension’ between large memory requirements and the conventional COTS cluster community; the issue is that the tools for HPC on multi-core hardware aren’t there yet. Etgen also expressed concern over the quality of Intel’s Fortran compiler for HPC.

PeakStream

ATI unit PeakStream was founded in mid-2006 to develop ‘stream computing’ solutions that allow ATI graphics processors (GPUs) to work alongside CPUs on compute-intensive calculations. A 20-fold speedup is claimed for seismic imaging. PeakStream is also working on IBM Cell BE-based computing. The platform uses existing development tools along with the Brook language from Stanford researcher Pat Hanrahan.
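PeakStream’s own API is not described in the article; the NumPy sketch below only illustrates the generic stream-computing model, a per-element kernel mapped across a whole data stream, which is what lets a GPU parallelize the work.

```python
import numpy as np

def kernel(sample, scale):
    # A per-element operation; in the stream model the runtime maps this
    # across the whole stream on the GPU rather than looping on the CPU.
    return scale * sample * sample

# A seismic trace as the input stream (random stand-in data).
trace = np.random.default_rng(2).standard_normal(1_000_000)

# CPU-style: one element at a time (shown on a small slice).
out_loop = np.array([kernel(s, 0.5) for s in trace[:10]])

# Stream-style: the same kernel applied over the whole array at once,
# the form a stream processor executes in parallel.
out_stream = kernel(trace, 0.5)
```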


© Oil IT Journal - all rights reserved.