SPE DEC07 high performance computing special session

Shell, BP and the US Council on Competitiveness outline trends in oil and gas HPC.

The special session on high performance computing (HPC) began with a video from the Kafkaesquely-named ‘US Council on Competitiveness’ (USCoC, compete.org). The DreamWorks-produced video showed how HPC is essential to weather forecasting, the US Navy, medicine, the entertainment industry and, naturellement, seismics. Curiously, the video was narrated by a penguin, a reference no doubt to Linux’s ubiquity in HPC, but this must be the first time Linux is considered ‘the OS that dares not speak its name!’

Tichenor

The USCoC’s Suzy Tichenor believes companies need to ‘out-compute to out-compete’ and, in this context, HPC is an ‘innovation accelerator.’ HPC provided critical compute horsepower for Chevron’s Jack development. Barriers to uptake include lack of talent, lack of scalable production software and cost/ROI issues. These are compounded by what Tichenor describes as a ‘bimodal’ market, with the mid-range machines missing.

Shell

Jim Clippard enumerated some ‘petascale’ problems such as ‘seeing’ (seismic) and ‘draining’ (reservoir modeling) the earth. Compute-intensive reverse time wave equation migration ‘makes the invisible visible’ in the subsalt section of the Gulf of Mexico. Achieving such compute horsepower brings power and heat issues; Shell’s facility costs $20k/year in electricity. For Clippard, the future is parallel, even though programming such machines is a challenge. Bottlenecks such as memory and interconnect latency differ across jobs and machines. There is a need to manage heterogeneity: ‘IT folks hate this!’
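By way of illustration, the computational core of reverse time migration is an explicit finite-difference solution of the two-way wave equation, run forward in time for the source wavefield and backward for the receiver wavefield. The following minimal Python sketch (not Shell’s code; the grid size, constant velocity model and Ricker source wavelet are illustrative assumptions) shows the kind of time-stepping kernel involved.

import numpy as np

nx, nz = 200, 200                  # grid points in x and z (illustrative)
dx, dt, nt = 10.0, 0.001, 500      # spacing (m), time step (s), steps
c = np.full((nx, nz), 2500.0)      # constant velocity model (m/s), assumed

p_prev = np.zeros((nx, nz))        # wavefield at time t - dt
p_curr = np.zeros((nx, nz))        # wavefield at time t
sx, sz = nx // 2, nz // 2          # source at the grid centre
f0 = 15.0                          # dominant source frequency (Hz)

for it in range(nt):
    # second-order centred Laplacian on interior points
    lap = np.zeros_like(p_curr)
    lap[1:-1, 1:-1] = (p_curr[2:, 1:-1] + p_curr[:-2, 1:-1] +
                       p_curr[1:-1, 2:] + p_curr[1:-1, :-2] -
                       4.0 * p_curr[1:-1, 1:-1]) / dx**2
    # explicit step of the two-way wave equation p_tt = c^2 (p_xx + p_zz)
    p_next = 2.0 * p_curr - p_prev + (c * dt)**2 * lap
    # inject a Ricker wavelet at the source position
    arg = (np.pi * f0 * (it * dt - 1.0 / f0))**2
    p_next[sx, sz] += (1.0 - 2.0 * arg) * np.exp(-arg)
    p_prev, p_curr = p_curr, p_next

In production imaging, the receiver wavefield is propagated backwards in time and cross-correlated with the source wavefield at each step, which is what drives both the flop count and the memory traffic Clippard described.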

BP

Keith Gray has ‘one of the most fun jobs in the company,’ managing BP’s 100 TeraFlop HPC installation. BP’s focus is on subsalt seismic imaging and has ‘delivered results and shown breakthroughs to the industry.’ BP’s compute capability has grown one thousand fold in the last eight years. The seismic machine now sports 14,000 cores and 2 Petabytes of storage. All of which implies a significant effort in data management, code optimization and parallelization. There is also a need to strike a balance between systems that let R&D develop its ideas while production users have the scale they need. Some very large memory systems offer straightforward FORTAN programming for researchers. Gray believes we may have pushed too hard towards commoditization and are seeing fewer breakthrough technologies.
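Much of the parallelization involved exploits the fact that individual seismic shots can be imaged independently before their partial images are summed. A minimal sketch of that farming-out pattern, assuming a hypothetical migrate_shot kernel (Python is used here purely for illustration; BP’s production codes are not public):

from multiprocessing import Pool

def migrate_shot(shot_id):
    # hypothetical stand-in for a compute-heavy imaging kernel,
    # such as the wave-propagation loop sketched above, applied
    # to one shot gather
    return shot_id, sum(i * i for i in range(100_000))

if __name__ == '__main__':
    shot_ids = range(1000)           # illustrative survey of 1,000 shots
    with Pool(processes=8) as pool:  # one worker per core on a small node
        for shot, partial in pool.imap_unordered(migrate_shot, shot_ids):
            pass                     # in practice, partial images are summed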

Software vs. hardware

A debate ensued on the need to advance application software as well as hardware. Some opined that this should be left to the compiler writers, to avoid the need to rewrite application code. On today’s clusters, only one node in four may actually be computing while the other three hang around waiting for data. Steve Landon (HP) agreed that too much money was going into hardware over software. If this is not fixed, the gap will widen and the software for the Petabyte machine ‘will not be there.’ On the subject of architecture, BP’s decision three years ago to opt for a cluster over shared memory ‘irritated both its R&D and its parallel processing communities equally!’
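One standard remedy for nodes that hang around waiting for data is to overlap I/O with computation, prefetching the next block of traces while the current one is processed. A minimal double-buffering sketch, with hypothetical load_block and process_block standing in for real I/O and compute routines:

from concurrent.futures import ThreadPoolExecutor

def load_block(block_id):
    # stand-in for reading a block of seismic traces from disk or network
    return [block_id] * 1_000_000

def process_block(block):
    # stand-in for the compute kernel applied to one block
    return sum(block)

def run(n_blocks):
    with ThreadPoolExecutor(max_workers=1) as io:
        future = io.submit(load_block, 0)              # prefetch the first block
        for i in range(n_blocks):
            block = future.result()                    # wait for the current block
            if i + 1 < n_blocks:
                future = io.submit(load_block, i + 1)  # overlap the next read
            process_block(block)                       # compute while I/O proceeds

if __name__ == '__main__':
    run(4)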

Open Source

Developers working in the Open Source movement complained of the lack of feedback and collaboration in the industry. BP is very interested in Open Source. All BP clusters run Linux with a mix of open source and commercial debuggers and job schedulers. Shell is in the same boat but expressed caution regarding unmaintained code. BP is willing to ‘try any model—maybe to pay for open source development.’

