High Performance Computing

The EAGE’s interactive session ‘High Performance Computing for Seismic Processing and Imaging,’ chaired by Shell’s Jack Buur and Veritas’ Ed Mariner, turned out to be a polite battle between proponents of Intel clusters and the superscalar brigade, who put up a valiant defense of purpose-built high performance machines.

SGI

SGI’s Igor Zacharov noted a fundamental difference between seismic processing (which requires large bandwidth to scale) and reservoir simulation (which needs low latency). This gives an advantage, in reservoir simulation at least, to conventional high performance architectures such as SGI’s 64-bit ccNUMA machines. These present a single system to software developers, with no limit on the memory available. However, economics dictate that cheaper commodity hardware should be considered for some types of algorithm. Zacharov compared the current situation in upstream computing with that at CERN in 1991. CERN’s SHIFT project, aimed at detecting high-energy events, involved a highly parallel problem similar to seismic processing. The initial strategy was to decouple CPU servers from disk servers and tape drives. However, as the project progressed, a need to re-group the sub-units was recognized. Zacharov sees similar issues with the present trend towards PC clusters - ‘a repeat of the same game.’ He further notes that while CPU development is moving quickly, the same is not true for disks, tape and networks. Zacharov (and SGI) therefore advocate scalable shared memory machines (SSMM) acting as file servers to data on disk, with cheaper clusters providing the CPU ‘bang.’ Zacharov closed with a little trashing of 32-bit architectures, concluding that there is still a role for SSMM within a Linux cluster, and that considerable expertise is required to design and tune such systems.
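
To make the bandwidth/latency distinction concrete, here is a minimal C/MPI sketch (ours, not Zacharov’s - all names and sizes are illustrative) contrasting the two communication patterns: a seismic-style job that streams large blocks of independent traces and communicates rarely, and a reservoir-style job whose neighboring ranks exchange small halos every timestep and so pay the network latency over and over.

/*
 * Illustrative sketch only: contrasts a bandwidth-bound (seismic-style)
 * workload with a latency-bound (reservoir-style) halo exchange.
 * Compile: mpicc -O2 patterns.c -o patterns    Run: mpirun -np 4 ./patterns
 */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

#define TRACE_LEN 2048   /* samples per trace block (illustrative)       */
#define HALO_LEN  64     /* cells exchanged with each neighbor per step  */
#define N_STEPS   100    /* simulation timesteps (illustrative)          */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Seismic-style pattern: each rank processes its own block of traces
     * independently and reports back once - performance is set by how fast
     * bulk data can be moved to the node (bandwidth). */
    float *traces = calloc(TRACE_LEN, sizeof *traces);
    float local_sum = 0.0f, global_sum = 0.0f;
    for (int i = 0; i < TRACE_LEN; i++) {
        traces[i] *= 0.5f;          /* stand-in for real trace processing */
        local_sum += traces[i];
    }
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_FLOAT, MPI_SUM, 0,
               MPI_COMM_WORLD);

    /* Reservoir-style pattern: neighbors swap small halo regions every
     * timestep, so each step pays the interconnect latency repeatedly. */
    float send[HALO_LEN], recv[HALO_LEN];
    memset(send, 0, sizeof send);
    int left  = (rank - 1 + size) % size;
    int right = (rank + 1) % size;
    for (int step = 0; step < N_STEPS; step++)
        MPI_Sendrecv(send, HALO_LEN, MPI_FLOAT, right, 0,
                     recv, HALO_LEN, MPI_FLOAT, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    free(traces);
    MPI_Finalize();
    return 0;
}

The first pattern keeps scaling on a cluster as long as the file servers and interconnect can feed data in bulk; the second is dominated by round-trip latency, which is where shared memory machines keep their edge.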

EDS

Exploration Design Software (EDS) specializes in making large clusters efficient in production seismic processing. Chris Stork claimed the 1,200-CPU cluster designed by EDS for Spectrum EIT is the fourth largest commercial computer in the world. Such computers offer unparalleled price/performance of $950/GFlop (compared with $4,000 to $10,000/GFlop for Unix machines). Clusters offer a scalable architecture and a mature development environment including compilers, documentation and expertise. Stork reports a stable, reliable hardware environment (unlike some commercial attempts at clusters): ‘computer companies come and go, but platforms stay.’ Stork claims both Windows and Linux can be used effectively on clusters - the choice is ‘mostly a personal preference.’ Windows, now ‘fully stable,’ adds 4% to the cost of the hardware and offers a 15% speed improvement for compilers and I/O. Most EDS clients start with Linux and end up with Windows because of this price/performance advantage. Stork offered a comparison between high-end and entry-level hardware [see table]. At a price/performance of $950/GFlop, the comparison comes out strongly in favor of the ‘commodity’ machine. Key software features such as crash/fault tolerance and load balancing are crucial. Asynchronous inter-process communication is mandated - staging 100 MB of data and collecting it later avoids costly Gigabit Ethernet. ‘Message Passing Interface (MPI) is not sufficient, because it needs Gigabit Ethernet.’ Other tricks of the trade include disk caching and low-level code tuning to exploit the SSE capabilities of the Pentium III (offering a 2-8 fold speed improvement). EDS has tested two design approaches: a master/slave configuration with data on a RAID array had I/O bottleneck problems, whereas peer-to-peer parallel I/O requires identical machines. Stork therefore advocates a hybrid approach with several masters.
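
As an illustration of the SSE point, the short C sketch below (our illustration, not EDS code) applies a gain to a trace using the Pentium III’s SSE intrinsics, which operate on four single-precision samples per instruction where scalar code handles one - the kind of hand-tuning behind the quoted 2-8 fold speed-up.

/*
 * Illustrative sketch only: scalar vs. SSE gain application to a trace.
 * The SSE unit on the Pentium III handles four 32-bit floats at a time.
 * Compile (gcc): gcc -O2 -msse sse_gain.c -c
 */
#include <xmmintrin.h>   /* SSE intrinsics (Pentium III and later) */

/* Scalar reference: apply a gain factor to every sample of a trace. */
void gain_scalar(float *trace, int n, float gain)
{
    for (int i = 0; i < n; i++)
        trace[i] *= gain;
}

/* SSE version: four samples per iteration.  Assumes 'trace' is 16-byte
 * aligned and 'n' is a multiple of 4, as with fixed-length gathers. */
void gain_sse(float *trace, int n, float gain)
{
    __m128 g = _mm_set1_ps(gain);              /* broadcast gain x4    */
    for (int i = 0; i < n; i += 4) {
        __m128 v = _mm_load_ps(trace + i);     /* load 4 samples       */
        v = _mm_mul_ps(v, g);                  /* 4 multiplies at once */
        _mm_store_ps(trace + i, v);            /* store 4 samples      */
    }
}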

IBM

IBM’s John Watts described the Open Source movement as both the ‘new economic landscape’ and, for commercial vendors, ‘terra incognita.’ Notwithstanding the uncertainty, ‘all IBM hardware and software is Linux-ready.’ Watts recalled a recent past when the superscalar chip led to an ‘explosion’ in the Unix workstation market, with around half of the world’s seismics processed on IBM SP2 machines. Now we have ‘PC chips with attitude!’ Since the Standish Group reported in 2000 that ‘Linux was becoming very muscular in high performance computing,’ IBM CEO Louis Gerstner has been spending $1 billion a year on Linux. Watts’ message, like SGI’s, steers between Intel cluster advocacy and a reminder that there is still life in the RISC chip, which retains its place in the CPU ‘fitness space’ - especially with a 375 MHz RISC chip offering an order of magnitude better I/O performance than a 1 GHz Pentium. For its part, IBM brings a common file system, high availability, and distributed system management and support.

EDS cost/performance comparison of Intel hardware

Item         | Commodity          | Price  | High end          | Price
CPU          | 2 x 1 GHz          | $700   | 2 x 1.5 GHz Xeon  | $1,300
Memory       | 256 MB             | $150   | 1 GB              | $1,100
Network      | 100 Mbit Ethernet  | $50    | Gigabit Ethernet  | $900
Disk         | 40 GB IDE          | $200   | 36 GB SCSI        | $700
Motherboard  |                    | $800   |                   | $800
Name         | None               | $0     | Branded           | $1,000
Total        |                    | $1,900 |                   | $5,800
$/GFlop      |                    | $950   |                   | $1,900

(The $/GFlop figures appear to assume peak throughput of one floating point operation per clock per CPU - 2 GFlops for the commodity box and 3 GFlops for the high-end Xeon box.)

PDM footnote: Sandia National Labs, a forerunner in the development of clustered supercomputers with its massive Intel-based ASCI-RED machine, has released its cluster controller software C-Plant into the public domain through the Open Source movement.

© Oil IT Journal - all rights reserved.