PDM - Why does Shell need a supercomputer?
Burr - We held a workshop earlier this year and realized that many new ideas for E&P applications were extremely compute-intensive. Pre-stack depth migration, risk analysis and reservoir modeling were all pushing the envelope for Shell's existing infrastructure. There was a general feeling that although many quality algorithms had been developed in-house over the years, they were not being used to full effect simply because they took too long to run. So SIEP's researchers decided to build a supercomputer.
PDM - What made you decide to build rather than buy?
Burr - We decided to go for off-the-shelf hardware and open software to give us more long-term flexibility. Open software makes us independent of vendor operating systems, and promises a better growth path. We have found that our algorithms have a much longer life span than hardware. Some of our software was originally developed over 10 years ago for VAXes or for the Cray. Also, Linux is getting very professional and reliable.
PDM - Have you had to adapt your algorithms for parallel computing?
Burr - Not really. We have two sorts of uses for parallel processing. One just involves running the same program on lots of data. This can be fairly easily shared out between the nodes and the results collated after the fact. Another kind of parallel process involves operations on different nodes sharing data during processing. This calls for more sophisticated programming with inter-process communication. Fortunately Shell has been working on this type of process for some time and we are in pretty good shape to implement this technology on the new machine.
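The two modes Burr describes can be sketched in miniature. The example below is a hypothetical illustration in Python, using threads as stand-ins for cluster nodes; Shell's actual codes would run across separate nodes with a message-passing layer such as MPI, and the function names (`migrate_shot`, `pattern_one`, `pattern_two`) are invented for this sketch.

```python
# Toy sketch of the two parallel patterns described above, using
# threads as stand-ins for cluster nodes. Purely illustrative: on a
# real cluster the work is spread over physical nodes with message
# passing (e.g. MPI), not threads inside one process.
from concurrent.futures import ThreadPoolExecutor
from queue import Queue
from threading import Thread

# Pattern 1: "embarrassingly parallel" - run the same computation on
# many independent chunks of data, then collate the results afterwards.
def migrate_shot(shot):                # hypothetical per-chunk job
    return sum(x * x for x in shot)

def pattern_one(shots):
    with ThreadPoolExecutor(max_workers=4) as pool:
        partial = list(pool.map(migrate_shot, shots))  # fan out
    return sum(partial)                                # collate

# Pattern 2: workers exchange intermediate data *during* the run,
# which needs explicit inter-process (here: inter-thread) messaging.
def worker(inbox, outbox, data):
    local = sum(data)
    outbox.put(local)           # send intermediate result to the peer
    other = inbox.get()         # receive the peer's intermediate result
    outbox.put(local + other)   # final answer depends on both sides

def pattern_two(a, b):
    to_child, to_parent = Queue(), Queue()
    t = Thread(target=worker, args=(to_child, to_parent, b))
    t.start()
    local = sum(a)
    to_child.put(local)         # exchange happens mid-computation
    other = to_parent.get()
    combined = to_parent.get()
    t.join()
    return combined

total = pattern_one([[1, 2], [3, 4]])       # 1+4+9+16 = 30
combined = pattern_two([1, 2, 3], [4, 5])   # 6+9 = 15
```

The first pattern needs no coordination until the final collation step; the second cannot make progress without the exchange, which is why it calls for the more sophisticated inter-process programming Burr mentions.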
PDM - Just how fast will the new machine be?
Burr - There are 1024 nodes, each with a single 1 GHz Pentium III processor. This gives us a theoretical upper limit of 2 teraflops. Of course, in reality the machine will probably deliver 10-20% of this.

© Oil IT Journal - all rights reserved.