NCSA Blue Waters runs reservoir fluid flow models

ExxonMobil’s John Kuzan talks to Oil IT Journal about a record-breaking supercomputer trial.

ExxonMobil has used the National Center for Supercomputing Applications’ (NCSA) Blue Waters supercomputer to benchmark a series of multi-million to billion cell reservoir models using its own proprietary code base. The parallel simulation test used all of Blue Waters’ 22,640 32-core Cray XE6 nodes and 4,228 Cray XK7 GPU hybrid nodes, an aggregate of 716,800 processors. The system was used to help ExxonMobil ‘make better investment decisions by more efficiently predicting reservoir performance under geological uncertainty to assess a higher volume of alternative development plans in less time.’ Blue Waters’ parallel processing capability was used to speed the modeling of multiple realizations of fluid flow models for various reservoirs, work that is currently ‘hampered by the slow speed of reservoir simulation.’

ExxonMobil’s scientists worked closely with the NCSA to benchmark a series of multi-million to billion cell models on Blue Waters. The ExxonMobil/NCSA team tuned all aspects of the reservoir simulator, from input/output to communications across hundreds of thousands of processors. These efforts delivered strong scalability at processor counts ranging up to Blue Waters’ full capacity. In an email exchange, Oil IT Journal asked John Kuzan, reservoir function manager at ExxonMobil Upstream Research, for more on the trial.
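(Strong scaling of this kind is typically measured by timing a fixed-size problem at increasing processor counts and comparing against the ideal speed-up. The sketch below is purely illustrative and is not ExxonMobil’s simulator: a minimal MPI timing harness in C that partitions a fixed global cell count across ranks, runs a toy per-cell kernel standing in for the flow solver, performs one global reduction standing in for the solver’s communication step, and reports the wall-clock time so runs at different rank counts can be compared.)

/* strong_scaling.c - illustrative MPI strong-scaling harness (not ExxonMobil code).
 * The global problem size stays fixed; each rank works on its share, and rank 0
 * reports the elapsed time so runs at different process counts can be compared.
 * Build: mpicc -O2 strong_scaling.c -o strong_scaling
 * Run:   mpirun -np <N> ./strong_scaling
 */
#include <mpi.h>
#include <stdio.h>

#define GLOBAL_CELLS 100000000L  /* fixed "model size" shared by all runs */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Divide the fixed global cell count across ranks (last rank takes the remainder). */
    long local_cells = GLOBAL_CELLS / nprocs;
    if (rank == nprocs - 1)
        local_cells += GLOBAL_CELLS % nprocs;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    /* Toy stand-in for a per-cell solver update. */
    double local_sum = 0.0;
    for (long i = 0; i < local_cells; i++)
        local_sum += 1.0 / (double)(i + 1 + rank);

    /* A single reduction stands in for the solver's global communication step. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    double t1 = MPI_Wtime();
    double elapsed = t1 - t0, max_elapsed = 0.0;
    MPI_Reduce(&elapsed, &max_elapsed, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("ranks=%d  checksum=%.6f  time=%.3fs\n", nprocs, global_sum, max_elapsed);

    MPI_Finalize();
    return 0;
}

Under ideal strong scaling the reported time halves each time the rank count doubles; in practice, the communication and I/O tuning described above determines how closely a real simulator approaches that ideal.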

Was the idea to run many alternative development plans on smaller models or to perform an exhaustive analysis of an extremely large model?

Yes to both. We have techniques for running coarser models that are trend-accurate, or for simply taking longer with fewer processors on very large models. In short, there are lots of options to quantify the subsurface uncertainty and have an impact on business decisions.

Was the standard Blue Waters configuration used in the trial?

Yes, the standard Blue Waters configuration was used, so we could test the code’s ability to run on different architectures with minimal need for re-coding.

How easy was it to port your proprietary code base to Blue Waters? Any insights on leveraging the GPUs for instance?

It was fairly easy, but I can’t get into the details on GPUs at this time. Give us six months and I’ll be able to comment more readily!

Are there plans to try out Blue Waters on seismic imaging?

Not at this point because it does not represent such a grand challenge. That is not to say seismic is easy, just that we viewed the seismic problem as easier to parallelize at such a massive scale.

The test sounds like a success. Is ExxonMobil going to scale up its in-house IT to match Blue Waters?

The test was a success. It exposed certain areas of the code that were bottlenecks, allowed us to enhance performance when using a few thousand processors, and gave us insights into performance enhancements for smaller models too. Stressing our code has great value for robustness, stability, and scalability, even for smaller models and far lower processor counts. On the last question, I think business demand will drive the direction we go…

~

The $200 million Blue Waters system was originally to be built by IBM, which pulled out and was replaced by Cray. The petaflop-class machine was completed in 2012 but was not submitted to the authoritative Top500 list (see here for why).

Other NCSA partners include BP, Caterpillar, Cray, Dassault Systèmes, Dell, GE, and Siemens.

