Oil IT Journal Interview - Bert Beals, Cray

Cray’s global head of energy talks to Oil IT Journal about machine learning in seismic imaging, Halliburton/Landmark’s iEnergy community initiative, the merits of CPU vs. GPU computing and PGS’ in-memory processed Gulf of Mexico Triton mega survey. Cray’s initial support for iEnergy centers on seismic imaging, but there are plans to add support for reservoir modeling and interpretation.

Cray stole the show at last month’s Society of Exploration Geophysicists annual meeting in Houston with the unveiling of PGS’ work on machine learning-based seismic imaging (see page 4). Cray is also involved in Landmark’s iEnergy community. Oil IT Journal asked Bert Beals, Cray’s global head of energy, how the initiative was going and what machine learning is bringing to seismic imaging.

Cray has engaged with Landmark for some years, notably with Steve Angelovich, SeisSpace product manager, on seismic processing workflows. Recently we have been collaborating on how to migrate software to current technology, with the resulting announcement of SeisSpace/ProMax being certified to run on our CS400 cluster supercomputer*.

What is your focus with iEnergy?

Many Landmark clients have not upgraded their hardware, especially processors, for a few years. It can be hard for a software company to advise on this. This is where we at Cray can help, by removing hardware impediments to running the current versions of SeisSpace processing software and getting the full benefit of the latest technology.

What particular technologies are we talking about?

This depends on the client, but it might be a migration to the latest operating system, RedHat Enterprise 7, and/or the new OmniPath architecture, Intel’s latest on-processor embedded interconnect.

You mean as an alternative to OpenMP?

No. This is down at the hardware level, more of an alternative to Ethernet. Interconnect has been a longstanding bottleneck in high performance computing. We are also helping clients analyze the potential benefits from new processors. If their current processors are 2-3 years old, what performance hike can they expect by moving to the current generation, say from Intel IvyBridge to Broadwell?

Migrating a cluster in the current climate seems a bit improbable!

OK, the current downturn means that budgets have been ripped apart. But Cray takes a long-term view. I’ve been through half a dozen downturns since the 1970s. The key thing is to innovate your way out of a downturn with smart technology. You can’t cost-cut your way to profitability indefinitely.

We reported on PGS’ Abel supercomputer recently as being CPU-based (as opposed to GPU) for ease of programmability. Is that a fair categorization?

Yes it is. Abel turned seismic processing inside out. Instead of the old Beowulf cluster and the massively/embarrassingly parallel approach, which involves significant data movement and reorganization, PGS has refactored its code to load everything into memory and process data in situ. The only problem was that the huge Gulf of Mexico Triton survey needed 600 terabytes of memory, which is why PGS came to Cray.
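To illustrate the distinction, here is a minimal, hypothetical Python sketch (not PGS code; the imaging kernel and array names are placeholders) contrasting the traditional out-of-core style, which streams gathers from disk on every pass, with the in-memory, in-situ style described above.

    import numpy as np

    def migrate_gather(gather):
        # Placeholder for an imaging kernel (e.g. a migration operator).
        return gather * 0.5

    # Out-of-core: every pass over the survey goes back to disk, so data
    # movement and reorganization dominate the run time.
    def process_out_of_core(path, n_gathers, gather_shape):
        data = np.memmap(path, dtype=np.float32, mode='r+',
                         shape=(n_gathers, *gather_shape))
        for i in range(n_gathers):
            data[i] = migrate_gather(np.array(data[i]))
        data.flush()

    # In-memory: load the whole survey once and process it in situ, trading
    # I/O traffic for a (very large) RAM footprint.
    def process_in_memory(path, n_gathers, gather_shape):
        data = np.fromfile(path, dtype=np.float32).reshape(n_gathers, *gather_shape)
        for i in range(n_gathers):
            data[i] = migrate_gather(data[i])
        return data

The trade-off is memory footprint rather than I/O bandwidth, which is where the 600 terabyte requirement comes from.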

And what is new with Galois?

Galois adds more capability, allowing for different workflows from a separate system image that shares storage with Abel. It too is a CPU machine.

This is all very different to Cray’s GPU-based machines we hear so much about!

It depends on which press release you read! Yes, we do have a lot of GPU-based machines, but we also do a lot of CPUs. We are also one of the largest Knights Landing shops. KNL is a self-hosted many-core processor.

The one Intel has been talking about for years as an Nvidia killer**?

No comment.

All of this is for on-premises deployment. Elsewhere iEnergy seems to have quite a cloud focus?

We do not have a cloud focus currently. iEnergy runs Landmark’s application suites on whatever current technology you choose.

Does your work encompass interpretation? DecisionSpace as well as SeisSpace?

All this is in our game plan. First SeisSpace, then Nexus reservoir simulation and then DecisionSpace interpretation. We are also working on remote visualization, which we demoed at SEG along with PGS. Here we use Nice Software***’s DCV high-end remote graphics.

Remote graphics! The philosopher’s stone of upstream IT as of 20 years back!

Yes! Actually, I used to be with Sun Microsystems. What has changed is that 20 years ago, the internet was not up to it. Now remoting the desktop can be done. Another thing: GPUs are getting a lot of traction in deep learning. Our new XC50 supercomputer features the latest Nvidia Pascal P100 accelerators. Also, big GPU-based machines are not just for computing; you can do really high-end visualization of models as they progress, in truly interactive simulation workflows.

~

* This month Cray joined the Landmark iEnergy community. iEnergy members can now run Landmark’s SeisSpace seismic processing software on a Cray CS400 cluster supercomputer. Cray claims ‘substantial improvements’ in run-time performance over other clustered infrastructures. The CS400 can scale to over 27,000 compute nodes and 46 peak petaflops.

** The earlier ‘Knights Corner’ edition was reported as an Nvidia Tesla killer back in October 2011. Both are still doing fine!

*** Nice was recently acquired by Amazon Web Services.

© Oil IT Journal - all rights reserved.