Fraunhofer SDPA

PGAS-based seismic development and processing architecture promises 'parallelization made easy' and 'excellent' performance without tuning in multi-core, parallel and heterogeneous GPU/CPU environments.

With ever-increasing core counts and thousand-node architectures, there is a growing need for an efficient parallel programming paradigm. But according to the Fraunhofer Institute's Franz Pfreundt, speaking at the High Performance Computing workshop at last month's Society of Exploration Geophysicists convention in Denver, in parallel computing and HPC, 'simple ideas don't work!' Parallelism and data management are hard problems. Simply put, the popular Seismic Unix (SU) package does not parallelize across terabytes of data, and today, parallelizing a good serial algorithm with MPI and integrating it into a processing workflow takes months.

To address this, Fraunhofer has developed the Seismic Data Processing Architecture (SDPA). SDPA replaces the popular MPI infrastructure with a virtualized global memory for 'persistent, performant and fault tolerant storage.' The global memory is a partitioned global address space (PGAS) built on GPI over InfiniBand; it works at 'wire speed' and scales to petabytes. 'Where MPI fails, GPI takes off.'
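
The article shows no code, but the flavour of the PGAS model can be sketched with the open-source GPI-2 (GASPI) API, the later descendant of Fraunhofer's GPI layer. The segment id, sizes and offsets below are illustrative only; this is not SDPA or GPI production code.

    /* Minimal one-sided PGAS communication sketch using the GPI-2 (GASPI) API.
     * Segment id, sizes and offsets are illustrative, not SDPA code. */
    #include <GASPI.h>

    int main(void)
    {
        gaspi_proc_init(GASPI_BLOCK);

        gaspi_rank_t rank, nprocs;
        gaspi_proc_rank(&rank);
        gaspi_proc_num(&nprocs);

        /* Each process contributes one segment to the global address space. */
        const gaspi_size_t seg_size = 1 << 20;            /* 1 MiB */
        gaspi_segment_create(0, seg_size, GASPI_GROUP_ALL,
                             GASPI_BLOCK, GASPI_MEM_INITIALIZED);

        gaspi_pointer_t ptr;
        gaspi_segment_ptr(0, &ptr);                       /* local view */

        /* One-sided write: push 4 KiB of our segment into the segment of the
         * next rank, with no matching receive on the remote side. */
        gaspi_rank_t target = (rank + 1) % nprocs;
        gaspi_write(0, 0, target, 0, 0, 4096, 0, GASPI_BLOCK);
        gaspi_wait(0, GASPI_BLOCK);                       /* flush queue 0 */

        gaspi_barrier(GASPI_GROUP_ALL, GASPI_BLOCK);      /* settle before exit */
        gaspi_proc_term(GASPI_BLOCK);
        return 0;
    }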

Fraunhofer then addressed programming, 'the hard part,' separating workflow orchestration (a.k.a. 'coordination') from algorithm development. Algorithms are written as modules in high-level languages. SDPA coordination and low-level memory management use modified Petri nets and 'easy to learn' abstractions. SDPA allows any programming language to be used: Fortran, C or Java. SU modules like segyread can be piped through sugain etc., just as in regular processing. Real-world workflow complexity for data load, output and storage is handled by SDPA, leaving the geophysical coding to the geophysicist. Orchestration, described in XML, provides auto-parallelization that takes account of hardware constraints such as GPU/CPU availability. SDPA's 'magic' is to recognize data and task parallelism. SDPA uses a functional language, described as 'close to the compiler,' and programs can be dynamically re-written on the fly to optimize for hardware configurations. A parallelized version of SU, 'su-parallel,' was developed in two hours during a Statoil workshop.
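
To make the data-parallelism point concrete: a processing module that acts independently on each shot gather can be fanned out over cores without any change to the module itself. The sketch below uses plain OpenMP to show that pattern; it is not SDPA code, and the trivial gain module and synthetic data are hypothetical stand-ins for a real seismic module and its I/O layer.

    /* Generic illustration of data parallelism over independent gathers.
     * NOT SDPA code: the gain module and synthetic data are stand-ins. */
    #include <stdio.h>
    #include <stdlib.h>

    #define NGATHERS 64
    #define NSAMP    1000

    /* The "geophysics" part: a trivial trace-wise scaling, written with no
     * knowledge of how or where it will be run in parallel. */
    static void apply_gain(float *trace, int ns, float factor)
    {
        for (int i = 0; i < ns; ++i)
            trace[i] *= factor;
    }

    int main(void)
    {
        float *data = malloc((size_t)NGATHERS * NSAMP * sizeof *data);
        for (long i = 0; i < (long)NGATHERS * NSAMP; ++i)
            data[i] = 1.0f;                   /* synthetic input */

        /* Each gather is independent, so the loop parallelizes trivially --
         * the kind of structure a coordination layer can detect and fan out. */
        #pragma omp parallel for schedule(dynamic)
        for (int g = 0; g < NGATHERS; ++g)
            apply_gain(data + (long)g * NSAMP, NSAMP, 2.0f);

        printf("first sample after gain: %.1f\n", data[0]);
        free(data);
        return 0;
    }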

Fraunhofer is now working on a library for Kirchhoff migration, using a simulator to adapt code to machine configurations. Some benchmarks compare well with hand-crafted optimizations. Pfreundt concluded that SDPA is 'parallelization made easy' and provides excellent performance without much tuning. Pfreundt told Oil IT Journal, 'The solutions we have developed make the system highly productive for the seismic processor. Getting optimal performance out of a GPU or CPU multithreaded module is left to the HPC expert. We want to make the life of a geophysicist easier and stimulate algorithmic innovation, delivering a quick path to cluster-wide parallelization.' More on the SDPA Consortium from www.oilit.com/links/1011_0.
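
As background to the Kirchhoff work mentioned above, the inner loop such a library has to parallelize can be sketched as a constant-velocity, zero-offset diffraction-stack sum over input traces. Geometry, sampling and the absence of amplitude and anti-aliasing terms below are simplifications for illustration; this is not Fraunhofer's code.

    /* Schematic 2-D zero-offset Kirchhoff (diffraction-stack) migration with a
     * constant velocity. Purely illustrative, not Fraunhofer code. */
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NTRACE 200        /* input zero-offset traces   */
    #define NT     1000       /* time samples per trace     */
    #define DT     0.004f     /* sample interval, s         */
    #define DX     12.5f      /* trace and image spacing, m */
    #define NZ     400        /* depth samples in the image */
    #define DZ     10.0f      /* depth interval, m          */
    #define VEL    2000.0f    /* constant velocity, m/s     */

    int main(void)
    {
        float *data  = calloc((size_t)NTRACE * NT, sizeof *data);
        float *image = calloc((size_t)NTRACE * NZ, sizeof *image);

        data[50 * NT + 250] = 1.0f;           /* synthetic spike: one diffractor */

        /* Image points are independent: this double loop is the natural unit of
         * data parallelism (over x positions, or over tiles of the image). */
        for (int ix = 0; ix < NTRACE; ++ix) {
            for (int iz = 1; iz < NZ; ++iz) {
                float x = ix * DX, z = iz * DZ, sum = 0.0f;
                for (int is = 0; is < NTRACE; ++is) {     /* sum over traces */
                    float dxs = is * DX - x;
                    float t   = 2.0f * sqrtf(dxs * dxs + z * z) / VEL;
                    int   it  = (int)(t / DT + 0.5f);
                    if (it < NT)
                        sum += data[is * NT + it];
                }
                image[ix * NZ + iz] = sum;
            }
        }

        printf("image sample at diffractor: %g\n", image[50 * NZ + 100]);
        free(data);
        free(image);
        return 0;
    }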

