Matlab’s video ‘how to’ on GPU-based seismic processing

An 80-minute video provides an introduction to Matlab, virtual arrays and CUDA/GPGPU acceleration.

Matlab developer The MathWorks has put a seismic data processing case study online to demonstrate the use of Matlab on large data sets. The demo shows how to manage out-of-memory data using a memory-mapped file and a custom object for array indexing. This enables the memory-mapped file to be reused inside functions or with parallel computing, without rewriting code or manually recreating the map on each worker. The demo also shows how to speed up the solution of the wave equation using a custom CUDA kernel.
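
In Matlab, the memory-mapped 'virtual array' is typically built with memmapfile. The sketch below is illustrative only: the file name, trace dimensions and data type are assumptions, and the demo goes further by wrapping the map in a custom class that overloads indexing so it behaves like an ordinary array.

% Map a large binary seismic volume on disk without loading it into RAM.
% File name and dimensions are hypothetical placeholders.
nSamples = 1000;                 % samples per trace (assumed)
nTraces  = 5000000;              % traces in the survey (assumed)
m = memmapfile('bigSurvey.bin', ...
    'Format', {'single', [nSamples nTraces], 'traces'}, ...
    'Writable', false);

% Index the virtual array as if it were in memory; only the requested
% traces are actually read from disk.
shotGather = m.Data.traces(:, 1:240);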

We followed the 80-minute video, authored by MathWorks' Stuart Kozola, which showed how many data sources, from spreadsheets and databases to SEG-Y files, are too large to fit into memory. The workaround is to create 'virtual arrays' which are amenable to parallel computing with GPUs. The demo also provides an introduction to the use of the Matlab desktop, an integrated data and development canvas onto which files and folders (such as the SEG velocity model) can be dragged and dropped.

A spinoff of the approach is that any experiment (here using code from Gerard Schuster's book on seismic interferometry) is self-documenting and 'reproducible.' The demo works through several seismic techniques including shot simulation and gather generation, finite difference modelling and wave equation solutions. Much use is made of videos to illustrate both wave front propagation and computational progress.
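
For a flavour of the finite difference modelling step, a minimal second-order time-stepping loop for the 2D acoustic wave equation might look like the sketch below. The grid size, velocity model and source are assumptions for illustration, not the code used in the demo.

% Minimal 2D acoustic finite-difference sketch (not the webinar's code).
nz = 200; nx = 200;            % grid points (assumed)
dx = 10; dt = 1e-3;            % grid spacing (m) and time step (s), assumed
v  = 2000*ones(nz, nx);        % constant velocity model (m/s), assumed
p0 = zeros(nz, nx); p1 = zeros(nz, nx);
p1(100, 100) = 1;              % impulsive source at the grid centre

for it = 1:500
    % Five-point Laplacian on the interior grid
    lap = zeros(nz, nx);
    lap(2:end-1, 2:end-1) = (p1(1:end-2, 2:end-1) + p1(3:end, 2:end-1) + ...
                             p1(2:end-1, 1:end-2) + p1(2:end-1, 3:end) - ...
                             4*p1(2:end-1, 2:end-1)) / dx^2;
    % Standard second-order update: p(t+dt) = 2p(t) - p(t-dt) + (v*dt)^2 * lap
    p2 = 2*p1 - p0 + (v*dt).^2 .* lap;
    p0 = p1; p1 = p2;
end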

A 20 GB data set on disk is addressed as one big virtual array for seamless processing. A Matlab 'pool' can be defined: here, four machines with four cores each make for 16 'workers.' Parallelization is reported to scale well, and further speedup can be obtained by offloading calculations to GPUs.
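
Opening such a pool and moving data to the GPU takes only a few lines. In the sketch below, the cluster profile name and the simulateShot function are hypothetical placeholders, not taken from the demo.

% Open a pool of 16 workers on a (hypothetical) cluster profile 'myCluster'.
pool = parpool('myCluster', 16);

% Farm out independent shot simulations to the workers.
nShots = 64;
results = cell(nShots, 1);
parfor ishot = 1:nShots
    results{ishot} = simulateShot(ishot);   % simulateShot is a placeholder
end

% Offload an array to the GPU; subsequent arithmetic runs on the device.
p      = gpuArray(zeros(200, 200, 'single'));
p      = 2*p;                 % executes on the GPU
result = gather(p);           % copy the result back to host memory
delete(pool);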

The Matlab parallel computing toolbox runs on a local machine, on a cluster or on the Amazon web services cloud. Matlab code is portable across all platforms with GPGPU support. For more performance, Matlab code can be compiled to C and invoked, along with CUDA kernels, from Matlab. Watch the archived webinar and check out the code.
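
Invoking a custom CUDA kernel from Matlab goes through the parallel computing toolbox's CUDAKernel object. The kernel file names, argument list and launch dimensions below are assumptions for illustration only.

% Load a pre-compiled CUDA kernel (file names are hypothetical) and
% launch it on gpuArray data.
k = parallel.gpu.CUDAKernel('waveStep.ptx', 'waveStep.cu');
k.ThreadBlockSize = [16 16 1];
k.GridSize        = [ceil(200/16) ceil(200/16) 1];

pNew = gpuArray(zeros(200, 200, 'single'));
pOld = gpuArray(zeros(200, 200, 'single'));
v    = gpuArray(2000*ones(200, 200, 'single'));

% feval launches the kernel; outputs correspond to the kernel's
% non-const pointer arguments.
pNew = feval(k, pNew, pOld, v, single(1e-3), single(10));
pNew = gather(pNew);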

