Fifth High Performance Computing in Oil & Gas Workshop

Rice meet hears of ‘new dawn’ in HPC, Eclipse benchmarks, ‘single chip cloud’ and call for OpenCL.

Bill Brantley described AMD's 'Fusion' accelerated processing unit (APU) and its 'direct connect architecture 2.0,' a scalable design that supports up to 16 cores per CPU. For Brantley, the APU heralds the 'dawn' of a new era of heterogeneous computing.

Owen Brazell reported on benchmarking of Schlumberger's Eclipse and FrontSim reservoir simulators running across various multi-core chips including Intel's Nehalem and AMD's Shanghai (AMD's latest 12-core Magny-Cours was not ready in time for the test). Various million-cell models were run, most showing performance tail-off at around 16 or 32 CPUs. The conclusion was that increasing cores per socket without a matching increase in memory bandwidth is of no use for distributed codes. Multi-threaded codes such as FrontSim do benefit from the new architectures. Multi-core is the future and software developers will need to re-code to reap the benefits.
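
To illustrate the bandwidth argument, here is a minimal OpenMP sketch of a STREAM-style 'triad' loop, written for this report rather than taken from the Eclipse or FrontSim code. Once the threads saturate a socket's memory bus, adding further cores leaves the measured bandwidth, and hence the run time, largely unchanged.

    /* Illustrative only - a bandwidth-bound triad loop, not benchmark code.
       Build: gcc -O2 -fopenmp triad.c -o triad */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    #define N 20000000L   /* 20M doubles per array - well beyond cache */

    int main(void) {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        if (!a || !b || !c) return 1;
        for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        double t0 = omp_get_wtime();
        #pragma omp parallel for       /* threads split the iterations...   */
        for (long i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];  /* ...but share one memory bus       */
        double t1 = omp_get_wtime();

        /* roughly 3 arrays x 8 bytes x N moved; GB/s plateaus at the
           socket's memory bandwidth no matter how many threads run */
        printf("%d threads: %.3f s, ~%.1f GB/s\n", omp_get_max_threads(),
               t1 - t0, 3.0 * 8.0 * N / (t1 - t0) / 1e9);
        free(a); free(b); free(c);
        return 0;
    }

Run with OMP_NUM_THREADS set to 1, 2, 4 and so on, and the GB/s figure typically stops improving long before the core count runs out.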

Paul Fjerstad described tests of the jointly developed Chevron/Schlumberger Intersect simulator on a super-giant oilfield. The 'next generation' simulator uses large-scale parallel simulation. A new solver has already shown a threefold speed-up over conventional simulators. A deviation from optimum scalability was noted, stemming from the serial part of the program, where one processor works while the others sit idle.
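
The effect Fjerstad describes is the textbook Amdahl's law limit: if a fraction s of the work remains serial, speed-up on N processors is 1/(s + (1-s)/N) and can never exceed 1/s. The sketch below, using hypothetical numbers rather than Intersect's own profile, shows how quickly the curve flattens.

    /* Amdahl's law with an assumed 5% serial fraction - illustrative numbers,
       not measurements from Intersect. */
    #include <stdio.h>

    static double amdahl(double s, int n) {  /* s: serial fraction, n: processors */
        return 1.0 / (s + (1.0 - s) / n);
    }

    int main(void) {
        int procs[] = {16, 64, 256, 1024};
        for (int i = 0; i < 4; i++)
            printf("s = 5%%, %4d processors: speed-up %5.1fx (limit 20x)\n",
                   procs[i], amdahl(0.05, procs[i]));
        return 0;
    }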

Tim Mattson unveiled Intel's futuristic concept chip, the 'Single-chip Cloud Computer' (SCC). Intel's first 'tera-scale' computer, the 1997 ASCI Red, had 9,000 CPUs and required one megawatt of electricity and 1,600 square feet of floor space. The SCC is a terascale computer on a chip: a 48-core CPU requiring 97 watts of power and occupying 275 sq. mm! Intel plans to release 100 SCCs to partners for research into 'user-friendly' programming models that don't depend on coherent shared memory. Mattson then turned to the topic of 'software in a many cored world,' noting that 'parallel hardware is ubiquitous, parallel software is rare.' He enumerated some 95 attempts to find a parallel programming model, concluding that 'we have learnt more about creating programming models than how to use them.' OpenMP was cited as a poster child for open systems and Mattson concluded with a plea for a similar push for OpenCL. 'If users don't demand standards, we, industry and academia, will proliferate languages again and our many core future will be uncertain.'
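
For readers who have not met OpenCL, the bare-bones host program below gives a flavour of the standard Mattson wants users to rally behind: a kernel compiled at run time plus a handful of portable API calls that target CPUs, GPUs or other accelerators alike. It is a sketch only; error checking and resource release are omitted.

    /* Minimal OpenCL 1.x example (illustrative; no error handling or cleanup).
       Build: gcc vecadd.c -lOpenCL */
    #include <stdio.h>
    #include <CL/cl.h>

    static const char *src =
        "__kernel void triad(__global float *a, __global const float *b,\n"
        "                    __global const float *c) {\n"
        "    size_t i = get_global_id(0);\n"
        "    a[i] = b[i] + 3.0f * c[i];\n"
        "}\n";

    int main(void) {
        enum { N = 1 << 20 };
        static float a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { b[i] = 1.0f; c[i] = 2.0f; }

        cl_platform_id plat; cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

        /* device buffers; b and c are copied from host memory */
        cl_mem A = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof a, NULL, NULL);
        cl_mem B = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                  sizeof b, b, NULL);
        cl_mem C = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                  sizeof c, c, NULL);

        /* the kernel source is compiled for whatever device was found */
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "triad", NULL);
        clSetKernelArg(k, 0, sizeof A, &A);
        clSetKernelArg(k, 1, sizeof B, &B);
        clSetKernelArg(k, 2, sizeof C, &C);

        size_t global = N;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, A, CL_TRUE, 0, sizeof a, a, 0, NULL, NULL);
        printf("a[0] = %.1f (expect 7.0)\n", a[0]);
        return 0;
    }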

Dave Hale (Colorado School of Mines) asked, 'Who will write the software for multi-core/parallel machines?' Most geoscience students program in MATLAB and lack the skills or ambition to program computers in the 'fundamentally new ways' required to exploit modern hardware. One solution is to recruit science grads to work alongside geoscientists, but Hale favors a different approach: getting geoscience students excited about computing. One is tempted to suggest a third way: get MATLAB to sort the parallel programming mess out! More from links/1003_11.

© Oil IT Journal - all rights reserved.