Oil & gas high performance computing

Around 100 attended Microsoft's High Performance Computing (HPC) for Oil and Gas Forum held in Houston last month. Microsoft unveiled an HPC toolkit, developed by the Cornell Theory Center (CTC), which vaunts Windows 2000 Advanced Server as an HPC platform. But in HPC, Microsoft is the underdog fighting the dominance of Linux. Although various heavyweights from Intel, HP and others rallied to the cause, their true affiliations don't really bear scrutiny. The Cornell unit is Microsoft-funded. Intel is agnostic and made no real attempt to justify Windows as an HPC platform. HP, again agnostic, was bent on selling its 'Agile' infrastructure rather than defending Windows.

Microsoft energy manager Marise Mikulis welcomed attendees to the High Performance Computing for Oil and Gas Forum. Mikulis announced a ‘new era’ in Microsoft’s dedication to the E&P industry. A Global Business Unit will focus on E&P and Microsoft intends to ensure that its technology meets E&P needs—both upstream and downstream.

Rankich

Microsoft HPC manager Greg Rankich noted that the research community 'still develops in FORTRAN' and that parallel programming hasn't changed in 10 years and still requires specialized skills. The plethora of tools and approaches makes HPC a fragmented market with many applications but few integrated solutions. Microsoft is working with customers to optimize deployments and is investing in companies that develop cluster management solutions. Microsoft has been working with the Cornell Theory Center (see below), HP, Intel and others and is giving away a toolkit to help users get started with cluster-based computing.
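By way of illustration (ours, not Rankich's), the message-passing idiom he alludes to has indeed changed little in a decade: the programmer explicitly manages process ranks, communicators and data movement. A minimal MPI sketch in C:

```c
/* Minimal MPI example (illustrative only): explicit rank/size
   bookkeeping and hand-coded data movement - the 'specialized
   skills' Rankich refers to. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total process count */

    double local = 1.0 / (rank + 1);      /* each rank's partial result */
    double total = 0.0;
    /* the programmer, not the platform, decides how data moves */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum across %d ranks = %f\n", size, total);

    MPI_Finalize();
    return 0;
}
```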

Cornell

Roger Lang (Cornell Theory Center) wants to make HPC simple enough to be used by 'the masses' through standard off-the-shelf tools. CTC has demonstrated the world's first Windows-based CAVE visualization system and is 'trying to make clusters easy to build'. The advent of Windows 2003 'reveals a new roadmap' for HPC, with Enterprise Web Services making HPC resources transparent across the enterprise and promising 'excellent reliability and scalability'. Lang deprecates cross-platform enterprise Web Services as requiring complex systems integration and producing a custom, hard-to-deploy product. Microsoft's .NET framework, on the other hand, offers 'end-to-end integration for scalable Web Services'.

Intel

Intel's HPC architect David Barkai recalled that in 1980, a gigaflop cost around $5 million. Today the same compute power costs $2,000. Clusters now offer around 10 teraflops (TF). By the end of the decade (2010) Intel foresees 30 GHz processors. Barkai said that 56 of the top 500 supercomputer sites are Intel-based (up from 3 in '99). Barkai believes HPC requires 'ecosystems' for industry and end users and that Intel's HPC program is 'more than silicon'. HPC is differentiated by computational intensity, large-scale applications and random, dynamic data access patterns. The community today is composed of 'pre-early adopters' and expert users. Intel wants to make technical computing adoptable by a larger market. Common off-the-shelf components make HPC clusters affordable at the department or project level.
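A quick sanity check on those numbers (our arithmetic, not Barkai's): $5,000,000 / $2,000 = 2,500, and 2,500^(1/23) ≈ 1.4, so the cost per gigaflop fell by roughly 40% a year between 1980 and 2003, a halving time of about two years and broadly in line with Moore's Law.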

Challenges

Existing challenges will get worse as Moore's Law drives on. I/O and interconnect bandwidth will have a hard time keeping pace with processor speed and memory size. Cluster management also raises issues of performance monitoring and maintenance. Barkai enumerated reasons for using Intel's HPC solutions without advocating a particular operating system.

HP

HP’s Oil & Gas industry director Michele Isernia presented HP’s ‘Agile Computing’ initiative which aims to loosen the tight relationship that software has with hardware. Software should be ‘owned’ by the user, who can use whatever computers or appliances are available. Users are freed from the burden of carrying electronics and computers will ‘recede in the background and become pervasively useful instead of a constant annoyance’. Components include the Agile Client and the Agile Data Center built from ‘industry standard’ servers. Infrastructure is designed for use-on-demand by focusing on services, shared resources and pay for use. A pool of virtual servers can expand or shrink as required. HP technology to achieve this is the Utility Data Center a ‘complete solution for virtualizing data center environments’. Like Barkai, Isernia offered more reasons for using HP than Windows!

CMG

Calgary-based Computer Modelling Group (CMG) provides reservoir simulation with its flagship IMEX, GEM and STARS products. CMG's Jim Erdle explained that a driver behind HPC in reservoir modeling is avoiding time-consuming upscaling by using geocellular models directly for simulation. This is being achieved by a combination of 64-bit computing, shared-memory parallel processing and dynamic PEBI grids. Erdle believes that 1-5 million cell models will soon be commonplace. CMG is a Windows shop and appreciates the easy-to-use GUI and the speed of Windows on the Itanium. CMG claims a world record for its 112 million cell simulation [actually this was achieved on an IBM cluster running IBM's Unix (AIX) - see Oil IT Journal Vol. 7 N° 11!].
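A hedged sketch of what shared-memory parallelism over a geocellular model looks like in practice; the data layout and 'physics' below are placeholders of ours, not CMG code:

```c
/* Hypothetical illustration of a shared-memory (OpenMP) sweep over
   millions of grid cells - one timestep of a toy 'simulator'. */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct { double pressure, saturation; } Cell;

int main(void)
{
    long n = 2000000;                       /* a 2-million-cell model */
    Cell *cells = calloc((size_t)n, sizeof(Cell));
    double dt = 1.0;

    /* threads share the full geocellular model - no upscaling,
       so n is the raw cell count */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < n; i++) {
        cells[i].pressure   += dt * 0.01;   /* placeholder update */
        cells[i].saturation += dt * 0.001;
    }

    printf("updated %ld cells with up to %d threads\n",
           n, omp_get_max_threads());
    free(cells);
    return 0;
}
```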

ModViz

VP Shing Pan introduced ModViz (a Siemens spin-off) as a developer of hardware and software for large-scale visualization using PC technology. ModViz's goal is to provide a scalable software platform for high performance, real-time visualization. ModViz synchronizes 3D data across a cluster in real time and is partnering with HP to 'make scalable visualization a reality'. ModViz's Renderizer Visualization Cluster software for Open Inventor currently ships on Linux and will be available for Windows 'real soon now'.
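To illustrate the idea (this is our sketch, not the Renderizer API), synchronizing 3D state across a render cluster can be as simple as broadcasting per-frame camera state from a master node and swapping buffers in lockstep:

```c
/* Hypothetical per-frame synchronization across render nodes:
   rank 0 owns the interactive camera, everyone else receives it. */
#include <mpi.h>

typedef struct { float eye[3], center[3], up[3]; } Camera;

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    Camera cam = { {0, 0, 10}, {0, 0, 0}, {0, 1, 0} };
    for (int frame = 0; frame < 3; frame++) {
        if (rank == 0)
            cam.eye[2] -= 1.0f;            /* master moves the camera */

        /* every node gets the same camera for this frame */
        MPI_Bcast(&cam, sizeof(Camera), MPI_BYTE, 0, MPI_COMM_WORLD);

        /* ... each node renders its assigned slice of the model ... */

        MPI_Barrier(MPI_COMM_WORLD);       /* swap buffers in lockstep */
    }
    MPI_Finalize();
    return 0;
}
```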

