The 17th annual Energy High Performance Computing Conference,
hosted at Rice University by the Ken Kennedy Institute, was a
rather subdued affair with attendance well below the announced ‘500
leaders and experts’. While the title and intent of what was previously
the Oil and Gas HPC Conference have shifted to ‘Energy’, its geophysical
ancestry and footprint are still very much in evidence. Stalwarts
John Etgen (BP) and Henri Calandra (TotalEnergies)
reminisced on past seismic glories and attempted to figure out ‘what
really matters to our industry in HPC and exascale computing’. For the
last 30 years, HPC has been the perfect tool to integrate more physics,
implement more complex algorithms and solve our (seismic) problems. The
large, capital-intensive bets on HPC infrastructure can ‘make a real
difference’ if they are done right, but can also ‘crater the whole
company if you get them wrong*’. HPC systems evolved in parallel with
acquisition systems, larger field data sets and the evolving hardware
landscape, particularly with the advent of the GPU. The high point of
seismic HPC was successful full waveform inversion, circa 2017. Etgen
mused that today, the full physics approach is being eclipsed by
machine learning at least in the eyes of the younger generation.
Seismic exploration is still the dominant workload although other HPC
use cases like wind, solar and new materials research in chemistry mean
that seismic will become a smaller slice of the pie. Analytics and AI
are the direction in which things are ‘mildly trending’, pending the arrival of
‘something disruptive that none of us are anticipating’! Managers
(like Etgen and Calandra) of large in-house compute centers are
naturally interested in how cloud computing will impact their
bailiwick. For Etgen, ‘We are at peak cloud hype right now’. Interest
in the cloud has risen greatly but it is likely to stay flat in the
future. For those working in HPC, new energies will offer new ways of
making money. Maybe not as much money as the hydrocarbon business has
done in the past. But for those who can pose fundamental problems and
challenges in a physical language, solve them with numerical
algorithms, make predictions and inform investment judgments and
decisions, ‘you have a bright future ahead of you and you will always
be in demand’.
* Some chapter and verse on this would have been nice, although almost any defunct seismic contractor might fit the bill!
Samir Khanna (BP) addressed the role of HPC in the energy transition, announcing with some embarrassment that much engineering today doesn’t actually need HPC! HPC applications can be found in complex situations involving multiphase fluid transport, pigging operations, digital rock analysis and wind turbine models. Khanna showed an impressive animated PowerPoint slide with spinning turbines, turbulent fluid flow, electrolyzers and more. Offshore wind is the fastest growing business in BP. Turbines and windfarms are getting bigger and ‘there are lots of things we don’t understand’: how turbines interact, how windfarms interact. In particular, blade rotation is not usually taken into account in turbine models. BP has developed a fully-coupled dynamic model of a floating offshore wind turbine* for better risk/resource assessment. A somewhat more esoteric application was developed for BP’s Castrol unit, where the formulation of Castrol ON has been adapted for use in data center cooling applications, including crypto mining! In the Q&A, Khanna revealed that BP uses a commercial package as the basis for its modeling, adding its own user-defined functions. He did not say which package this was (incredibly, he was not asked during the Q&A, although we put in a query via LinkedIn that is so far unanswered). He also addressed the issue of sharing BP’s seismic-designed HPC installation with computational fluid dynamics work: ‘Hardware that is good for seismics is not necessarily good for us’.
* Based on the U. Maine’s VolturnUS design.
Dan Stanzione offered the view from the Texas Advanced Computing Center. The day before his talk, NAIRRTF, the National AI Research Resource Task Force, recommended a multi-billion dollar research program to Congress. Rather tongue-in-cheek, Stanzione decided to ask ChatGPT to defend the proposal. Its response was the ‘same as most congressmen would give, just a little more polished’. ChatGPT cited a McKinsey study that promised a ‘$13 trillion value’ from future AI applications and ‘lots more apple pie’. So what does this mean for HPC? It is foolish to separate the two. AI and conventional HPC are both multiplying matrices. How long before ChatGPT realizes that ‘they are sparse’? How will AI change your job? Stanzione has tested ChatGPT on writing matrix multiplication code (a sketch of the kind of kernel in question appears below). It does a good job and can port code to many different languages, some quite specialized: CUDA, AMD HIP and even Argonne’s PETSc scientific libraries. It won’t work for thousands of lines of code, but ‘go function by function, fix minor bugs and it will be a lot faster than porting the code from scratch*’. Things are getting to the point where ‘if you are programming and you are not using it, this is kind of malpractice’! ChatGPT was also tested as a provider of technical support. It did very well, most advice was correct and pointed to the right documentation. However, it can fail spectacularly. Witness the query, ‘Can you use OpenOnDemand at TACC?’ The correct answer is No. But ChatGPT said yes, and came back with detailed instructions, made-up documentation references and a plausible but fictional URL. ‘We have reached a milestone in AI. Machines now lie with confidence**. They are also bad at math.’
* This does raise the question as to whether writing code from scratch is really slower than ‘fixing minor bugs’.
** See too the editorial in this issue on ChatGPT as a world-class bullshitter!
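For readers curious what such an exercise looks like, here is a minimal sketch of the kind of naive matrix multiplication kernel one might ask ChatGPT to write or port to CUDA. This is our own illustration, not Stanzione’s actual test case; the matrix size, launch configuration and use of unified memory are arbitrary choices made to keep the example short.

```cuda
// Naive CUDA matrix multiply: C = A * B for square N x N matrices.
// Illustrative only - a tuned port would use tiling/shared memory
// (or simply call cuBLAS) and manage host/device copies explicitly.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void matmul(const float* A, const float* B, float* C, int N) {
    // Each thread computes one element C[row][col].
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N && col < N) {
        float sum = 0.0f;
        for (int k = 0; k < N; ++k)
            sum += A[row * N + k] * B[k * N + col];
        C[row * N + col] = sum;
    }
}

int main() {
    const int N = 256;
    size_t bytes = N * N * sizeof(float);
    float *A, *B, *C;
    // Unified memory keeps the sketch short.
    cudaMallocManaged(&A, bytes);
    cudaMallocManaged(&B, bytes);
    cudaMallocManaged(&C, bytes);
    for (int i = 0; i < N * N; ++i) { A[i] = 1.0f; B[i] = 2.0f; }

    dim3 block(16, 16);
    dim3 grid((N + block.x - 1) / block.x, (N + block.y - 1) / block.y);
    matmul<<<grid, block>>>(A, B, C, N);
    cudaDeviceSynchronize();

    printf("C[0] = %f (expect %f)\n", C[0], 2.0f * N);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

Stanzione’s point is that tools like ChatGPT handle this kind of boilerplate well; the human work that remains is checking indexing, memory management and performance, function by function.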
The sessions from the 2023 Rice Energy HPC Conference are online on the Ken Kennedy Institute YouTube channel. Next year’s Rice HPC in Energy Conference will be held in Houston on March 5-7, 2024. Sign up for the mailing list here.
© Oil IT Journal - all rights reserved.