Before we gave Schlumberger last month’s lead we did ask politely for more information on the breakthrough cloud-based infrastructure. None was forthcoming. However, Schlumberger’s information-retentive guardians of the truth forgot to tell partner-in-crime Google, whose SVP Urs Hölzle has been blogging away regardless. Hölzle, reprising his address to the private, clients-only Schlumberger Forum in Paris last month, provided more on Delfi’s innards. So, according to Google, the Delfi E&P data lake is based on Google BigQuery (data warehouse), Cloud Spanner (RDBMS) and Cloud Datastore (NoSQL) ‘with more than 100 million data items, some 30TB of petrotechnical data.’
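For a flavor of what querying such a data lake involves, here is a minimal Python sketch using Google’s standard google-cloud-bigquery client. The project, dataset, table and column names are our own invention for illustration; Delfi’s actual schema has not been published.

    from google.cloud import bigquery

    # Hypothetical project ID; credentials come from the environment
    # (GOOGLE_APPLICATION_CREDENTIALS), as usual for Google Cloud clients.
    client = bigquery.Client(project="my-ep-project")

    # Hypothetical dataset/table holding well-log curves.
    query = """
        SELECT well_id, curve_name, COUNT(*) AS samples
        FROM `my-ep-project.petrotechnical.well_logs`
        WHERE acquisition_date >= '2017-01-01'
        GROUP BY well_id, curve_name
        ORDER BY samples DESC
        LIMIT 10
    """

    for row in client.query(query).result():
        print(row.well_id, row.curve_name, row.samples)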
According to Hölzle, Schlumberger’s petrotechnical flagship, Petrel, and the Intersect simulator are running in the Google cloud, ‘integrated into Delfi.’ WesternGeco’s Omega geophysical data processing is ‘running at a scale not possible in traditional data center environments’ thanks to Google cloud-based Nvidia GPUs and ‘custom machine types’, giving a compute capacity of ‘over 35 petaflops* and 10PB of storage.’ Other novel tools include TensorFlow, Google’s open source artificial intelligence framework, used for log QC and interpretation and also for 3D seismic interpretation.
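Neither party has published the models behind the log QC work, but a minimal TensorFlow sketch shows the general shape of such a classifier: a small 1D convolutional network that flags suspect intervals in a multi-curve log window. The window size, curve count and synthetic training data below are assumptions for illustration only.

    import numpy as np
    import tensorflow as tf

    WINDOW = 64    # samples per log window (assumption)
    N_CURVES = 4   # e.g. GR, RHOB, NPHI, DT (assumption)

    # Small 1D CNN: log window in, probability that the window is bad out.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(16, 5, activation="relu",
                               input_shape=(WINDOW, N_CURVES)),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(32, 5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # Synthetic stand-in data; real training would use QC-labelled log windows.
    x = np.random.rand(1000, WINDOW, N_CURVES).astype("float32")
    y = np.random.randint(0, 2, size=(1000, 1)).astype("float32")
    model.fit(x, y, epochs=3, batch_size=32)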
Hölzle reports that Schlumberger has deployed the Apigee API management platform (acquired by Google in 2016) to provide ‘openness and extensibility’, allowing clients and partners to add their intellectual property and workflows into Delfi. Read Hölzle’s blog here.
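In practice, ‘openness and extensibility’ via Apigee means partner code reaching Delfi services through managed REST endpoints. The following Python sketch is entirely hypothetical (the endpoint, path, header and parameters are invented), but it shows the common pattern of an Apigee proxy authenticating callers with a per-developer API key.

    import requests

    API_KEY = "your-apigee-api-key"            # issued per registered developer app
    BASE = "https://api.example.com/delfi/v1"  # hypothetical Apigee proxy URL

    resp = requests.get(
        f"{BASE}/wells",                       # hypothetical resource
        headers={"x-api-key": API_KEY},        # common Apigee key-verification header
        params={"limit": 10},                  # hypothetical query parameter
        timeout=30,
    )
    resp.raise_for_status()
    for well in resp.json()["wells"]:          # hypothetical response shape
        print(well["name"])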
Chatting with some Schlumberger people at the SEG, we found it unclear how much, if any, of this laundry list of Google software is fully deployed in Delfi.
* Although SLB’s Paal Kibsgaard reported that Schlumberger had a 27PF capacity way back in 2013.
© Oil IT Journal - all rights reserved.