On our raison d’être

Oil IT Journal editor Neil McNaughton walks through the current issue, explaining the whys and wherefores of our reporting. Our scope has expanded since we started out in 1996. This issue provides a snapshot of evolving oil and gas information technology in the face of environmental pressures and the onslaught of data science.

In this editorial, I’d like to walk through the current issue and explain to subscribers and non-subscribers something of our philosophy: where we are now, how we got here and where we should be heading. Oil IT Journal started life as Petroleum Data Manager back in 1996, when data management in the oil industry became an ‘issue’. Around the year 2000 we rebranded as Oil IT Journal, expanding coverage to include all things digital that impact oil and gas exploration and production. That includes geoscience, GIS, reservoir and petroleum engineering, plant and process, finance and more.

I am a rather literal sort of person. When I first heard talk of interoperability and ‘breaking down the silo boundaries’, I thought, ‘way to go!’. So our broadened scope set out to offer specialists in one domain a glimpse of what is happening in the other silos. This is still our guiding principle, for yes, the silos are still there, and there are probably more of them today than when we started out.

So, to start at the top. In this issue’s lead we report from the LBC* Canadian Onshore Well Site/Emissions Reduction conference, which provides a situation report from the intersection of regulations, emissions mitigation technology and software. Emissions control is fast becoming a key facet of oil’s license to operate: a silo well worth watching and participating in.

We report on recent developments in OSDU, the open subsurface data universe, where you can read about the jostling for standards supremacy (call it ‘collaboration’) that is ongoing between The Open Group, Energistics, IOGP, PPDM and D-WIS. OSDU is expanding into silos well beyond its initial subsurface scope. Expanding scope is a high-risk, high-reward activity! One wonders if the ‘fail fast’ approach will be followed at the macro level (unlikely).

Our review of Enders A. Robinson’s Basic Wave Analysis is quite an eye-opener for those stuck in the latest silo of ‘data science’. Want to write code for full waveform inversion? ‘Forget about AI and machine learning’. What you need is ‘geophysical theory as conveyed by books and journals’. ‘Since the 1950s, geophysics has seen major advances from the use of computers. None of these major advances have been the result of machine learning’.

A bold statement! But Robinson’s sentiments are echoed in our report on the SEG’s ‘Energy in Data’ webinar on physics-based models (à la Robinson) vs. data-driven models. This included some interesting discussions on the applicability of machine learning in different situations. Where all you have is data and not much idea of the physics, well, you don’t have much choice. Elsewhere, hybrid data and physics models are desirable, although they may be hard to realize. In circumstances where the data is ‘small’, a ‘big’ neural net may not be such a great idea.

Our report from the ECN** ML in Oil and Gas event offers plenty more on the data-driven side of the equation. Oil patch data is not just ‘big’, it is also ‘convoluted’. Presentations look into where best to apply ML and how to explain the results to a skeptical scientist. We hear from Walmart on an issue that has been central to corporate IT since our early days: how much work do you do in-house and how much do you outsource? Walmart established its NexTech unit to ‘minimize dependence on vendors for thought leadership and innovation’. But not all companies have such resources. Some may be more interested in Riverford Exploration’s exposé on big data for small companies. The ECN event also covered one of the few uses of graph databases that we have come across (although we have reported on the technology before). A Lawrence Livermore presentation returned to the physics/data conundrum to advocate ‘physics-informed’ neural nets and ‘fat’ neurons that include physical processes within the model.

Conferences and publications organized by the learned societies have various rules regarding ‘commercial’ papers and presentations. The result is that the software used in a particular study is downplayed or even left unmentioned. The orgs (and oils) also have difficulty with data release and publication. What do we do? Well, we always try to tell you what software is being used, whether commercial or open source. We add links to the vendor’s website (at no charge) and/or to a Git repository for the code. Our aim is not to promote any particular software but to provide a useful service to the reader. Scientific publishing is not immune to commerciality and hyperbole, as we show in the article on The Turing Institute’s finite element modeling ‘breakthrough’, an illustration of how we try to separate the facts from the marketing/science/computing confusion. Speaking of confusion, our report on the engineering construction standards space shows more jostling for position between USPI NL, IOGP/CFIHOS and even the venerable ISO 15926!

So there you have it. Since we started out as Petroleum Data Manager in 1996, we have undergone plenty of our own ‘good’ scope creep, into fields well beyond data management. We tell things as we see them, hopefully as they are, and we always appreciate feedback and corrections if we get things wrong.

* London Business Conferences

** Energy Conference Network

© Oil IT Journal - all rights reserved.