Editorial - from Fourier and Nyquist to ... kriging, big data and ... welding

Neil McNaughton ties up some loose ends in his exposition of numerical sampling before weighing up the merits of a digital education against the lost art of welding.

Back in my 2017 editorial, ‘Digitalization, from Harry Nyquist to the “edge” of the internet’, I said that I would return to the topic of mapping things of temporal and/or spatial extent. The issue is at the heart of ‘digitalization’ and is, or should be, of concern to those working in just about any science and engineering field. As the digital world takes over from reality, questions arise as to the fidelity of the digital representation, the ‘twin’ if you like.

To avoid ‘aliasing’ when sampling a time series, or when making maps from discrete point measures of topography, gravity or radar altimetry, it is necessary to filter out the higher frequencies. This leads to a ‘world view’ that is constrained by the sampling interval. A constraint that is OK-ish if you are sampling your data at a suitably high frequency. A seismic record every few meters will do nicely when mapping a large object of interest. But what happens when there is no seismic information? Perhaps because it is too expensive to acquire, especially now that those acquisition folks have gone “asset light”, or when seismics does not do a very good job of imaging the target. That does happen. Or, in the case of shale exploration, seismic acquisition may or may not fit well with the ‘factory drilling’ paradigm.
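
For the numerically minded, here is a minimal Python sketch of the point (the frequencies are made up for illustration): SciPy’s decimate applies a low-pass filter before downsampling, whereas naive subsampling lets the out-of-band energy fold back in as an alias.

```python
# Minimal sketch: the low-pass (anti-alias) filter has to come before the
# coarser sampling. Frequencies here are invented for illustration.
import numpy as np
from scipy.signal import decimate

fs = 1000.0                              # original sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
# A 10 Hz signal of interest plus a strong 430 Hz component that a 10x
# coarser sampling (100 Hz, Nyquist 50 Hz) cannot represent.
x = np.sin(2 * np.pi * 10 * t) + 2.0 * np.sin(2 * np.pi * 430 * t)

naive = x[::10]              # subsample without filtering: 430 Hz folds to 30 Hz
filtered = decimate(x, 10)   # decimate() low-pass filters, then downsamples

def dominant_hz(sig, rate):
    """Frequency of the largest spectral peak."""
    spec = np.abs(np.fft.rfft(sig))
    return np.fft.rfftfreq(len(sig), 1.0 / rate)[spec.argmax()]

print(dominant_hz(naive, 100.0))     # ~30 Hz: aliased energy masquerading in-band
print(dominant_hz(filtered, 100.0))  # ~10 Hz: only the genuine signal survives
```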

If there is no seismic, then you may just have to make do with well information. To keep things simple, I am going to consider vertical wells (remember those?) drilled at regular intervals across an area of interest. If there are relatively rapid lateral variations in the target, say with lateral changes in facies between wells, then you are potentially in a Nyquist-y kind of situation. One approach would be to contour, say, the formation tops (or net to gross or whatever you are looking for), taking only the well spots into account. This would produce a smooth, low-pass filtered picture which you may feel, especially if you have a geological background, is unrealistic.
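
A minimal sketch of that smooth, low-pass picture, assuming some hypothetical well locations and formation-top depths and using SciPy’s griddata: anything varying at a shorter wavelength than the well spacing simply does not make it into the gridded surface.

```python
# Minimal sketch (hypothetical wells): gridding formation tops from a
# regular pattern of vertical wells gives a smooth, low-pass surface.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
# A hypothetical 5 x 5 pattern of wells (km) with formation-top depths (m):
# a gentle regional dip plus some 'geological' noise.
wx, wy = np.meshgrid(np.linspace(1, 9, 5), np.linspace(1, 9, 5))
wells_xy = np.column_stack([wx.ravel(), wy.ravel()])
tops = 2000 + 15 * wells_xy[:, 0] + rng.normal(0, 5, size=len(wells_xy))

# A fine regular grid over the area of interest.
gx, gy = np.mgrid[0:10:100j, 0:10:100j]

# Smooth interpolation between wells; any variation at a shorter
# wavelength than the well spacing is simply not there.
surface = griddata(wells_xy, tops, (gx, gy), method='cubic')
print(np.nanmin(surface), np.nanmax(surface))
```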

Geologists may look at the well data and decide that, for a given sedimentary interval, it suggests a particular environment, perhaps a shoreline with marine stuff on one side, and duly eyeball the extent of, say, the facies of interest and locate future wells accordingly. This approach involves adding ‘prior’ information to the raw data. Here the prior is the fact/notion that there is a boundary between, say, a sand-prone facies and deeper-water shales.

As any geologist knows, this is an oversimplification driven by the low spatial frequency of the well data. Reality is likely to be much more complicated, with interleaved facies, outliers and, in general, lateral changes that happen at a higher spatial frequency than the well data can capture. This is a situation that the folks involved in hard rock mineral exploration know well. The approach here is one of statistics. In fact, it is a specialist sub-area of statistics called geostatistics, pioneered by South African mining geologist Danie Krige and later developed, notably in France, at the Geostatistics Center of the Paris School of Mines and its commercial spin-out Geovariances. Note that the ‘geo’ in geostatistics is not geology but geography, in that these approaches have application in many other fields where there are spatial variables. The Paris Mines site gives examples including site pollution, air quality and epidemics, all of which are amenable to the approach.

To return to our geological example, there is one very convenient ‘prior’ that can be added into a study. Back in the 19th century, one Johannes Walther observed that the vertical and lateral distributions of sedimentary facies are related. Walther’s ‘law’ allows us to use the vertical succession of facies down the well bore as ‘prior’ information to inject into a statistical model. As I was mooting this editorial, the folks at Agile posted an elegant description of the use of Markov chain statistics to map/predict facies in a direct application of Walther’s law.
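
As a rough illustration of the idea (the facies log below is invented, and this is not Agile’s code), a first-order Markov chain built from the upward facies transitions in a single well can be used to simulate statistically similar successions:

```python
# Minimal sketch of the Markov-chain idea: count vertical facies
# transitions in one well, then simulate plausible successions.
# The facies codes and the example log are hypothetical.
import numpy as np

log = ['shale', 'shale', 'silt', 'sand', 'sand', 'silt', 'shale',
       'silt', 'sand', 'sand', 'sand', 'silt', 'shale', 'shale']
facies = sorted(set(log))
idx = {f: i for i, f in enumerate(facies)}

# Upward transition counts -> row-normalized transition probabilities.
counts = np.zeros((len(facies), len(facies)))
for a, b in zip(log[:-1], log[1:]):
    counts[idx[a], idx[b]] += 1
P = counts / counts.sum(axis=1, keepdims=True)

# Simulate a synthetic vertical succession with the same transition statistics.
rng = np.random.default_rng(1)
state = idx['shale']
sim = []
for _ in range(20):
    state = rng.choice(len(facies), p=P[state])
    sim.append(facies[state])

print(P)
print(sim)
```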

Geostatistics à la Krige, coming from the mining industry, does not have a sedimentological orientation, so the sequence-stratigraphical approach is less obvious. The prior in mining geostatistics is the assumption that we know something about the likelihood of a given value occurring at a given distance from any point in the sample space. The technique uses a ‘variogram’, a spatial probability diagram, to characterize and map a parameter in sub-observational detail. This may be useful if your data varies in a more random manner, say something like the porosity in a vuggy limestone, where there is less stratigraphical order to leverage as a prior.
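
Again as a sketch with hypothetical well data, the experimental variogram is just half the mean squared difference of a property as a function of separation distance; a model fitted to it then drives the kriging weights.

```python
# Minimal sketch of an experimental variogram: half the mean squared
# difference of a property, binned by separation distance.
# Well positions and porosity values here are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
xy = rng.uniform(0, 10, size=(40, 2))      # well positions (km)
poro = rng.normal(0.12, 0.03, size=40)     # porosity at each well

# Pairwise distances and semivariances.
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
gamma = 0.5 * (poro[:, None] - poro[None, :]) ** 2

# Bin by lag distance to get the experimental variogram.
bins = np.arange(0, 10, 1.0)
lag_centres, variogram = [], []
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (d > lo) & (d <= hi)
    if mask.any():
        lag_centres.append(0.5 * (lo + hi))
        variogram.append(gamma[mask].mean())

print(list(zip(lag_centres, variogram)))
```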

The notion of ‘characterization’ is important. Both Markov and Krige provide data where none was observed, either spatially or in the future. They fill in the missing information with data that has the same or similar characteristics as that observed elsewhere. Statistical techniques can be used to ‘characterize’ data from control systems, which may be useful in a digital twin. I heard recently from a cyber security expert that the developers of Stuxnet had cut and pasted bits of plant time series data to spoof plant recordings. This actually made it rather easy to detect. It would have been harder to detect if the data had been ‘characterized’, i.e. made up to be statistically similar rather than a copy.
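
To illustrate (a toy example, not an analysis of Stuxnet itself), a verbatim cut-and-paste of a noisy recording betrays itself through exactly repeating windows, something a statistically ‘characterized’ resynthesis would avoid:

```python
# Toy illustration: a verbatim copy-and-paste of a sensor recording is
# easy to spot because identical windows recur exactly; statistically
# resynthesized ('characterized') data would not repeat bit-for-bit.
import numpy as np

rng = np.random.default_rng(3)
signal = rng.normal(50.0, 2.0, size=2000)        # plant-like noisy recording
spoofed = np.concatenate([signal[:500]] * 4)     # crude cut-and-paste loop

def has_exact_repeats(x, window=200):
    """Return True if any non-overlapping window recurs verbatim."""
    seen = set()
    for start in range(0, len(x) - window, window):
        key = x[start:start + window].tobytes()
        if key in seen:
            return True
        seen.add(key)
    return False

print(has_exact_repeats(signal))    # False: genuine noise does not repeat
print(has_exact_repeats(spoofed))   # True: copy-paste gives identical blocks
```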

Geostatistics today is a bit old hat and seems to have been overtaken by ‘data science’ with a proliferation of estimating and forecasting techniques used in time series (like production forecasting) and other fields. One interesting gotcha in the big data approach is that if, as might seem a good idea, you try to impose too much science, in the form of prior information, the machine learning may not work so well. The more constraints there are, the harder it is for the model to converge. Also, doing a lot of statistics does not change the fundamental issue that characterization does not equate to reality. An AI/ML model is judged on how successful it is in fitting data, not on how ‘scientific’ it is.

Finally, who is best qualified to do this modeling stuff? With today’s enthusiasm for AI and data science there is huge pressure for knowledge workers (geoscientists and engineers, even pumpers) to ‘learn Python’ and become data scientists. The notion that Python programming is a necessity to advance your career is now quite widespread; I’ve noted entreaties to ‘teach Python’ at business schools.

But geologists have a lot of other stuff to learn that may be more directly targeted at doing their job. They may have to make arduous trips to the Caribbean to commune with the sedimentological processes that are present-day analogs for their reservoirs. There are thin sections to study, crystal forms to learn and vast bodies of knowledge to ingest. Learning your domain specialty sans big data and AI is still a noble goal. If nothing else, it helps decide if the priors that your modelers are using are correct and/or if the results that the machine spits out are plausible. But the pendulum of science and things digital has now, I suspect, swung too far in the direction of AI. In defense of this thesis I offer an observation from a completely different field.

France is a world leader in the field of nuclear power. It is also pretty good at churning out mathematical and scientific whizzes. I recently attended a presentation on the future of quantum computing where it appears that the government is ready to put a few zillion Euros into quantum computing R&D. Meanwhile, back in Flamanville, the latest nuclear build is a decade overdue and a few billion Euros over budget. This is in part down to a failure in some high-tech welding that is now buried in concrete and is going to have to be dug out and fixed. I may be crazy, but I see this as the way of the world. If it’s digital it gets attention and funding. Welding is a lost art.


© Oil IT Journal - all rights reserved.