Big data and legal risk. World’s future energy needs.

Neil McNaughton wonders what the regulator thinks of big data-derived stochastic reserves numbers. Lone Star reports on similar warnings from the Association for Computing Machinery. Mark Razmandi warns of bias in SCADA analytics. Also... the Church of England blesses fracking (official!). Oft-cited IEA fossil fuel forecasts sneak in a 2 billion world population hike by 2040!

It may be something of a canard, but data conferences fairly often feature vague warnings of potential non-compliance issues stemming from the fact that today's reserves reports, ultimate recoverable numbers and so on are obtained in part through numerical modeling. The warnings come from those in the data management community who are anxious not to be in the firing line if and when the SEC, or whoever your regulator happens to be, comes back with embarrassing questions as to how such numbers were arrived at.

Consider for a moment ExxonMobil's humongous calculation performed recently at the NCSA (see page 5), which briefly monopolized all the processing power of the Blue Waters supercomputer to produce an optimal development plan, one that might include some reportable reserves numbers. What if, some way down the road, the regulator comes back and asks for justification of the reported numbers? The operator fires up the behemoth, re-runs the analysis and comes up with a different number! For yes, these calculations are statistically derived, using either old-fashioned Monte Carlo techniques or fancier big data-esque computation.
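To make the reproducibility worry concrete, here is a minimal Python sketch of a volumetric Monte Carlo reserves estimate. It is purely illustrative, not Exxon's workflow; every distribution and parameter below is an invented assumption. The point is simply that unless the random seed (and everything else about the run) is recorded, two runs of the same model report two different P50s.

```python
# Illustrative only: all distributions and parameters are invented.
import numpy as np

def p50_reserves(n_trials=100_000, seed=None):
    rng = np.random.default_rng(seed)
    area = rng.lognormal(mean=np.log(2000), sigma=0.3, size=n_trials)     # acres (assumed)
    thickness = rng.lognormal(mean=np.log(50), sigma=0.4, size=n_trials)  # feet (assumed)
    recovery = rng.uniform(0.15, 0.35, size=n_trials)                     # recovery factor (assumed)
    # 7758 bbl per acre-foot, with assumed porosity 0.2, water saturation 0.3, Bo 1.2
    bbl_per_acre_ft = 7758 * 0.2 * (1 - 0.3) / 1.2
    reserves = area * thickness * bbl_per_acre_ft * recovery
    return np.percentile(reserves, 50)

print(p50_reserves())   # one 'reportable' P50...
print(p50_reserves())   # ...and a slightly different one on the re-run
print(p50_reserves(seed=42) == p50_reserves(seed=42))  # True: seeded runs reproduce
```

With 100,000 trials the two unseeded P50s differ only in the later significant figures, but a regulator asking for an exact re-derivation of a reported number will notice.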

OK, perhaps this is rather contrived. Exxon was more interested in testing its modeling on a really big computer than in reporting. More seriously, the issue of risk, big data and analytics was highlighted recently in a release from consultants Lone Star Analysis. Lone Star reports that while ‘a consensus is beginning to define acceptable practices and define legal risk for users of big data and analytics,’ such new developments ‘pose significant legal risks for careless uses of big data.’

In the US, the Association for Computing Machinery (ACM) recently issued a statement on 'Principles for Algorithmic Transparency and Accountability,' outlining seven principles for big data users in business and government. These include awareness of possible bias in a model, access to the underlying data and calculations, and accountability for AI-based decisions. Big data users need to be able to explain their procedures and decisions, and to assure auditable data provenance and model testing.
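The provenance principle at least is easy to act on. Below is a hypothetical sketch (mine, not the ACM's) of the minimum record that would let a reported number be tied back to exactly the data, code and seed that produced it; the field names and the `code_version` tag are illustrative assumptions.

```python
# Hypothetical provenance record; field names are illustrative.
import datetime
import hashlib
import json

def provenance_record(input_data: bytes, seed: int, code_version: str, result: float) -> dict:
    return {
        "data_sha256": hashlib.sha256(input_data).hexdigest(),  # fingerprint of the exact input
        "seed": seed,                                           # makes the stochastic run repeatable
        "code_version": code_version,                           # which model produced the number
        "result": result,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

record = provenance_record(b"raw well data goes here", seed=42,
                           code_version="reserves-model 1.4", result=7.1e8)
print(json.dumps(record, indent=2))
```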

While this may be of less relevance to oil and gas, Lone Star also flags up recent EU legislation on data privacy. Big data advocates who want to 'discover' new relationships and patterns in consumer data will have to be careful how they explore the digital unknown, now that regulators insist that such data is used only for 'specified, explicit purposes and only those purposes.'

Lone Star CTO John Volpi concluded, ‘We always advocate the use of redacted and abstracted data and the targeted use of machine learning over brute force big data. As society comes to grips with the risks of big data done badly, we should not be surprised to see more regulations in this space.’ And CEO Steve Roemerman added a warning on the risk of litigation, ‘The ACM principles may help US plaintiffs who feel they have been harmed by big data and AI. If models are not transparent, auditable and explainable, such risk cannot be effectively determined.’

Another contribution to the analytics discussion came from Anadarko's Mark Razmandi in a presentation given at the LBGC Wellsite automation conference held last month in Houston (report on page 6). Refreshingly, where others like to present oil and gas as a technology laggard, Razmandi has it that oil and gas was the original big data proponent. What has changed in recent years is the shift to 'comprehensive mass data acquisition paradigms and progressive analytics.' In a companion presentation, 'Bias semantics in big data,' Razmandi described 'objective' bias, due to the particularities of data acquisition, and the more familiar subjective bias of those seeking to make more of the data than they should. This is no abstract concern: Razmandi's field is SCADA systems engineering and oilfield automation.
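As a hypothetical illustration of such acquisition-side ('objective') bias, my own example rather than Razmandi's: a SCADA tag logged by report-by-exception emits many samples when a value moves and few when it is flat, so a naive average of the stored samples over-weights a brief upset and under-weights hours of steady operation.

```python
# Hypothetical report-by-exception log: (timestamp_seconds, value) pairs.
# A flat hour produces 2 samples; a brief upset produces 3.
samples = [(0, 100.0), (3600, 100.0),        # steady at 100 for an hour
           (3605, 140.0), (3610, 150.0),     # brief upset
           (3615, 100.0)]                    # back to normal

# Naive mean over stored samples: the upset dominates.
naive_mean = sum(v for _, v in samples) / len(samples)

# Time-weighted mean: hold each value until the next timestamp.
weighted = sum(v * (t2 - t1) for (t1, v), (t2, _) in zip(samples, samples[1:]))
time_weighted_mean = weighted / (samples[-1][0] - samples[0][0])

print(f"naive mean:         {naive_mean:.1f}")          # 118.0
print(f"time-weighted mean: {time_weighted_mean:.1f}")  # 100.1
```

The two 'averages' of the same tag differ by almost 20%. The analytics did nothing wrong; the acquisition scheme biased the sample.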

~

A rather amusing claim appeared in the latest issue of the IOGP Highlights newsletter where I read that the ‘Church [of England] blesses fracking.’ In the same newsletter it appears that, ‘progress [has been made] in standardizing Christmas trees.’ Nothing like having God on your side!

On a more serious note, I have heard and read a lot recently to the effect that oil and gas has a good future. Global oil production is forecast to stay around the 100 million barrels per day mark out to 2040, making the noble goals of COP21 appear somewhat illusory, which is of course great news for the industry. However, it is worth looking at how the IEA, the source of the numbers behind forecasts from BP, ExxonMobil and others, arrives at its figures. Embedded in the IEA's forecast is an increase in world population of around 25% between now and 2040.

Some of the growth is forecast to come from poor, low-to-no fossil fuel users, to whom the IEA offers sympathy but little else. But in my admittedly rather cursory scan of the IEA's recent output, the massive growth in world population (from 7 to 9 billion) appears to be taken as a given, and is hardly commented on.

Doing something about world population growth is probably about as hard as achieving significant COP21-style decarbonization. On the other hand, it is curious that the IEA, and for that matter pretty well everyone else, does not appear to be interested in trying to do something about both of these interlinked issues. From COP to POP?

@neilmcn

