Oil and gas a digital laggard? Big data disruption? ... maybe

Neil McNaughton reports on recent developments in the big-data-internet-of-things-advanced-analytics space. Is oil and gas a technology laggard? Is the digital grass greener on the other side of the fence? Will disruptive technology like the WebIsADatabase replace the knowledge worker? He also gets political with some AI-related news hot in from the French election.

An apology to our upstream subscriber base. As focus shifts from exploration to production, and as what exploration is being done (onshore US) seems to be making do with less G&G, there is less information coming our way from the ‘E’ end of the E&P spectrum.

Elsewhere the information vacuum is filled with contributions from what can loosely be described as the big-data-internet-of-things-advanced-analytics brigade. Some of which is interesting, but all of which is getting a bit tired. Too much razzmatazz, not enough meat. No proof of the pudding. No killer app to replace the knowledge workers that we all are, which I suppose is just as well!

We do of course continue to report dutifully on the outpourings of the big data etc. movement in the hope and expectation that eventually, something of interest will turn up. But so far, we, like others, are pushed into the uncomfortable position of ‘reporting’ on stuff that has not yet happened. All those jobs that will be lost to AI at some time in the future? All that ‘digitization’ stuff you thought you’d already done? Well, it seems like you haven’t even started!

On which topic, we hear from the consulting community that oil and gas is a technology laggard. A report from EY recently popped into my inbox where I read that ‘The digital revolution disrupting so many industries has been slow to make its presence felt in the oil and gas sector. [ … ] The energy industry has traditionally lagged behind other sectors when it comes to adopting technology for above ground uses. [ … ] But the dramatic changes digital is bringing to the modern enterprise can’t be ignored forever, and oil and gas executives are beginning to recognize the promise and challenge of adopting a digital strategy.’

Strong stuff indeed. But what exactly is all this stuff going on in all those other ‘disrupted’ industries? It so happens that I also recently chanced on an opinion piece in Nature by one Andrew Kusiak who, as professor of industrial engineering at the University of Iowa, ought to know. Kusiak has it that ‘… big data is a long way from transforming manufacturing. Leading industries - computing, energy and aircraft and semiconductor manufacturing - face data gaps. Most companies do not know what to do with the data they have, let alone how to interpret them to improve their processes and products. Businesses compete and usually operate in isolation. They lack software and modelling systems to analyze data.’ So much for the grass being greener on the other side of the fence!

Another popular argument doing the rounds has it that oil and gas has piles of data just hanging around doing nothing. The implication being that, if you are not doing stuff with all your data all the time, then you should be replaced by a robot that can! I’m not sure exactly what is amiss with this notion, but I suggest that it is a bit like asking why you are not doing something right now about all the air that surrounds you un-breathed. Not perhaps a perfect analogy, but you get my drift.

I may have misunderstood the import of another announcement, the first release of the WebIsADatabase (Wiadb), a linked, open database from the Data and Web Science Group at the University of Mannheim. The dataset contains nearly 12 million ‘hypernym’ relations collected from the web. Hypernyms, aka ‘is a’ relationships, tell you stuff like ‘red is a color’ or ‘iPhone 4 is a smartphone.’ The researchers provide their scraped data along with confidence scores, ‘rich’ provenance information and interlinks to other ‘LOD’ (linked open data) sources like DBpedia and Yago. The whole dataset of over 470 million RDF triples is provided as linked data, Sparql endpoints or as downloadable dumps.
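To make the ‘hypernym plus confidence score’ idea concrete, here is a minimal sketch of how such weighted ‘is a’ relations might be represented and filtered. The triples, scores and function names below are illustrative assumptions for the sake of the example, not the actual Wiadb data or API:

```python
# Hypothetical hypernym assertions in the spirit of WebIsADatabase:
# (instance, class, confidence) -- scraped 'is a' claims with a score
# reflecting how often/reliably the web asserts them.
triples = [
    ("red", "color", 0.97),
    ("iPhone 4", "smartphone", 0.95),
    ("iPhone 4", "device", 0.85),
    ("phlogiston", "element", 0.30),  # the web also records discredited claims
]

def hypernyms(instance, min_confidence=0.5):
    """Return the classes an instance 'is a', above a confidence cutoff."""
    return [(cls, conf) for inst, cls, conf in triples
            if inst == instance and conf >= min_confidence]

print(hypernyms("iPhone 4"))   # both classes pass the default cutoff
print(hypernyms("phlogiston")) # low-confidence claim is filtered out
```

The confidence cutoff is exactly where the ‘wisdom of crowds’ question bites: set it low and you admit the enthusiastic amateurs; set it high and you may lose valid but rarely stated facts.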

Is it possible to consider the whole of the web as a reliable information source? If you believe in the ‘wisdom of crowds’ then maybe. Personally I am skeptical. I wrote about the usefulness of the web as a source of information in my March 2008 editorial ‘Heat pumps, phlogiston and the world wide web.’ I concluded then that ‘on the web you are more likely to find bull from enthusiastic amateurs than gospel from the experts. There is a distinct weighting of the knowledge scales in favor of the unqualified hordes.’

In a way, the bigness of the dataset means that ‘knowledge’ eventually becomes a matter of opinion. The web is more of a massive voting machine than a repository of knowledge and is easily biased by misconceptions or gamed by the unscrupulous.

Speaking of voting, as you probably know I live in France, where there have been some interesting happenings on the political front of late. In the run-up to the elections there was some angst among the chattering classes over a hitherto unknown (over here at least) Canadian polling institute (I will refrain from mentioning its name on the ‘do not feed the trolls’ principle). These folks had seemingly used big social media data to correctly predict the result of the US presidential elections last year.

In the run-up to the French presidential elections, the same polling concern, using the same techniques, predicted a win for the National Front. This of course flew in the face of all the old-fashioned local pollsters. But what would they know, with their ‘legacy’ technology of small samples of ‘representative’ individuals? The Canadian outfit was out ‘disrupting,’ scraping masses of information from the buzzing social media. Big data, advanced analytics, how could they go wrong?


© Oil IT Journal - all rights reserved.