Semantic Days 2010, Stavanger, Norway

Schlumberger, Statoil and Baker Hughes offer different slants on semantic technology’s impact.

Schlumberger Fellow Bertrand du Castel’s keynote, ‘Upstream ontologies: will we ever learn?’ described the industry’s long quest to overcome the barriers between oilfield sensor data and expert decision makers. Size, remoteness, and data ‘invisibility’ combine to make the accumulation of knowledge difficult. From the databases of the 1980s, through the networks of the nineties to the ontologies of the first decade of this century, the industry has come a long way. But the next challenge looms—that of more ‘human-centered’ automation and systems that can ‘learn.’

For du Castel, artificial intelligence (AI) is a means to automation. Expertise is enhanced by automation in data management, simulation, uncertainty management and prognostics. Experts make decisions and are part of automation’s continuous improvement process. A multi-vendor asset is fully networked from down-hole to seabed and surface. Asset performance metrics and uncertainties in future performance are constantly updated. Automation plays a key role in a rolling simulation, uncertainty analysis and optimization of asset exploitation.

Citing his own 2008 oeuvre ‘Computer Theology1’, du Castel claims ‘There is much to human beings, of which little has been decoded. Artificial intelligence is remote, but leveraging what’s known is within reach.’ The big new things are ‘description logic,’ a mathematical breakthrough with application across signal processing and control systems, along with ‘Bayesian reasoning.’ All of this is driven by a ‘reasoning engine’ and an ‘ontology’ of upstream terminology. ‘The ontology describes sensor fusion and control activities in a uniform manner so that reasoning can automatically process data input into commands.’
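To give a flavor of the ‘Bayesian reasoning’ du Castel invokes—updating belief in a down-hole condition as sensor evidence arrives—here is a minimal sketch. The condition, likelihoods and numbers are invented for illustration and are not from the presentation.

```python
# Hypothetical sketch of Bayesian reasoning over sensor evidence:
# updating belief in a down-hole condition ('stuck pipe') as two
# pieces of evidence arrive. All probabilities are invented.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability after one piece of evidence."""
    numerator = likelihood_if_true * prior
    denominator = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / denominator

# Prior belief that the pipe is stuck
p_stuck = 0.05

# Evidence 1: a torque spike, more likely when the pipe is stuck
p_stuck = bayes_update(p_stuck, 0.8, 0.1)

# Evidence 2: rate of penetration drops to near zero
p_stuck = bayes_update(p_stuck, 0.9, 0.2)
```

Two pieces of mildly suggestive evidence move a 5% prior to roughly a 65% posterior—the kind of continuous belief revision a real-time reasoning engine would perform on each sensor sample.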

Du Castel’s presentation makes obscure patent-pending claims to ‘stochastic grammars,’ which appear to be workflow patterns (the example shown is in the drilling domain) driven by a kind of truth table of drilling status elements. Input to the ‘reasoner’ is a real-time feed of weight on bit, rate of penetration and so on. Du Castel operates at a level that sets out to fly above that of ordinary mortals. Thus (apart from references to computer theology) we learn that ‘description logic ontologies are monotonic’ while ‘the brain is stochastic and learns.’
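The ‘truth table of drilling status elements’ can be imagined as a rule table mapping the real-time feed into discrete states. The sketch below is our own illustration—the state names and threshold values are invented, not taken from the patent claims.

```python
# Hypothetical rule table mapping real-time drilling parameters to a
# status, in the spirit of a 'truth table of drilling status elements'.
# Thresholds and state names are invented for illustration.

def classify(wob_klb, rop_ft_hr):
    """Map weight on bit (klb) and rate of penetration (ft/hr) to a status."""
    if wob_klb < 5:
        return "off bottom"            # little weight on the bit
    if rop_ft_hr < 1:
        return "possible stuck pipe"   # weight applied but no progress
    if rop_ft_hr > 100:
        return "drilling break"        # sudden fast drilling
    return "normal drilling"

# Simulated real-time feed of (weight on bit, rate of penetration)
feed = [(2, 0), (20, 60), (25, 0.5), (22, 150)]
statuses = [classify(wob, rop) for wob, rop in feed]
```

A ‘stochastic’ version would attach probabilities to each transition rather than firing rules deterministically, which is presumably where the grammar formalism comes in.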

Lars Olav Grøvik’s (Statoil) presentation was a bit more down to earth—although less ‘semantic!’ Statoil’s challenge today is information overload. The petrotechnical ‘wheel’ turns around seismic data, well correlations, petrophysics and reservoir engineering—to name but a few of Statoil’s workflow elements. This picture hides a plethora of domain-specific, data-hungry applications including Petrobank, R5000, Recall, GeoFrame, Energy Components, Spotfire and many others. Statoil’s onshore operations centers are predicated on the existence of real-time data streaming from the field. But things can go wrong—with unstable data streams, poor connectivity and programming/setup errors. Not only does the smooth running of the data center depend on data, so does the future value of the business—along with data reporting to regulators and other stakeholders. Effective work processes mandate providing the right data to the right people at the right time—and with the right quality.

Quoting Chevron’s Jim Crompton (as reported in Oil IT Journal), Grøvik noted the ‘kink’ in the information pipeline. The kink is located between oilfield automation/real time systems and analytics and modeling. The kink is caused by disparate data formats, quality, poor master data, ‘shadow systems’ and system complexity. The situation does not appear to be improving any time soon. One data specialist in a large oil company estimated that half of all data used is not actually captured—and of that which is kept, 78% will never be looked at! Grøvik wound up citing Putt’s Law—‘Technology is dominated by two types of people: those who understand what they do not manage, and those who manage what they do not understand.’

Inge Svensson (Baker Hughes) enumerated no fewer than eight data integration strategies to conclude that ‘ontology-driven processes’ win out over traditional data integration as the number of data sources increases. One such is the AutoConRig (ACR) project—powered by semantic web technology. ACR sets out to automate drilling, open loop control and envelope protection, replacing verbal communication between service company and driller. The control can be extended beyond the drilling environment for integration with models and real-time surface and downhole data. BHI eats the semantic dogfood in its Sand Control domain taxonomy, a three-level vocabulary of 1,400 terms developed in Protégé and Excel. This underpins the ‘Beacon’ knowledge management system and helps find the right information and/or the right people at the right time in an expertise and domain knowledge base. Svensson believes that domain taxonomies/vocabularies are extremely powerful, ‘We have found multiple uses in other application areas—but creating good taxonomies and getting consensus is hard.’ Partitioned taxonomies are probably required in technical domains. Few good tools exist for taxonomy management. Future work includes PPDM integration.
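A three-level taxonomy of the kind Svensson describes is structurally simple—a tree of categories with terms at the leaves—and the ‘find the right people’ use case reduces to walking that tree. The sketch below uses invented placeholder terms; the actual 1,400-term BHI vocabulary is not public.

```python
# Minimal sketch of a three-level domain taxonomy and a lookup over it.
# The category and leaf terms here are invented placeholders, not the
# actual Sand Control vocabulary.

taxonomy = {
    "Sand Control": {
        "Screens": ["wire-wrapped screen", "premium mesh screen"],
        "Gravel Packing": ["open-hole gravel pack", "frac pack"],
    },
}

def find_path(term, tree=taxonomy, path=()):
    """Return the category path leading to a leaf term, or None."""
    for key, value in tree.items():
        if isinstance(value, dict):
            found = find_path(term, value, path + (key,))
            if found:
                return found
        elif term in value:
            return path + (key, term)
    return None
```

Tagging documents and experts with such leaf terms is what lets a knowledge base like ‘Beacon’ route a query on ‘frac pack’ to everything—and everyone—filed under Sand Control / Gravel Packing.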

More from Semantic Days on

1 Computer Theology: Intelligent Design of the World Wide Web, ISBN 978-0980182118.


© Oil IT Journal - all rights reserved.