2020 OSIsoft San Francisco Oil and Gas Track

OSIsoft on stagnating data lakes. Survey finds great interest in AI/ML, but none in the ‘DevOps & API’ category. Monico’s mCore enables Marathon’s MQTT SCADA migration. TC Energy uses statistical quality control in PI AF to detect pipeline anomalies. Tendeka uses PI to manage voluminous DTS data sets.

Back in 2019, OSIsoft issued a position paper covering its solutions for unconventional oil and gas. OSIsoft’s Cindy Crow is quoted as saying, ‘As crazy as it may now seem, the warning that neglected data lakes will stagnate into swamps has never been more pertinent. Businesses are getting overwhelmed, and the flood of data needs to be organized, analyzed, and acted upon.’ The ability to store and analyze data from expensive equipment provides a huge productivity boost to those assets, but ‘you can’t get more efficient by simply storing or purging data’. Enter the PI System, ‘the ultimate engineer’s toolkit, covering everything from data analytics to machine learning’. A survey of PI System users found that ‘predictive analytics and insights’ was the most popular requirement. However, there was virtually no interest in the ‘DevOps & API’ category, a finding that marks a significant difference from the priorities of the OSDU camp (see elsewhere in this issue). OSIsoft claims that ‘all the pieces needed to solve operational problems, from condition-based maintenance to drilling parameter optimization, can be assisted by the real-time data platform’. OSIsoft captures key drilling events such as stick-slip, along with associated attributes such as rate of penetration and calculations such as drill string volume. The PI System underpins bespoke deployments such as Devon’s WellCon decision support system and YPF’s ‘iUP’ Intelligent Upstream web-based interface to its drilling and production systems.

Fast forward to the 2020 OSIsoft San Francisco virtual user meet, where Ron Sprengeler (Marathon Petroleum) and Doyle Taylor (Monico Monitoring) showed how mCore SDR (Monico’s secure data router) has been deployed to stream multi-protocol field data from Marathon’s compressors into its monitoring environment of Aveva Wonderware SCADA and the PI System. Previously, data from Marathon’s 850 compressors was ‘stranded’: performance was monitored and optimized with sporadic visits to the field. This was fixed by adding an mCore device to each unit, along with a cellular radio network for data backhaul. The solution provides consistent data from all units and is tolerant of data connection outages. As we have seen elsewhere, the MQTT/Sparkplug protocol is used to provide pub/sub data collection. Monico reports that ‘a lot of SCADA systems are going to MQTT’. Sprengeler added that the mCore unit has replaced the many rugged PCs on site that made for ‘too many moving parts’.
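By way of illustration, a Sparkplug-style pub/sub subscriber can be written in a few lines of Python. This is a minimal sketch, not Monico’s implementation: the broker address and group ID are hypothetical, it uses the paho-mqtt 1.x client API, and real Sparkplug B payloads are protobuf-encoded (decoding is stubbed out here).

```python
# Minimal MQTT subscriber on the Sparkplug B topic namespace (paho-mqtt 1.x API).
# Broker address and group ID are hypothetical placeholders.
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # Topic form: spBv1.0/<group_id>/<message_type>/<edge_node_id>[/<device_id>]
    # Subscribe to everything published by one group of edge devices.
    client.subscribe("spBv1.0/compressors/#")

def on_message(client, userdata, msg):
    # NDATA/DDATA messages carry metric updates from edge nodes and devices.
    # Real Sparkplug B payloads are protobuf-encoded; decoding is omitted here.
    print(f"{msg.topic}: {len(msg.payload)} bytes")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883)  # hypothetical broker
client.loop_forever()
```

The pub/sub pattern is what makes the approach tolerant of connection outages: edge devices publish when connectivity allows, and consumers subscribe without polling each unit directly.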

Ionut Buse of gas pipeline operator TC Energy explained how he performs condition monitoring using statistical and machine learning models embedded in the PI Asset Framework (PI AF). Anomalies are detected using statistical quality control, a ‘highly scalable anomaly detection technique that uses descriptive statistics to compute static thresholds’. Given the historical mean and standard deviation for a sensor, anomalies can be flagged when new readings fall outside a certain range. Machine learning can be added to the mix using, for example, regression models to predict a sensor reading from one or more monitored variables. Clustering models can reveal subtle relations between many variables; new sensor data that is significantly distant from existing clusters can be flagged as abnormal. The base data platform is PI AF. Other tools of the trade include REST web services (.NET technologies, the AF SDK, Accord.NET) and a custom front end built with AngularJS. Buse recommends spending time upfront working with subject matter experts to understand the asset fleet, key variations and sensor coverage: ‘Use AF template inheritance and keep templates small, avoiding tight coupling’. Simple algorithms deployed at scale are powerful and ‘provide tremendous value’.
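The statistical quality control approach Buse describes reduces to a few lines of code. The sketch below is illustrative only (function names and data are invented, not TC Energy’s code): control limits are the historical mean plus or minus k standard deviations, and new readings outside the limits are flagged.

```python
# Statistical quality control (SQC) anomaly detection: static thresholds
# computed from the historical mean and standard deviation of a sensor.
import statistics

def sqc_thresholds(history, k=3.0):
    """Compute static control limits: mean +/- k standard deviations."""
    mu = statistics.fmean(history)
    sigma = statistics.stdev(history)
    return mu - k * sigma, mu + k * sigma

def is_anomalous(reading, low, high):
    """Flag a new sensor reading that falls outside the control limits."""
    return reading < low or reading > high

# Example: pressure history for one sensor (made-up values).
history = [101.2, 100.8, 101.5, 100.9, 101.1, 101.3, 100.7, 101.0]
low, high = sqc_thresholds(history)
print(is_anomalous(101.1, low, high))  # False: within mean +/- 3 sigma
print(is_anomalous(108.4, low, high))  # True: outside the limits
```

The appeal of the technique is that the thresholds are cheap to compute and store per sensor, which is what makes it ‘highly scalable’ across a large asset fleet.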

Andy Nelson from high-end completions specialist Tendeka outlined some of the challenges in collecting, managing and manipulating complex data sets in the digital oil field. Tendeka’s hardware leverages fiber-optic distributed temperature sensing (DTS), which generates very large amounts of data. This data needs to be moved from the ‘process control’ domain on the rig into the ‘office domain’ for analysis with a rules engine, the PI System and Tendeka’s FloQuest application. DTS does not show the full picture of the well and needs to be augmented with other data sources. In one (typical?) use case, the PI System was leveraged to pull pressure data from more than 10 million data points and apply this contextually to a subset of more than 5 billion DTS measurements, ‘all within seconds’. Storing temperature profiles and interpretation results in the PI System has brought ‘significant improvement in interpretation and modeling accuracy’. More from the conference home page and from OSIsoft.
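As a rough illustration of this kind of contextual pull, the following sketch retrieves recorded pressure values over PI Web API, OSIsoft’s REST interface. It is an assumption-laden example, not Tendeka’s code: the server URL, tag path and credentials are placeholders, and only the generic points-lookup and recorded-values endpoints from the public PI Web API documentation are used.

```python
# Pull recorded pressure values from a PI Point over PI Web API (REST),
# e.g. to contextualize DTS temperature profiles with pressure data.
import requests

BASE = "https://piserver.example.com/piwebapi"  # hypothetical PI Web API host
session = requests.Session()
session.auth = ("user", "password")  # placeholder credentials

# Resolve the PI Point to its WebId from a tag path (hypothetical tag).
point = session.get(f"{BASE}/points",
                    params={"path": r"\\PIDA\WELL01.PRESSURE"}).json()
web_id = point["WebId"]

# Pull recorded values over the interpretation window (PI time syntax).
resp = session.get(f"{BASE}/streams/{web_id}/recorded",
                   params={"startTime": "*-7d", "endTime": "*"}).json()

for item in resp["Items"]:
    print(item["Timestamp"], item["Value"])
```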

© Oil IT Journal - all rights reserved.