The third annual American Business Conferences Wellsite Automation event, held earlier this year in Houston, underscored a growing need to collect more data in the field and to bring it in-house for analysis, but not necessarily for automation. This has sparked off a shift from more or less proprietary scada system data formats to more open, ‘internet of things’ style protocols, with the expectation that the venerable MQTT will fit the bill. Another eye-opener was the real-world deployment of open source software from the Apache/Hadoop ecosystem that is rivalling PI as a system of record for some adventurous operators. While automation use cases are multiple, the optimization of chemical delivery for production enhancement stood out.
Our (unofficial) best paper award goes to the joint presentation by Eric Saabye and Micah Northington from Whiting Petroleum, who presented the exploratory, ‘Lewis and Clark’ approach to the digital oilfield. Whiting’s work on scada systems led to the realization that modern IT ‘could do a better job.’ Whiting previously had multiple systems designed for local operations and firewalls that made data inaccessible. The ‘expedition’ started with a consolidation of all data onto a historian comprising a Hadoop/MapR cluster running in a secure Tier 2 data center. Tools of the analytical trade included OpenTSDB and the Apache Drill database, a ‘schema-free SQL query engine for Hadoop, NoSQL and cloud storage.’
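Whiting did not show code, but the ‘schema-free’ idea is easy to sketch: Drill queries raw files on the cluster in place, with no up-front schema or ETL. A minimal illustration via Drill’s REST API, in which the endpoint, file path and column names are all hypothetical:

```python
import requests

# Hypothetical Drill gateway; Drill's REST API accepts SQL over HTTP POST.
DRILL_URL = "http://drill-gateway:8047/query.json"

# Query Parquet historian files directly, no schema definition required.
sql = """
SELECT tag, ts, value
FROM dfs.`/historian/scada/2018/*.parquet`
WHERE tag LIKE 'WELL-42/%'
  AND ts BETWEEN '2018-01-01' AND '2018-01-07'
ORDER BY ts
"""

resp = requests.post(DRILL_URL, json={"queryType": "SQL", "query": sql})
resp.raise_for_status()
for row in resp.json().get("rows", []):  # result rows as JSON objects
    print(row["tag"], row["ts"], row["value"])
```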
The system is fast and ‘hugely scalable,’ currently handling 1.2 billion records per week. Adding data context is key to the IoT*. A level transmitter may read ‘4,’ but the system needs to know that these are, say, ‘feet,’ and what is ‘top’ and ‘bottom’ for the measure. In the past, this could involve a call to the control room! The move from operations-centric uses to fulfilling the needs of new consumers with different requirements means a shift from the ‘historization’ approach. Units and other metadata need to be passed along the food chain to satisfy compliance and get meaningful information to the end user. Note, though, that the novel IT/big data stack does little to help with such contextual information, which is managed by Whiting’s developers.
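The kind of context lookup that Whiting’s developers manage by hand can be sketched as a tag registry that turns a bare ‘4’ into an engineering-meaningful reading. The registry, its fields and the tag naming below are invented for illustration:

```python
# Hypothetical tag registry mapping raw tag IDs to engineering context.
TAG_REGISTRY = {
    "WELL-42/LT-001": {
        "description": "produced water tank level",
        "units": "feet",
        "range_bottom": 0.0,   # reading at empty ('bottom')
        "range_top": 16.0,     # reading at full ('top')
    },
}

def enrich(tag_id: str, raw_value: float) -> dict:
    """Attach units and range context so a bare '4' is meaningful downstream."""
    meta = TAG_REGISTRY[tag_id]
    span = meta["range_top"] - meta["range_bottom"]
    return {
        "tag": tag_id,
        "value": raw_value,
        "units": meta["units"],
        "percent_full": 100.0 * (raw_value - meta["range_bottom"]) / span,
        "description": meta["description"],
    }

print(enrich("WELL-42/LT-001", 4.0))
# {'tag': 'WELL-42/LT-001', 'value': 4.0, 'units': 'feet', 'percent_full': 25.0, ...}
```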
Whiting’s digitization journey started with production accounting. There are also opportunities in pumping by exception and in environmental monitoring with the Grafana open source visualization tool. Open source is an issue for the IT department, although ‘this is not the control room, the system runs a few seconds behind real time.’ There are issues with personnel who see their jobs changing. Also, the lawyers were comfortable with daily site visits but hesitated over remote readings.
What did not work? The vendor-supplied Tier 2 data center was costly. Trials with the cloud failed as Microsoft Azure did not support the required data rates. A commercial predictive analytics package (designed for rotating equipment) did not work. This is not to say that predictive analytics doesn’t work, ‘but if you want to do it, do it yourself.’ The program showed that data acquired for control may not be fit for purpose in the new use cases. ‘So fix it’: change facility design, fix engineering, fix programming to ensure that you have data context. Go for IT/OT alignment and … ‘find allies, because the foes will find you!’
Whiting’s less-than-enthusiastic take on Azure may have made the next presenters’ (Steve Jackson, eLynx and Bill Barna, Microsoft) job a tad harder. A Microsoft/eLynx/U Tulsa announcement at last year’s ABC conference proposed to use the cloud to reduce operational downtime. It is now ‘100% certain’ that this will happen in the next 12-18 months. Data scientists (like Barna) need access to all the data. But this is hard because of disparate IT systems, poor data and the lack of qualified people. A common use case is tank level forecasting: in normal operations, tanks may fill up, auto shutdown kicks in and the well stops producing. Data scientists use historical data to anticipate these events and tell truckers when to visit. Maintenance can be optimized using historical data to predict future failure. Streaming analytics on natural gas compressors can detect and identify minute temperature changes that are indicative of something amiss. Pumps can be reconfigured more frequently. Data mining can include public domain data from the UT database, Frac Focus, DB and the RRC. Barna warned of ‘science projects’ led by IT but without support from the business. These may produce impressive results, a PowerPoint show, ‘bravos,’ and then, nothing! Bad data is a huge problem. Talent is a problem. Universities are cranking out data scientists, but without oil and gas job skills. Data scientists on LinkedIn are snapped up immediately and then ‘they jump from company to company.’
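In its most basic form, the tank level use case reduces to extrapolation: fit a trend to recent level readings, estimate time-to-full and dispatch a truck before auto shutdown trips. A minimal sketch, with invented readings and thresholds (not eLynx’s or Microsoft’s actual model):

```python
import numpy as np

# Invented example: hourly tank level readings in feet.
hours = np.arange(12, dtype=float)
levels = np.array([3.0, 3.2, 3.5, 3.6, 3.9, 4.1, 4.4, 4.5, 4.8, 5.0, 5.3, 5.4])

TANK_FULL = 15.0  # level at which auto shutdown trips (assumed)

# Linear fit: np.polyfit returns (slope, intercept) for degree 1.
fill_rate, _ = np.polyfit(hours, levels, 1)  # feet per hour

if fill_rate > 0:
    hours_to_full = (TANK_FULL - levels[-1]) / fill_rate
    print(f"Fill rate {fill_rate:.2f} ft/h, ~{hours_to_full:.0f} h to shutdown")
    # Tell the trucker when to visit: dispatch before the tank trips.
    if hours_to_full < 48:
        print("Schedule haul truck")
```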
What is required is a templated, end-to-end solution. Enter Microsoft’s tank level solution in Dynamics 365 for Field Service. But oftentimes companies don’t have access to the necessary data, which may be siloed in vendor scada systems. This is where eLynx comes in, also pooling data across 100 customers to build algorithms. Jackson reported that eLynx has monitored over 40,000 wells across the US for 400 clients, all of which is now in the Azure cloud. Proofs of concept are underway in plunger lift optimization and anomaly detection that can tell, 48 hours in advance, when a well is about to shut in. The real good news, according to Barna, is that ‘predictive analytics-based evaluations show increased reserves!’
Brandon Davis presented on the future of automation at Devon Energy. Devon has some 15,000 devices and 2.6 million tags spread across its North American assets. Radio (Cambium TDM) connects scada and other data feeds to OSIsoft PI for integration. Optimization tools have been developed for plunger lift, gas lift and automated well test. Trends can be tracked down to a single well and tank and are used in leak detection, water transfer line leak tests and greenhouse gas monitoring. Frac flowbacks are monitored along with oil/water inventory and haul trucks. Real time data is more and more the primary tool for optimizing operations. Future plans include a further shift to the IoT paradigm, with edge devices for, e.g., protocol conversion to MQTT, edge computing and data push to a broker system. Devon anticipates that Emerson and ABB will soon be capable of MQTT data delivery, enabling push with less polling and lower bandwidth requirements. A message-based (broker) system could be seen as a full replacement for scada, with fewer points of failure and easier integration, something Davis sees as ‘a pretty huge thing.’
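The push model is simple to sketch with the paho-mqtt client: an edge device publishes each reading to a central broker, and subscribers receive it without polling the wellsite. Broker address, topic layout and payload format below are hypothetical, not Devon’s deployment (real MQTT deployments in oil and gas often follow the Sparkplug B convention for topics and payload encoding):

```python
import json
import time
import paho.mqtt.publish as publish

# Hypothetical central broker address.
BROKER = "mqtt.example-ops.net"

def push_reading(well: str, tag: str, value: float) -> None:
    """Publish one reading; subscribers get it with no polling of the wellsite."""
    payload = json.dumps({"tag": tag, "value": value, "ts": time.time()})
    publish.single(
        topic=f"field/permian/{well}/{tag}",  # invented topic hierarchy
        payload=payload,
        qos=1,  # at-least-once delivery to the broker
        hostname=BROKER,
    )

push_reading("WELL-42", "tubing_pressure_psi", 912.5)
```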
Encana’s Eddie Carpenter rhetorically asked, ‘does automation reduce lifting cost?’ The answer is no, unless it is combined with many other considerations. These are buy-in from management, communications, the right installed equipment and a clear idea of what end result is desired. Encana’s East Texas automation project benefits from 100% reliable communications into its operations control center, experienced people and a central maintenance system. Replacing manual control with automatic chokes ‘has really paid off’ as constant monitoring keeps production on line. In the last two years, maintenance has shifted from reactive to ‘mostly proactive,’ and allowed a 20% head count reduction. Automation is seen as crucial to address Encana’s plan to add 60 to 100 wells per year for five years while keeping head count flat. So does automation reduce costs? Yes, with the above provisos.
Charles Drobny (GlobaLogix) provided an iconoclastic take on the proceedings, opining that preventative maintenance is unlikely to work since nobody wants to shut the process down! In corporate acquisitions and mergers, ‘nobody cares about the scada system.’ Its value is therefore zero! Despite the enthusiasm for the IoT and analytics, ‘none of this exists!’ Moreover, nobody can define the ‘efficiency’ that is the objective of the IoT. Cyber defense in oil and gas is weak or non-existent. In fact, it may not be all that important as there is no money in it! The best defense is a glue gun for USB ports in the field! The upstream procurement process is badly flawed. Folks hear of ‘cool stuff’ at a conference and decide to do a ‘project.’ Specs are written but not always with a very good grasp of the situation. ‘Customers don’t know what they want till after they’ve got it.’ This makes bidding problematic. Projects may include more than one company, but not all are equally involved in project specifications. Options are unveiled late and expectations expand as scope creeps. ‘We end up playing the change order game!’ The system is flawed and the IoT won’t happen till it’s fixed. For Drobny, the fix includes blockchain, edge devices and starting small, ‘never across the whole enterprise!’
A key target for automation is the monitoring and metering of oilfield chemicals. Paul Gould presented Clariant’s digital strategy for the IoT and big data. Chemicals are used to mitigate paraffin and scale buildup, and to prevent corrosion. It is important to know how much chemical is enough and how best to apply it. The problem is, ‘who pays for IoT chemical management?’ Budget is unlikely to come from maintenance or production, even less likely from capex. The answer is that it (should) all be decided in the C-suite! It’s about cash flow, not opex. For smaller production sites, a ‘cheap’ ($450) ultrasonic level sensor is good for up to 5 gallons/day. ATEX (explosion-proofing) ups the price threefold, so it is better done outside the exclusion zone.
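The economics hinge on a simple calculation: a level sensor turns successive tank readings into an actual injection rate, which can be checked against the target dose and used to schedule refills. A sketch with invented tank volumes and rates:

```python
# Invented example: daily ultrasonic level readings from a chemical tank,
# already converted to gallons remaining.
readings_gal = [412.0, 408.3, 404.1, 400.2]  # one reading per day

TARGET_RATE_GPD = 4.0  # target chemical injection, gallons/day (assumed)

# Actual consumption from successive level readings.
daily_use = [a - b for a, b in zip(readings_gal, readings_gal[1:])]
actual_rate = sum(daily_use) / len(daily_use)

days_to_empty = readings_gal[-1] / actual_rate
print(f"Injecting {actual_rate:.1f} gal/day, refill in ~{days_to_empty:.0f} days")

# Flag over/under-dosing: too much wastes chemical and risks damage from
# high concentrations, too little risks scale and corrosion.
if abs(actual_rate - TARGET_RATE_GPD) > 0.5:
    print("Dose rate off target, check injection pump")
```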
The IoT philosophy is important too: fully remote, edge or centralized control? Comms are a key consideration. Older sites may be retrofitted with wireless while more modern pads are wired. Remote control needs to be balanced against process latency: how fast do you need to react? Full remote control may be good for hard-to-get-to locations. Good payback is obtained by adding tank level monitoring to continuously dosed wells to mitigate damage from high chemical concentrations. Gould enumerated a multitude of solutions for secure data transfer, including segmented security zones, AES encryption, SFTP, and secure data services from OSIsoft, Veritrax, Tibco, Cygnet and the Skyhub low cost wireless cloud. Finally, read the DoE’s Secure data transfer guidance document for industrial control and scada systems.
Mark Bruchmann described Apache as ‘playing catchup’ in field automation. Although there is a standard minimum level of automation for every well, what goes on site above that depends on considerations such as whether wells are interfering, the target production and many other factors, including proximity to habitation. Some horizontal drilling pads have multiple lease owners, so metering is a big issue. Management bought into a major automation project in 2017 and Apache is now developing its standards and building radio towers. The company operates 450 compressors in Oklahoma and these are undergoing analysis to see how much power is being used. Problems include inconsistent data input, limited bandwidth, nuisance alarms, and a lack of manpower to develop analytics.
One early use case brought some surprises. Having installed guided wave radar for tank monitoring to spot leaks, the main observation was that in August, 60° API oil evaporates away! The remote operations center is working, monitoring slug catcher status, compressor downtime, chemicals, production and trucking via the GIS. Not all benefits are derived from data. Cameras are great too, especially those at Apache’s ten wells on a prison farm location!
Another Devon presentation, from Tristan Arbus and Jessamyn Sneed, looked at the practical application of machine learning and AI and … ‘the rise of the machines.’ Devon’s main tool is the SAS Analytics Suite. This has been used to perform text mining on drilling operations reports, pinpointing causes of NPT**. The system has identified 30 or so words that drillers don’t like to hear, ‘repair,’ ‘troubleshoot,’ ‘found leak,’ and so on, that correlate with NPT events and perhaps with individual drillers, ‘some have more issues than others.’ Concept maps help troubleshoot as events radiate from their root cause. Numerical analysis provides the ‘what and when,’ text analysis the ‘who and why.’ Challenging ESP*** failures were studied by a multidisciplinary team. This started by establishing standard definitions of failure modes. Initially successful predictions led to a published paper. But as time went on, new data failed to bear out the early success. Another project is investigating screen-outs (wells plugged with frac sand) using the OSIsoft AF database and open source SIMA (streaming in-memory analytics) on wireline data. Tests of over 100 models showed that neural nets were best.
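Devon’s work runs on the SAS Analytics Suite; purely as a language-agnostic illustration of the keyword-flagging idea, the sketch below scans daily-report remarks for a few of the ‘words drillers don’t like to hear’ and tallies them. The word list and remarks are invented:

```python
import re
from collections import Counter

# A small invented subset of terms that correlate with NPT events.
NPT_TERMS = {"repair", "troubleshoot", "found leak", "stuck", "wait on"}

remarks = [
    "06:00 drilled ahead to 9,850 ft, no issues",
    "09:30 found leak on standpipe, troubleshoot and repair, 3.5 hr NPT",
    "14:00 wait on cement, resumed 16:30",
]

def flag_npt(remark: str) -> list[str]:
    """Return the NPT-correlated terms found in one daily-report remark."""
    text = remark.lower()
    return [t for t in NPT_TERMS if re.search(re.escape(t), text)]

# Tally flagged terms across reports; grouped by driller or rig, such counts
# are the raw material for the correlations described above.
counts = Counter(t for r in remarks for t in flag_npt(r))
print(counts)
```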
* ‘Internet of things,’ taken as a catch-all term signifying an internet-connected sensor network.
** Non productive time/downtime.
*** Electric submersible pump.
The 2019 American Business Conferences Wellsite Automation event will be held January 30-31 in Houston.
© Oil IT Journal - all rights reserved.