The organizers reported over 1400 attendees at the 2016 SPE/Reed Exhibitions Intelligent Energy conference in Aberdeen. In the opening plenary, Schlumberger retiree Walt Aldred opined that now, at the bottom of the cycle, was the best time to try new stuff. Autonomous drilling systems will be out ‘in the next year or two’ and automation will be ‘pervasive’ across our industry.
Redburn Associates’ Rob West observed that E&P share prices have held up better since 2014 than the oil price would have suggested. Investors appear to be expecting a price recovery and have bought into the ‘cost deflation’ story. Intriguingly, the majors created most shareholder value in a ten-year ‘sweet spot’ from 1993-2002 when prices were low. Cost cutting includes a scale back from ‘over maintenance’ in the Macondo aftermath. Maintenance spend (as judged by shutdowns) doubled after Macondo but is now down to 2004 levels. Labor productivity, measured in bbl/employee, quadrupled between 1980 and 2002 but has halved since then.
A panel discussed how intelligent energy could show the way out of the downturn. Greg Hickey stated that BP has achieved a lot in the digital space but that ‘doing more of the same only takes us so far.’ BP has set out on a journey to ‘transform, and focus on margin’ by ‘digitizing the upstream with a manufacturing focus.’ Toyota was cited as an exemplar of what BP is trying to achieve but it is GE that is supplying the toolset. BP is to address the enduring problems of equipment downtime and drilling inefficiencies with ‘smart sensors, cognitive computing and wearables.’ Standardization will move work to where it can be best accomplished. Why has this not been done already? Essentially because digital platforms could not support such activity. The cloud has changed all this, specifically GE’s Predix which is to provide BP with across-the-board analytics. GE’s industrial internet will provide ‘digital twins’ of infrastructure and a test bed for field and plant-wide optimization. Pretty well all of the above exists today, but in silos. The key now is to integrate and automate, ‘make the computers do the heavy lifting 24x7.’
David Boyle (ConocoPhillips) offered some interesting observations on productivity. Despite the sexy control rooms and remote operations centers, ‘tool time,’ the time offshore workers actually spend on the job, was a ‘consistently embarrassing’ two hours out of a twelve-hour shift. A renewed focus on identifying bottlenecks led to efficiency improvements and tool time is up to 6 hours per shift. Platform reliability is also up, with shutdowns down from every 2 weeks to every 4-5 weeks.
According to Johan Atema, Shell believes in the ‘lower for longer’ scenario and wants to be making money at $40 oil. Which, incidentally, is not really a ‘low’ oil price, rather a historical average. Shell wants to change its ‘arrogant, inward-looking attitude’ by learning from industry and from the outside world. Shell’s digital effort already has its sweet spots of equipment monitoring and maximizing production at least cost. The data infrastructure achieves very high uptime. In Oman the focus is shifting from the large operations center to the ‘smart mobile worker’ with an augmented way of working. Operators may be kitted out with a thermal camera, a GoPro, iPhones, gas monitors and good communications. A permit to work may be issued on the spot as required.
Mark Edgerton (Chevron) sees intelligent energy as comprising a large, growing toolkit. All Chevron platforms operated out of Aberdeen have condition-based monitoring systems that communicate with Houston HQ and with equipment manufacturers. Production is now maximized in real time through multiple small tweaks. Edgerton doesn’t like the word ‘workflows’ but he does like what they do! Offshore data flows into the iOps real time center and is used to improve maintenance planning and find out which teams are most effective. The future will bring ‘more digitization and more opportunities.’
Andrew Hartigan (Lone Star Analysis) has applied a ‘dynamic bow tie’ approach to risk management, developed for the aviation industry, to a retrospective analysis of the Macondo/Deepwater Horizon blowout and fire. Lone Star’s technology translates bow tie diagrams into a ‘dynamic model of trigger events, activities and barriers.’ Model nodes are populated with auditable data, with relationships mapped as connected lines and probabilistic math. Hartigan warned of ‘duplicitous data’ that is present in many nodes. In Macondo, decisions were made by people unaware of the current state of play. Pressure, flow and historical data were input along with subject matter evaluations. Rolling up the whole model, Hartigan concluded that prior to the event there was a ‘30% probability’ of a blowout as compared to a ‘nominal’ 0.045% probability. Comment: ‘nominal’ 0.045% seems rather high while 30% is clearly too low! More from the paper SPE-181036-MS and from the Lone Star video.
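The arithmetic behind such a rollup can be sketched as follows. This is a minimal illustration only, assuming independent barriers; Lone Star’s actual model maps audited data and dependencies between nodes, and all numbers below are hypothetical.

```python
# Minimal sketch of a probabilistic bow-tie rollup. Assumes the barriers
# fail independently; every probability here is invented for illustration.

def blowout_probability(trigger_p, barrier_failure_ps):
    """P(top event) = P(trigger) x product of P(each barrier fails)."""
    p = trigger_p
    for bf in barrier_failure_ps:
        p *= bf
    return p

# Hypothetical: a likely trigger event behind three degraded barriers.
p = blowout_probability(0.9, [0.8, 0.7, 0.6])
print(f"{p:.3f}")  # 0.302
```

With healthy barriers (failure probabilities of a few percent each) the same rollup yields a tiny ‘nominal’ figure; degraded barriers push it toward the kind of 30% number Hartigan reported.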
Claude Baudoin (Object Management Group and Cébé IT) observed that while the ‘money’ in the industrial internet is mostly in smart grid related activity, oil and gas should not think ‘we are different.’ An IIoT demonstrator at a refinery tracked employees and tagged high risk areas using ‘smart helmet’ technology and other wearables. On the other hand IT/OT convergence is exposing systems to the risk of hacking. In a survey, over half of industry respondents said that standards are important for the IIoT. But what standards? The Standards Leadership Council has carved up the standards space into multiple bailiwicks but this is ‘more in the intent than in the execution.’ The SLC is ‘work in progress.’ The Open Systems Interconnection model IoT standards are an ‘alphabet soup.’ We need IT and OT to collaborate on the IIoT, perhaps starting with the Industrial internet consortium’s free reference architecture. On the security front Baudoin cited the hack of a control system on the BTC pipeline, attributed to the PKK, the Kurdistan Workers’ Party. The skill set required as OT migrates into IT is broader than in the old days of scada. Companies should keep IT architecture, security, governance and sourcing in-house as core skills not to be outsourced. Oh, and ‘expect to be attacked.’ In the Q&A, Baudoin was taken to task for his ‘do not outsource’ dictum. He relented somewhat, saying ‘OK, just keep governance in house.’ But he also cited the case of a chemical company that had all its IT in Bangalore and did not know enough about its systems to re-bid the contract – SPE-181107-MS.
Mike Hauser presented work performed at the Chevron-sponsored CiSoft Center for interactive smart oilfield technologies at the University of Southern California. CiSoft is working to ‘break down the silo walls,’ to enhance efficiency and improve HSE in a move from ‘conventional’ data management to ‘smart’ IT. The lab’s output is commercialized through Hauser’s ‘CiSoft Solutions’ unit. Initial focus for commercialization has been on four ‘high priority’ inventions. Hauser was not very forthcoming as to what these were, but a visit to the CiSoft website located PDFs describing ‘integrating data sources,’ a ‘smart engineering apprentice’ and ‘visual grammar,’ a.k.a. ‘data analytics for users without a technical background’ – SPE-181068-MS.
Despite its large footprint in the field, Yokogawa has not been terribly audible in the SPE/intelligent energy community. So it was good to hear Maurice Wilkins on the company’s role in automating procedures for efficiency and safety. Wilkins’ talk revolved around the importance of standard procedures that are about to ‘change the industry.’ Today the main cause of plant trips and accidents is human error and frailty. An ExxonMobil study of transient operations found that although they only represent 10% of a facility’s life span, they are responsible for 50% of incidents. Enter standard operating procedures (as practiced in aviation) and standards-based decision support. Standards of relevance include ISA 18.2 (alarm management), ISA 101 (HMI management) and the ISA 106/88 procedure automation standard (of which Wilkins is an instigator). Citing the Mogford report on the 2005 Texas City explosion, which found that the plant’s systems were ‘too complicated to start up manually,’ Wilkins offered a quiet plug for Yokogawa’s Exapilot procedural assistant. Exapilot would have halted plant operations as soon as it detected that the alarms were not working. The Abnormal situation management consortium also got a plug – SPE-181019-MS.
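The core idea of procedure automation with precondition checks can be sketched in a few lines. This is an invented illustration in the spirit of ISA 106 style automation, not Yokogawa’s actual Exapilot logic; all step and check names are hypothetical.

```python
# Sketch of automated procedure execution that halts on a failed
# precondition, e.g. a dead alarm system. Steps and checks are invented.

class ProcedureHalt(Exception):
    pass

def run_procedure(steps, preconditions):
    """Execute startup steps only while every precondition holds."""
    done = []
    for name, action in steps:
        for check_name, check in preconditions:
            if not check():
                raise ProcedureHalt(f"halt before '{name}': {check_name} failed")
        action()
        done.append(name)
    return done

alarms_ok = {"working": True}
steps = [("warm up", lambda: None), ("open feed", lambda: None)]
pre = [("alarm system healthy", lambda: alarms_ok["working"])]

print(run_procedure(steps, pre))  # ['warm up', 'open feed']
alarms_ok["working"] = False
try:
    run_procedure(steps, pre)
except ProcedureHalt as e:
    print(e)  # halt before 'warm up': alarm system healthy failed
```

The point is that the checks run before every step, so a failure detected mid-procedure stops the sequence rather than letting an operator press on.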
Eric Cayeux from Norway’s IRIS R&D organization has been researching automated drilling performance and risk. There are many sources of uncertainty in drilling but few are generally considered. Requirements for good drilling performance may be various and complex. Here Cayeux has analyzed the risk ‘big picture’ for an extended reach well to optimize the drilling plan in the face of uncertainty. Monte Carlo simulation was performed across the wide range of inputs to check that safety thresholds are not breached and to figure the optimum path through the multi-dimensional parameter space. Current drilling scenarios are ‘far too deterministic.’ The IRIS DrillWell Center and software also got a plug – SPE-181018-MS.
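The Monte Carlo idea can be sketched as follows. The real IRIS work spans many coupled drilling parameters; this toy version samples a single hypothetical quantity (downhole equivalent circulating density) against a fracture-gradient limit, with all distributions and numbers invented.

```python
import random

# Toy Monte Carlo check of a drilling safety threshold. All parameters,
# distributions and the limit below are hypothetical.

random.seed(42)

def simulate_ecd(mud_density, annular_loss):
    # Simplistic model: ECD = static mud density + annular pressure loss
    return mud_density + annular_loss

FRACTURE_GRADIENT = 1.80  # hypothetical limit, specific-gravity units

N = 100_000
breaches = 0
for _ in range(N):
    mud = random.gauss(1.60, 0.05)    # uncertain mud density
    loss = random.gauss(0.12, 0.04)   # uncertain annular pressure loss
    if simulate_ecd(mud, loss) > FRACTURE_GRADIENT:
        breaches += 1

print(f"P(threshold breach) ~ {breaches / N:.3f}")
```

A deterministic plan using only the mean values (ECD of 1.72 against a limit of 1.80) would look safe; the sampled distribution shows a non-trivial probability of breaching the limit, which is Cayeux’s point about deterministic scenarios.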
Jim Crompton, presenting on behalf of Chevron/CiSoft, described SOSNet, a.k.a. the smart oilfield safety net. At its core is a machine learning component, developed at the USC data science lab, that leverages a large image base of photographs of corroded pipe. The imagery was combined with physical inline inspection data and used to train a neural network to look for defects, rolling in equipment tags and other data sources. The SOSNet information bus links information across multiple heterogeneous data sources via a ‘semantic asset repository.’ This is an ontology-based semantic-web style repository (a triple store?) that can be queried with ‘automatically generated’ Sparql. Information extraction from drawings and images is described as a ‘robust and fully automated’ process – SPE-181048-MS.
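For readers unfamiliar with triple stores, the kind of lookup such a repository answers can be sketched in plain Python. The store, predicates and data below are all invented for illustration; SOSNet’s actual repository is ontology-based and queried with generated Sparql.

```python
# Toy in-memory triple store illustrating pattern-matching queries of the
# kind a Sparql SELECT performs. All subjects, predicates and objects are
# invented examples.

triples = {
    ("pipe:P-101", "has_tag", "TAG-4471"),
    ("pipe:P-101", "inspected_by", "ILI-run-2015"),
    ("pipe:P-101", "defect_class", "external-corrosion"),
    ("pipe:P-202", "defect_class", "none"),
}

def query(subject=None, predicate=None, obj=None):
    """Match triples, treating None as a wildcard (a mini query pattern)."""
    return sorted(t for t in triples
                  if (subject is None or t[0] == subject)
                  and (predicate is None or t[1] == predicate)
                  and (obj is None or t[2] == obj))

# 'Which assets have an external corrosion defect?' -- analogous to
# SELECT ?s WHERE { ?s :defect_class :external-corrosion }
hits = query(predicate="defect_class", obj="external-corrosion")
print([s for s, _, _ in hits])  # ['pipe:P-101']
```

The appeal of the triple model is that equipment tags, inspection runs and defect classifications from heterogeneous sources all reduce to the same subject-predicate-object shape, so one query mechanism spans them all.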
Asgeir Drøivoldsmo of Norway’s Institute for Energy Technology introduced the ‘Man, technology and organization’ (MTO) methodology for optimizing operations and maintenance staffing levels of greenfield projects. The corollary of a move from time-based to condition-based maintenance is that staffing levels are no longer pre-determined. To reap the benefits of condition-based maintenance, a flexible workforce is required. MTO advocates a campaign-oriented workforce with a minimal crew on site plus a ‘campaign crew’ on call – SPE-181102-MS.
David Cameron (University of Oslo) presented the results of the EU Optique program that promised ‘simple oil and gas-oriented access to big data.’ Optique provides ‘ontology-based data access.’ Disparate data sources can be accessed through a graphical query generator that understands terms like ‘well bore.’ Statoil is said to be piloting the approach, as is Siemens for gas turbine maintenance. The ontology was ‘bootstrapped’ from existing database schemas. A demonstrator is said to have shown that federation across six technical databases was possible. An open day-cum-summit was held at Oxford University as Optique enters its final year – the project is to end in November 2017. The final year will include training of IT experts in the use of the system and integration with a geoscience desktop. In the Q&A Cameron was asked if Optique was going to be used ‘commercially’ in Statoil. He replied that this was the case*. Another questioner asked how the Optique approach differed from the many ‘data virtualization’ offerings on the market. The answer was unclear – SPE-181111-MS.
Zachary Borden presented ExxonMobil’s gas lift optimization workflows (Glows) that are automating its gas lift surveillance and optimization effort. In a large asset, there is a good chance that an underperforming gas lift well will go unnoticed. A lot of ExxonMobil production comes from gas lift wells but there are relatively few gas lift specialists. ExxonMobil has tried data-driven and physics-based models and arrived at the conclusion that there are ‘horses for courses.’ Gas lift problems include slugging, intermittent lifting, tubing-casing communication and more. Various physical or machine learning tools and classifiers are good at solving different problems. Support vector machines, random forest and naïve Bayesian classifiers are all available. The trick with Glows is recognizing which tool should be used in what circumstances. The Glows event detector is said to be very successful and easily distinguishes between normal flow and slugging. Glows also runs physics-based wellbore hydraulic models using embedded software (Prosper from Petroleum Experts). The Valve Performance Clearinghouse database at Louisiana State University was also used. Following field trials Glows is now considered a ‘one stop shop’ for well performance monitoring and has contributed to significant production hikes – SPE-181048-MS.
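The normal-flow-versus-slugging distinction the Glows event detector draws can be illustrated with a simple statistical rule. This is a hedged sketch only, not ExxonMobil’s method: slug flow shows large cyclic swings in rate, so a high coefficient of variation in the flow signal flags it. The threshold and the example signals are invented.

```python
import statistics

# Toy slugging detector: steady flow has a small coefficient of
# variation, slug flow a large one. Threshold and data are invented.

def is_slugging(flow_series, cv_threshold=0.25):
    mean = statistics.fmean(flow_series)
    cv = statistics.stdev(flow_series) / mean
    return cv > cv_threshold

steady = [100, 102, 99, 101, 100, 98, 101, 100]   # stable rate
slugs = [40, 180, 35, 170, 30, 190, 45, 160]      # large cyclic swings

print(is_slugging(steady))  # False
print(is_slugging(slugs))   # True
```

A production system would of course work on time-stamped sensor streams and combine several features, but the ‘horses for courses’ point stands: a cheap statistical test suffices here, while problems like valve diagnosis call for physics-based models or trained classifiers.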
* We have it on reasonably good authority that this may not in fact be quite accurate.
© Oil IT Journal - all rights reserved.