BP presentations at the SPE’s digital energy (DE) event and PPDM’s annual Houston event demonstrate the growing role of open source software in high-end upstream information management. At DE, Mohamed Sidahmed presented work that leverages the ‘R’ statistical programming language to ‘augment’ operations monitoring by mining unstructured drilling reports. Unstructured textual data, hitherto the ‘missing link’ in the information workflow, contains valuable information on the root causes of deviations from plan and helps address ‘inadequate reaction to real time changes.’
R-based text analytics leverage the collective knowledge stored in BP’s Well Advisor, looking for interesting patterns. Visual representations (word clouds) integrate with existing surveillance systems and can provide early warning of, for instance, pump failure. More sophisticated techniques such as ‘latent Dirichlet allocation’ help identify precursor events hidden in the data. Reports with similar content can then be attached to the root causes of non-productive time and rarer high-impact events. Data-driven learning is now embedded in BP’s CoRE real time environment.
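By way of illustration, the following minimal Python sketch shows the kind of topic modeling Sidahmed describes, using scikit-learn’s latent Dirichlet allocation rather than R. The report snippets and parameters are our own invention, not BP’s Well Advisor workflow.

```python
# Illustrative only: a tiny LDA topic-model sketch for drilling-report text.
# Not BP's Well Advisor workflow; report snippets and parameters are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reports = [
    "pump pressure dropped, circulated bottoms up, observed mud losses",
    "stuck pipe while tripping out, worked string free, back reamed to shoe",
    "mud pump failure, swapped fluid end, resumed drilling ahead",
]  # hypothetical daily drilling report snippets

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reports)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Top words per topic are candidate themes (e.g. pump trouble, stuck pipe)
terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```

In a real deployment the corpus would run to thousands of reports, and the resulting topic mixtures, rather than word clouds alone, would be matched against downtime root causes.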
Meanwhile at the PPDM Houston data management symposium Meena Sundaram presented a ‘self service’ architecture deployed at BP’s Lower 48/Gulf of Mexico unit.
BP’s
service-oriented architecture is now up and running. Applications and
data sources are exposed as Rest endpoints, ‘providing scalability and
adaptability to technology innovation.’ The infrastructure stack builds
on a Cloudera ‘data lake’ of over 35 domain-specific data sources.
These feed into business intelligence and descriptive analytics
applications. The system also supports enterprise level activities from
production accounting to budgets and reserves reporting along with
bespoke ‘on demand’ business scenarios and ad hoc queries.
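For readers unfamiliar with the approach, consuming such a REST endpoint is straightforward. The Python sketch below is ours; the URL and field names are placeholders, not BP’s actual services.

```python
# Hypothetical example of reading from a REST-exposed data source.
# Endpoint URL and JSON field names are placeholders, not BP's services.
import requests

resp = requests.get(
    "https://example.com/api/v1/wells",                # placeholder endpoint
    params={"field": "EXAMPLE", "status": "producing"},
    timeout=30,
)
resp.raise_for_status()
for well in resp.json().get("items", []):
    print(well.get("name"), well.get("daily_oil_bbl"))
```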
Sundaram describes enterprise-level data access as a ‘chicken and egg problem. Do you clean the data or show the data?’ BP has opted for the ‘show’ option, along with governance and data improvement with use. Today the data lake uses Cloudera MapReduce/HDFS, Voyager GIS data discovery and Amazon CloudSearch. ‘Big data’ tools including Solr and Lucene are also used. The toolset is now evolving to offer prescriptive analytics. BP’s near term goal is to move the supporting infrastructure to the Amazon Web Services cloud. More from the PPDM Houston conference and from SPE Digital Energy in the next edition of Oil IT Journal.
PPDM is planning to ‘morph’ into a professional society along the lines of the SPE/SEG and the like. A professionalization committee has been set up to plan the transition with members from BP, Halliburton, and Shell. Last year ECIM, PPDM and CDA signed a memorandum of understanding to investigate organization, governance and membership terms and to seek stakeholder agreement. To date, a ‘high level future vision’ has been defined but ‘important early details are still being driven out.’ Making sure that the morphed PPDM meets the expectations of members is proving a challenge.
The new society will be dedicated to the recognition of data as a critical asset for industry and to the data manager’s professionalism. Certification and training are also on the agenda as is the establishment of a ‘body of knowledge’ of upstream data management. Still up for debate is the role of the new society in the development of standards and best practices. Members can join in the debate on ppdm.org. Professionalization of PPDM, ECIM and CDA was mooted at last year’s ECIM with the promise of a roadmap early in 2015 but this seems still to be work in progress.
A common theme at the SPE Digital Energy event held at The Woodlands, Texas earlier this year (report in next month’s Journal) was data-driven analytics. Often this involves using some fancy statistics on a ‘training’ data set, then applying the learning to data that was not included in the training and seeing if it works. The approach combines highfalutin science in the analysis phase with extreme statistical naiveté in prediction. I call this the ‘suck it and see’ (Sias) method.
In my early days in the North Sea we used Sias to depth convert our seismics. But of course we were using small data—sometimes very small data—maybe a handful of wells with dodgy velocity measurements. This small data was augmented with slightly larger undoubtedly wrong data derived from seismic surveys. We then interpolated or extrapolated to provide a prognosis of the well tops which were usually a fair way off target. I am sure that things have changed since then but I’m not sure that they have got a lot better.
Today, modern computing techniques let us apply statistics to much larger data sets and faster data streams than was previously possible. The assumption is that the more data you have, the more likely you are to come up with something significant. This is of course a specious argument. Think for a minute of as near a ‘perfect’ correlation as you would like to have. One that would have you rush out and bet the house on its ‘predictive analytics’ capability. Well you would be wrong to do so, because I hid this fabulous correlation in a huge data set where it arose purely by chance. Its predictive value was nil. Any apparent correlation can come about purely by chance, given a large enough set of data. Indeed, the bigger the data, the more likely it is to contain completely spurious results.
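The point is easily demonstrated. The following Python sketch, a toy experiment of our own rather than anyone’s field data, trawls a few thousand columns of pure noise for the best correlation with an equally random ‘target.’

```python
# Toy demonstration: with enough random variables, 'impressive' correlations
# appear purely by chance. All of the data below is noise.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_vars = 50, 5000
data = rng.standard_normal((n_samples, n_vars))
target = rng.standard_normal(n_samples)          # nothing to predict here

# Correlate every column with the target and keep the best
corrs = np.array([np.corrcoef(data[:, j], target)[0, 1] for j in range(n_vars)])
best = int(np.argmax(np.abs(corrs)))
print(f"best of {n_vars} random variables: r = {corrs[best]:.2f}")
# Typically prints an r of 0.5 or better - with zero predictive value.
```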
Of course I am not inventing anything here. The search for significance in statistics follows a well-trodden path. A statistical result should be evaluated against the ‘null hypothesis,’ i.e. the possibility that it has come about by chance. Null hypothesis testing is a standard piece of kit, especially in the life sciences where ‘evidence-based’ medicine is popular—making one wonder what else medicine might be based on, but I digress.
The usual statistical test of the null hypothesis is the P-value, the probability of obtaining a result at least as extreme as the one observed if the null hypothesis were true. I’ll get back to this in a minute, but first I wanted to share with you a recent newspaper article that questions the widespread but uncritical use of artificial intelligence in the field of neuroscience.
The Le Monde piece was written by Karim Jerbi (U Montreal), who took a pot shot at the use of supervised learning methods, a technique that has quite a following in the digital energy community. The method tries to classify data according to natural affinity; it might, for instance, be used to distinguish between different rock types based on log cross plots. It applies artificial intelligence to a subset of the available data and then evaluates its ability to classify the remainder (yes, this is Sias).
Jerbi’s team (which works on monitoring brain activity, i.e. time series rather like oilfield monitoring data) showed that extremely ‘good’ classifications could be achieved from what were in reality completely random datasets. He observed that the use of bigger, complex, multi-disciplinary data sets makes it hard to evaluate the likelihood of meaningful results and called for better policing by the community (of neuroscientists) of published results. It may be hard to figure out just how to evaluate a P-value from such data to check the null hypothesis.
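A toy illustration of the pitfall follows. This is not Jerbi’s code; it simply shows how ‘leaking’ information from the full data set into feature selection yields impressive cross-validated accuracy on pure noise.

```python
# Illustrative sketch: selecting 'informative' features on the full dataset
# *before* cross-validation inflates accuracy even when labels are random.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 2000))      # 40 'recordings', 2000 noise features
y = rng.integers(0, 2, 40)               # random class labels

X_leaky = SelectKBest(f_classif, k=10).fit_transform(X, y)   # the leak
score = cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y, cv=5).mean()
print(f"apparent accuracy on pure noise: {score:.0%}")   # typically well above 50%
```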
I am sure that the same could be said for oil and gas use of AI where, although it may be hard to figure a P-value, at least one ought to try. I thought that I’d do some big data experimenting myself in the form of some full text searches for ‘null hypothesis’ on the spe.org website. This returned a measly five references. A search for ‘P-value’ did better, with 237. One of these caught my eye. P-value testing is mentioned in the SPE’s style guide for authors, which deprecates the use of words like ‘very’ and suggests instead that ‘to express how significant results are ... report the P-value.’ It seems as though few SPE authors are reading the style guide, because a search for the word ‘very’ comes up with around 250,000 references! While this is not a very scientific investigation, it supports the (perhaps obvious) notion that there is a tendency to put a positive spin on a result rather than engage in rigorous analysis.
Just to confuse the picture further, Nature recently reported that another publication, Basic and Applied Social Psychology has banned the publication of P values, as such statistics were ‘often used to support lower-quality research.’ This is now the subject of a flame war with the statisticians beating up on the psychologist surrender monkeys.
Another item in Nature advocated an attack on ‘sloppy science’ derived from abusive ‘tweaking’ of statistical results and called for a register of experimental design and analytics prior to publication.
Getting back to big data and published research I think that the most significant development in this space is the ‘reproducible’ approach as exemplified by the Madagascar open source seismic imaging movement which advocates the publication of both algorithms and data.
This approach could apply equally to large data sets with complex statistical deductions. Putting the data into the public domain would allow other researchers to check the logic. As AI plays a growing role in operations and automation, the usual argument of data confidentiality may be hard to justify, particularly when results are baked into safety critical systems.
How did your involvement with Siemens begin?
DS—The pioneering operations intelligence platform XHQ was developed by IndX; Mario was part of the original team. I joined after Siemens acquired IndX. Today our Vizion Packs for XHQ incorporate best practice and domain knowledge obtained in the downstream and, increasingly, the upstream. We transform raw data into XHQ, our ‘bread and butter.’ We are Siemens’ first official XHQ partner and are now developing our own stuff to plug gaps in the oil and gas data landscape.
What gaps are you trying to fill?
DS—There is usually a big time lag between RT data and monthly or quarterly reporting, by which time operational damage may be done. At the corporate level, there is often a gap between strategy and operations. Big issues like energy costs may be hard to track with current tools. In maintenance, while predictive analytics packages can give early warning of failure, there may be a gap in deciding on the best course of action. A KPI dashboard may show, say, ‘54% green,’ or a fancy pie chart, but this is not actionable information. Our solution lets users set targets and figure the value of different actions. KPIs/KOPs need to be ranked by importance. We are data source agnostic and our experience allows us to set up and maintain these complex data linkages.
What about hard to capture data?
MB—We can access pretty well anything. We can talk to historians, relational sources or hand-entered data, all of which can be combined into an optimal KPI.
DS—We have been working on an energy intensity index, calculated in real time from multiple systems and allowing for comparison across units and with industry best practices.
Do folks really run operations by looking at KPIs? Aren’t there opportunities to feed your RT results back into what’s driving the plant?
DS—I am a big fan of closing the loop, leveraging operations intelligence and turning data into actionable information. For instance we can do analytics to identify equipment ‘bad actors’ (EBA) where replacement capex is less than maintenance cost. Then we can roll in the risk of unplanned outages, produce, say, a list of the top five EBAs, and publish a work request for their replacement as action items. Some tasks need to be automated. Some areas are not amenable to a truly closed loop.
So where is this tool running? On the platform or in head office?
MB—It will be running in head office, reading data from a central maintenance historian. But it could feed back to systems on the platform, say telling operators where they are in the big picture, showing emerging trends such as HSE indicators.
DS—HSE is an interesting area. Often indicators are lagging, based on near misses and incident counts. We provide leading indicators as a matrix showing performance of different parts of the business and where the next ‘perfect storm’ may be brewing! We can then plug into HR/training systems and show violations, overdue work orders or where significant process disturbances or alerts are happening on one unit or shift. These leading indicators can show where the next high potential incident could occur and where action needs to be taken to reduce risks during normal operations.
Where’s your main competition? In-house development?
DS—That’s probably true. XHQ has 15 years of development and best practices behind it, yet some still want to develop their own stuff! The reality is that bespoke software will always be trailing-edge. We have put all our expertise into our tools. We developed and maintain the XHQ Starter Pack and Upstream Intelligence for Siemens. Our own Upstream Vizion Pack adds our view of industry best practices. Using these products avoids having a busload of consultants working on bespoke development. We are also very familiar with data sources and we know where the data is. More from IT Vizion.
Sergey Fomel (U Texas) opened the proceedings at the 10th Madagascar school, held in Harbin, China earlier this year. Fomel is a man on (at least) two missions: first, to promote free, open source seismic imaging software in the ‘revolutionary’ tradition of Eric Raymond’s book, ‘The Cathedral and the Bazaar,’ and second, to encourage researchers to publish not just results but also data and algorithms.
Madagascar has three fundamental pillars: a simple ‘RSF’ file format for array data, building-block programs that can run on the command line using Unix-style pipes, and the ‘SCons’ build system for data processing and reproducibility. Processing flows are controlled through configuration files written in Python.
Madagascar has links to Brown University’s Icerm unit, which promotes reproducibility in computational and experimental mathematics, advocating a ‘culture change that integrates reproducibility into research.’ Moreover, ‘Journals, funding agencies, and employers should support this culture change.’
Yang Liu (Jilin University) used field data processing examples to show how much can be achieved with Madagascar and Python/SCons, producing gathers, stacking diagrams, 3D cubes and (much) more with just a few lines of code.
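For the curious, a Madagascar processing flow is just a short Python SConstruct along the following lines. This is an illustrative sketch with arbitrary parameters, not one of the school’s exercises.

```python
# Illustrative Madagascar SConstruct sketch (arbitrary parameters).
# Each Flow() wraps a command-line program that reads and writes RSF files;
# SCons re-runs only the steps whose inputs or parameters have changed.
from rsf.proj import *

Flow('spike', None, 'spike n1=1000 k1=300')          # synthetic trace
Flow('filtered', 'spike', 'bandpass fhi=2 phase=y')  # pipeable building block
Result('filtered', 'graph title="Filtered spike"')   # reproducible figure

End()
```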
One slide in Fomel’s presentation neatly summed up the revolutionary Madagascar philosophy with a quote from renowned physicist Richard Feynman, viz. ‘Science is the belief in the ignorance of experts.’ We’re not sure how the ‘revolution’ metaphor was received in China, but Madagascar appears to be very popular there. A goodly proportion of Madagascar downloads and website visitors (more than from all of the EU) come from China. Check out the Harbin school materials here.
The US Energy Information Administration (EIA) has added an Excel data interface, combining its own energy data with ‘Fred’ financial data from the St. Louis Federal Reserve. The EIA provides some 1.2 million energy data series while Fred adds 240,000 economic series. Preferred data streams can be captured in an Excel workbook and updated later with a single click. These data sources were already accessible via application programming interfaces (APIs). The EIA’s Mark Elbert told Oil IT Journal, ‘All our data is also available via the agency’s free data API. You can browse the API online and see sample calls. The API calls allow up to 100 series to be fetched at a time. For systems that want to ingest entire data sets there is a bulk data download facility. The files and a manifest are updated with each release.’
‘Within the spreadsheet, you can browse each data repository by category or search by keywords to find data IDs and to download the series information and data. Once the desired data series are downloaded, all of Excel’s rich functionality is available to create analyses and graph results.’ Both the EIA and the Federal Reserve offer the data services and Excel add-ins free of charge as part of their commitment to open data.
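For programmers, the same series can be pulled with a few lines of Python against the EIA data API. The sketch below is ours; the API key and series ID are placeholders and the response layout is as documented for the series API at the time.

```python
# Hedged sketch of a call to the EIA series API (v1-style, as documented at
# the time). API key and series ID below are placeholders.
import requests

API_KEY = "YOUR_EIA_API_KEY"
SERIES_ID = "PET.WCRFPUS2.W"    # example only - browse eia.gov for valid IDs

resp = requests.get(
    "http://api.eia.gov/series/",
    params={"api_key": API_KEY, "series_id": SERIES_ID},
    timeout=30,
)
resp.raise_for_status()
series = resp.json()["series"][0]        # assumed response layout
for date, value in series["data"][:5]:
    print(date, value)
```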
The 12.1 release of Caesar Systems’ PetroVR economic modeling package introduces multi-core computing to accelerate simulations, along with enhanced audit tracking of projects. CTO Leandro Caniglia told Oil IT Journal how the parallel computing solution was implemented. ‘Our stochastic simulations involve hundreds or thousands of Monte Carlo iterations, some spanning decades of data. We tried several alternative ways of parallelizing iterations, first on the local network and later in the Amazon web services (AWS) cloud. But the cloud brought two practical problems, data confidentiality and version control. Clients’ confidential models should not leave a controlled environment. It was also tricky, when distributing computation across multiple nodes, to ensure that all were running exactly the same build of the software. Hence our interest in leveraging all the cores installed in a local machine.’
‘An application running on one core can spawn as many independent processes as there are cores, and then let these run the iterations. Once they are through, processes send their results to the application for presentation to the user. There are several techniques for communicating between a process and the application; we chose TCP sockets, which lets our code run on a single machine, across the network or in the cloud. Although writing code for parallel execution proved harder than anticipated, the results are worthwhile as we are seeing near linear program scalability with the number of cores used.’ More on PetroVR from Caesar Systems.
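The general pattern, if not PetroVR’s socket-based implementation, can be sketched in a few lines of Python using one worker process per core.

```python
# Generic sketch of per-core Monte Carlo fan-out, not PetroVR's implementation
# (which communicates over TCP sockets). The 'iteration' below is invented.
import os
import random
from multiprocessing import Pool

def run_iteration(seed: int) -> float:
    """One hypothetical iteration: discounted value of a noisy 20-year profile."""
    rng = random.Random(seed)
    return sum(rng.gauss(100.0, 15.0) / (1.1 ** year) for year in range(20))

if __name__ == "__main__":
    with Pool(processes=os.cpu_count()) as pool:
        results = pool.map(run_iteration, range(10_000))
    print(f"mean estimate over {len(results)} iterations: {sum(results) / len(results):.1f}")
```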
eInfoChips (EIC) claims a 25x performance improvement for its Nvidia Tesla-based seismic imaging software. The solution was unveiled at Nvidia’s 2015 GPU Technology Conference in San Jose, California. EIC implemented a Kirchhoff pre-stack depth migration algorithm on high-performance Nvidia Tesla K40 GPU accelerators. The 25x speedup was benchmarked against an Intel Core i7-2600 @3.40GHz system with 8GB memory. The port involved adding some 190 lines of Cuda to the 980 lines of C code in the kernel.
EIC claims expertise in parallelizing code for GPU deployment in automotive, medical, industrial automation and other verticals. CMO Parag Mehta explained, ‘Parallel programming is an art, mastered over multiple code migration projects. Our team of parallel programming experts has codified the leading practices with a mature checklist to achieve the maximum performance on GPU accelerators.’ More from EIC.
dGB has great plans for its OpendTect seismic interpretation package, which it is planning to upgrade and ‘make truly competitive with other seismic interpretation systems.’ The new edition will launch at the 2015 SEG New Orleans convention under the name OpendTect Pro (OTP). OTP will offer an improved interpretation workflow, mapping functionality, a bidirectional link to Petrel and a PDF-3D plugin for sharing 3D images.
Commercial licenses of OpendTect will be automatically converted to OpendTect Pro licenses at no additional cost. dGB is also working on a new HorizonCube algorithm for inclusion in the Pro edition. The PDF-3D plugin allows users to capture a 3D scene to a 3D PDF file which can be shared with colleagues, managers and partners, who can open it in the free Adobe Reader or embed it in a PowerPoint presentation. Check out the 3D-PDF technology and visit dGB.
Neos GeoSolutions has announced a ‘second generation’ of its NeoScan geoscience interpretation package. NeoScan blends in-house datasets, public domain data and proprietary analytics to high-grade exploration acreage. The methodology includes seismic and potential field data along with Lidar and Shuttle radar topography data.
Saudi Aramco supplier Reveille Software has been awarded a US patent (No. 8,959,051) for ‘Offloading collections of application monitoring data,’ i.e. the asynchronous collection and storage of information from cloud or hybrid-cloud sources to a document management system.
Geokinetics reports successful testing of its AquaVib marine vibrator, a ‘long awaited’ alternative to air guns.
B-Scada has announced a new data connector to allow real-time visualization of data from Monnit’s low cost wireless sensors.
A new electronic logbook from Emerson Process Management lets operators electronically document activity, streamlining shift changes and enhancing safety and audits. The e-logbooks offer text and structured search for category, time span, equipment tags and more. Emerson also recently announced the Rosemount 4088 multivariable transmitter for oil and gas applications, providing differential pressure, static pressure, and temperature measurement from a single transmitter.
ESG Solutions has announced a new release of its SuperCable low-frequency, GPS time-synchronized downhole microseismic array for monitoring micro-earthquakes during fracking.
Energy Solutions Intl.’s Pipeline-Manager 4.0 includes new leak detection reports, water settlement analysis, enhanced GUI configuration from VisualPipeline and more.
FaultSeal’s FaultRisk 4.3 includes quick look Allan Maps with hydrocarbon contacts, new calibration tools and fault displacement statistics.
FFA’s GeoTeric 2015.1 includes a new ‘spectral expression’ tool for interactive optimization of seismic spectral content. A new link to DecisionSpace (developed under Landmark’s iEnergy partnership program) allows for bi-directional data exchange. The Petrel link has also been enhanced.
FracFocus 3.0, the US chemical disclosure registry, improves data accuracy and public search, with data now available for download in machine readable format.
V 5.16 of KepWare’s KepServerEX includes a new scheduler plugin that optimizes available telemetry bandwidth while polling large deployments of flow computers and RTUs. The scheduler also prevents ‘rogue’ clients from hijacking bandwidth.
Oniqua’s Analytics Solution 6 has been certified to run on the SAP NetWeaver platform. Oniqua provides maintenance, repair and operations process optimization to clients including BP, BHP Billiton and ConocoPhillips.
A new release of Rock Flow Dynamics’ tNavigator (4.1.3) allows for the import of Rescue-formatted geological models, introduces an enhanced modeling workflow and improves the performance of AIM-based compositional models.
Wavefront Technology Solutions has announced a software tool for designing and modeling its Powerwave well stimulation process.
The Pipeline open data standard association (PODS) has issued a position statement with regard to recent developments in Esri’s software for the transmission and gathering pipeline market. In 2014, Esri released new ArcGIS location referencing for pipelines (Alrp), which captures pipeline routes and linear events in an Esri geodatabase. Esri has also released a new utility and pipeline data model (Updm) for customers that need a single model to support both distribution and transmission pipeline assets. Updm builds on the Esri gas distribution data model, embedding the Alrp technology to handle the linear referencing requirements of transmission systems.
The association observes that while it might appear that Updm competes with PODS, there are some key differentiators. Notably, the fact that PODS is supported by an association of operators and vendors with ‘a common goal to set the standard for modeling gathering and transmission pipelines.’ PODS ‘feels that this collaboration and knowledge sharing brings additional value to our membership.’ In consequence, PODS is working with Esri to see how the membership could use Alrp in conjunction with both the PODS relational and PODS Esri spatial implementation patterns. PODS is seeking volunteers for a technical working group planned for Q2 2015 to investigate the PODS/Alrp synergies.
PODS has also published a draft charter for phase 2 of its data standards for the pipeline construction workgroup. The joint PODS/Iploca* initiative seeks to extend model scope prior to handover by loading as-built data and documentation into an operator’s PODS database. The workgroup is also to demonstrate the value of digital data integration between contractors and operators with a new construction extension to the model.
Earlier this year PODS also adopted the charter for its
offshore work group which is to expand the current PODS relational
database model with new features and tables for offshore pipeline
operations. New model components will include a schema for subsea
facilities and inspections. Scope will cover facilities extending from
a subsea wellhead to landfall. More from PODS.
* International pipeline and offshore contractors association.
Tough times meant that attendance was down at the 17th
SMi E&P Data Management conference, held earlier this year in
London. Some may wonder what else can be said on the topic of upstream
data. Quite a lot it would seem as the SMi event’s coverage expands to
new domains (construction data), geographies (Brazil, Kuwait) and
subject matter.
Sergey Fokin of Total’s
Russian unit described a pilot investigation into business
continuity—measured as mean time to disaster. The investigation
targeted geoscience along with cross functional activities such as data
management, geomatics and IT with an assessment of data criticality.
What happens if a particular office or function is unavailable due to a
power cut or a major IT issue? How long can the business continue to
function? What data and processes are affected? What contingency plans
are in place? One measure of disruption is mean time to disaster—the
length of time the business can carry on unaffected. But some events
may be harder to categorize. For instance, if a geology cabin burns
down during drilling, it may be hard to make a decision on where and
when to perforate. The potential financial loss from a perforation in
the wrong place may be far higher than the cost of a few days of
downtime. So a simple mean time to disaster analysis may fail to
capture the risk. Fokin observed ‘You can’t just guess—you need to base
such decisions on the facts.’
The study has
led to major reorganization with a duplicate backup site in a remote
facility and disaster recovery kit available in the server room along
with training and testing. The disaster recovery architecture includes
auto sync with Vision Solutions’ ‘Double-Take’ and NetApp SnapMirror.
Critical apps such as Gravitas, Geolog, Petrel, Total’s Sismage and
remote Eclipse are available in under two hours. Multiple stakeholders
were involved: IT, G&G, HSE and support services. Critical GSR
processes are now available in under four hours at the backup site and
several notebook computers are available for critical GSR activities.
Mikitaka Hayashi (from Japan-based EPC JGC Corp)
showed how Aveva SmartPlant has revolutionized construction data
management and handover. Hayashi recapped the difficulty of plant and
equipment data management during construction and (especially) handover
to the owner operator. Despite many attempts to build on industry
standards such as ISO 15926, the solution here is a commercial one,
Aveva SmartPlant. This supports complex activities such as concurrent
engineering and data management with multiple rapid changes early on in
a project’s lifetime. It can be hard to keep the many stakeholders
happy. JGC Corp employs a data steward for process control systems
deployment and instrumentation. It has developed its own JGC
‘engineering data integrity and exchange’ tool (J-Edix) for populating
its data warehouse and sees joint EPC and O&M data management as
the way ahead.
Kuwait Oil Co.
(KOC) offered two presentations on its production data and IT
architecture. Khawar Qureshey showed how a comprehensive line-up of
software and in-house developed tools is connected with Schlumberger’s
Avocet workflow manager. The aim is to have standard optimization
models and processes across data acquisition, analysis and into the
E&P database. This involves using multiple tools and interfaces, and
in-house IT/integration expertise is ‘developing gradually.’
Schlumberger’s
venerable Finder database, the main data repository, has been
customized for KOC. Schlumberger’s Avocet has likewise been extended
with a field back allocation module. Other solutions have been
developed for various artificial lift scenarios. A field data quality
and optimization system (Fdqos) has been developed in-house using
mathematical programming techniques to optimize over the whole
workflow. Fdqos delivers recommendations/strategies (open well x, close
well y, raise/decrease production from well z) combining facilities
data from Finder with production rate estimates from Decide! The
solution has now been deployed across a dozen gathering centers. KOC is
now working to integrate Fdqos with its P2ES ERP system and with the
Halliburton-based Kwidf digital oilfield.
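As a purely illustrative sketch of the kind of mathematical programming involved, and emphatically not KOC’s Fdqos, a tiny linear program can allocate well rates against a gathering-center capacity limit.

```python
# Illustrative only (not KOC's Fdqos): choose well rates to maximize production
# subject to a gathering-center capacity constraint. All numbers are invented.
import numpy as np
from scipy.optimize import linprog

max_rates = np.array([1200.0, 800.0, 1500.0])   # hypothetical well potentials, bbl/d
capacity = 3000.0                               # hypothetical gathering-center limit

# maximize sum(rates)  ==  minimize -sum(rates)
res = linprog(
    c=-np.ones(3),
    A_ub=[np.ones(3)],                          # total rate <= capacity
    b_ub=[capacity],
    bounds=list(zip(np.zeros(3), max_rates)),
)
for i, rate in enumerate(res.x):
    print(f"well {i + 1}: produce {rate:.0f} bbl/d")
```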
Grahame Blakey (GDF Suez)
observed that schematics often show GIS at the center of the
upstream technology world. This is wrong! Exploring for and producing
oil and gas is our core business and needs to be at the center of the
picture, with a constellation of disciplines and software around it.
GIS then plugs in to any one of these tools as a valuable enabler. The
key to GIS, then, is integration. This can be at the technology level—but
also at the corporate strategy level. GDF Suez’ approach to GIS and
other IT integration leverages the Prince2 framework.
GIS is integrating a plethora of applications and domains but always
inside the overarching E&P data architecture. There is a
‘deliberate effort not to build a GIS silo.’ Blakey recommends avoiding
the GIS word and prefers to speak of ‘mapping for the masses.’ But
under the hood, a lot is going on. Data QC with automated update jobs,
training, integration with SharePoint, the Flare EP Catalog and more.
GDF now requires GIS-formatted data from its contractors. In the
Q&A Blakey opined that 3D functionality in GIS was ‘underwhelming.’
Dan Hodgson spoke from the heart and from 20 years of experience of technology refresh projects, latterly with UK-based DataCo.
Hodgson classifies technology refresh projects as minor (once per
year—an app upgrade), intermediate (app refresh every 3-5 years) and
enterprise, every 10-15 years with a change in the whole subsurface
portfolio. The latter may take a year to do and cost hundreds of
millions. These used to be Landmark upgrades, more recently they have
been to Petrel/Studio. Technology has moved from Unix to Linux and from
Linux to Windows. There is no handbook available for an enterprise
upgrade or technology refresh. If there were such a book, you would
jump straight to page 492, ‘troubleshooting!’ At the Schlumberger
forum last year, Chevron presented a $300 million technology refresh
that resulted in a ‘25% productivity increase.’ But Hodgson warned that
for the last Studio project he was involved in, ‘nobody knew the
product, including Schlumberger.’ In another, data migration required a
tenfold increase in disk space. Data migration can take an unexpectedly
long time. You may have 10 terabytes to shift but the database only
ingests 200GB/day. Hodgson recommends avoiding a single-vendor
refresh; a multiple-vendor plus in-house approach is best. A lot can go
wrong. Asked in the Q&A if he recommended a project management
framework, Hodgson replied that while the majors all use framework-type
approaches, what is really key is a good project manager. Asked why a
company might embark on a $300 million project he expressed a personal
opinion that such moves are not driven by a business case but rather
by emotional decisions and peer pressure. ‘Maybe it was just time for a
change.’
Petrobras’
Laura Mastella showed how closely data management and a business case
are related. Petrobras’ geologists’ focus recently shifted from
clastics to carbonates, creating a need for more data on the company’s
cores and cuttings. Petrography was a key enabler, requiring easy access to all
data types for interpretation. Enter Petrobras’ ‘Integrated technology
E&P database’ that replaced Excel-based data hoarding. The
system was five years in the making and now provides a single entry
point to multiple systems, linked by a unique well identity/table and
controlled vocabularies for petrography and other domains. Mastella
advises ‘make friends with the lab rats, otherwise they’ll stay with
their spreadsheets.’ Users get an integrated view of rock data via a
dashboard of lithotypes and summary poro-perm data. The system ‘brings
rock data into the decision making process.’
Wolfgang Storz presented a subsurface data quality management (DQM) project at RWE–DEA.
There are notionally as many as ten dimensions of data quality but
Storz prefers a simple split between formal and technical DQM. The
formal side comprises the quality rules while the technology performs
the conformance checking. Nonetheless there is overlap and always a
‘credibility’ issue, which requires subject matter experts for
judgement. In the end the notion of a ‘single version of the truth’
that is valid for every data type may be an illusion—especially for
more subjective information like formation tops. RWE has cherry-picked
the PPDM business rules. After checking commercial offerings RWE
decided to roll its own solution. Storz found the IT guys were really
good at coding business rules. DQM metrics are now visible as traffic
light displays and also in map view with a standard DQ symbology. Storz
concluded that DQM needs to be a part of the business culture. Data
managers need to have high status and competency to push back to
geoscientists.
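The kind of business rule in question is simple enough to sketch. The example below is illustrative, not RWE’s implementation or the PPDM rule set itself.

```python
# Illustrative data quality rules rolled up into a traffic-light score.
# Not RWE's implementation nor the PPDM business rules themselves.
wells = [
    {"uwi": "W-001", "spud_date": "2014-03-02", "latitude": 53.2},
    {"uwi": "W-002", "spud_date": None,         "latitude": 91.0},
]  # hypothetical well header records

rules = {
    "spud date present": lambda w: w["spud_date"] is not None,
    "latitude in range":  lambda w: -90.0 <= w["latitude"] <= 90.0,
}

for name, rule in rules.items():
    passed = sum(rule(w) for w in wells)
    share = passed / len(wells)
    light = "green" if share == 1.0 else "amber" if share >= 0.9 else "red"
    print(f"{name}: {passed}/{len(wells)} pass -> {light}")
```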
Hussain Zaid Al-Ajmi presented KOC’s
partially automated E&P data validation (PADV). PADV seeks to
harmonize access to different data sources and to reduce data gaps and
redundancy. Halliburton’s PowerExplorer is
deployed as a front end to a master OpenWorks repository with
authenticated data and standard naming conventions. Schlumberger’s
Finder, eSearch, LogDB and GeoFrame now sit behind PowerExplorer. KOC
has worked to automate data workflows and business rules with scripts.
The PADV is now considered a KOC best practice.
Chris Frost (DataCo)
offered insights into document migration into an EDMS. Frost is also a
hands-on coder and likes to challenge internal processes, support
internal tool development and provide support for scripts. Frequently
document managers lack the scripting skills needed to perform data
mining and folder re-organization that is required prior to migration
and will use a time-consuming and error-prone manual approach. On the
other hand, hand coding from scratch has its own costs and risks. Enter
DataCo’s ‘IQ’ toolkit
which, according to Frost, provides a happy medium between hand coding
and a labor-intensive manual approach. IQ offers stored procedures in SQL Server
for deduplication, taxonomy building and keyword search. Documents or
equipment tags can be recognized (even on scans), classified and
captured to a SQL Server database. More from SMi Conferences.
Speaking at the recent IQPC Canadian oil and gas security conference, Chevron’s
Zoltan Palmai outlined the complex security challenge of a major
operator. An extended supply chain, joint operating agreements and
reporting mean that corporate systems are not only exposed to direct
attacks, but also potentially at risk from multiple third party
systems. Palmai advocates a clear analysis of roles and
responsibilities. The process starts with a risk-based evaluation of
partners that will inform an IT operating model which is included in
joint venture and other contracts. Joint ventures are positioned in a
value at risk/likelihood of breach matrix. Risks can then be ranked and
an appropriate IT mitigation strategy applied.
The key to
accessibility from multiple stakeholders is identity and access
management (IAM). ‘Understanding and managing who has access to what is
core to IT security.’ Today IAM is at an inflection point as mobile
users, cloud-based systems and endpoints with different operating
systems are commonplace. Happily identity federation is maturing and
novel protocols can deliver IT services securely across system
boundaries. HTTP-based applications can support a wide range of devices
and trust frameworks from third party identity providers.
Oasis’ security assertion markup language (SAML)
has matured to the extent that it ‘no longer requires an encryption
expert.’ Many popular languages now have a SAML API and third party
providers offer IAM orchestration solutions. Nevertheless, few
individuals are conversant with the technical details of the new IAM
and explaining the change to management ‘has proved challenging.’
If there remains any doubt as to the risks that large organizations run, these were dispelled by Chris Shipp (Fluor/DoE Strategic petroleum reserve) who cited a 2014 hack
that caused ‘massive damage’ to a German steel factory. Shipp offered
practical advice on specific risks from mobile devices or from hacks
that come in from a vendor’s compromised network. He suggests an email
sandbox to check dodgy links as a component of a web traffic ‘kill
chain.’ Companies spend a disproportionate amount of their security
budget on prevention. More should go towards remediation and recovery
with a structured incident response. Shipp recommends a Valve Magazine analysis as bedtime reading. More from IQPC.
Michael Burke is the new CEO of Aecom, succeeding retiree John Dionisio.
Tore Sjursen is now Aker Solutions’ executive VP operational improvement and risk management. Knut Sandvik takes over as head of Aker MMO.
US regulator BSEE is to build an engineering technology assessment center in Houston and is looking for a manager.
Mike Taff succeeds Ron Ballschmiede as CB&I’s executive VP and CFO. Taff hails from Flowserve.
Daryl Crockett has joined ECCMA as Data Validation Expert. She is CEO of Validus.
Express Energy Services has appointed former NOV executive, Mark Reese, as chairman of its ES Platform Holdings parent.
SPE, along with five other engineering societies, has created the Engineering Technology and History Wiki to ‘preserve and disseminate the history of the engineering profession.’
Principal Michael Cline heads up Gaffney, Cline & Associates’ new central London office.
Dale Tremblay has resigned as a member of Gasfrac’s Board of Directors.
GE has announced a $100m investment in a global R&D and manufacturing center in Saudi Arabia.
Becky Shumate and Jeff Allen have joined the GITA board following the departure of Dan Shannon and Jerry King.
Halliburton has opened a new $45 million integrated completions center in New Iberia, Louisiana.
Scotty Sparks is executive VP operations at Helix Energy Solutions following Cliff Chamblee’s retirement.
ICIS has announced a supply and demand database for petrochemicals and energy.
Don Basuki has been appointed GeoPressure Manager at Ikon Science’s Houston office.
Nick Search is advisor at Intsok UK. He hails from Quest Offshore.
Kadme has appointed Knut Korsell and John Redfern to its board.
Rod Larson is now president and COO of Oceaneering International.
OFS Portal has named Chris Welsh as CEO. He hails from ACT Global Consulting. Bill Le Sage is now chairman emeritus.
Zaki Selim has been elected to Parker Drilling’s board. He was previously with Schlumberger.
Process safety specialist PAS has appointed Mary Cotton, Jim Porter and Joel Rosen to its advisory board.
The Pipeline open data standards body, PODS, has elected Ken Greer, Scott Blumenstock, Andy Morris, Buddy Nagel, Victoria Sessions and Peter Veenstra to its board.
Protiviti has added Frances Townsend to its advisory board.
Bryce Davis has joined SeisWare as senior account manager in Calgary. He hails from Spectraseis.
Smith Flow Control has appointed Sunil Verma as sales manager for India.
Teradata has
appointed Bob Fair and Hermann Wimmer as ‘co-presidents.’ Fair leads
the marketing applications division, Wimmer the data and analytics
unit.
Rodney Mckechnie is MD of UMG’s South African office.
US Shale Solutions has appointed Mike Stophlet to its senior leadership team.
Rasmus Sunde is CEO of Forsys Subsea,
a joint venture between FMC Technologies and Technip. Alain Marion is
CTO, Arild Selvig head of engineering and Gerald Bouhourd leads the
life of field line of business.
UK-based Getech is acquiring oil and gas consultancy ERCL in
a cash and paper deal with an aggregate £4,300,000 value. The deal has
been part-financed with a £1.1 million loan from RBS/NatWest.
Dräger Holding Intl. has acquired Norwegian technology startup GasSecure AS
for approximately 500 million NOK. The acquisition adds wireless gas
detection to Dräger’s portfolio. GasSecure’s technology came out of
Norway’s Sintef R&D establishment and was backed by VC Viking Venture.
Aspen Technology has acquired the Blowdown
software package from Stephen Richardson and Graham Saville (both of
Imperial College London). The tool is used to model depressurization in
process plants and identify locations where there is a risk of
excessively low temperature. The tool will be integrated with the
AspenOne engineering suite.
Legacy Measurement Solutions (LMS) is to acquire Pelagic Tank,
a supplier of measurement, production, and process solutions to the oil
and gas industry. LMS is also to divest certain assets related to its
gas analysis, chart interpretation, and field service business to Gas Analytical Services Inc., a subsidiary of CriticalControl Solutions Corp. LMS is a portfolio company of private equity firm White Deer Energy.
EssencePS CEO and chief developer Nigel Goodwin has brought to our attention a paper
presented at the recent SPE Reservoir Simulation Symposium which offers
a technical description of methods and algorithms used in EssRisk,
Essence’s software flagship for history matching, uncertainty-based
prediction and production optimization. Goodwin argues that brute force
Markov chain Monte Carlo methods cannot be applied exhaustively to the
complex field of fluid flow simulation. Even fast proxy models may fail
to represent the full range of uncertainty. Moreover, the ‘black box’
nature of proxy models makes their evaluation hard. Engineers generally
prefer straightforward deterministic models.
Goodwin advocates Hamiltonian MCMC techniques,
along with efficient proxy models, which lead to reliable
uncertainty quantification and also generate an ensemble of
deterministic reservoir models. The technique is claimed to be the
foundation of a new generation of uncertainty tools and workflows.
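For the record, the Hamiltonian flavor of MCMC steers each proposal with gradient information rather than a random walk. The toy Python sketch below samples a two-dimensional Gaussian; it is our own illustration, not EssRisk’s algorithm, and a real history-matching application would replace the toy target with a (proxy) reservoir-model misfit.

```python
# Minimal Hamiltonian Monte Carlo on a toy 2-D Gaussian target.
# Our illustration of the sampler family, not EssRisk's algorithm.
import numpy as np

def log_prob(x):                 # toy target: standard 2-D Gaussian
    return -0.5 * np.dot(x, x)

def grad_log_prob(x):
    return -x

def hmc_step(x, rng, step=0.2, n_leapfrog=20):
    p = rng.standard_normal(x.shape)             # draw a random momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * step * grad_log_prob(x_new)   # leapfrog integration
    for _ in range(n_leapfrog - 1):
        x_new += step * p_new
        p_new += step * grad_log_prob(x_new)
    x_new += step * p_new
    p_new += 0.5 * step * grad_log_prob(x_new)
    # Metropolis accept/reject on the total 'energy'
    h_old = -log_prob(x) + 0.5 * np.dot(p, p)
    h_new = -log_prob(x_new) + 0.5 * np.dot(p_new, p_new)
    return x_new if rng.random() < np.exp(h_old - h_new) else x

rng = np.random.default_rng(0)
x = np.zeros(2)
samples = []
for _ in range(2000):
    x = hmc_step(x, rng)
    samples.append(x.copy())
print("sample mean:", np.mean(samples, axis=0))  # close to [0, 0]
```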
The paper (50
pages and 178 equations) is not for the faint-hearted. But unusually,
as Goodwin observes, ‘unlike most vendors, service providers and oil
company research departments we are completely open about the
algorithms used. Our added value lies in efficient implementation and
in our user interface.’ More from EssencePS.
In two
presentations at the 2015 American Business Conferences Wellsite
Automation conference in Houston (see also our main report in Vol 20
N°2) Chevron’s George Robertson provided practical advice on the
application of automation to producing well-site safety before drilling
down into best practices for shutdown systems. Well site automation is
different from plant automation for several reasons. It has to be
tolerant of poor communications, have a low nuisance trip rate and be
cost effective. Mitigating loss of containment scenarios by shutting
down pumps or closing valves may or may not work depending on natural
flow and wellhead and manifold pressures. Failure modes must be safe
and detectable, and systems must guarantee that alarms will be delivered or
at least warn when they cannot.
In his
presentation on shutdown systems, Robertson expressed a preference for
two out of three voting systems where two devices have to fail
simultaneously to either cause a nuisance trip, or fail to go safe when
required. Such systems can also be tested without taking the safeguard
out of service. Systems must be fail-safe, but if they fail
safe too often, operators will inevitably bypass them! Safety system
design is the art of the possible. Risks must be brought to a tolerable
level that balances cost with potential consequences. ‘If your solution
is prohibitively expensive, it will not be implemented, and you will
have no safeguard.’ Check out Robertson’s reading list. More from American Business Conferences.
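The voting logic itself is trivial, as the following toy sketch shows. It is an illustration of the principle, not Chevron’s implementation.

```python
# Toy two-out-of-three (2oo3) voting logic: the trip fires only when at least
# two of three sensors agree, so one failed transmitter causes neither a
# nuisance trip nor a missed shutdown. Not Chevron's implementation.
def vote_2oo3(a: bool, b: bool, c: bool) -> bool:
    """Return True (trip) when at least two of the three trip flags are set."""
    return (a + b + c) >= 2

assert vote_2oo3(True, True, False) is True    # one stuck sensor cannot block a trip
assert vote_2oo3(True, False, False) is False  # one spurious reading cannot cause one
```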
UK-based Smith Flow Control (SFC) reports that over 2,500 key interlock systems (KIS) have been deployed throughout the Inpex-operated Ichthys LNG project offshore Australia. SFC’s mechanical interlocks embed a ‘human factors engineering’ approach to design. KIS are mechanical locking devices that operate on a ‘key transfer’ principle, controlling the sequence in which process equipment is operated. KIS are deployed on valves, closures and switches.
An
equipment item’s ‘open’ or ‘closed’ or ‘on’ or ‘off’ status can only be
changed by inserting a unique coded key that unlocks the valve or
switch. Keys can be daisy-chained, providing a ‘mechanical logic’ that
minimizes the risk of operator error. KIS systems can also integrate with distributed logic control systems to add reliable, mechanical assurance of safe operations. More from SFC.
Eden Prairie,
Minn.-headquartered Atek Access Technologies has launched a
‘cost-effective’ version of its TankScan tank level monitor for use in
retail or other indoor applications. The unit adds internet
connectivity via an Ethernet, Wi-Fi or cellular connection to Atek’s
TankScan TSM8000 monitors.
Atek
president Sherri McDaniel explained, ‘The new product family opens
opportunities for large-scale deployments of the TankScan TSM8000
monitor.’ A single unit provides connectivity for up to 20 tanks.
Remote monitoring is said to cut servicing
costs by 30% and to facilitate just-in-time product delivery and
collection. The plug-and-play solution can be installed within minutes
without any training. More from Atek.
Turkish Petroleum has added reverse time migration to its portfolio of seismic imaging solutions from Paradigm. Paradigm also reports sales of its Geolog petrophysical flagship to Ashen Geologic Consulting and Great Plains Well Logging.
Accenture is
to supply strategy and management consulting services, digital
technology and systems integration to oilfield services company Q’Max Solutions in a two-year deal.
CGG GeoSoftware has acquired the rights to the VelPro velocity modeling and depth conversion product from In-Depth Solutions.
IT Vizion has partnered with Siemens PLM Software on the resale and implementation of solutions leveraging Siemens XHQ in oil and gas.
Quest Automated Services has announced digital oilfield technology, a.k.a. machine-to-machine automation for oil and gas.
Hyperion Systems Engineering and RSI have
established a joint venture, HyperionRSI Simulation, to deliver
operator training simulators and dynamic simulation solutions to the
global market.
IFS and Accenture have
signed a five-year cooperation agreement focusing on growing IFS’s
license sales and related implementation and application management
services.
Aker Solutions has delivered the subsea containment assembly (SCA) to the Marine well containment company.
IBM’s Watson unit is to embed the AlchemyAPI service-based API for natural language processing, text mining and computer vision applications.
Aveva and EMC are
to jointly deliver a software solution for the management and control
of engineering data and associated documents. The initiative combines
Aveva Net with EMC Documentum’s EPFM suite for owner operators and
engineering prime contractors.
CMC Research Institutes has purchased 1,500 channels of Inova’s ‘Hawks’ autonomous recording system for use on a time-lapse seismic project.
Eurotech has partnered with Datum Datacentres to offer cloud and data center services.
Socar has awarded Fluor a
project management services contract for its new oil and gas processing
and petrochemical complex 60 km southwest of Baku, Azerbaijan.
FMC Technologies and Technip have entered into a 50/50 joint venture, Forsys Subsea. The unit is to ‘redefine’ subsea oilfield design, build and maintenance.
Honeywell has
been awarded a third project by Hoang Long Joint Operating Co. to handle
project management and engineering on the H5 platform, offshore
Vietnam. Technology includes the Experion PKS, C300 Controller and
Safety Manager.
Malaysian Yinson Holding Berhad has chosen IFS Applications for
its FPSO division, Yinson Production, based in Oslo, Norway. The $2
million contract covers both onshore and offshore operations.
Ikon Science is now offering Ji-Fi seismic inversion services from QI Solutions centers in London, KL, Houston and Calgary.
Naftna Industrija Srbije has deployed Kalibrate’s
retail network planning and location analysis solution to optimize the
performance of its network of retail outlets in four Southeast European
countries.
EarthQuick and Geovariances are to team on the provision of seismic data analysis, time-to-depth conversion, geological horizon mapping and geomodeling.
QTS Realty Trust has
been selected by Mansfield Oil to host its data center. Mansfield has
moved its physical assets and migrated its IT infrastructure to the QTS
facility.
Total and contractors Technip and Samsung Heavy Industries are to use EqHub on the Martin Linge project.
McDermott and Petrofac have
formed a five-year alliance to pursue top-tier deepwater subsea,
umbilical, riser and flowline (Surf) projects and complex deep and
ultra deep engineering projects.
Nodal Exchange and Innotap have
signed an agreement to leverage Nodal Exchange’s North American power
and natural gas commodity data in Innotap’s products.
Siemens has
announced an ‘open’ IT ecosystem built on SAP’s Hana cloud platform.
OEMs and application developers can access the platform via open
interfaces to utilize it for their own services and analytics.
Space-Time Insight’s Asset Intelligence solution, version 2.0H, has achieved SAP certification as powered by the SAP Hana platform.
JGC Corporation and Samsung Heavy Industries have awarded Yokogawa Kontrol the contract for an integrated control and safety system on Petronas’ PFLNG2 floating LNG plant.
The American
institute of formation evaluation (AIFE) has analyzed data from 170,000
US plus 430,000 worldwide drill stem tests (DST). AIFE evaluates test
quality prior to calculating permeability, formation damage and other
indicators. The data is provided as Horner plots, fluid types, flow
rates, formation temperature and water salinity.
Pressure,
formation depth and other data are now available as a comma separated
value file which can be downloaded for further investigation and
regional mapping. Also new is the ability to calculate a potentiometric
surface to investigate hydrodynamic regional flow and identify flow
barriers and possible stratigraphic traps.
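For those wanting to roll their own, the potentiometric surface calculation boils down to converting each test pressure to an equivalent freshwater head. The Python sketch below uses invented numbers and a freshwater gradient; it is not AIFE’s code.

```python
# Hedged sketch: convert DST pressures to freshwater hydraulic heads,
# h = z + P / (rho * g), with z the gauge elevation relative to datum.
# All values below are invented; this is not AIFE's code.
RHO_G = 1000.0 * 9.81     # freshwater density (kg/m3) x gravity (m/s2)

tests = [
    {"well": "A-1", "gauge_elev_m": -2100.0, "pressure_kpa": 22000.0},
    {"well": "B-2", "gauge_elev_m": -1950.0, "pressure_kpa": 20400.0},
]  # hypothetical DST results, elevations relative to a sea-level datum

for t in tests:
    head_m = t["gauge_elev_m"] + t["pressure_kpa"] * 1000.0 / RHO_G
    print(f"{t['well']}: freshwater head = {head_m:.0f} m")
# Mapping these heads regionally gives the potentiometric surface; head
# gradients indicate hydrodynamic flow directions and potential barriers.
```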
The AIFE
database is marketed on a graduated pricing scale from $90
for a single report. This reduces to $3.00 per test with the maximum
volume discount applied. AIFE reports that six major oil and gas
companies have access to the entire database, five from the AIFE’s
hosted server. More from AIFE.
CNOOC unit
Nexen Petroleum UK has contracted with Aberdeen-based Asset Guardian
Solutions (AGS) for the provision of its eponymous software tool to
protect and improve management of process control software used to
operate its North Sea Golden Eagle offshore development.
Asset
Guardian provides a ‘single point’ solution to manage process control
software systems, supporting configuration change management, version
control, disaster recovery and a secure repository for storing files
and data. The tool is reported as assisting compliance with regulatory
standards, guidelines and codes of practice including IEC 61508, 61511,
ISO 9001, CPNI and HSE KP4.
AGS is also
providing AGSync, a software tool that synchronizes data and files
across multiple onshore and offshore locations. AGS clients include BP,
GDF Suez, Inpex, Dolphin Drilling, Stena Drilling, Technip and
Woodside. More from AGS.
Total’s
Norwegian subsidiary, Total Norge has deployed Aveva’s ‘Activity
visualization platform’ (AVP) to train operators of the Martin Linge
platform in the North Sea. Aveva COO Derek Middlemas explained, ‘Total
required an accurate and realistic environment to familiarize staff
with operations of the facility before going offshore. Training has
started in parallel with construction so that safe operations can be
assured from first oil. Later in the platform’s life, the AVP will
provide a risk free environment where complex and potentially hazardous
scenarios can be simulated. The AVP also enhances safety by minimizing
trips offshore.’
The AVP is a
component of Aveva’s ‘digital asset’ concept that leverages design
models beyond traditional design and construction. Immersive
training solutions let owner operators tailor and repurpose design
models for training, simulation and operational readiness. Aveva
sees the Martin Linge deployment as an important milestone in its
digital asset strategy and expects more use of its PDMS and 3D models
in training and simulation. More from Aveva.
Stavanger,
Norway-headquartered Safran Software Solutions has teamed with
Aberdeen-based Absoft, to offer SAP-centric, risk-based project
planning to the upstream. Safran’s enterprise project and risk analysis
(EPRA) software will now be available as a component of Absoft’s SAP
upstream oil and gas portfolio. Safran will integrate Absoft’s plant maintenance and project system modules. Both companies
will offer planning and resource optimization solutions for major
projects, including modifications and maintenance, optimized shutdown
and turnaround.
Safran VP
Richard Wood said, ‘Companies can now manage all aspects of a project
from a single platform, without the need for expensive product
integrations and additional reporting tools. The
partnership offers a new approach for project teams that have
previously relied on disparate and inflexible planning, scheduling and
risk management solutions.’ A stand-alone edition of EPRA was recently
announced for SAP Hana. Safran clients include Statoil, Aker Solutions
and ABB. More from Safran and Absoft.
A position
paper on the Eccma* website by data validation specialist Daryl
Crockett (Validus) reports a ‘marked decline’ in new SAP and Oracle
Financial implementations. While larger organizations have migrated to
the ‘promised land’ of mega ERP, many are suffering from a
post-implementation hangover with the realization that these highly
disruptive and costly deployments fail to bring the expected return on
investment. Crockett attributes the lacklustre performance to one
‘spectacularly under-emphasized’ risk, poor data. ‘The road to success
is littered with the bodies of senior managers and executives who
realized too late that they had a data problem.’
The concept
of data as an asset is relatively new in the business world and the
majority of ERP adopters are unprepared for the responsibilities of
data ownership. Companies that rely on IT-led technology initiatives
suffer especially since, after systems are designed and implemented, the
data will be around ‘long after the consultants have collected their
frequent flyer miles and moved on to the next job!’
Crockett
advocates getting a handle on data quality before embarking on an ERP
implementation. The best place to start is with a master data quality
and governance program. Even this can be tricky as suppliers like to
make their products seem unique so that buyers can’t shop around. Items
in a material master are often poorly described or entered as free
text. Would-be master data cleansers will find free white papers,
presentations and data dictionaries on Eccma.org.
* Electronic commerce code management association.
Oil country
information provider Drillinginfo has moved its petabyte of data to
Nexenta’s open source-driven software-defined storage (SDS).
Drillinginfo’s CTO Mike Couvillion explained, ‘Our data storage
requirements had grown to around 900 terabytes and were increasing at
over 20 TB/month.’ This was stretching the legacy NFS storage system to
the limit.
Drillinginfo
was looking for a ZFS storage system to improve scalability and
performance and turned to Nexenta whose flagship NexentaStor now serves
as its primary ZFS file system. The system runs on x86 industry
standard hardware from Supermicro with NexentaStor built into the
operating system. Drillinginfo also runs a VMware environment with
1,000 virtual machines to date.
Couvillion
concluded ‘Because the Nexenta system is so redundant, it can be put on
Supermicro, which costs us about $416/TB, well below the industry
average. Also there is no additional licensing for replication. The
system practically recovered its cost on delivery.’ Other components
include IBM’s SAN Volume Controller and a Storwize V5000 system for its
fiber channel storage. More from Nexenta.
Wolfram’s
‘data drop’ (WDD) service is claimed to make it easy to accumulate data
of any kind, from anywhere—setting it up for immediate computation,
visualization, analysis, querying and other operations. WDD is built on
the Wolfram Data Framework, which ‘adds semantics to data to make it
computable.’
Collections
and time series of computable data are stored in named databins in the
Wolfram Cloud and are instantly accessible from the Wolfram Language
and other systems. The WDD can handle many data types, devices and
sensors. Data can be added programmatically using a Web API, from email
or web forms. A custom Raspberry Pi API is available for
experimenters. The latest addition to the WDD is a data drop interface
for the internet of things. More from Wolfram.
The search
for a politically correct means of fracking shales continues with an
announcement from Autris of a ‘green’ nitrogen fracking solution
delivered from its wholly-owned NitroHeat
subsidiary. The ‘dual action mega pressure’ nitrogen solution,
MaxFrack-N2, is a self-contained unit that produces up to 170,000
liters per hour of pure nitrogen at up to 5,000 psi. By our reckoning, at
that pressure, nitrogen should be in a supercritical fluid phase which,
one can imagine, should create a fair amount of stress at the rock face.
Autris CEO Derek Naidoo boldly forecasts fracking revenue in excess of
‘$100M over the next 5 years.’ More from NitroHeat.
Things aren’t going so well for Calgary-based Gasfrac,
purveyor of propane-based fracking technology. Gasfrac has obtained
approval from the Alberta bankruptcy court for the sale of
substantially all of its assets to Step Energy Services.
Interestingly, propane fracking is back in the news in France (where
hydraulic fracking has been banned) where it is perceived as a possible
route to ‘environmentally friendly’ fracking. More from Step.