Tough times meant that attendance was down at the 17th
SMi E&P Data Management conference, held earlier this year in
London. Some may wonder what else can be said on the topic of upstream
data. Quite a lot it would seem as the SMi event’s coverage expands to
new domains (construction data), geographies (Brazil, Kuwait) and
subject matter.
Sergey Fokin of Total’s
Russian unit described a pilot investigation into business
continuity—measured as mean time to disaster. The investigation
targeted geoscience along with cross-functional activities such as data management, geomatics and IT, with an assessment of data criticality.
What happens if a particular office or function is unavailable due to a
power cut or a major IT issue? How long can the business continue to
function? What data and processes are affected? What contingency plans
are in place? One measure of disruption is mean time to disaster—the
length of time the business can carry on unaffected. But some events
may be harder to categorize. For instance, if a geology cabin burns
down during drilling, it may be hard to make a decision on where and
when to perforate. The potential financial loss from a perforation in
the wrong place may be far higher than the cost of a few days of
downtime. So a simple mean time to disaster analysis may fail to
capture the risk. Fokin observed ‘You can’t just guess—you need to base
such decisions on the facts.’
The study has led to a major reorganization, with a duplicate backup site at a remote facility, disaster recovery kit available in the server room, and training and testing. The disaster recovery architecture includes
auto sync with Vision Solutions’ ‘Double-Take’ and NetApp SnapMirror.
Critical apps such as Gravitas, Geolog, Petrel, Total’s Sismage and
remote Eclipse are available in under two hours. Multiple stakeholders were involved: IT, G&G, HSE and support services. Critical GSR processes are now available in under four hours at the backup site and several notebook computers are on hand for critical GSR activities.
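A minimal sketch of such a criticality assessment, in Python, might look as follows. The activities, tolerable outages and recovery times are invented for illustration, not Total’s actual figures; the point is simply to compare what each function can tolerate against what the backup site delivers and flag the gaps.

from dataclasses import dataclass

@dataclass
class Activity:
    name: str                      # business function or application
    max_tolerable_outage_h: float  # how long the business can carry on unaffected
    recovery_time_h: float         # time to restore service from the backup site

# Illustrative entries only.
activities = [
    Activity("Petrel interpretation", 8, 2),
    Activity("Geolog petrophysics", 8, 2),
    Activity("Well site geology (perforation decisions)", 1, 4),
    Activity("Data management and loading", 24, 4),
]

for a in activities:
    status = "OK" if a.recovery_time_h <= a.max_tolerable_outage_h else "GAP"
    print(f"{a.name:45s} tolerates {a.max_tolerable_outage_h:>4.0f} h, "
          f"recovers in {a.recovery_time_h:>4.0f} h -> {status}")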
Mikitaka Hayashi (from Japan-based EPC JGC Corp)
showed how Aveva SmartPlant has revolutionized construction data
management and handover. Hayashi recapped the difficulty of plant and
equipment data management during construction and (especially) handover
to the owner operator. Despite many attempts to build on industry
standards such as ISO 15926, the solution here is a commercial one,
Aveva SmartPlant. This supports complex activities such as concurrent
engineering and data management with multiple rapid changes early on in
a project’s lifetime. It can be hard to keep the many stakeholders
happy. JGC Corp employs a data steward for process control systems
deployment and instrumentation. It has developed its own JGC
‘engineering data integrity and exchange’ tool (J-Edix) for populating
its data warehouse and sees joint EPC and O&M data management as
the way ahead.
Kuwait Oil Co.
(KOC) offered two presentations on its production data and IT
architecture. Khawar Qureshey showed how a comprehensive line-up of software and in-house developed tools is connected with Schlumberger’s Avocet workflow manager. The aim is to have standard optimization models and processes across data acquisition and analysis and into the E&P database. This involves multiple tools and interfaces, and in-house IT/integration expertise is ‘developing gradually.’
Schlumberger’s
venerable Finder database, the main data repository, has been
customized for KOC. Schlumberger’s Avocet has likewise been extended
with a field back-allocation module. Other solutions have been
developed for various artificial lift scenarios. A field data quality
and optimization system (Fdqos) has been developed in-house using
mathematical programming techniques to optimize over the whole
workflow. Fdqos delivers recommendations/strategies (open well x, close
well y, raise/decrease production from well z) combining facilities
data from Finder with production rate estimates from Decide! The
solution has now been deployed across a dozen gathering centers. KOC is
now working to integrate Fdqos with its P2ES ERP system and with the
Halliburton-based Kwidf digital oilfield.
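A toy version of the kind of mathematical programming Fdqos applies is sketched below in Python with scipy. Well potentials, water cuts and gathering-center capacities are invented for illustration; the output mimics the open/close/choke-back recommendations described.

from scipy.optimize import linprog

wells = ["W-1", "W-2", "W-3"]
max_rate = [1200.0, 800.0, 1500.0]   # bbl/d liquid potential per well (illustrative)
water_cut = [0.10, 0.55, 0.30]       # water fraction per well (illustrative)
liquid_capacity = 2500.0             # bbl/d gathering center liquid limit (illustrative)
water_limit = 600.0                  # bbl/d water handling limit (illustrative)

# Maximize oil (rate x oil fraction) == minimize its negative.
c = [-(1.0 - wc) for wc in water_cut]
A_ub = [[1.0, 1.0, 1.0], water_cut]  # total liquid and total water constraints
b_ub = [liquid_capacity, water_limit]
bounds = [(0.0, m) for m in max_rate]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
for name, rate, cap in zip(wells, res.x, max_rate):
    action = "open fully" if rate > cap - 1 else "close" if rate < 1 else "choke back"
    print(f"{name}: {rate:7.1f} bbl/d -> {action}")
print(f"Total oil: {sum(r * (1 - wc) for r, wc in zip(res.x, water_cut)):.0f} bbl/d")

A real back-allocation and optimization run would of course work from network models, well tests and the Finder and Decide! data flows described above rather than hard-coded numbers.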
Grahame Blakey (GDF Suez)
observed that schematics often show GIS at the center of the upstream technology world. This is wrong! Exploring for and producing
oil and gas is our core business and needs to be at the center of the
picture, with a constellation of disciplines and software around it.
GIS then plugs in to any one of these tools as a valuable enabler. The
key to GIS, then, is integration. This can be at the technology level—but also at the corporate strategy level. GDF Suez’ approach to GIS and other IT integration leverages the PRINCE2 framework.
GIS is integrating a plethora of applications and domains but always
inside the overarching E&P data architecture. There is a
‘deliberate effort not to build a GIS silo.’ Blakey recommends avoiding
the GIS word and prefers to speak of ‘mapping for the masses.’ But
under the hood, a lot is going on. Data QC with automated update jobs,
training, integration with SharePoint, the Flare EP Catalog and more.
GDF now requires GIS-formatted data from its contractors. In the
Q&A Blakey opined that 3D functionality in GIS was ‘underwhelming.’
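An automated QC/update job of the kind mentioned might, in outline, check mandatory attributes and coordinate sanity before features are published to the ‘mapping for the masses’ layer. The field names and records below are assumptions, not GDF Suez’ actual schema, and a real job would run against the corporate GIS rather than an in-memory list.

REQUIRED_FIELDS = ["well_name", "uwi", "lat", "lon", "spud_date"]

features = [
    {"well_name": "A-01", "uwi": "GB-123-001", "lat": 57.2, "lon": 1.9, "spud_date": "2011-04-02"},
    {"well_name": "A-02", "uwi": None, "lat": 91.5, "lon": 1.7, "spud_date": "2012-01-15"},
]

def qc_feature(f):
    """Return a list of QC failures for one feature record."""
    problems = [k for k in REQUIRED_FIELDS if not f.get(k)]
    lat, lon = f.get("lat"), f.get("lon")
    if lat is not None and not -90 <= lat <= 90:
        problems.append("latitude out of range")
    if lon is not None and not -180 <= lon <= 180:
        problems.append("longitude out of range")
    return problems

for f in features:
    issues = qc_feature(f)
    print(f["well_name"], "OK" if not issues else f"FAILED: {', '.join(issues)}")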
Dan Hodgson spoke from the heart and from 20 years of experience of technology refresh projects, latterly with UK-based DataCo.
Hodgson classifies technology refresh projects as minor (once per
year—an app upgrade), intermediate (app refresh every 3-5 years) and
enterprise, every 10-15 years with a change in the whole subsurface
portfolio. The latter may take a year to do and cost hundreds of
millions. These used to be Landmark upgrades; more recently they have been to Petrel/Studio. Technology has moved from Unix to Linux and from
Linux to Windows. There is no handbook available for an enterprise
upgrade or technology refresh. If there were such a book, you would have to jump straight to page 492, ‘troubleshooting!’ At the Schlumberger
forum last year, Chevron presented a $300 million technology refresh
that resulted in a ‘25% productivity increase.’ But Hodgson warned that
for the last Studio project he was involved in, ‘nobody knew the
product, including Schlumberger.’ In another, data migration required a
tenfold increase in disk space. Data migration can take an unexpectedly long time. You may have 10 terabytes to shift but the database can only ingest 200GB/day, which works out at some 50 days of loading. Hodgson recommends avoiding a single vendor
refresh. A mix of vendors plus in-house resources is best. A lot can go
wrong. Asked in the Q&A if he recommended a project management
framework, Hodgson replied that while the majors all use framework-type
approaches, what is really key is a good project manager. Asked why a
company might embark on a $300 million project he expressed a personal
opinion that such moves are not driven by a business case, more by
emotional decisions and peer pressure. ‘Maybe it was just time for a
change.’
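The migration arithmetic is worth doing up front. A back-of-the-envelope estimate, using the volumes Hodgson quoted and an assumed allowance for failed loads and re-runs, might be:

volume_tb = 10.0            # data to migrate (Hodgson's example)
ingest_gb_per_day = 200.0   # sustained loader throughput (Hodgson's example)
rework_factor = 1.5         # assumed allowance for failed loads and re-runs

nameplate_days = volume_tb * 1000.0 / ingest_gb_per_day
print(f"{nameplate_days:.0f} days at nameplate rate, "
      f"{nameplate_days * rework_factor:.0f} days with rework allowance")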
Petrobras’
Laura Mastella showed how closely data management and a business case
are related. Petrobras’ geologists recently shifted their focus from clastics to carbonates and needed more data on the company’s cores and cuttings. Petrography was a key enabler and required easy access to all
data types for interpretation. Enter Petrobras’ ‘Integrated technology
E&P database’ that replaced Excel-based data hoarding. The
system was five years in the making and now provides a single entry
point to multiple systems, linked by a unique well identity/table and
controlled vocabularies for petrography and other domains. Mastella
advises ‘make friends with the lab rats, otherwise they’ll stay with
their spreadsheets.’ Users get an integrated view of rock data via a
dashboard of lithotypes and summary poro-perm data. The system ‘brings
rock data into the decision making process.’
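A minimal relational sketch of the approach, with a unique well identity and a controlled vocabulary for lithotypes, is given below. Table and column names are invented for illustration and are not Petrobras’ actual schema.

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE well (
    well_id   TEXT PRIMARY KEY,      -- unique well identity shared by all systems
    well_name TEXT NOT NULL
);
CREATE TABLE lithotype_vocab (
    code        TEXT PRIMARY KEY,    -- controlled vocabulary for petrography
    description TEXT NOT NULL
);
CREATE TABLE core_description (
    well_id   TEXT REFERENCES well(well_id),
    depth_m   REAL,
    lithotype TEXT REFERENCES lithotype_vocab(code)
);
""")
db.execute("INSERT INTO well VALUES ('BR-0001', 'Example-1')")
db.executemany("INSERT INTO lithotype_vocab VALUES (?, ?)",
               [("GRN", "grainstone"), ("MDS", "mudstone")])
db.execute("INSERT INTO core_description VALUES ('BR-0001', 5120.5, 'GRN')")

# Integrated view: rock data keyed on the unique well identity.
for row in db.execute("""
    SELECT w.well_name, c.depth_m, v.description
    FROM core_description c
    JOIN well w ON w.well_id = c.well_id
    JOIN lithotype_vocab v ON v.code = c.lithotype"""):
    print(row)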
Wolfgang Storz presented a subsurface data quality management (DQM) project at RWE–DEA.
There are notionally as many as ten dimensions of data quality but
Storz prefers a simple split between formal and technical DQM. The
formal side comprises the quality rules while the technology performs
the conformance checking. Nonetheless there is overlap and always a
‘credibility’ issue, which requires subject matter experts for
judgement. In the end the notion of a ‘single version of the truth’
that is valid for every data type may be an illusion—especially for
more subjective information like formation tops. RWE has cherry-picked the PPDM business rules. After checking commercial offerings, RWE decided to roll its own solution. Storz found the IT guys were really
good at coding business rules. DQM metrics are now visible as traffic
light displays and also in map view with a standard DQ symbology. Storz
concluded that DQM needs to be a part of the business culture. Data
managers need to have high status and competency to push back against geoscientists.
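The formal/technical split can be sketched as rules (the formal side) evaluated by code (the technical side) and rolled up into a traffic light. The rules and well records below are illustrative stand-ins for the cherry-picked PPDM rules, not RWE’s actual implementation.

def has_coordinates(w): return w.get("lat") is not None and w.get("lon") is not None
def has_spud_date(w):   return bool(w.get("spud_date"))
def td_plausible(w):    return 0 < (w.get("total_depth_m") or 0) < 12000

RULES = [has_coordinates, has_spud_date, td_plausible]

def traffic_light(well):
    """Score one well record against all rules and return a traffic-light status."""
    passed = sum(rule(well) for rule in RULES)
    score = passed / len(RULES)
    light = "GREEN" if score == 1 else "AMBER" if score >= 0.5 else "RED"
    return light, score

wells = [
    {"name": "W-1", "lat": 53.1, "lon": 8.2, "spud_date": "2009-07-01", "total_depth_m": 3150},
    {"name": "W-2", "lat": None, "lon": 8.4, "spud_date": "", "total_depth_m": 99999},
]
for w in wells:
    light, score = traffic_light(w)
    print(f"{w['name']}: {light} ({score:.0%} of rules passed)")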
Hussain Zaid Al-Ajmi presented KOC’s
partially automated E&P data validation (PADV). PADV seeks to
harmonize access to different data sources and to reduce data gaps and
redundancy. Halliburton’s PowerExplorer is deployed as a front end to a master OpenWorks repository with authenticated data and standard naming conventions. Schlumberger’s Finder, eSearch, LogDB and GeoFrame now sit behind PowerExplorer. KOC
has worked to automate data workflows and business rules with scripts.
The PADV is now considered a KOC best practice.
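A scripted business rule of the PADV kind might be as simple as a naming-convention gate on incoming well records before they enter the authenticated master store. The convention and the example names below are assumptions for illustration, not KOC’s actual standard.

import re

# Hypothetical convention: 2-4 letter field code, dash, three digits, optional sidetrack letter.
WELL_NAME_PATTERN = re.compile(r"^[A-Z]{2,4}-\d{3}[A-Z]?$")

incoming = ["BG-014", "bg-15", "RA-102A", "RA_103"]

accepted, rejected = [], []
for name in incoming:
    (accepted if WELL_NAME_PATTERN.match(name) else rejected).append(name)

print("accepted into master repository:", accepted)
print("rejected for manual review:     ", rejected)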
Chris Frost (DataCo)
offered insights into document migration into an EDMS. Frost is also a
hands-on coder and likes to challenge internal processes, support internal tool development and provide scripting support. Frequently, document managers lack the scripting skills needed to perform the data mining and folder re-organization required prior to migration and fall back on a time-consuming and error-prone manual approach. On the other hand, hand coding from scratch has its own costs and risks. Enter DataCo’s ‘IQ’ toolkit which, according to Frost, provides a happy medium between hand coding and the labor-intensive manual approach. IQ offers stored procedures in SQL Server for deduplication, taxonomy building and keyword search. Documents or
equipment tags can be recognized (even on scans), classified and
captured to a SQL Server database. More from SMi Conferences.
© Oil IT Journal - all rights reserved.