ECIM 2014, Haugesund, Norway

Shale ‘collapses the asset lifecycle.’ Shell’s enterprise architecture. Statoil’s BI take on 4D seismic. A professional society for data management? Interica on E&P data usage patterns. Landmark, Schlumberger and CGG’s takes on the E&P cloud. Noah on shale data management.

The 2014 edition of Norway’s ECIM E&P data management conference was held in Haugesund, Norway this month with 310 attendees. Out in the harbour, a large platform was being kitted out. Nothing unusual for this hub of Norway’s offshore industry. Except that the platform, the Dolwin Beta, is a gigawatt capacity electricity transformer and a key component of an offshore wind farm, part of Germany’s ‘energiewende’ green transition. But we digress.

Halliburton’s Joe King described how, in North America, shale exploration is collapsing the asset lifecycle. Today, everyone is operating marginal fields with a renewed focus on profitability. While technology from both horizontal and vertical vendors is evolving, many challenges remain unsolved. There has been much talk of big data, but today, much of our work is too fragmented and siloed for the application of big data principles and analytics. While the landscape is evolving with Hadoop and its derivatives, data management remains labor intensive—thanks to a ‘complex, vast panorama of data.’ But there is great potential for exploratory data analytics and serendipitous discovery. Current approaches center on first principles/closed form solutions for, say, optimizing drilling parameters. But new techniques allow us to evaluate drilling data against thousands of proxy models in near real time. Statistical techniques such as principal component and Monte Carlo analysis can be leveraged to tune well spacing to shale sweet spots. Other use cases target drilling performance, which King illustrated with an OpenStreetMap view of activity in the Eagle Ford of Texas. For big data implementers there is the problem of what technology to choose: a trade-off between horizontal technology (read Hadoop), which may be cheap but low in domain savvy, and the consultants, for whom ‘every solution is a project.’ King advocates a ‘platform driven’ approach that offers a configurable environment and ‘seeded science.’ Our guess is that he was thinking of Halliburton’s recently acquired Zeta Analytics!
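To make the proxy-model idea concrete, here is a minimal sketch, not Halliburton’s method, of the kind of workflow King alluded to: principal component analysis condenses per-well attributes into a crude ‘sweet spot’ score, and a Monte Carlo loop then samples candidate well spacings against a toy recovery proxy. All data, attribute meanings and the proxy formula are invented for illustration.

```python
# Hypothetical sketch: PCA + Monte Carlo screening of well spacing.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)

# Invented per-well attributes (e.g. TOC, porosity, brittleness, lateral length, stages).
wells = rng.normal(size=(200, 5))

# Condense the attribute space; treat the first principal component as a sweet-spot score.
sweetness = PCA(n_components=2).fit_transform(wells)[:, 0]

def section_eur(spacing_ft, sweet, section_ft=5280.0):
    """Toy proxy: per-well recovery saturates at wide spacing and collapses
    under interference at tight spacing; multiply by wells per section."""
    per_well = (1.0 + 0.3 * sweet) * (1.0 - np.exp(-(spacing_ft / 600.0) ** 2))
    return (section_ft / spacing_ft) * per_well

# Monte Carlo over candidate spacings, pairing each with a randomly sampled well.
spacings = rng.uniform(300.0, 1500.0, size=10_000)
sampled = sweetness[rng.integers(0, len(sweetness), size=10_000)]
eur = section_eur(spacings, sampled)

print(f"best sampled spacing ~ {spacings[np.argmax(eur)]:.0f} ft")
```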

We have heard before from Johan Stockmann on Shell’s ambitious enterprise architecture! In his iconoclastic style, Stockmann railed against the upstream’s IT status quo and a fragmented data landscape that is ‘costly and hard to maintain.’ The EA approach starts with meticulous design of business goals, processes and roles; only then can IT solutions be designed. ‘Do not listen to vendor sirens’: system components for data quality and governance, off-the-shelf software and data management procedures all need work. But ‘you cannot manage your way out of a problem that has its roots in poor design.’ The architecture is the key. It is persistent, valuable and hardest to remediate if done wrong. While E&P data is different, upstream IT is similar to other industries and the same principles should be followed. What are these? Agree on goals, define (data) success and prepare for the journey up the ‘maturity staircase.’ Shell uses an ‘opportunity driven’ approach to its architecture. New business needs, such as handling shale plays, will kick off an update to the architecture—to ensure that the new data maps to the enterprise model. The aim is for seamless interoperability of IT systems from facility design to lab and reservoir. ‘Agile’ also ran, as in ‘smart data’ and in-memory processing.

We have also reported previously on tests of Teradata’s data appliance, but it was good to hear Statoil’s Mark Thompson provide more on the use of business analytics on 4D seismic data. Statoil has been acquiring seismic over the Gullfaks field and now records new surveys every six months from an installed 500 km cable array. Data comes in faster than Statoil can process it—as do the ad-hoc requests from users for near, mid and far trace data sets with various time shifts applied, attribute computations and so on. Managing the hundreds of data volumes created is a major data pain point.

Enter the subsurface data warehouse—a home for all the 4D seismic data along with reservoir data, production data and well logs, all stored on a massively parallel processing system from Teradata. Data is staged according to use, with ‘hot’ data on flash storage and cold on hard drives. Data movement is automated depending on usage patterns and data access is through SQL. Thompson got a good laugh for his claim that ‘geophysicists don’t know SQL, they know Fortran,’ although it’s not clear why. Work processes are stored as user-defined functions, allowing access from standard apps. 4D tools-of-the-trade, such as repeatability analytics, time shifts and correlations (of seismic deltas with pressure changes in the reservoir), have been coded in SQL and stored in the RDBMS. Thompson showed a few lines of, it has to be said, rather ugly SQL that now run automatically. A Statoil committee is now working on the wider application of business intelligence pending a firm commitment to the technology. For Thompson, the foot dragging is a lost opportunity. ‘Why are we ignoring this? I really don’t know. We need to embrace the 21st century.’
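To give a flavor of the sort of logic being pushed into the warehouse, here is a minimal sketch, with invented table and column names and sqlite3 standing in for the Teradata appliance, of one such 4D calculation: joining per-cell seismic amplitude deltas to reservoir pressure changes in SQL and computing the correlation in Python. In the real system this kind of routine lives in stored user-defined functions and runs in-database.

```python
# Hypothetical sketch of a warehouse-style 4D correlation (sqlite3 as a stand-in).
import sqlite3
import numpy as np

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE seismic_delta (cell_id INTEGER, survey TEXT, amp_delta REAL);
CREATE TABLE pressure_change (cell_id INTEGER, survey TEXT, dp_bar REAL);
""")

# Toy data: 1000 reservoir cells, one monitor survey, weak invented relationship.
rng = np.random.default_rng(0)
dp = rng.normal(0.0, 10.0, 1000)
amp = 0.02 * dp + rng.normal(0.0, 0.1, 1000)
conn.executemany("INSERT INTO seismic_delta VALUES (?, 'M1', ?)",
                 [(i, float(a)) for i, a in enumerate(amp)])
conn.executemany("INSERT INTO pressure_change VALUES (?, 'M1', ?)",
                 [(i, float(p)) for i, p in enumerate(dp)])

# Join in SQL; a stored UDF could also compute the statistic in-database.
rows = conn.execute("""
    SELECT s.amp_delta, p.dp_bar
    FROM seismic_delta s
    JOIN pressure_change p ON p.cell_id = s.cell_id AND p.survey = s.survey
    WHERE s.survey = 'M1'
""").fetchall()

a, p = np.array(rows).T
print("amp-delta vs pressure-change correlation:", round(float(np.corrcoef(a, p)[0, 1]), 3))
```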

Lars Gåseby, speaking on behalf of Shell and ECIM, floated the idea of a single professional data management society spanning ECIM, PPDM and CDA. The ambitious goal is a professional society of oil and gas data managers along the lines of the SEG, AAPG or the SPE. A memorandum of understanding has been signed between the three organizations, which are to fine-tune their proposals for organization, governance and membership terms by December 2014. In Q1 2015, the societies will seek stakeholder agreement and, if this is forthcoming, the new society will come into being. Gåseby is enthusiastic that this will happen.

Interica’s Simon Kendall provided an unusually detailed account of data usage patterns he has observed at a variety of client sites. Interica started out as PARS, whose eponymous hardware and software bundle began life in 1995, archiving Seisworks data to D2 video tape. Today, the hardware has been dropped, leaving the PARS/Project Resource Manager (PRM) archiving data from some 30 applications to LTO-6 cartridges of 2.5 TB (native) capacity. Kendall reports explosive data volume growth in the upstream. One client has 100 PB of disk in its seismic processing center. SEG-Y files of post-stack data easily exceed 50 GB. And then there is unstructured and duplicate data. G&G makes up 5% of an E&P company but manages 80% of its data. A short PRM sales pitch ensued. The tool is used by 7 majors to track disk usage by application, over time and by user. This has allowed for some statistics—although Kendall warns against drawing hasty conclusions from this likely biased sample. A ‘typical’ large company may have around 100 file systems, 10s of PB of storage, about 30 main applications, 10-30 thousand (sic) projects and 10s of millions of files. Sometimes the oldest ‘live’ projects have not been touched in a decade—they are still on spinning disks. Oh, and archive volumes are increasing at an even faster rate!

The ‘average’ medium-sized company likely has a mix of Linux and Windows applications and around 650 TB of storage (a figure that has doubled in the last 5 years), 10 main apps, 8 thousand projects and perhaps 30 TB of data that has not been touched in 4 years. A small company will likely have mostly Windows applications and maybe a million files, 15 TB of data, 100 thousand directories and 40-50% of data as duplicates. Kendall has seen popular SEG-Y files duplicated up to 18 times in one shop. Around half of all apps (in this sample) are Schlumberger’s Petrel, accounting for one third of the data volume. Kendall puts the value of the interpretation IP effort for his top ten clients at $2.8bn.

In the Q&A, Kendall opined that while keeping data on a local workstation may be deprecated, users still do it. In some companies, it would be highly desirable just to be able to generate a list of what data the company possesses. Smaller companies are often reliant on ‘tribal knowledge’ of what is there and where, and there is a general lack of knowledge of what has already been done. The move to PC-based tools is not making things easier. It is too easy to populate, copy and move stuff, actions that are hard for data managers to combat. ‘There is much more data sitting on C drives than we ever want to admit.’
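Surfacing the duplication Kendall describes needs nothing exotic. The following is a minimal sketch, not Interica’s PRM, that hashes SEG-Y files under a hypothetical project root and groups identical copies; a production tool would screen on file size and header blocks before hashing multi-gigabyte files whole.

```python
# Hypothetical sketch: find byte-identical SEG-Y files under a project tree.
import hashlib
from collections import defaultdict
from pathlib import Path

def sha1_of(path: Path, chunk: int = 1 << 20) -> str:
    """Hash a file in 1 MB chunks to avoid loading it all into memory."""
    h = hashlib.sha1()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def find_duplicates(root: str):
    """Return {digest: [paths]} for SEG-Y files that appear more than once."""
    groups = defaultdict(list)
    for p in Path(root).rglob("*"):
        if p.is_file() and p.suffix.lower() in {".sgy", ".segy"}:
            groups[sha1_of(p)].append(p)
    return {d: paths for d, paths in groups.items() if len(paths) > 1}

if __name__ == "__main__":
    for digest, copies in find_duplicates("/data/projects").items():  # hypothetical root
        print(f"{len(copies)} copies: {[str(c) for c in copies]}")
```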

Satyam Priyadarshy, chief data scientist with Halliburton/Landmark, likes to distinguish between business intelligence, which answers questions you are already asking, like ‘what is selling?’, and big data/analytics (BDA), which is ‘mystery solving with data science.’ The latter is what is needed for innovation. BDA uses higher math and science to explore new patterns, working on data in situ and turning the raw data into the single source of the truth. We were now back in Zeta Analytics territory. BDA will be highly applicable to shale exploration. Priyadarshy slid into acronymic mode, enumerating Hadoop, Hive, Postgres, Tableau, Ploticus and more. An ‘in memory analytic server’ is claimed to cut seismic imaging time from 48 to 3 hrs. The key enablers of BDA are not commercial software (maybe we are not in Zetaland after all!).

An open source environment frees you up for BDA and lets you mix and match algorithms from different sources. You need a modular platform and a large number of ‘TAPS’: technology, algorithms, products and solutions. Big data projects, like others, fail from a lack of leadership and from poor advisors/consultants who ‘sell technology, not solutions.’ BDA and the TAPS explosion are the next industrial revolution. In the Q&A we asked Priyadarshy how you could set up a BDA team without knowing beforehand what problems you were going to solve and what ROI could be expected. He suggested ‘working under the radar using open source software.’ BDA may be an embarrassing topic for the C-suite.

Not to be outdone, Schlumberger’s Stephen Warner also presented on the big data-cloud-analytics spectrum that is ‘redefining the information management landscape.’ Along the hydrocarbon ‘pathway,’ it is information management that drives decision making. Only a decade or so ago, few reservoirs were modeled and data managers had an easy life. Today we have terabytes of data and exotic new subsalt and unconventional plays. Analytics has application in drilling and in targeting shale sweet spots. ‘Shale is a data play.’ Shale delivery is difficult to model and simulate—so derive your KPIs from the data. Schlumberger has partnered with big data experts on a proof of concept leveraging SAP’s Hana in-memory compute appliance, and with Microsoft on real-time analytics in the Azure cloud. Scaling up for big data requires new capabilities. Data managers need to be at the forefront of these changes to data-driven workflows. There has never been a more exciting time in data management.

Big data enthusiasm is all very well but there is nothing that beats an actual test drive. This was provided by CGG’s Kerry Blinston, who presented results of a trial on CGG’s UK upstream data set, which has been ported to a Teradata (yes, them again) appliance. Teradata’s Aster technology was used to investigate a ‘hard data set, not a top level management story.’ The trial investigated occurrences of stuck pipe during North Sea drilling, a major cause of non-productive time. Along with conventional drilling parameters such as weight on bit and rate of penetration, other data sets of interest were added to the mix, a 1 TB data set of some 200 thousand files. Hidden in text log files were three ‘tool stuck’ occurrences which were not visible in the completion log. Many such data connections are not routinely made just because there is too much data. Blinston offered a new definition for analytics as ‘finding what we didn’t know we had but didn’t know we didn’t know we had it!’ The 300 well pilot found ‘unexpected correlations and spatial distributions’ that were good enough for prediction and improved drilling models.
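The ‘tool stuck’ mentions buried in free text give a feel for what the trial was mining. Here is a minimal sketch, not the Aster workflow itself, that scans hypothetical daily drilling report text files for stuck-pipe phrases and tags them by well and date, the kind of signal that can then be joined with structured drilling parameters such as weight on bit and rate of penetration.

```python
# Hypothetical sketch: flag stuck-pipe mentions in free-text daily drilling reports.
import re
from pathlib import Path

STUCK_RE = re.compile(r"\b(stuck\s+pipe|pipe\s+stuck|tool\s+stuck)\b", re.IGNORECASE)
DATE_RE = re.compile(r"\b(\d{4}-\d{2}-\d{2})\b")

def scan_reports(root: str):
    """Yield (well, date, line) for every stuck-pipe mention in *.txt reports."""
    for report in Path(root).rglob("*.txt"):
        well = report.stem  # assume one report file per well (invented convention)
        for line in report.read_text(errors="ignore").splitlines():
            if STUCK_RE.search(line):
                date = DATE_RE.search(line)
                yield well, date.group(1) if date else "unknown", line.strip()

if __name__ == "__main__":
    for well, date, line in scan_reports("/data/ddr"):  # hypothetical report folder
        print(f"{well} {date}: {line}")
```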

Shale exploitation goes back a long way. Noah Consulting’s Fred Kunzinger cited a 1694 British patent to ‘extract and make great quantities of pitch, tarr, and oyle out of a sort of stone.’ Today shale is changing the balance of power in the energy market. The sheer speed of shale development is creating new problems in managing the vast amount of data and information that the new processes depend on. There is a need to rethink ‘conventional’ data integration, and here there is a renewed focus on master data management, the key to bringing all departments and disciplines together. Factory drilling requires just-in-time data management, analytics and continuous improvement along the ‘new assembly line.’ A business transformation is needed to improve shale margins and transform portfolios.

~

Data management is a rather dry subject and our account fails completely to capture the magical couple of days spent in sunny Haugesund. We did our now traditional trail run in the Steinsfjellet park. The Oil IT Journal/ECIM runners club now has two members! Another treat this year was the Haugesund marching band which entertained the data managers at Tuesday’s gala dinner. Watch them on YouTube and visit ECIM.

© Oil IT Journal - all rights reserved.