June 2011


Shell’s ID in the cloud

Covisint is now six months into its contract with Shell Oil for the provision of cloud-based identity management. Its new Energy Ecosystem uses a standards-based approach to ‘avoid lock-in.’

Last month we heard Shell’s Johan Krebbers (speaking at the SPE Digital Energy Conference in Houston) describe the migration of authentication to the cloud, leveraging standards-based protocols like SAML (www.oilit.com/links/1105_19). On the choice of an identity service provider, Krebbers observed, ‘the last thing you want is to be in bed with Microsoft or another proprietary system.’

Speaking at Energistics’ 2011 Western European Region meeting at Oracle’s London HQ (more in next month’s issue) Scott Klender presented work performed by his company, Covisint, on Shell Oil’s identity management system. Compuware unit Covisint provides a standards-based solution to the issue of access to multiple applications in house, at partner sites and in the cloud, from disparate users, locations and devices.

Covisint provides a ‘business to business’ (B2B) ecosystem to the automobile, energy and financial services verticals. The system has been rolled out in Shell for six months and Covisint is now working on a second super major account. Klender observed that ‘passwords are bad, and they are going away.’ This is in part driven by compliance with government regulations. Shell wants to manage 750,000 identities, many more people than just their employees. This would make alternative smart card-based solutions, with a $100/year per card fee, prohibitively expensive.

Covisint’s new ‘Energy Ecosystem’ is based on its ExchangeLink platform, a service-based technology in the cloud providing single sign-on with federated identity and trust management. Shell Oil has a diverse ‘loosely coupled’ workforce spanning joint ventures and various business partners. ExchangeLink provides rapid provisioning and de-provisioning of identity. When an employee leaves, the company can zap his or her identity right away. Audit information such as ‘which external people have access to your systems’ is now all in one place. ExchangeLink is being phased into the Shell organization, initially with basic identity management from a single point of administration. Subsequently, federated application provisioning will leverage SAML and the OASIS service provisioning markup language (SPML—www.oilit.com/links/1106_1).
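For the curious, a minimal sketch of what an SPML-style provisioning request looks like on the wire follows. The SPML 2.0 core namespace is real; the payload element names are hypothetical, since no Covisint or Shell schema details are public.

# Minimal sketch of an SPML 2.0-style addRequest using only the Python
# standard library. The namespace is the OASIS SPML 2.0 core; the <account>
# payload schema is invented for illustration.
import xml.etree.ElementTree as ET

SPML = "urn:oasis:names:tc:SPML:2:0"

def build_add_request(user_id, email, target_id):
    """Return an addRequest that provisions an identity on a target system."""
    ET.register_namespace("spml", SPML)
    req = ET.Element(f"{{{SPML}}}addRequest", {"targetID": target_id})
    data = ET.SubElement(req, f"{{{SPML}}}data")
    account = ET.SubElement(data, "account")        # hypothetical payload
    ET.SubElement(account, "userID").text = user_id
    ET.SubElement(account, "email").text = email
    return ET.tostring(req, encoding="unicode")

print(build_add_request("jdoe", "jdoe@example.com", "partner-portal"))

De-provisioning would issue the corresponding deleteRequest, which is what makes ‘zap the identity right away’ a single call rather than a per-application chore.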

Comment—Given that Shell was leery of leveraging a ‘proprietary’ identity system, we asked Klender if Covisint was any different, say, from Microsoft or another proprietary system. He observed that Covisint leveraged standards like SAML and, if needs be, ‘we could be swapped out.’ More from sales_info@covisint.com.


Cost control

French supermajor Total has deployed Skire Unifier, a cloud-based cost control solution, to manage its $10 billion annual upstream capital projects spend.

Silicon Valley headquartered Skire is to provide Total with its ‘Unifier’ cost control system. Unifier, which was selected after a formal call for tender, will be used on Total’s upstream capital projects. Key to Total’s choice were Skire’s workflow automation, earned value and cash flow management and its interface with SAP financials and Oracle Primavera.

Unifier is a collaborative web-based platform that integrates business processes, data and documents. Skire has been providing a ‘true’ multi-tenant software as a service (SaaS) architecture since 2000. But in the last couple of years, a ‘seismic shift’ has occurred away from traditional on-premise business applications.

Skire has a word of warning for would-be cloud users, ‘Many companies are rebranding their products as cloud computing.’ Buyers should check to see if a ‘true’ cloud architecture is deployed. ‘If the provider hands you off to third party hosting services, that is not the cloud and the issues of the old model will resurface.’

Last year, Dow Chemical’s Bob Donaho cited Skire Unifier as a key enabler on Saudi Aramco’s Ras Tanura ‘giga-project’ (Oil IT Journal May 2010). ConocoPhillips and BG Group are also Skire clients. More from www.oilit.com/links/1106_2.


15 years of PNEC—Part 2, 2002—2006

Oil IT Journal continues its celebration of 15 years of PNEC’s Data Integration Conferences. We track the rise of Linux and XML-based standards and tools, the combined technical and marketing success of Innerlogix’ data quality environment, the advent of ‘Portal’ based data management, Yukos’ contrarian viewpoint, Wal-Mart’s data environment and the arrival of Google Earth.

We continue our 15 year review of Phil Crouse’s PNEC Data Integration conference with the 2002 edition. This was the year of new XML-based languages, with new MLs from Innerlogix, PDVSA and three from PPDM! Nagib Abusalbi described Schlumberger’s ‘noble goal’ as ‘providing one official answer to a query, along with known risk’ but as he admitted, ‘We are not there yet.’ Users should be able to view data and fix problems; the more data is used, the more it gets fixed. Dag Heggelund introduced Innerlogix’ XML-based Extensible Data Connectivity Language (XDCL) for ‘no-code’ development of data drivers. Application service provision (ASP) had lost some of its shine in the wake of the dot-com bust. Many companies were reluctant to see their data go off-site, although BP in Aberdeen and PDVSA were counter-examples. Shell reported on a data management software gap analysis performed by Schlumberger. Interestingly, PGS’ PetroBank was selected. Shell’s Cora Poché made a call for a seismic data clean-up project—along the lines of the MMS’ Gulf of Mexico well data cleanup initiative. Shell was a keen user of hosted software, moving its data to Landmark’s ‘Grand Basin’ e-business unit. Chris Legg described BP Houston’s search for a replacement for Amoco’s DataVision and the ex-BP/Arco EXSCI. In the end BP selected Petrosys’ dbMap ‘because it ran on both PC and Unix and for map quality.’

2003

Will Morse (Anadarko) saw the move from Unix to Linux as an unstoppable trend, ‘be there or be square!’ But he warned that ‘the costs of Linux migration may not be as low as you think’. Anadarko was running Landmark and Paradigm apps on Linux. POSC announced that as part of its Practical Well Log Standards (PWLS) work, it had been granted the right to use and migrate the Schlumberger classification including curve mnemonics.

2004

This was the year of Innerlogix, which scored a marketing home run as ChevronTexaco, ExxonMobil and Shell all presented enthusiastically on what we named the ‘star of the show,’ its Datalogix data clean-up tool. Trudy Curtis (PPDM) and Alan Doniger (POSC—now Energistics) announced an agreement on schema design principles—especially on units of measure—and on ‘profileable’ schemas, noting too the return of the Petroleum Industry Data Dictionary. Charles Fried (BP) opined that, ‘we are still in the stone age regarding databases.’ Problems exist with data in Excel spreadsheets and shared drives are ‘all filling up.’ Fried observed, ‘all these disparate data types are a pain in the butt to manage and there is no money for this.’ Vitaly Kransnov revealed that all software used by Russian super major Yukos was developed in-house because commercial software failed to meet Yukos’ needs. Commercial tools are ‘complicated and hard to use,’ offer poor connectivity to the corporate database and no national language support. Yukos ‘adapts its software to its corporate knowledge, not vice versa.’

2005

Pat Ryan described Calgary-based Nexen’s data management framework which supported its 280 ‘best of breed’ G&G and engineering applications. Nexen’s Data Application Separation layer (DASL) leveraged Tibco middleware to link well data objects with OpenWorks, GeoFrame and an in-house developed PPDM repository. The model was extended to pipelines and facilities. DASL captures user ‘context’ at login. Mike Underwood updated the meeting on ChevronTexaco’s data quality effort. Chevron’s workflow had been improved and automated. Data cleanup is performed with Innerlogix tools. Madelyn Bell (ExxonMobil) noted a shift in focus at PNEC from data models and middleware to the ‘value and completeness of data.’ Exxon is making it easier to retrieve, refresh and reuse old studies. Bell queried why E&P companies are not more proactive in mandating data quality standards for vendors. Pat Ryan (Nexen) agreed that terminology is crucial. Document management is hard to link with geoscience systems—but the need is now to access G&G and corporate documents and contracts. Paul Haines (Kerr McGee) suggested information management was ‘more about people and process than about the data.’ Companies understand the importance of metadata, but ‘vendors don’t supply it!’ A plea echoed from the floor, particularly for geodetic information, where a ‘standard data model/exchange format’ is needed. In designing BHP Billiton’s Petroleum Enterprise Portal, Katya Casey took inspiration from eBay and Amazon. The Portal embeds Schlumberger’s Decision Point, SAP Business Warehouse, Microsoft Exchange, Documentum and other tools. ‘Everything I need today is right in front of me.’

2006

The 2006 PNEC high point was Wal-Mart CIO Nancy Stewart’s talk on how the retail behemoth uses a Teradata database to track activity around the globe in real time. Kerr McGee (now Anadarko) was working on unstructured data management, leveraging AJAX technologies to enhance its users’ experience. Shell continued to enhance (and measure) data quality. More metrics underpin Burlington Resources’ (now ConocoPhillips) application portfolio rationalization. Clay Harter (OpenSpirit) made a compelling case for the use of Google Earth (GE) in oil and gas. OpenSpirit had already jumped on the Google Earth bandwagon, offering the popular GIS front end as a data browser for OpenSpirit-enabled data sources. GE Enterprise rolls in shape files and raster images through Fusion, which blends the GE database and data on in-house servers. OpenSpirit (OS) had leveraged its integration framework to tie into GE by dynamically creating kml/kmz from OS sources that can be consumed by GE. GE ‘fills the need for a light-weight easy to use 3D browser.’ More on PNEC from www.oilit.com/links/1106_39.
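By way of illustration of the kml/kmz generation described above, the sketch below turns well records into a minimal placemark document that Google Earth can open directly. Well names and coordinates are invented; a real connector would pull them from an OpenSpirit-enabled store.

# Toy dynamic KML generation for Google Earth: well records become placemarks.
import xml.sax.saxutils as su

wells = [("Well A-1", -102.5, 47.8), ("Well B-2", -102.7, 47.9)]

placemarks = "".join(
    f"<Placemark><name>{su.escape(name)}</name>"
    f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
    for name, lon, lat in wells)

kml = ('<?xml version="1.0" encoding="UTF-8"?>'
       '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
       f"{placemarks}</Document></kml>")

with open("wells.kml", "w") as f:      # double-click the file to view in GE
    f.write(kml)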


Book review—In memory data management

SAP’s SansSouciDB promises a ‘true’ relational database for both transactions and analytics.

‘In Memory Data Management: An inflection point for the enterprise1’ (IMDM) by SAP’s Hasso Plattner and Alexander Zeier is a curious read. Its starting point is the dichotomy between transactional databases and reporting/analytical systems. Historically, these have been kept separate for performance reasons. You don’t want your transactional system to be brought to its knees by a query from an analyst. Another problem that arises in database analytics is that the ‘pure’ relational model of linked tables and indexes can provide poor performance with joins across many tables. But systems are speeding up. Do we really need all this complexity of non-relational systems duplicated for transactions and analytics? The thesis of IMDM is that modern systems are so powerful that you can realize database nirvana by running the database in memory. Such a database runs so fast that complex queries can proceed with little or no impact on the transactions. All you need is a fast machine with boatloads of fast RAM. One database, one data model. Well, that is the theory.

It is not quite as simple as that. IMDM spends a lot of time (more detail than most readers would probably want) explaining how to adapt data structures, squeeze the database into memory and retool queries as stored procedures. The book revolves around a system called SansSouciDB (SSDB), as used in SAP’s ‘Hana’ appliance (Oil ITJ December 2010). SSDB stores active data in ‘compressed columns’ in memory. External (disk) storage is reserved for logging and recovery purposes and to query historical data. SSDB makes use of parallel processing, across blades and across cores. The test target architecture included high-end blades, each with 64 cores and 2 terabytes of memory. Use cases are briefly presented—one for smart grid meter data, another for real time sensor net/RFID data streams.
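A toy example may clarify the column-store idea (a generic illustration, not SansSouciDB itself): with values stored column-wise and the low-cardinality column dictionary-encoded, an aggregate query scans two small, contiguous arrays and never touches the rest of the row.

# Generic illustration of dictionary-compressed, column-wise storage.
rows = [("EU", 120.0), ("US", 95.5), ("EU", 87.0), ("ASIA", 60.2), ("US", 33.1)]

regions = [r for r, _ in rows]
dictionary = sorted(set(regions))                 # ['ASIA', 'EU', 'US']
encoded = [dictionary.index(r) for r in regions]  # [1, 2, 1, 0, 2]
amounts = [a for _, a in rows]

# Sum of amounts per region touches only the two columns it needs.
totals = {}
for code, amount in zip(encoded, amounts):
    region = dictionary[code]
    totals[region] = totals.get(region, 0.0) + amount
print(totals)   # {'EU': 207.0, 'US': 128.6, 'ASIA': 60.2}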

Sometimes IMDM’s scope is bewildering. What is that Microsoft Surface doing there? Why so much on specific Intel Nehalem architectures, on virtualization and even ‘big data’ and cloud computing? The authors appear not to leave any trendy IT stone unturned, ensuring that the marketing message for ‘in memory’ resonates in every ear. Could SSDB include an element of FUD2?

A better title for this book might have been ‘Adapting and optimizing databases to novel architectures.’ But this in a sense would be giving the game away. Modern architectures, as we saw in our review of Pete Pacheco’s introduction to parallel programming (April 2011), far from offering straightforward performance hikes to either seismic imaging or relational databases, actually require a retooling of the system, application and likely the user code to realize the full potential of the hardware. At one level, IMDM can be seen as an interesting discussion of how to build and run a fast database and will be of interest to technologists. On the other hand, it does a good job of debunking the notion that ‘in memory,’ at least with current architectures, is a straightforward route to performance. Sans souci (French for ‘no worries’) it is not!

1 www.oilit.com/links/1106_40.

2 Fear, uncertainty and doubt.


OSIsoft/PI System 2011 User Meet, San Francisco

Enbridge Pipeline and Rockwell’s Factory Talk. NiSource’s ‘repairable component modeling’ adapts to shale gas boom. MOL refiners move to global operations and maintenance and real time fleet data.

Earlier this year, attendees at OSIsoft’s 2011 User Conference in San Francisco heard how Enbridge Pipeline has deployed Rockwell Automation’s Factory Talk Historian Machine Edition (ME). Described as a ‘PI Server that fits in the palm of your hand,’ ME acts as a high speed data collection engine for the backplane. Data is consolidated from multiple ME modules to the plant PI Server historian. The system is used across Enbridge’s remote pump stations in nine states. The ‘distributed historian’ approach stores up to 14 hours of data in 2GB of internal memory when data links go down. The system also manages the various versions of the ‘story’ stored on local recorders and remote PI servers, and provides visibility of highly granular information on temperature, amps, volts and vibration. Such high volume data has helped tune exception/compression settings to optimize performance. Enbridge managed to avoid a pump motor rebuild by comparing anomalous data with historical temperature data. Rockwell notes that ME is ‘not OPC,’ rather a true real time system, ‘50ms is 50ms.’ ME talks direct to PI and all data is available to DataLink, ProcessBook, WebParts and PI SDK applications. More from www.oilit.com/links/1106_6.
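The exception/compression settings referred to above decide which samples a historian bothers to archive. The sketch below shows the general idea with a simple deadband filter; it is a simplified stand-in for the PI System’s actual exception and compression tests, with invented values.

# Simplified deadband 'exception' filter: archive a sample only when it moves
# more than `deviation` from the last archived value, or when `max_interval`
# seconds have elapsed since the last archived sample.
def exception_filter(samples, deviation=0.5, max_interval=600):
    archived, last_v, last_t = [], None, None
    for t, v in samples:
        if (last_v is None or abs(v - last_v) > deviation
                or t - last_t >= max_interval):
            archived.append((t, v))
            last_v, last_t = v, t
    return archived

readings = [(0, 20.0), (10, 20.1), (20, 20.2), (30, 21.0), (40, 21.1)]
print(exception_filter(readings))   # [(0, 20.0), (30, 21.0)]

Widening the deadband cuts archive volume at the price of fidelity, which is exactly the trade-off the granular Enbridge data helped tune.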

Shale gas-driven growth in NiSource’s natural gas pipeline system is forcing a more ‘dynamic’ approach to its management. New production sources are creating directional flow modifications and capacity constraints that stress system reliability. Enter NiSource’s program for operations and maintenance defect elimination, using ‘repairable component modeling’ (RCM), ‘failure mode, effects and criticality analysis’ (FMECA) and IBM’s Maximo as the enterprise computerized maintenance management system. RCM compares risk exposure with mitigation options to identify an ‘economic balance.’ The current state of NiSource’s real time architecture is ‘fractured and proprietary.’ NiSource is moving to a framework of OSIsoft PI System tools and Microsoft SharePoint as a ‘central hub of actionable knowledge.’ PI System will be used for online condition monitoring at six compression stations.

Hungarian Oil and Gas, a.k.a. MOL Group, is undergoing a paradigm shift as it consolidates its operations and maintenance from a local approach, seeking to optimize operations at each refinery, to enterprise-wide optimization of its entire fleet. To achieve this, MOL is ‘closing the gap’ between process control and business governance. This is being achieved with a fleet and refinery-wide real time infrastructure and a unified data model of all refineries in the group. PI System process data feeds into various refining decision support systems built with PI SDK, PI DataLink and other PI tools. The result is ‘visible and controllable operations’ throughout MOL’s plants, better situational awareness and a new focus on timely corrective actions. Many PI SDK-developed applications have been rolled out, for example ‘Nice,’ a multi-language app for movements, sample and inventory management, and ‘Semafor’ for operational KPI/reporting, using Business Objects XI atop the PI database. More from www.oilit.com/links/1106_7.


ResourceApps’ interactive oil and gas map demonstrator

Ajax, Google Maps, jQuery and Dygraphs combine in an impressive mapping technology showcase.

Colorado-based ResourceApp has developed an interactive map of publicly available North Dakota oil and gas well data. The interactive oil and gas map allows users to ‘click, zoom, download and peruse’ oil and gas well logs, recorded mineral deeds, royalty assignments, tops, production data and more. The speed of the application is impressive. ResourceApp’s demonstrator shows off its ‘novel’ geospatial database mapping technology, based on a combination of the Ajax, Google Maps and jQuery APIs and the open source ‘Dygraphs’ JavaScript data visualization library. ResourceApp is inviting companies to test drive the tool to evaluate potential applications of the technology in house. More from www.oilit.com/links/1106_8.


Athens Group on mitigating drilling software risk

Nestor Fesas—‘improve drilling software quality with risk management and contractual specs.’

Speaking at the SPE Digital Energy event earlier this year, Nestor Fesas of the Athens Group noted that drilling software is particularly risky. One application that controlled a top drive caused erratic behavior and injured two rig hands. Drilling software is risky because it is ‘invisible,’ specs are often inadequate and development processes are immature. This makes it ‘easy’ to change and implement specs late in the development cycle, a dangerous practice.

Athens Group produces an annual survey of drilling non-productive time (NPT)1. The key message from the last report is that drilling (software) control systems (DCS)-related NPT is ‘way too high.’ Athens Group advocates a lifecycle approach to software development. Software quality is inversely proportional to risk—so it is a good idea to embed risk mitigation efforts early in the development lifecycle. This can be achieved by establishing contractual software standards and by validating and verifying requirements and design.

What is software risk identification and management? Fesas gave the example of the documentation for an alarm system that was 85 pages long and contained over 1,000 alarms! This very risky specification document was approved with minimal review. This is ‘clearly wrong.’ Contractual language should allow for the verification of performance quality and HSE expectations. Industry standards for performance, quality, health and environmental (PQHSE) requirements need to be included.

In the Q&A, Fesas was asked if he could see the equivalent of an API spec for software development. He said that wording for a ‘fit for purpose’ end user agreement was being mooted for certain contracts. Further quality improvement is achievable by ‘using a subset of the operating system.’ Another questioner asked how version upgrades could be made to interconnected control systems without compromising the system. Fesas opined that, ‘Interaction between systems is a relatively well constrained dialog. You can also simulate inputs and outputs. Some clients use the training simulator to check out software before deployment.’ Full paper available on www.oilit.com/links/1106_9.

1 The State of NPT on High-Specification Offshore Assets—www.oilit.com/links/1106_10.


Patrick Hereng on €1 billion annual IT spend

JDN Solutions interviews Total CIO—Petaflop SGI cluster for 2012, 5,000 iPhones and SAP ERP.

Interviewed by the French newsletter JDN Solutions1, CIO Patrick Hereng revealed how Total spends its billion euro IT budget. Total’s upstream information systems are considered ‘strategic’ and account for a large part of the budget. Total runs its own supercomputer—an SGI system whose capacity is to double to a petaflop by 2012. For supply chain management, Total’s application of choice is SAP, naturellement.

Total has evolved from a ‘Maginot line’ approach to security to an embedded security model. This provides secure access from any endpoint, to any system, anywhere. However, new styles of supplier interaction are forcing a revision of the security model.

Total has opted for the iPhone as a group standard with some 5,000 iPhones today, all secured and protected against loss. The company is also migrating its traditional POTS2 telephone system to ToIP3. Total is also embarking on an intranet portal which will harmonize the group’s existing internet sites and support collaborative social network type activities.

Total’s IT R&D effort includes multi petaflop computing, virtual reality-based training, and ‘serious games.’ The latter are expected to facilitate training and improve safety in Total’s refineries.

Total is starting to use software as a service (SaaS), particularly for social networking. Hereng sees Total’s IT migrating to a hybrid cloud model. Data centers are being consolidated in virtualization projects and towards a private internal ‘cloud’. Total is also testing Microsoft’s cloud-based applications.

Hereng is wary of the concentration taking place in the IT Marketplace. Total tries to bi-source key technologies although this is not always possible. There is, for instance, no alternative to SAP. But concentration also means that offerings may be more integrated. This explains Total’s choice of Microsoft on the client workstation, along with SharePoint, and Microsoft’s email and instant messaging technologies. The approach reduces Total’s integration overhead.

1 The original article (www.oilit.com/links/1106_5) was authored by Antoine Crochet-Damais and has been translated and summarized with permission.

2 Plain old telephone service.

3 Telephony over Internet Protocol.


Software, hardware short takes

Maxeler, Aramco, ATE-AeroSurveillance, Blue Marble, Comsol, Petrolink, Dynamic Drilling, Pitney Bowes, Safe Software, Ensyte, Energy Solutions, Landmark.

Maxeler’s new MaxGenFD compiler hides the complexities of finite difference seismic imaging development such as managing very large data sets, boundary conditions and domain decomposition across multiple compute elements on the company’s FPGA hardware.
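For readers wondering what such a compiler hides, the heart of finite-difference modeling is a stencil update like the deliberately tiny 1D acoustic example below; MaxGenFD generates far more elaborate 3D kernels, with boundary handling and domain decomposition, for Maxeler’s FPGA dataflow hardware.

# Tiny 1D acoustic finite-difference time step, second order in time and
# space. Illustrative only; parameters satisfy the CFL stability
# condition (c*dt/dx < 1).
nx, nt, c, dx, dt = 200, 500, 1500.0, 5.0, 0.001
r2 = (c * dt / dx) ** 2
prev, curr = [0.0] * nx, [0.0] * nx
curr[nx // 2] = 1.0                     # point source at the grid center

for _ in range(nt):
    nxt = [0.0] * nx                    # fixed (zero) boundaries
    for i in range(1, nx - 1):
        nxt[i] = (2 * curr[i] - prev[i]
                  + r2 * (curr[i + 1] - 2 * curr[i] + curr[i - 1]))
    prev, curr = curr, nxt

print(max(curr), min(curr))             # wavefield extremes after nt steps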

Saudi Aramco has announced ‘Apex,’ an automated approach to seismic acquisition and processing. Apex uses ‘robotics technology’ for 3D data acquisition and automates processing tasks such as near surface and subsurface velocity estimation.

ATE-AeroSurveillance’s new ‘Ardent’ airborne real-time detection and notification system offers real-time object detection and tracking from manned or unmanned aircraft—www.oilit.com/links/1106_27.

Blue Marble’s GeoTransform 6.1 release offers developers raster image manipulation across many GIS raster image file formats and embeds the EPSG geodetic database—www.oilit.com/links/1106_28.

The 4.2 release of Comsol Multiphysics includes a new Geomechanics add-on to its structural mechanics module. 3D simulations can now be imported to AutoCAD—www.oilit.com/links/1106_29.

Petrolink’s new website (www.oilit.com/links/1106_30) is a showcase for its Digital Well File oilfield data distribution service.

Dynamic Drilling Systems has released a functional Witsml client written in C#. The code, which includes the nWitsml library from Setiri Group (www.oilit.com/links/1106_31), is available from www.oilit.com/links/1106_32.

Pitney Bowes’ Encom Discover 3D 2011, a geoscience add-on to MapInfo Professional/Discover GIS, adds vector and voxel analytics, automatic geometry calculations and 2D/3D correlation in the seismic interpretation module—www.oilit.com/links/1106_33.

Safe Software’s FME 2011 speeds geo-data access to sources such as Oracle Spatial, Esri ArcSDE, Microsoft SQL Server, PostGIS, and more—www.oilit.com/links/1106_34.

Ensyte has added an electronic bulletin board to the new release of its gas supply/transportation software solution, Gastar. The interface is used to communicate with transportation clients for scheduling, confirmations, gas usage, and billing statements—www.oilit.com/links/1106_35.

EnergySolutions’ PipelineStudio 3.3 release enhancements include heater and cooler elements, subnetworks and a scenario editor—www.oilit.com/links/1106_36.

The 5000.6 release of Landmark’s GeoProbe high-end seismic interpretation system includes tighter integration with DecisionSpace Desktop, a 64 bit Windows 7 port and ‘GeoShell,’ a new data object that can represent large, complex, multi-z bodies such as salt intrusions—www.oilit.com/links/1106_37.


International Digital Oilfield Conference 2011, Abu Dhabi

ADCO, PDO, Emerson, RasGas, Siemens and Saudi Aramco share digital oilfield tales from the front.

Speaking at the second International Digital Oilfield Conference in Abu Dhabi last month, Bahir Al-Azawi described ADCO’s ‘Smart Fields’ (SF) initiative. SF enables multi-disciplinary monitoring and control of production and injection in near real time. The initiative includes asset-level collaborative work environments (CWE) and corporate-level real time production and drilling centers (RTOC). Each CWE includes an integrated asset model and embeds Halliburton’s AssetObserver dashboard.

PDO’s Fahud field collaboration center (FFCC) was presented by Salim Al Busaidi. The FFCC handles 500,000 real time data points per minute. The venerable Fahud field has been producing for 44 years. PDO is now enhancing data management of its 500 wells with Halliburton’s Engineering Data Model. PDO’s ‘Nibras’ portal ‘brings all production data together in a single version of the truth.’ Nibras was the subject of a second PDO presentation by Salim Al Busaidi. The Nibras client is a SharePoint/Silverlight development with data services supplied by .NET/Windows Communication Foundation. Under the Nibras hood, data is stored in OSIsoft’s PI System alongside vertical applications and databases from Halliburton (EDM) and Schlumberger (OFM). PDO is now working on well performance monitoring and optimization.

Emerson’s Dale Perry advocates a move from ‘simple’ Hart protocol to Foundation Fieldbus (FF) intelligent devices, using Emerson’s ‘human centered design’ (HCD) GUI. Perry contrasted HCD with ‘technology-driven’ design which shows functionality and leaves it up to the user to perform a task. HCD is ‘task driven’ and builds intelligence into the UI.

Ahmad Al Kuwari introduced RasGas’ real time information system (RTIS). RTIS combines around 250 million pages of reports with real time data ‘harvested’ from multiple sources into a central database. 2,000 staff hours per year have been saved through process automation. The system includes data from RasGas’ laboratory information management system (LIMS) for fluid surveillance and sampling.

Both ADCO and PDO presented their digital oilfield communications infrastructures. ADCO uses a combination of an SDH fiber network of various capacities (STM-1 to STM-16) with local Ethernet LANs connected to the SDH nodes. PDO was forced to upgrade its wireless infrastructure and has now opted for WiMAX 802.16d for the last mile.

Ahsan Yousufzai described how Siemens’ work with Saudi Aramco began with the deployment of its ‘XHQ’ as the basis of Aramco’s Enterprise Monitoring Solution (EMS). EMS runs Aramco’s refineries, pipelines and export terminals—all controlled from the ‘Ospas’ centralized facility. Siemens was awarded the i-Field contract following technology assessment pilots with different vendors. A ‘generic i-Field visualization’ (GIVis) template is customized to each location. The ‘data driven’ solution means that wells are immediately visible as they come on stream. Data is consolidated in a central historian and the system is linked with Aramco’s wells database. Siemens XHQ now monitors the complete Aramco value chain from wells to export terminals. Aramco is now working to integrate with SAP modules for MRO, finance and HR, and on a link to subsurface applications. More on IDOC from www.oilit.com/links/1106_38.


PNEC Data Integration Conference 2011 Houston

Highlights from this year’s PNEC—Paradigm beyond the ‘gold’ data store, RasGas’ ‘DMX’ data management accelerator, managing Saudi Aramco’s petabyte disks, Shell’s data distribution engines, Petrosys’ data audits, Idea Leadership Company on managing data ‘cultures,’ Exco’s IT transformation, Shell’s spatialization program and Petronas’ ‘PiriGIS’ information system.

Judging from the number of presentations at the 15th Petroleum Network Education Conferences (PNEC) Data Integration and Information Management conference in Houston last month, Landmark and Petris have cornered the market for data management application software. In-house software development continues apace, mostly focused on customizations of PPDM. Noah/Hess and Schlumberger/CDA made valiant attempts to put a ‘dollar value’ on data management.

Paradigm’s Jean-Claude Dulac thinks that industry needs to go beyond the ‘gold’ data repository. Why should we expect a ‘single version of the truth’ when all our data has errors and uncertainties? The reality is that measurement is imprecise and that picks may differ between interpreters. Deciding which is ‘right’ and approving one source over another makes establishing a gold data store contentious and costly. Uncertainty is everywhere—in measured data, in processing and in modeling parameters. Dulac advocates the use of Bayesian probabilities, ‘We need to change the data management paradigm—instead of using manpower to qualify data, use the computer to find all the versions of the truth, and to find out what data most affects the outcomes.’ This enables the interpreter to focus on the most relevant information. Probabilities need to be analyzed in the review process. Everyone needs to think in terms of probabilities and review probabilistic maps, tops and log computations. Dulac suggests that ‘Gold plus probability equals a platinum repository.’ The Q&A sparked off an enthusiastic debate on the possibility of extending PPDM to manage uncertainty.
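A minimal numeric sketch of the idea (values invented): rather than promoting one ‘gold’ pick, keep every version of a formation top with an uncertainty and let the computer combine them.

# Combine several versions of a formation top, each a (depth, standard
# deviation) pair in meters, by inverse-variance weighting - the simplest
# Gaussian fusion. All values are invented for illustration.
picks = [(2310.0, 5.0), (2304.0, 8.0), (2318.0, 12.0)]

weights = [1.0 / sd ** 2 for _, sd in picks]
mean = sum(d * w for (d, _), w in zip(picks, weights)) / sum(weights)
sd = (1.0 / sum(weights)) ** 0.5

print(f"combined top: {mean:.1f} m +/- {sd:.1f} m")   # ~2309.4 m +/- 4.0 m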

Mark Priest from Qatari LNG operator RasGas explained how the company is transitioning from development to operations. The transition includes a data management acceleration project (DMX) which aims for ‘no data left behind’ after the development phase ends. DMX also sets the scene for surveillance-based operations. Early in the project, it was realized that RasGas had ‘many more schematics than wells,’ that there were ‘inconsistent’ formation tops and ‘a million files scattered around the place in non-communicating systems.’ All this in the face of a projected 80-plus years of field life.

The project is not technology driven; the idea is to make DMX sustainable with built-in data quality checks, security and a desire to minimize the time that subject matter experts are expected to devote to the project. Data ‘judo’ was used to gain control over Excel, preserving the tool’s power for the end user while eliminating its use as a data repository. RasGas sees data quality as improving with use as there are ‘more eyes on the data.’ The project is expected to bring around 4,000 person hours per year in efficiency gains. RasGas shareholder ExxonMobil provided advice to the project. Questioned on the ‘solvability’ of the data management problem, Priest noted that 20 years ago, data management was an ‘office function’ at best. But thanks to the efforts of PNEC’s Phil Crouse and others, ‘We are beginning to get a handle on this. Information is power and is a key differentiator.’ Priest wants future RasGas engineers to look back on DMX and say, ‘Wow, we are standing on the shoulders of giants!’

Saudi Aramco’s seismic data volumes are currently growing at around 45% per year. Jawad Al-Khalaf reported that today, 50,000 trace acquisition is commonplace and the trend is for seismic interpretation on pre-stack data. Aramco’s Expec data center had around 7.5 petabytes online last year—this has now grown to 9 petabytes, posing ‘a big challenge for IT.’ Paradoxically, storage space is underutilized—with about 1.5 PB unused. Too much data is mirrored, making for higher costs in terms of floor space, energy, maintenance and backup. ‘Terabytes are cheap, administration and usage is where the costs are.’ Aramco is looking to more data compression and deduplication where appropriate—and to better use of performant disks across applications including its Disco (seismic processing) and Powers (flow modeling) tools.

Aramco is working on breaking down workflows into processes, applications and data types (PAD) and documenting its business processes. This has been achieved for disk-hungry applications including Matlab, Paradigm’s GeoDepth, Hampson-Russell, GeoFrame, OpenWorks and Petrel. There is a 2.3 petabyte workspace for seismic data in Disco. Targeting and cleaning up the big disk users resulted in an immediate 250 terabyte space saving. Aramco is now working on data mirroring, native Oracle compression and Documentum disk optimization. A data life cycle policy has been developed for each PAD. The company is also evaluating ‘housekeeping’ tools to automate life cycle policy execution, optimize disk use and storage management. The intent is to develop or buy a robust storage management system to help arbitrate what is needed online and offline and how data should flow between SAN systems for processing, on to NAS for longer term use and eventually move offline.
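Deduplication itself is conceptually simple, as this toy sketch shows (16-byte blocks for readability; production systems use much larger blocks and handle collisions and metadata): identical blocks are stored once and referenced by content hash.

# Toy block-level deduplication: store each distinct block once, keyed by its
# content hash; the reference list reconstructs the original byte stream.
import hashlib

def dedupe(data, block_size=16):
    store, refs = {}, []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        refs.append(digest)
    return store, refs

data = b"ABCDEFGHIJKLMNOP" * 4 + b"QRSTUVWXYZ123456"
store, refs = dedupe(data)
print(len(data), "bytes stored as",
      sum(len(b) for b in store.values()), "unique bytes")   # 80 -> 32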

Hector Romero outlined Shell’s journey from well data stores to ‘data distribution engines.’ Shell’s corporate data challenge is to manage the ‘official’ version of a log along with competitor data of variable quality and to serve it all up within the constraints of entitlements. Back in 2000, Shell’s data situation was ‘complex.’ By 2005, data management solutions were implemented around Landmark’s Corporate Data Store (CDS) and Petris’ Recall. Shell now has around five million wells in the CDS and is developing standards, naming conventions and processes for data audit, vendor management, search, project building and quality. The system serves data of ‘known,’ rather than ‘best,’ quality. Users can judge if it is fit for purpose. Connectors between the CDS and Recall have been developed with help from Landmark and Petris. Shell’s ‘data distribution engines’ serve data to projects from the corporate stores. Data cleansed by users can be written back to the CDS. Most recently, Shell is leveraging Recall LogLink to ‘auto push’ data from Recall to OpenWorks. A second data path uses Landmark’s Advanced Data Transfer tool (ADT) to populate OpenWorks from the CDS. Shell’s users like what has been done and want more, in particular an expansion of the data footprint to Landmark’s Engineering Data Model to add casing points, perforations etc. Romero is now also trying to bring in information from Shell’s plethoric production data sources. Unstructured data such as well reports, scout tickets and links to documents makes for something of a grey area. Here Shell uses Landmark’s PowerExplorer to fire off complex queries leveraging Shell’s Portal and EP Catalog.

Volker Hirsinger described how Petrosys has been working with the Victoria (Australia) Department of Primary Industries on ‘tracking change and handling diversity in a database.’ Many apparently static items in a database actually change over time. Such information can be captured in a PPDM well version table which has audit history, data source and interpreter fields. This provides a minimal form of change tracking. Petrosys has extended PPDM to allow a full audit history. A typical use might be generating a ‘single version of the truth’ from different source databases. Here Petrosys has developed a method of promoting a ‘best version’ to the corporate database.
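A minimal sketch of the version-table idea follows (hypothetical column names, not actual PPDM DDL): every change inserts a new row carrying source, interpreter and timestamp, and the ‘best version’ flag is promoted rather than data being overwritten.

# Audit-history version tracking in miniature: versions are inserted, never
# overwritten; 'promotion' just moves the is_best flag.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE well_version (
    uwi TEXT, total_depth REAL, source TEXT, interpreter TEXT,
    loaded TEXT DEFAULT CURRENT_TIMESTAMP, is_best INTEGER DEFAULT 0)""")

con.execute("INSERT INTO well_version (uwi, total_depth, source, interpreter)"
            " VALUES ('W-001', 2310.0, 'vendor A', 'jsmith')")
con.execute("INSERT INTO well_version (uwi, total_depth, source, interpreter)"
            " VALUES ('W-001', 2315.5, 'field survey', 'kchan')")

# Promote the field survey value as the corporate 'best version'.
con.execute("UPDATE well_version SET is_best = (source = 'field survey')"
            " WHERE uwi = 'W-001'")
for row in con.execute("SELECT * FROM well_version WHERE uwi = 'W-001'"):
    print(row)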

John Eggert, who heads up The Idea Leadership Company, has put his finger on a data management pain point: the communications ‘gap’ between management and technologists. This ‘social’ problem stems from the fact that techs lack people skills and are convinced that management does not understand what they do. Non-techs on the other hand have unrealistic expectations and like to take ‘data free’ decisions. Co-author Scott Tidemann (Petrosys) has developed a program to help, focused on communicating around data management, ‘Communication skills impact project success as much as technical skills.’ Structured approaches like PDMP project management and PRINCE2 can help too.

Rob Thomas described Exco’s IT transformation in support of its shale gas exploration effort. Shale gas wells come in fast and furious and mandate efficient IT systems. Analysis performed by co-author Jess Kozmann (CLTech) located Exco in a data maturity matrix and identified a ‘change trajectory’ for improvement. This involved a move away from Excel/Access usage with better data links and standardized tools. The outcome is that Exco now has portal-based access to its 10 terabytes of drilling and completion data, has a handle on its application portfolio and has embarked on a three year project to develop a corporate PPDM-based upstream data store.

Roy Martin reported on how Shell has spatialized data in its CDS and OpenWorks repositories. Spatialization involves transforming positional data into a GIS representation, usually in ArcSDE or a file geodatabase. Shell has formalized and streamlined the process in a global project commissioned to resolve the situation. Landmark’s PowerHub, PowerExplorer Spatializer and Safe’s FME toolset were used with ‘minimal customization.’
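In its simplest form, spatialization is just transforming tabular positional data into a GIS-consumable representation. The sketch below shows the principle with invented well records and GeoJSON output; Shell’s project targeted ArcSDE and file geodatabases via the FME and PowerExplorer tooling.

# Principle of 'spatialization': tabular well positions become a GIS layer.
import json

wells = [("W-001", 5.71, 58.88), ("W-002", 5.65, 58.97)]  # (uwi, lon, lat)

layer = {"type": "FeatureCollection", "features": [
    {"type": "Feature",
     "geometry": {"type": "Point", "coordinates": [lon, lat]},
     "properties": {"uwi": uwi}}
    for uwi, lon, lat in wells]}

print(json.dumps(layer, indent=2))    # loadable by most GIS tools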

Zukhairi Latef described how Petronas has maximized the value of its subsurface data with GIS integration using an ESRI ArcSDE/ArcGIS Server with raster imagery in ERDAS Imagine. The PiriGIS system has its own spatial data model (a PPDM/ESRI blend) with EPSG geodetics and ISO TC211 metadata. Spatializing tools include OpenSpirit and Safe’s FME. Data integration has proved key to Petronas’ exploration success. More from www.oilit.com/links/1106_39.


Oracle Spatial 2011 user group, Washington

Indus on EPA data cleanup, HNTB’s TrueViz for LiDAR data management, eSpatial—GIS in the cloud.

The Oracle Spatial user group convened in Washington last month. Indus Corp.’s Bob Booher outlined how the company has been cleaning up the US Environmental Protection Agency’s Facility Registry System (FRS), an ‘integrative’ system combining facility data from over 30 national EPA systems and over 45 state systems. An ExxonMobil refinery in California leads the field with 20 different EPA and state programs linked to a single FRS record (www.oilit.com/links/1106_11). The FRS is key to the EPA’s governance of air, water and toxic substance permitting. Issues fixed in the major geo data cleanup included different formats, datums, missing metadata and other errors. All data is now in a standard SDO geometry using the federal NAD83 standard datum. In some cases, geocoding data (ZIP codes) provided better locations than the lat/long pairs.
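The datum standardization described boils down to a coordinate transform per record. A minimal sketch, assuming the third-party pyproj library and invented sample coordinates, converting NAD27 lat/longs to the NAD83 datum:

# Reproject NAD27 (EPSG:4267) coordinates to NAD83 (EPSG:4269).
# Requires the pyproj package (pip install pyproj).
from pyproj import Transformer

to_nad83 = Transformer.from_crs("EPSG:4267", "EPSG:4269", always_xy=True)

facilities = [(-118.25, 34.05), (-95.37, 29.76)]   # (lon, lat) in NAD27
for lon, lat in facilities:
    x, y = to_nad83.transform(lon, lat)
    print(f"{lon}, {lat} -> {x:.6f}, {y:.6f}")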

HNTB’s Todd Rothermel noted that point cloud (PC) data collection has exploded of late, with different techniques (LiDAR, sonar, GPR) applied in air, ship or vehicle-borne contexts. PC acquisition creates multi gigabyte files that can cause problems for users and data managers. Rothermel advocates a combo of Oracle 11g Spatial (running on a Linux instance in the Amazon EC2 cloud) for PC data storage and Bing Maps Silverlight for PC visualization. This approach has been patented and commercialized as HNTB’s TrueViz Pulse. The TrueViz API includes tools for loading, searching and extracting data to various applications and endpoints (www.oilit.com/links/1106_13).

Eamon Walsh (eSpatial) showed how Oracle Spatial in the cloud is used to provide GIS data in ‘software as a service’ mode. Key to successful SaaS is ‘multi tenancy,’ with each user’s data kept secure and private, as exemplified by Salesforce.com, NetSuite and others. Multi tenancy is set to transform the GIS business. eSpatial’s offering also runs on the Amazon cloud, on pre-configured Amazon Machine Instances (AMI) running Fedora Linux and Oracle 11g. Turning the cloud into a secure, industrial strength, multi tenanted environment still required some smarts. AMIs need to be secured and backed up, access needs to be restricted and there is a need to monitor and respond to evolving machine states, ‘Amazon has a lot of $ meters running!’ Walsh offered some fairly detailed advice to those wanting to roll their own EC2-based GIS servers before turning to eSpatial’s own offering of ‘OnDemandGIS’ and the ‘iSmart’ server. iSmart web GIS is an ‘instant multi-user’ cloud-based offering. Users sign up for an account, load data, manage layers, users and permissions and visualize data in the browser (www.oilit.com/links/1106_14). More from the conference on www.oilit.com/links/1106_12.


Folks, facts, orgs ...

AGA, CWP, Andrews Kurth, Apache, Berkeley Research, Big Eagle, Divestco, Dril-Quip, Perficient, i4Energy, CDA, Wipro, Perenco, MVE, NDB, Oilennium, OTM, Siemens, Ryder Scott.

Larry Borgard, President and COO of Integrys Energy Group, has been elected first vice chairman of the American Gas Association, which has also released the 2011 edition of the AGA FERC Manual, a source of information on FERC rules affecting users of pipeline services. The manual was prepared by consultants Dewey & LeBoeuf (www.oilit.com/links/1106_26).

The Center for Wave Phenomena at the University of Colorado has two new sponsors in PDVSA and Transform Software.

Shahid Ghauri has joined Andrews Kurth LLP as a Partner, in the Business Transactions (Oil & Gas) and Project Finance sections in the firm’s Houston office.

Jon Jeppesen has been promoted to executive VP for operations at Apache’s Gulf of Mexico Shelf, Deepwater and Gulf Coast Onshore regions. Jon Graham is VP of HSE. Mark Bauer is region VP for the GOM Shelf and Michael Bose is region VP and country manager for Argentina.

Former top aide to Governors Schwarzenegger and Davis, Susan Kennedy, has joined consulting firm Berkeley Research Group as a Special Advisor.

Christopher Anderson is the new CEO of Big Eagle Services.

In a restructuring, CFO Roderick Chisholm is to leave Divestco; Stephen Popadynetz will assume the role.

Blake DeBerry is Dril-Quip’s new Senior VP Sales and Engineering. Jim Gariepy is Senior VP Manufacturing.

Bob Raiford has resigned as CFO of ENGlobal.

Perficient has opened a new and expanded facility in Hangzhou, China to accommodate growth at its CMMI level-5 certified Global Development Center.

David Culler has been named director of the i4Energy Center, a California energy think tank.

Common Data Access has appointed new directors: George Rorie (Shell), Roy Rees-Williams (Nexen), Anne Hegarty (Statoil) and Jeremy Lockett (Centrica).

Bart Stafford is now Oil & Gas Solutions Lead at Wipro.

Stuart Glenday is now Geoscience Data Manager at Perenco.

Mara Bellavita has joined Midland Valley as support geologist.

NDB has recruited James Bowkett and Andrew Udolisa who hails from Landmark.

Leif-Arne Langøy is the new chairman of DNV’s board and Kongsberg CEO Walter Qvam is chairman of the Council.

Oilennium has appointed Janet Iglesias as Learning Development Specialist to support customers in the Americas.

Crispin Keanie heads-up OTM Consulting’s new office in Dubai.

Siemens has appointed David McIntosh as VP Federal lobbying. He hails from the Environmental Protection Agency.

Ryder Scott has promoted Allen Chen to senior petroleum engineer (PE), Joe Stowers to PE, Brett Gray, Phillip Jankowski, Tiffany Katerndahl and Michael Michaelides to geoscientist, Eleazar Benedetto-Padron to senior geoscientist, Kosta Filis to senior engineering technician, Jim Baird to managing senior VP, Frank Jeanes, Jim Stinson and Mario Ballesteros to VP technical specialist, Rick Robinson, Steve Gardner, Tosin Famurewa, Miles Palke and John Hanko to VP project coordinator and Pamela Nozza to engineering analyst.


Done Deals

Petex, NSI Upstream, Aker Solutions, Autonomy, Iron Mountain, Honeywell, EMS Technologies.

Earlier this year, Petroleum Experts (Petex) acquired NSI Management Company (a.k.a. NSI Upstream) along with its flagship Oil Field Commander (OFC) software. OFC links real-time and historical data sources with petroleum engineering analytical and modeling tools. Petex’s product line includes the Integrated Production Modeling (IPM) suite of tools (GAP, PROSPER, MBAL, PVTP, REVEAL, and RESOLVE) and Integrated Field Management (IFM). Petex is to rebrand OFC as ‘Integrated Visualisation Manager’ (IVM).

The Finanstilsynet (Norwegian Financial Supervisory Authority) has approved Aker Solutions’ prospectus for the listing of shares in its Kvaerner unit, which is to be de-merged and listed on the Oslo Stock Exchange in July 2011.

Autonomy is to acquire ‘selected key assets’ of Iron Mountain’s digital division including archiving, eDiscovery and online backup for $380 million cash. Autonomy is to use its ‘Idol’ meaning-based technology to extend compliance, discovery and analytics and enable intelligent collection and processing of non-regulatory data from distributed servers, PCs and mobile devices. The deal adds over six petabytes of data and 6,000 customers to Autonomy’s private cloud.

Honeywell is to acquire EMS Technologies, a provider of connectivity solutions for mobile networking, rugged mobile computers and satellite communications, for $491 million (approximately 13 times 2010 EBITDA). The deal will enhance Honeywell’s ruggedized mobile computing products and services for use in transportation, logistics and workforce management settings, as well as secure satellite-based asset tracking and messaging technology for search and rescue and warehousing.


Xcite Energy on target with Baker Hughes’ and DGI’s CoViz

Real time collaboration center and new software interface enable pinpoint reservoir targeting.

UK North Sea operator Xcite Energy Resources reports a successful geosteering project using a combination of Baker Hughes’ Reservoir Navigation Services (RNS), Dynamic Graphics’ CoViz geo-visualization package and remote supervision from Baker Hughes’ Beacon collaboration facility in Aberdeen. Working on the Bentley field in UK block 9/3b, Xcite first drilled a pilot hole to range in the top reservoir and optimize placement of the sidetrack across the structure. Monitoring the sidetrack used a newly developed interface between Baker Hughes’ RNS software and CoViz.

Using RNS and Baker Hughes’ ‘AziTrak’ deep azimuthal resistivity tool, Xcite successfully drilled 555 m of 100% net-to-gross sand—all in 100% oil pay zone. The well produced 2,900 bbls/day, with help from an electrical submersible pump. More on www.oilit.com/links/1106_25.


Saudi Aramco’s SAP Land mega deployment

New land management system provides GIS, satellite-based land use monitoring.

In its 2010 Annual Review, Saudi Aramco announced the roll-out of a ‘state-of-the-art’ land management system (LMS), a component of its SAP-based front-end business enterprise system. The LMS adds geographic ‘intelligence’ to thousands of Aramco’s land records through an embedded, ESRI-based geographic information system. The system covers around a tenth of Saudi Arabia’s land mass, primarily in the Eastern and Central provinces, with a network of more than 5,000 km of utility and pipeline rights-of-way. Corporate expansion has resulted in the company’s land reservations increasing nearly three-fold since 1980, with considerable future growth expected from new oil and gas discoveries.

The new system replaces previous ‘fragmented,’ stand-alone systems with multiple interfaces, which imposed a high maintenance and training overhead and required staff to master several databases and systems. The LMS has automated some 17 separate land-related business processes. Authorized users are now able to perform all land management-related activities through SAP, including land use permits, well approvals, land requests and encroachments.

The system also provides remote land use monitoring with a built-in change detection solution that uses satellite imagery. This combines with on-site inspection to monitor unauthorized land utilizations. Aramco claims the new SAP LMS as a ‘world-class solution to large scale land management.’ Download Aramco’s annual review from www.oilit.com/links/1106_21.


Santos virtualizes on IBM, Red Hat Linux, NVIDIA Quadro Plex

More on Santos’ private cloud. Red Hat ‘faster, cheaper and more stable platform.’

Santos has released more chapter and verse on its private ‘cloud’ based virtualization solution for its upstream applications (Oil IT Journal March 2011). The horsepower driving the open-source thin client software is provided by an IBM x3650 M3 server with Nvidia QuadroPlex graphics processing units which provide real-time 3D rendering to ‘standard-issue’ notebook computers. The result is a portable, high performance 3D Red Hat Linux environment accessible from a standard Windows laptop with no 3D capabilities. Santos serves its 600 concurrent users from 12 servers located at its regional offices.

IS Manager Andy Moore said that the solution has resulted in a $2.5 million cost saving to Santos, ‘Red Hat immediately delivered a faster, cheaper and more stable platform. Red Hat Enterprise Linux is used in our development and production environments. It has been the platform of choice for the oil and gas industry for some time as the preferred development environment for the major geoscience software vendors.’ Santos also reports a reduction of over 300,000 kWh/year in electricity consumption. Santos’ virtualization effort won a Red Hat Linux Global Innovation Award last May. The company is now extending its virtualization support to deliver 64-bit Windows 7, again from the IBM/Nvidia servers. More from www.oilit.com/links/1106_23 (Nvidia) and www.oilit.com/links/1106_24 (Red Hat).


Aupec announces subsurface applications benchmark

Participation invited in study of tools, support and processes.

Aberdeen-based Aupec has initiated a Subsurface Applications Benchmark (SAB) study to review tools, data and support models in current use around the global subsurface community. Companies of all sizes are invited to take part. The report will compare G&G applications toolkits across E&P peer companies and identify the most popular tools and usages. The SAB aims to help companies seeking to rationalize their application portfolio and/or validate their existing systems, support and workflows, spanning regional exploration to production monitoring. The ‘light touch’ study is sponsored by a large global upstream player and is backed by technical support from New Digital Business. More on the survey from www.oilit.com/links/1106_22.


Sales, contracts, partnerships and deployments

Allegro Development, AVEVA, EMC-Isilon, FEI, GE, GIS-pax, GeoKnowledge, Geosoft, GeoCruiser Times, Hamilton Group, IPRES, Kadme, Orogenic Resources, LMKR, Object Reservoir, Moblize, MetaPower Canada, Paradigm, Wellstream, Evolution Markets, Logica, Jee, Technip, Samsung, Yokogawa Indonesia.

Saudi Aramco has chosen Allegro Development’s Allegro 8 energy trading and risk management (ETRM) platform to support its new trading unit.

SETAL Óleo e Gás of Brazil has deployed AVEVA Plant on its ‘Avatar’ Project.

PetroChina has implemented EMC Isilon scale-out NAS at its Research Institute of Petroleum Exploration and Development.

FEI is providing its 3D ‘nano porosity’ analysis system, the Helios DualBeam /Qemscan to Whiting Oil and Gas.

GE and Malaysian O&G service provider SapuraCrest Petroleum Berhad have opened a $3.5M state-of-the-art Regional Services Center in Kuala Lumpur.

ArcGIS specialist GIS-pax has partnered with GeoKnowledge on map-based assessment of plays and play fairways. GIS-pax will develop and maintain links to GeoKnowledge’s GeoX ArcGIS plug-in.

Geosoft has appointed Beijing-based technology and consulting company Geo-Cruiser Times as its Chinese reseller.

Hamilton Group has announced an agreement with IPRES Norway to provide marketing, technical support and consulting to clients in the Americas.

Kadme and Orogenic Resources have signed a Whereoil reseller agreement.

LMKR and Object Reservoir have teamed to ‘close geoscience and engineering workflow gaps’ for shale gas studies, leveraging GeoGraphix, ORKA and Limits.

Moblize has been awarded a tender to build a real time operations center for Pertamina EP ‘beating Halliburton and Schlumberger.’

MetaPower Canada has won a CAD 1.8 million order from a ‘Canadian energy company’ for the development of KPIs, safety and business process management at a northern Alberta oil sands facility.

Paradigm has signed a multi-year software license agreement with JKX Oil & Gas for a seismic-to-simulation workflow, including SeisEarth, SKUA and Geolog, to be deployed as JKX’s standard toolkit.

GE Oil & Gas’ Wellstream business has been awarded a $200 million contract by Petrobras.

Energy and environmental markets broker Evolution Markets is to roll out Pivot 360 to its US broker team, in preparation for Dodd-Frank implementation.

AVEVA and Logica have signed a Strategic Alliance agreement. They will jointly provide a managed service capability to optimize clients’ complex engineering projects and through-life management of digital assets.

Shell has awarded training firm Jee a 3 year contract extension for subsea engineering training. Jee will deliver courses to Shell employees worldwide.

Shell Australia has signed with Technip/Samsung for the construction of the first floating liquefied natural gas (FLNG) facility in the world at its Prelude gas field off the northwest coast of Australia.

Yokogawa Indonesia has received an order from Chandra Asri Petrochemical for the replacement of a distributed control and safety instrumented system at a petrochemical plant in Cilegon, western Java.


Standards Stuff

OGP Seabed Survey data model. ASTM Laser Scan Data Exchange. AIPN Model Contracts. IWIS RP.

The International Association of Oil & Gas Producers (OGP) has released Version 1 of its seabed survey data model (SSDM). The model is intended for data exchange between oil companies and survey contractors, as well as providing a ‘sound’ data model for managing seabed survey data at an enterprise level within O&G companies. The release includes a freely available data model, data dictionary and support documentation. More from www.oilit.com/links/1106_41.

Speaking at the 2011 Fiatech conference, Tad Fry (CAPEX Process & Technology) unveiled the new ASTM E57 Laser Scan Data Exchange Standard. The open standard covers 3D data (point clouds, range images), associated imagery and metadata to support downstream processing in terrestrial, aerial and mobile contexts. More from www.oilit.com/links/1106_43.

The Association of International Petroleum Negotiators (AIPN) presented its annual Model Contracts Workshop this month in Paris. Presentations covered AIPN model contracts, International Joint Operating Agreement Drilling, Gas Balancing Agreement, Accounting Procedures and Area of Mutual Interest Agreement—www.oilit.com/links/1106_44.

OTM Consulting has prepared a Recommended Practice document for the Intelligent Well Interface Standard, IWIS, a joint industry project whose mission is ‘to assist the integration of downhole power and communication architectures, subsea control systems and topsides by providing recommended specifications (and standards where appropriate) for power and communication architectures and other associated hardware requirements.’ The document builds on ISO13628 – Part 6, an implementation guide to the IWIS interface. IWIS provides a detailed specification for power, communications and hydraulic systems and the standard subsea physical interface. The spec has backing from several national and international oils and suppliers including Aker, Baker Hughes, Cameron, FMC, GE and others. Download the IWIS RP on www.oilit.com/links/1106_42.


2011 FIATECH Technology Conference and Showcase

Aveva on ISO 15926 flagships, Coreworx on front-end construction planning, Atlas RFID Solutions’ ‘visibility portal,’ real time communications for safety, satellite-based asset management.

At the 2011 Fiatech Technology Conference and Showcase in Chandler, Arizona earlier this year, Neil McPhater (Aveva) outlined the flagship deployment of the ISO 15926 standard on Statoil’s Snohvit LNG project. Aveva translated some 2,420 DGN files into 120 PDMS databases (around 15 GB). ISO 15926 is said to have helped with the data translation by allowing ‘full intelligence’ to be transferred to the catalogues. Over 12,000 process lines and nearly a quarter of a million structural steel sections were involved in the project.

Joel Gray (Coreworx) emphasized the strategic value of front-end planning (FEP) and enabling technologies. The US Construction Industry Institute (CII) defines front end planning as ‘the process of developing strategic information, addressing risk and committing resources to maximizing the chances of project success.’ Coreworx builds on CII resources such as the FEP toolkit and the Project Definition Rating Index (PDRI/www.oilit.com/links/1106_18), part of a ‘trilogy’ of planning tools for major capital projects. The CII Front End Planning (FEP/www.oilit.com/links/1106_19) toolkit also ran. Check out the Coreworx Chevron case study on www.oilit.com/links/1106_17.

John Chesser described Atlas RFID Solutions’ turnkey solution for site materials management on ‘large, complex EPCM1 projects.’ Atlas’ solutions include barcode, passive or active RFID and GPS surveys that provide pinpoint positional accuracy and traceability of equipment during construction. Atlas’ Materials Visibility Portal (MVP) is a site materials management system (SMMS) that replaces handwritten field material control with automated, electronic field data collection.

The site is first mapped with a GPS survey which forms the backdrop of the MVP. Tagged equipment is located on a map of the site which provides click-through access to equipment data. Such information is visible on mobile computers, smartphones etc. Materials movement reports are generated automatically as equipment moves around the site. Chesser recommends that EPCs develop a material tagging and tracking strategy: ‘If you are going to have a supplier add a tag or a barcode to material, include this requirement and instructions in all bid documents.’ Users should also require an electronic piece list from suppliers and ‘make sure your identification numbers are unique.’ It is a good idea too to nominate an SMMS coordinator to supervise the whole caboodle.
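
By way of illustration, here is a minimal sketch of the kind of movement reporting such a system automates, generating a report when a tagged item is read away from its last known position. The class and field names and the 25 m movement threshold are our own assumptions, not Atlas code.

    // Hypothetical sketch of movement reporting from tag reads. Names and the
    // 25 m threshold are illustrative assumptions, not Atlas APIs.
    import java.util.HashMap;
    import java.util.Map;

    public class MaterialsTracker {
        // Last known position (lat, lon) of each tagged item, keyed by tag ID.
        private final Map<String, double[]> lastSeen = new HashMap<>();

        // Process a tag read; report a movement if the item has moved materially.
        public void onTagRead(String tagId, double lat, double lon) {
            double[] prev = lastSeen.put(tagId, new double[]{lat, lon});
            if (prev != null && distanceMeters(prev[0], prev[1], lat, lon) > 25.0) {
                System.out.printf("MOVE %s: (%.5f, %.5f) -> (%.5f, %.5f)%n",
                        tagId, prev[0], prev[1], lat, lon);
            }
        }

        // Equirectangular approximation; adequate over a construction site.
        private static double distanceMeters(double lat1, double lon1,
                                             double lat2, double lon2) {
            final double R = 6371000.0; // mean earth radius, meters
            double x = Math.toRadians(lon2 - lon1)
                    * Math.cos(Math.toRadians((lat1 + lat2) / 2.0));
            double y = Math.toRadians(lat2 - lat1);
            return R * Math.sqrt(x * x + y * y);
        }

        public static void main(String[] args) {
            MaterialsTracker t = new MaterialsTracker();
            t.onTagRead("SPOOL-0042", 35.30110, -111.65420); // laydown yard
            t.onTagRead("SPOOL-0042", 35.30250, -111.65090); // near the work front
        }
    }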

Gustavo Aguilar, a student at the University of British Columbia, believes that current safety management software fails to provide comparable incident ratios and industry-wide information. He is proposing a Construction Real Time Information and Communication System for Safety (C-RTICS2). The system is to offer free use and access and will be administered by the university.

James Hollopeter (GIT Satellite Communications) has extended the Atlas RFID solution above with a bi-directional worldwide satellite network of GPS location and sensor data. The ‘low cost, lightweight, portable’ hardware solution was originally developed for the US Air Force ‘NETS’ contract. The mesh sensor network provides a company-wide parts tracking and inventory database spanning the globe. More from www.oilit.com/links/1106_20.

1 Engineering, Procurement, Construction Management.


SAPphire—Baker Hughes’ ‘Odyssey’ leverages SAP EHS module

Karen Lane’s presentation describes Baker Hughes’ new health, safety and environment management system.

Speaking at SAP’s SAPphire Now event in Orlando, Florida last month, Karen Lane presented Baker Hughes’ corporate social responsibility and health, safety and environmental application, developed atop SAP EHS. Baker Hughes’ (BH) approach to HS&E is risk-based and business-focused, cycling through centers of expertise, communities of practice, management systems and business units in a ‘closed-loop continuous improvement’ process.

BH’s previous ‘home grown’ HS&E information system suffered from functional overlap, gaps and fragmented information. This impacted HS&E performance, creating potential compliance risks and a loss of focus on goals and targeted programs. BH wanted to raise its HS&E game with better processes, initially focusing on incident management. The business objective was to provide the HS&E big picture, to foster an interdependent safety culture and to promote HS&E ownership.

BH embarked on an analysis of the top HS&E software vendors, who supplied proof of concept demonstrators. SAP was selected for its multi-lingual offering, breadth, robust reporting and alignment with BH’s business requirements. The user-friendly NetWeaver interface and mobility (BlackBerry) support were also plus points.

BH has developed a structured approach to HS&E management with a detailed industrial safety and security incident investigation workflow. These processes have been mapped into the new solution with help from implementation partners E2Manage, Infosys and TechniData (now an SAP unit). The resulting platform, known as ‘Odyssey,’ has now been extended to address root cause analysis, emissions management and reporting. BH has a roadmap out beyond 2012 for extension of Odyssey to cover, inter alia, chemical inventory management, industrial hygiene, energy and crisis management. The roadmap should not however be construed as a ‘forward looking statement.’ Heavens no!


Ryder Scott refines model review process

Reservoir models may not always be used as intended. Miles Palke sorts out the good from the bad.

Speaking at last month’s meet of the Houston chapter of the Society of Petroleum Evaluation Engineers, Ryder Scott’s Miles Palke observed that the main use of reservoir simulation is to test various field development scenarios and evaluate investment opportunities. But companies may pressure their consultants to use such ‘scoping’ models for reserves estimation. Elsewhere, models developed to forecast vertical well performance may be used to project undeveloped horizontal well performance.

Palke observes that in such cases, ‘A disconnect exists between the original purpose and proposed use of the model.’ In such circumstances, an in-depth review of the model’s internals is required to ascertain its applicability to the new use case. Palke recommends a ‘reality check,’ comparing model output with traditional analytical techniques. He also provides a check list1 of tests that can be applied even when the engineers who originally set up the model are no longer around.

When reviewing the history match process, reviewers need to be aware that some simulation software allows unreasonable changes to be made to inputs. Pore volumes may exceed gross volumes, a modeled aquifer may not exist or residual saturations may tend to zero. A serious problem can arise when minor tweaks to parameters affect the history match only slightly but significantly alter predictions. ‘Don’t be tricked by very good matches of single phases or cumulative volumes at the end of history.’ These can simply indicate that the modeler has set the dominant phase’s rates so that the simulator hits the volumes. Reviewers should home in on the transition from simulation to prediction and also check the reasonableness of the base case with no changes in operating conditions or well count. Building on Palke’s analysis, Ryder Scott is developing a set of metrics for reservoir model reviews. More from www.oilit.com/links/1106_5.
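
To make the flavor of such checks concrete, here is a minimal sketch of two of the sanity tests Palke describes, flagging pore volumes that exceed gross volumes and residual saturations that tend to zero. The method and field names are our illustration, not Ryder Scott’s forthcoming metrics.

    // Illustrative sanity checks on simulation model inputs; names and test
    // values are assumptions, not Ryder Scott code.
    public class ModelSanityCheck {

        static void checkCell(int cellId, double grossVolume, double porosity,
                              double residualWaterSat) {
            double poreVolume = grossVolume * porosity;
            if (poreVolume > grossVolume)
                System.out.println("Cell " + cellId + ": pore volume exceeds gross volume");
            if (residualWaterSat <= 0.0)
                System.out.println("Cell " + cellId + ": residual saturation tends to zero");
        }

        public static void main(String[] args) {
            checkCell(17, 1.0e5, 1.2, 0.0);   // a 'tweaked' cell; flags both problems
            checkCell(18, 1.0e5, 0.22, 0.25); // passes silently
        }
    }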

1 This article was abstracted from Ryder Scott’s informative June-August newsletter www.oilit.com/links/1106_4.


StudioSL developer spills the (Net)Beans

Matteo Di Giovinazzo explains why StreamSim opted for the open source NetBeans framework.

A DZone blog posting by Matteo Di Giovinazzo, lead programmer at StreamSim Technologies, provides an insight into reservoir engineering on the Java NetBeans platform. StreamSim’s StudioSL GUI provides pre- and post-processing of simulator results. Initially targeting StreamSim’s own 3DSL streamline simulator, StudioSL now interfaces with other simulators.

The NetBeans Project API has allowed StreamSim to expose a logical view of simulation projects, hiding the unnecessary detail of a physical file system view. StreamSim also experienced ‘huge’ differences between Windows and Linux clients in terms of performance over the network: listing and filtering files in a large folder over the network is ‘really slow’ on Windows clients.
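
For readers curious about the mechanics, a logical view is exposed by implementing the Project API’s LogicalViewProvider interface. The following minimal sketch, with our own node names rather than StudioSL code, presents a project as domain-oriented sections instead of raw simulator files.

    // Minimal sketch of a NetBeans Project API logical view; node names are
    // illustrative, not StudioSL code.
    import java.util.Arrays;
    import java.util.List;
    import org.netbeans.spi.project.ui.LogicalViewProvider;
    import org.openide.nodes.AbstractNode;
    import org.openide.nodes.ChildFactory;
    import org.openide.nodes.Children;
    import org.openide.nodes.Node;

    public class SimulationLogicalViewProvider implements LogicalViewProvider {

        // Supplies domain-oriented child nodes instead of raw files.
        private static class SectionFactory extends ChildFactory<String> {
            @Override
            protected boolean createKeys(List<String> toPopulate) {
                toPopulate.addAll(Arrays.asList("Grids", "Wells", "Results"));
                return true; // all keys produced in one pass
            }
            @Override
            protected Node createNodeForKey(String key) {
                AbstractNode n = new AbstractNode(Children.LEAF);
                n.setDisplayName(key);
                return n;
            }
        }

        @Override
        public Node createLogicalView() {
            AbstractNode root = new AbstractNode(Children.create(new SectionFactory(), true));
            root.setDisplayName("Simulation Project");
            return root;
        }

        @Override
        public Node findPath(Node root, Object target) {
            return null; // map 'target' back to a node; omitted in this sketch
        }
    }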

StreamSim selected NetBeans for cross-platform (Windows and Linux) development and because it is free, open source and supported by the community. Di Giovinazzo said, ‘In a world where almost all software is proprietary, the NetBeans open source status seems to be well received. There is also an active community, a source of ideas and continuous training for our whole developer team.’

NetBeans goodies include XML MultiView and the Schliemann parser. StreamSim has implemented several projects and workflows using the NetBeans Project API and the Ant-based Project API. Cross-platform capability is being extended as StreamSim now supports remote execution on high performance clusters, using a queue management infrastructure such as the Sun Grid Engine. More on www.oilit.com/links/1106_15.


Smart Engineering Apprentice—AI for rod pump failure analysis

USC has successfully tested machine learning. SEA is to be deployed at Chevron’s Mid Continent unit.

The University of Southern California Viterbi School of Engineering’s CiSoft Smart Engineering Apprentice (SEA) project is applying artificial intelligence to predict rod pump unit failure, using maintenance strategies captured from domain specialists. These experts record their operations and maintenance actions along with the ‘signature’ of the rod pump’s status. Computer learning compares combinations of signatures, actions and outcomes to develop rules that predict approximate times of failure for rod pumps showing signs of impaired functioning. Prof. Raghu Raghavendra ventured, ‘In a sense, the computer system is the apprentice of field experts, learning from their past experiences in rod pump maintenance.’

The captured data acts as a historian of rod pump failures and can be leveraged as templates for future repairs. New employees can rely on the system as a problem-solving ‘point of reference.’

The technique has been trialed on 391 wells in Chevron’s McElroy field in West Texas, using 18 months of history. The system identified 205 pumps as ‘normal’ and predicted that 47 were on the point of failure. In reality, 11 of the 205 ‘normal’ pumps failed and, of the 47 ‘failures,’ 4 turned out to be normal, for an overall 94% correct prediction rate. A second trial identified a further 52 pumps as about to fail. In the event, 13 of these worked OK during the subsequent observation period—still providing an 80% predictive accuracy.
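
For what it is worth, the first trial’s headline figure can be reproduced from the reported counts, over the 252 pumps the system classified either way. A minimal arithmetic check (variable names are ours) follows.

    // Reproducing the first trial's accuracy from the reported counts:
    // 205 flagged 'normal' (11 of which failed) and 47 flagged 'failing'
    // (4 of which were fine) gives 237 correct calls out of 252.
    public class TrialAccuracy {
        public static void main(String[] args) {
            int flaggedNormal = 205, flaggedFailing = 47;
            int missedFailures = 11, falseAlarms = 4;
            int total = flaggedNormal + flaggedFailing;          // 252 pumps classified
            int correct = total - missedFailures - falseAlarms;  // 237
            System.out.printf("Correct prediction rate: %.0f%%%n",
                    100.0 * correct / total); // prints 94%
        }
    }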

Chevron’s Mid Continent Alaska business unit’s ‘iField’ project is behind the SEA R&D. Chevron’s Lanre Olabinjo said, ‘We recognized the potential of the SEA for failure prediction on rod pumping systems and have been evaluating the technology since 2009.’ The first production version of SEA is scheduled for release in October 2011. SEA will be integrated with the Artificial Lift Systems Optimization Solutions already developed for the MCA unit’s Well Performance Decision Support Center (DSC) in Midland, Texas. More from www.oilit.com/links/1106_16.

