According to Paul Haines, Kerr-McGee’s information management (IM) strategy can be summarized as ‘buy, apply and modify’. Application software is bought in from established vendors, but KMG finances software development and attempts to influence future directions. Haines described KMG’s IM function as responding to internal client requirements in a relatively short time frame; there are no more ‘Project Mercurys*’ with a 3-4 year time horizon. Users of KMG’s in-house developed well log databases ‘don’t want LIS—they want LAS’. Haines recommends making sure that vendors document data properly. KMG uses OilWare’s Well Log Indexing System (WLIS) to ‘crawl’ corporate file systems for LAS, LIS and DLIS log files, which are converted to XML. InnerLogix’ ILX Cataloguer is also being deployed to crawl and index raster logs; it recognizes TIF, PDF, CGM and other formats.
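The crawl-and-index pattern WLIS applies is straightforward to picture. The following is a minimal sketch of the idea—walking a file system for well log files by extension and emitting a simple XML index—and not a representation of OilWare’s actual implementation; element and attribute names are invented for illustration.

```python
import os
import xml.etree.ElementTree as ET

LOG_EXTENSIONS = {".las", ".lis", ".dlis"}

def crawl_logs(root_dir):
    """Walk a file system tree and index well log files into XML."""
    index = ET.Element("logIndex")  # hypothetical element name
    for dirpath, _dirnames, filenames in os.walk(root_dir):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower()
            if ext in LOG_EXTENSIONS:
                entry = ET.SubElement(index, "logFile")
                entry.set("format", ext.lstrip("."))
                entry.text = os.path.join(dirpath, name)
    return ET.tostring(index, encoding="unicode")
```

A real indexer would of course open each file and extract header metadata (well name, curves, depth range) rather than trusting the extension alone.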
Jo Kostecka described Marathon’s well data cleanup effort. Kostecka has developed a Perl program for semi-automated well number matching. The program uses ‘fuzzy logic’ to locate partial matches based on combinations of API number, location, total depth etc. The program assigns weights to different variables, and a two-pass process brought the number of suspect matches down to manageable proportions.
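The weighted, two-pass approach can be sketched as follows. This is an illustrative Python reconstruction, not Marathon’s Perl program; the field names, weights and thresholds are all hypothetical.

```python
def match_score(a, b, weights):
    """Weighted agreement score between two well records (dicts)."""
    score = 0.0
    for field, weight in weights.items():
        va, vb = a.get(field), b.get(field)
        if va is None or vb is None:
            continue  # missing data neither helps nor hurts
        if va == vb:
            score += weight
        elif isinstance(va, str) and isinstance(vb, str) and \
                (va.startswith(vb) or vb.startswith(va)):
            score += weight * 0.5  # partial match, e.g. truncated API number
    return score

# Hypothetical weights: API number and well name count most.
WEIGHTS = {"api": 3.0, "lat": 1.0, "lon": 1.0, "total_depth": 1.0, "name": 2.0}

def two_pass_match(source, target, weights=WEIGHTS, accept=4.0, review=2.0):
    """Pass 1 auto-accepts strong matches; pass 2 flags suspects for review."""
    accepted, suspect = [], []
    for rec in source:
        best = max(target, key=lambda t: match_score(rec, t, weights))
        s = match_score(rec, best, weights)
        if s >= accept:
            accepted.append((rec, best))
        elif s >= review:
            suspect.append((rec, best))
    return accepted, suspect
```

The point of the two thresholds is exactly the benefit reported: the bulk of records clear the ‘accept’ bar automatically, leaving only a manageable suspect list for human review.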
Will Morse (Anadarko) gave a detailed cookbook for moving upstream desktop applications (as opposed to seismic processing) to Linux. Morse sees the move to Linux as an unstoppable trend—‘be there or be square!’ But he warns that ‘the costs of Linux migration may not be as low as you think’. One problematic aspect of Linux is its poor support for high-end graphics cards. Morse recommends that buyers specify Linux Standard Base (LSB) compliance when purchasing software. Particular attention should be paid to testing NFS, NIS, networked license managers, remote Oracle etc., as vanilla Linux does not support these out of the box. Anadarko is running Landmark and Paradigm apps on Linux today.
POSC’s John Bobbitt presented XML work done in the context of OASIS/UN-CEFACT and X12. These different groups use the same concept: an XML ‘Module’ describes a business object such as a well header. Different XML modules can be assembled to create compound documents ‘on demand’. To cater for specific (and limited) instances where local requirements mandate refinement or restriction in use, the ‘Profile’ concept is introduced. Thus the North Sea Profile contains rules and guidelines for North Sea use (DTI well name etc.) whereas the Gulf of Mexico Profile uses the API number. Note that profile creation must follow certain rules for legitimacy; geographical variants are OK—but company-specific profiles are proscribed.
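The module/profile split can be made concrete with a small sketch. This is an illustration of the concept only—the element names, profile rules and validation logic below are invented, not POSC’s actual schemas.

```python
import xml.etree.ElementTree as ET

def assemble(modules):
    """Assemble independent XML 'modules' into one compound document."""
    doc = ET.Element("compoundDocument")  # hypothetical wrapper element
    for m in modules:
        doc.append(ET.fromstring(m))
    return doc

# A 'profile' restricts use for a region: hypothetical required elements.
PROFILES = {
    "GulfOfMexico": ["apiNumber"],   # GoM wells carry an API number
    "NorthSea": ["dtiWellName"],     # North Sea wells carry a DTI well name
}

def validate(doc, profile):
    """Check that a compound document satisfies a regional profile."""
    required = PROFILES[profile]
    return all(doc.find(f".//{tag}") is not None for tag in required)
```

Company-specific profiles being proscribed corresponds here to the `PROFILES` table holding only sanctioned geographical variants.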
Azhar Sindhu (RCG Information Technology) outlined a solution tailored to a US operator’s production reporting requirements. Integration can be achieved with a variety of technologies. This project involved integration of well data, land information, operations, applications such as TietoEnator’s Energy Components, Landmark’s DIMS and DSS, MRO Software’s Maximo and various portal projects. RCG’s analysis is based on its productized Business and Technology Roadmap. RCG IT clients include Apache Corp., BP, ExxonMobil, Texaco and Unocal.
Ray Flores walked through the development of Pemex’s E&P technical database for the Veracruz region. Pemex is a big Finder shop. Finder integrates data from Pemex in-house applications along with GeoFrame, OpenWorks, Eclipse and OilField Manager. A data integration ‘bus’ is planned to extend data integration across SAP and Merak.
INT’s tools allow centrally managed data to be viewed over wide area networks, according to Jim Velasco. The latest INT product line, exemplified by the Web LogViewer, offers thin client access to data in multiple data stores. By ‘thin client’, INT means a J2SE 1.4 runtime, a web browser or the Java WebStart plug-in. A ‘lightweight’ API allows for quick integration into enterprise solutions. The LogViewer provides area curve filling, lithology display, curve attribute editing, image integration and pan/zoom. INT’s target market is no longer the software development community, but now embraces corporate IT departments. INT’s tools are ‘as far as you can go in shrink-wrapping this kind of functionality’.
KM and the web
Jeff Pferd (Petris) quipped that KM ‘used to be called training!’ Petris uses web technologies to capture information on usage patterns—workflow monitoring, search engine logging, application launching etc.—using ‘passive knowledge capture’. This leverages ‘metadata mining’ and answers questions like ‘which departments are using what information?’
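The ‘which departments use what’ question reduces to aggregating usage logs. A minimal sketch, assuming a hypothetical pipe-delimited log format—not Petris’ actual capture mechanism:

```python
from collections import Counter

def usage_by_department(log_lines):
    """Aggregate passively captured usage logs by (department, application).

    Assumes a hypothetical record format: 'user|department|application'.
    """
    counts = Counter()
    for line in log_lines:
        _user, dept, app = line.strip().split("|")
        counts[(dept, app)] += 1
    return counts
```

In practice the interesting part is upstream of this—instrumenting application launchers and search engines so the log lines exist at all, without requiring users to do anything.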
Ugur Algan’s paper described Landmark’s development of a Technical Workspace Portal for AGIP ENI. The portal offers multi-database query, project and asset disposal preparation. Landmark’s Team WorkSpace was customized to ENI’s requirements. The portal was hosted internally by ENI in Milan. Queries are executable across PGS Tigress, IHS Energy’s Iris21, C&C Reservoirs Analogues, in-house ENI data sources and Microsoft Office documents.
Pioneer’s deployment of best-of-breed applications was hampered by complex, manual data entry, according to Tim Elser. A drive towards more frequent update and reporting meant that manual data entry was no longer an option. Pioneer’s data infrastructure is built around Hyperion’s Essbase OLAP. Aclaro’s PetroLook and PetroShare were selected for data mining functionality. PetroLook and PetroShare are both developed in Microsoft’s C#/.NET. The end result has ‘piqued the imagination of Pioneer’s data users—who no longer feel that disparate data stores are an obstacle to data access’.
Peter Flanagan revealed that Oildex was founded on the observation that ‘a $16 invoice could cost as much as $30 to process’. Oildex’s supply chain software and digital invoicing has brought this cost down to $5. The payment cycle time has also been reduced from 60/90 days to 14 days.
Carl Hucsall described an SAIC project which sets out to ‘transform the digital oilfield of the future’ by linking SCADA data feeds to the field office. SAIC recognizes four levels of operational value (and complexity!), from remote monitoring, through exception management and diagnostics, to dynamic reservoir optimization. A tiered IT architecture is proposed. A resource layer provides base-level IT services: scheduling, transactions, triggers and exceptions. An intermediate services layer manages business logic and analytical services such as trends, curve fitting and historical analysis. Finally a user interaction layer provides modeling, simulation and ‘discovery’. These are delivered in a ‘web services situation—not as a behemoth of a database’. Real-time information benefits production, sales, trading and the supply chain. Production is the first field where web services will contribute—‘the technology is here today’.
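The split between the resource layer (triggers and exceptions) and the services layer (trends, curve fitting) can be illustrated with two small functions. This is a sketch of the layering idea only; thresholds, data shapes and function names are all hypothetical, not SAIC’s design.

```python
def detect_exceptions(readings, low, high):
    """Resource-layer trigger: flag SCADA readings outside an operating envelope.

    readings: list of (timestamp, value) pairs; thresholds are hypothetical.
    """
    return [(t, v) for t, v in readings if not (low <= v <= high)]

def linear_trend(readings):
    """Services-layer analytics: least-squares slope of value vs. time."""
    n = len(readings)
    ts = [t for t, _ in readings]
    vs = [v for _, v in readings]
    mean_t = sum(ts) / n
    mean_v = sum(vs) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in readings)
    den = sum((t - mean_t) ** 2 for t in ts)
    return num / den
```

The architectural point is that each function would sit behind its own web service interface, so a decline-curve fit can be requested without touching the raw SCADA store—‘not as a behemoth of a database’.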
Bruce Sanderson (Geodynamic Solutions) explained that search technologies start with Google-type full text searches. These are fast, free and easy, but often return an unmanageably large number of results. Deploying Google on an internal intranet can help constrain searching—but such appliances are expensive to deploy. Other options include document management systems such as OpenText’s LiveLink—‘a great tool’, but one that has proved hard to integrate with upstream workflows; indexing and document check-in/out is an unwelcome overhead. Enterprise search systems can be very powerful. These crawl file systems—looking in documents, web pages and databases. Again, these are expensive to deploy. Geographical Information Systems (GIS) can dovetail with enterprise search technology. Deployment requires a search ‘hub’ coupling GIS to text search. The best of all worlds would be ‘something like Autonomy plus a GIS’.
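What the GIS-plus-text ‘hub’ boils down to is constraining a full-text query with a spatial filter. A minimal sketch of that coupling, with an invented in-memory document structure (a real hub would delegate to a search engine and a GIS respectively):

```python
def spatial_text_search(docs, query, bbox):
    """Hypothetical search 'hub': combine a text match with a GIS bounding box.

    docs: list of dicts with 'text', 'lat', 'lon' keys (illustrative schema).
    bbox: (min_lat, min_lon, max_lat, max_lon).
    """
    min_lat, min_lon, max_lat, max_lon = bbox
    hits = []
    for d in docs:
        in_box = min_lat <= d["lat"] <= max_lat and min_lon <= d["lon"] <= max_lon
        if in_box and query.lower() in d["text"].lower():
            hits.append(d)
    return hits
```

The spatial filter is what tames the ‘too many results’ problem of plain full-text search: a query scoped to an asset’s bounding box discards hits from the rest of the company’s acreage.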
IT downtime was proving a major problem on Pemex’s Cayo Arcas platform. Richard Crounse (IT Monitor) described a number of commercial tools which monitor infrastructure performance in real time. HP OpenView provides basic monitoring, but ‘can be improved on’. Troubleshooting routers and switches can spot faults as they develop. Network services can be monitored with the open source Software Process Dashboard, with Cisco NetFlow, and with OSI Software’s PI System and Real Time Performance Management. Activity can also be captured as a ‘net bot’ image—and stored in an Oracle blob—like a security video of network activity, ready for post-intrusion analysis. OSI Software has been in business for 20 years and claims 10,000 PI System installations.
The Shell-chaired round table debated upstream nomenclature—in particular well naming systems. IHS Energy offers a service to US States to check that an API well number has not been issued before. IHS Energy unit Petroconsultants offers a well-number generation service to operators, although many are not aware of this. One user of the service reported problems with the fact that this generates a ‘private’ in-house well number; the true benefit will come when the Petroconsultants numbers are broadcast to other companies. POSC announced that as part of its Practical Well Log Standards (PWLS) work, it had been granted the right to use and migrate the Schlumberger classification, including curve mnemonics. Work is in progress with the Minerals Management Service on standards for classifying wells by purpose (water, oil, injector etc.) and outcome (dry hole, P&A, suspended etc.).
* Project Mercury (circa 1990) was an early upstream data modeling project from IBM.
This report is abstracted from an 8 page report on PNEC produced as part of The Data Room’s Technology Watch reporting service. For info, email email@example.com.
© Oil IT Journal - all rights reserved.