A funny thing can happen on the road from university to life. Leaving academia and throwing oneself onto the job market can be scary. There is a temptation to turn back into the academic world and hang on for a while—perhaps for a PhD or even a complete change of tack. For me, just after my MSc, the temptation to change momentarily overcame me—for reasons which I won’t bore you with now. Just as I was ready to ‘be’ a geophysicist, I decided that, no, what I really needed was a medical degree and another six years of study!
Fortunately the madcap scheme did not get further than the interview which went down like the proverbial lead balloon. The interviewers spotted me as a phony as soon as I opened the door and just had to ask a couple of killer questions to expose my lunacy and then chat for the regulation ten minutes so that appearances were saved.
In the meantime though, I got an introduction to both marketing and the business tenet that ‘there is no such thing as a free lunch’. During my brief period as a medical groupie, I was invited to a drug company presentation followed by a ‘free’ curry. Birmingham, UK, at the time home to a proud geophysics MSc course, is perhaps better known as the birthplace of the ‘Balti’ curry which, when it finally came, was indeed extraordinary—the only time that I have eaten gold leaf.
But free it was not. As a geophysicist, whose only concession to ‘doing’ medicine was the purchase (but not the reading) of a chemistry textbook, I knew nothing about medicine. And before the curry there was an hour-long infomercial. The drugs and diseases wafted over my head. All I can remember, many years later, was talk of ‘massive influx of e-coli’ and ‘myocardial infarct’ but I wasn’t clear whether this was one disease or many. Subsequent googling suggests that the only context in which these symptoms appear together is if you are blown up. Perhaps it had something to do with eating too much curry?
But the presentation was my first infomercial. In the nearly forty years that have elapsed since then I have heard quite a few more and have even given a few myself. Without claiming to be an expert on the subject (I never did go back to study either medicine or advertising—just ploughed right on with the geophysics) I would claim to have developed a certain sensitivity to the sales pitch. I would therefore like to share some thoughts as to what has turned me on recently and what has turned me (and quite a few others) off.
Starting a new conference is an entrepreneurial gamble on the intrinsic interest of a new field. If you bet right, in a few years you may have built up a following of end users and buyers—which in turn will attract notice from the vendors.
This leads to another potential income stream for the conference organizer, but one which is not without its pitfalls. The worst case scenario is of course the one where a sponsor gets an automatic right to present. Sometimes this means that listeners who may have paid a couple of thousand bucks to attend are forced to hear a sales pitch which, in other circumstances, the vendor would probably pay them to hear—a real marketing turn-off.
It isn’t always like that. Some conference organizers keep the bar high and ensure quality from vendors and users alike. Some vendors are a bit more inventive—presenting company case histories or even sponsoring a third party, perhaps academic, presentation.
For conference organizers and, for that matter, publishers, whether of humble newsletters or national newspapers, balancing advertising/sponsorship against direct sales is something of a conundrum. Is your revenue to come from advertising or subscriptions? Is your content to come from advertisers, end users or what? The end member of the publishing spectrum is the freebie—with finance from advertising and, in some cases, even content being paid for along the ‘vanity publishing’ model.
I can’t help thinking that the freebie paradigm is something of a fool’s bargain for the advertiser. If you are a reader, do you like reading the ‘guff’ of the press release? If you are a writer, reflect on whether folks are really going to plough through four pages of your ramblings! And if you are an advertiser, what is the point of an ad in a publication that is not going to get read?
The above made me reflect on what we are trying to achieve here at Oil IT Journal. Without getting too religious (we are trying to make a living like the rest!), I think that there is an answer to the publisher’s conundrum above. Just as the organizer ‘bets’ on the interest of a new topic, we try to collect ‘stuff’ that, in our opinion, folks should be reading about. A monthly gamble on what is important today and tomorrow.
Tell a friend
Which kind of defines our role—get read! Is it working? According to our ‘Urchin’ (a.k.a. Google Analytics) monitor, our website www.oilit.com received just over 100,000 visitor sessions (nearly one million hits) in the first two months of 2008. We now have 50 paid-up corporate licensees to our million-word-plus archive including 11 in the supermajor/NOC category. And we send out nearly 400 paid-for paper copies of Oil IT Journal every month—all bought by people who actually read it.
As for advertising, we don’t want to upset the apple cart with flashing jpegs and pages of ‘guff’ but we have introduced ‘sponsored links’ from our online content and RSS feeds which provide an unobtrusive connection from our content to whatever an advertiser wants. Which of course might be just ‘guff’ but we hope not. It could be a link to an insightful White Paper that just begs to get read too!
China National Petroleum Company (CNPC) has selected environmental, health and safety (EH&S) specialist ESS Software to provide an integrated IM platform for ‘business sustainability’ and compliance across its mainland Chinese operations. Tempe, AZ-based ESS’ system supports CNPC’s upstream and downstream, oilfield services, engineering and construction, material and equipment manufacturing and supply operations.
ESS’ flagships, the Essential and Compliance Suites, manage operational risk across a range of global, regional and local environmental data management requirements.
Essential Suite includes a browser-based front end for point-and-click access to emissions information and tools for emergency management and other large-scale events. Compliance Suite provides (in the US) OSHA/EPA recordkeeping and reporting as well as tools for worker training and safety.
The packages support all phases of emergency management from mitigation through preparedness, response and recovery. Tracking software keeps tabs on resources and inventory, including daily staff and strike team members. Automated alerts and notifications can be sent to team members via desktop, laptop or a Pocket PC. Integration with Microsoft’s Virtual Earth allows facility data to be displayed on a backdrop of maps, satellite and aerial imagery.
The ESS platform has been translated into Mandarin for use in China and also includes support for CNPC’s multimode emissions control initiatives, including greenhouse gas and carbon management. CNPC selected ESS following the successful implementation of ESS software at PetroChina (the world’s largest company by market capitalization). According to ESS, the PetroChina deployment increased PetroChina’s productivity by driving process improvements that generated time and cost savings while improving management decision making.
ESS CEO Robert Johnson said, ‘CNPC chose ESS following an extensive review of the marketplace. Companies like CNPC and PetroChina selected our integrated enterprise platform because we deliver proven, reliable solutions that reduce complexity, risks and costs.’
‘ESS software is transforming the way companies manage EH&S and emergencies at both tactical and strategic levels to ensure business continuity.’ ESS clients in the oil and gas vertical include Halliburton, Sunoco, KNPC (Kuwait) and Lyondell Houston Refining. IBM Global Services is to manage the deployment project.
AspenTech’s woes continue. Following earlier warnings, a ‘Wells Notice’ in 2006 from the SEC (OITJ June 06) and a ‘Staff Determination’ from Nasdaq last year (OITJ Feb 07), the Nasdaq Listing Qualifications Panel has decided to delist AspenTech from the Nasdaq. Weaknesses in internal controls over software license revenue recognition were at the root of the problem.
AspenTech CEO Mark Fusco said, ‘We are disappointed that the time it has taken for the review we initiated in connection with the restatement of our financial statements has resulted in the delisting of our common stock.’
Meanwhile AspenTech’s stock will be quoted on the ‘Pink Sheet’ electronic quotation service pending a possible appeal. Fusco added, ‘AspenTech remains a financially strong company as evidenced by our cash and cash equivalents of $131 million as of December 2007 and we are committed to regaining compliance with our filing requirements and applying to list our common stock on a national exchange as soon as possible.’
Hans Werner Meuer of the University of Mannheim and (his own) high performance computing (HPC) consultancy Prometeus has just published a paper* looking back on the last fifteen years of supercomputing. Prometeus is behind the authoritative ‘TOP500’ list of supercomputer sites**, launched in 1993. The Top500 evolved from an earlier ranking from the University of Mannheim. Today the List uses the Rmax Linpack matrix math benchmark to evaluate HPC performance. The List is published twice yearly in June and November.
One reason for TOP500’s success is competition between countries, manufacturers and computing sites. The US led the field in the early days of the list with 45% of all TOP500 installations. The most recent list shows the US increasing its lead—to 56%. Japan, an early leader, has fallen back and been overtaken by the UK, with a 9.6% share, and Germany, with a 6.2% share. Of the manufacturers, Cray Research and Fujitsu were the early leaders. Today, IBM has a clear lead with a 46.4% share. HP, the leader in 1993, is N° 2 with 33.2%. Cray is N° 3 with 2.8%.
Meuer highlights a couple of outstanding machines. Coming in at N° 259 on the 9th TOP500 list was IBM’s Deep Blue. Each of its 32 processors was equipped with 15 special purpose VLSI ‘chess chips’*** and Deep Blue was the first chess computer to beat a reigning world chess champion, Garry Kasparov. Today, according to Meuer, ‘No chess player stands a chance against a computer, even a PC!’ But Meuer’s favourite is Intel’s ASCI Red (1997), the first Teraflop/s computer. ASCI Red was the US’ response to France’s nuclear weapons tests at the Mururoa atoll and was part of the US government’s commitment to ban nuclear weapons testing. But the 9,632 Pentium II-based massively parallel machine was the last produced by Intel’s supercomputer division.
Until very recently UNIX was the prevalent HPC operating system. Today Linux has taken over this role. Despite Microsoft’s effort to break into this market, according to Meuer, ‘Windows plays no role****.’ The Intel legacy is extremely strong, with 71% of systems equipped with Intel processors—particularly the dual and quad core chips. AMD Opterons come in second with 15.6% and IBM Power processors are third with 12.2%. Gigabit Ethernet is the most used interconnect technology followed by InfiniBand. Myrinet, which dominated the market a couple of years ago, is now falling back.
Clusters make up 406 of the TOP500 and are challenging the leaders (two of the TOP10 systems are clusters). Massively parallel systems remain tops in performance with eight systems in the TOP10. HPC performance over the last 15 years has outstripped Moore’s law, with a doubling every 14 months for the N° 1 spot. The current N° 1 is the Livermore National Laboratory’s IBM BlueGene/L. Meuer expects the Petaflop/s barrier to be broken in 2008—probably by the Los Alamos National Laboratory’s IBM RoadRunner. By 2015 all systems in the TOP500 will be Petaflop/s machines!
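Meuer’s 14-month doubling figure can be put into perspective with a back-of-envelope calculation. The 18-month ‘Moore’ doubling used for comparison below is our assumption, not a figure from the paper:

```python
# Back-of-envelope: multiplicative performance gain implied by a
# given doubling period, over a number of years.
def growth_factor(years, doubling_months):
    """Performance multiplier after `years` at one doubling every
    `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

# TOP500 N°1 spot: doubling every 14 months (Meuer's figure).
top500_no1 = growth_factor(15, 14)
# Assumed classic 'Moore' doubling of 18 months, for comparison.
moore = growth_factor(15, 18)

print(f"TOP500 N°1 over 15 years: ~{top500_no1:,.0f}x")   # ~7,420x
print(f"18-month doubling over 15 years: {moore:,.0f}x")  # 1,024x
```

On these numbers, the fastest machine gained roughly seven times more performance over 15 years than a plain 18-month doubling would deliver.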
While the TOP500 does not address all facets of HPC, it does provide useful trends and, according to Meuer, is ‘much more reliable than the predictions of market research companies such as IDC, Diebold, etc.’
*** In this context—see our article on programming the IBM Cell Broadband Engine on page 4 of this issue.
**** As readers of Oil IT Journal already know—see our March 2007 editorial.
Speaking at CERA Week this month, ExxonMobil Senior VP Mark Albers described as a ‘common misconception’ the notion that oil and gas is dwindling fast and that peak production is near. ‘Oil may be finite, but it is far from finished!’ Albers cited USGS estimates that put conventionally recoverable oil at over three trillion barrels. Approximately one trillion barrels have been produced to date. ‘The supply challenge is not due to scarcity.’
Much of the Earth’s remaining oil is held in complex formations, in remote locations, and under harsh conditions. Technology is needed to overcome these challenges and bring these abundant resources to market. Cash is also a requirement. According to the International Energy Agency, a $22 trillion infrastructure investment is needed over the next 25 years.
Meeting the supply challenge requires technology, teamwork and trade—in other words partnerships between international oil companies (IOCs), national oil companies (NOCs) and host governments.
But Albers warned, ‘at a time when we should open doors to trade, resource nationalism closes them. At a time when we should be building bridges of international partnership, resource nationalism builds walls.’
In an article on Shell’s website, Shell CEO Jeroen van der Veer offered a different slant. The world is experiencing a step-change in the growth rate of energy demand due to rising population and economic development. After 2015, easily accessible supplies of oil and gas probably will no longer keep up with demand. For van der Veer, the answer is to add other sources of energy to the mix—renewables, more nuclear power and unconventional fossil fuels such as oil sands.
By 2100, a radically different energy mix will include solar, wind, hydroelectricity, biofuels and nuclear. Humans will have found ways of dealing with air pollution and greenhouse gases. The question is, will this result in a ‘mad scramble’ as nations rush to secure energy resources for themselves, or in a more orderly ‘blueprint’ scenario with cross-border cooperation on economic development, energy security and environmental pollution?
Acceleware has just announced a seismic data processing ‘accelerator,’ a graphics-processing unit (GPU)-based hardware/software add-on that ‘outsources’ number crunching to an array of GPUs. The company has developed a Kirchhoff pre-stack time migration seismic processing library that is tuned for the NVIDIA Tesla GPU hardware.
We asked Acceleware for some benchmarks. This is what they came up with. Adding a 2 GPU* accelerator to a single server with two dual core processors (4 core total) speeds up a complete Kirchhoff time migration by a factor of 8. This speedup includes the preprocessing which is done primarily on the CPU.
A slightly more obscure benchmark showed a 45 times Kirchhoff speed up running on ‘an 8 GPU solution with 2 servers each with two dual core processors (8 core total)’ as compared with a single core machine.
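Speedups like these are bounded by Amdahl’s law: the CPU-bound preprocessing caps the overall gain however fast the GPUs run. In the sketch below, the 95% accelerable fraction and 50x GPU factor are purely illustrative numbers, not Acceleware figures:

```python
# Amdahl's-law sketch: overall speedup when a fraction p of the
# runtime is accelerated by a factor s, and the remaining (1 - p),
# e.g. CPU-side preprocessing, runs at the original speed.
def amdahl(p, s):
    return 1.0 / ((1.0 - p) + p / s)

# Illustrative numbers only: if 95% of a Kirchhoff migration were
# GPU-accelerable by 50x, the whole job would speed up by ~14.5x.
print(round(amdahl(0.95, 50), 1))  # → 14.5
```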
* Likely an NVIDIA Tesla ClusterInABox.
Calgary-based Cybera is asking for expressions of interest in ‘cyber-infrastructure’ projects targeting the oil and gas vertical. Cybera is a Canadian not-for-profit qango* that provides leadership and investment in Alberta’s cyberinfrastructure.
Cybera defines cyberinfrastructure (CI) as the integration of high speed data networks, high performance computers and storage clusters, visualization and sensor networks. These resources are exposed as web interface-based services and computing ‘utilities.’ Alberta’s CI includes the WestGrid and CyberaNET networks. The latter connects a dozen research establishments via a 10Gb/s Ethernet link between Calgary and Edmonton. Cybera is looking for increased use of CI in commercial seismic processing, reservoir modeling and risk management. To date such outsourced IT has been held back because of ‘culture and conservatism, data management challenges and poor telecom links.’**
$CDN 15 million
The CI initiative is a $15M program in support of ‘collaborative projects to accelerate the development of CI and e-Research.’ Expressions of interest are being solicited from oil and gas companies willing to contribute problems, data, algorithms and staff time.
* Quasi-autonomous non-governmental organisation.
** Up to a point, Calgary’s Metronet already had 5Gb/s bandwidth in 1996! (OITJ Nov 1996).
The IBM Cell Broadband Engine (Cell BE) is a new class of multi-core processor for consumer and business markets. The Cell BE was developed by IBM, Sony and Toshiba for game consoles (the Cell BE is used in the Sony Playstation III) and also for use in scientific applications. A new IBM Redbook, ‘Programming the Cell BE’, includes sample applications of interest to oil and gas: Monte Carlo simulation and fast Fourier transform (FFT). The Cell BE has two kinds of processors, both of which share all available memory. The Power Processor Element (PPE) contains a 64-bit core that runs 32-bit and 64-bit operating systems and applications. The Synergistic Processor Element (SPE) is optimized for running compute intensive SIMD (Single Instruction, Multiple Data, a.k.a. vector processing) applications.
The novel architecture addresses one of the fundamental obstacles to high performance computing—that of memory ‘latency,’ which is now considered to be the main limit to peak compute capability. The Cell BE’s 16 parallel direct memory paths promise ‘just in time delivery’ of data for compute-intensive applications. The Cell BE SDK for ‘multi core acceleration’ offers support for C++, ADA and Fortran. Download the Redbook from www.oilit.com/links/1011.
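The SIMD (vector) style of processing that the SPE is built for can be mimicked, purely for illustration, in plain Python: one ‘instruction’ operates on a four-wide slice of data at a time (the four-element width is an arbitrary choice here, not a Cell BE specification):

```python
# Illustration only: the SIMD idea behind the Cell BE's SPEs, mimicked
# in plain Python. A real SPE applies one instruction to a short vector
# of values at once; here a fake 'vector add' handles four floats per step.
VECTOR_WIDTH = 4  # arbitrary illustrative width

def simd_add(a, b):
    """Add two equal-length lists, VECTOR_WIDTH elements per 'instruction'."""
    assert len(a) == len(b) and len(a) % VECTOR_WIDTH == 0
    out = []
    for i in range(0, len(a), VECTOR_WIDTH):
        # One 'instruction' operates on a whole four-element slice.
        out.extend(x + y for x, y in zip(a[i:i + VECTOR_WIDTH],
                                         b[i:i + VECTOR_WIDTH]))
    return out

print(simd_add([1.0] * 8, [2.0] * 8))  # → [3.0, 3.0, ..., 3.0]
```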
A new offering from Open Text Corp. addresses compliance and safety rules for process changes at refineries and chemical plants and other facilities. The ‘management of change’ (MOC) process coordinates and documents major plant changes to ‘increase safety and reliability and minimize environmental impact.’ According to Open Text, MOC programs are a major challenge for plants, in terms of time, resources and the risk of fines, lawsuits and shutdowns when an initiative fails.
The US Occupational Safety and Health Administration (OSHA) process safety management regulations stipulate that when a critical plant component changes, a formal MOC program is required to ensure that the proposed change is made safely. Open Text’s Livelink ECM/MOC solution uses content management and business process automation capabilities to simplify the MOC process. Open Text partner Gateway Consulting Group helped with development of the ECM/MOC.
Gateway analyzed MOC processes at a dozen chemical and petrochemical facilities in the US. Gateway president Rainer Hoff said, ‘If operators don’t know what’s in their plant, then it’s impossible to operate the plant safely. Owners have excellent documentation when the plant is built—but this must be updated with every change made to the plant. This isn’t just a good idea—it’s the law!’
Energy Navigator has released AFE Navigator V6.0 with enhanced workflows and better integration with other systems. A publish and subscribe mechanism broadcasts key AFE events automatically to third-party financial applications. A user-definable data integration tool controls the data exchange.
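A publish and subscribe mechanism of the kind described might be sketched as follows. The EventBus class, event name and payload fields are hypothetical, not AFE Navigator’s actual API:

```python
# Generic publish/subscribe sketch. The EventBus class, event name and
# payload fields are hypothetical, not AFE Navigator's actual API.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event, handler):
        """Register a callback for a named event."""
        self._subscribers[event].append(handler)

    def publish(self, event, payload):
        """Broadcast the payload to every subscriber of the event."""
        for handler in self._subscribers[event]:
            handler(payload)

bus = EventBus()
received = []
# A third-party financial application subscribes to AFE approval events.
bus.subscribe("afe.approved", received.append)
bus.publish("afe.approved", {"afe_no": "2008-001", "amount": 250000})
print(received)
```

A production system would add unsubscription, error isolation and asynchronous delivery, but the broadcast pattern is the same.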
Geosoft’s 2008 release, which includes Oasis montaj and Target 7.0, now embeds ESRI’s ArcEngine mapping technology. ESRI native format maps can now be viewed without leaving the Geosoft environment.
OpenSpirit V 3.1 adds data connectors for GeoFrame 4.4, Kingdom 8.2 and a beta of the OpenWorks R5000 connector.
Yokogawa’s new ‘Centum VP’ production control system integrates plant information management, asset management and operation support functions into a ‘unified operating environment’ for process plants including oil and gas and petrochemicals.
Schlumberger has awarded ‘Ocean certification’ for ZEH Software’s CGM Extension for Petrel. The CGM plug-in exports Petrel graphics to CGM files for printing, montaging, editing and archiving.
Calgary-based Aram Systems has certified Ultera Systems’ Mirage data recorder for use with its Aries seismic data acquisition system. Mirage replaces tape drives and cartridges with two high performance 750GB RAID disk drives in removable canisters.
Veritas unit Hampson-Russell’s ‘CE8R2’ update for its geological and geophysical interpretation and modeling suite includes improved AVO gradient analysis, extra options for data loading and better crossplotting. The View3D component is now available on 32-bit Linux and 64-bit Windows.
Petrolink’s Power suite of well site software tools is now certified WITSML compliant.
Hunt Petroleum is to implement P2 Energy Solutions’ (P2ES) Enterprise Land (EL) package. P2ES claims EL is the upstream’s ‘first enterprise application to leverage a services-oriented architecture (SOA).’ EL modules are component-based and loosely coupled, facilitating configuration, deployment and integration with other applications.
Steve Payte, land systems specialist with Hunt said, ‘We are impressed with EL’s ease of use—despite the early stage of the application, we were reassured by its reliance on familiar concepts. EL also integrates well with our legacy systems—Tobin GIS Studio, Excalibur and our custom well service application.’
P2ES is hosting the system. Payte concluded, ‘The hosted model took a big burden off our shoulders from an IT perspective and enabled us to complete the project quickly.’ In a separate announcement, P2ES has also sold its Petroleum Financial hosted solution to Classic Hydrocarbons Inc. of Fort Worth, Texas.
Speaking at the recent Oracle Crystal Ball user group (OITJ Jan 08) Oracle’s oil and gas supremo David Shimbo updated Oracle’s ‘digital oilfield’ (ODO) initiatives (OITJ June 07). Shimbo restated Oracle’s commitment to delivering digital oilfield solutions by providing an E&P data management framework, an application integration architecture and PPDM-based master data management. The architecture will allow oil companies to deploy ‘best-of-breed’ applications with integrated data access across the enterprise.
Scope of the ODO is potentially vast—spanning SCADA, G&G, drilling, engineering, financials and HSE. An impressive slide showing companies deploying the PPDM data model on Oracle included three majors, four NOCs and a goodly number of independents and international oils—although it was not clear how many of these deploy the ODO per se.
At the heart of the ODO is a PPDM-based Oracle ‘3D spatial data warehouse’ that supports data analysis and Hyperion-based business intelligence (BI). Spatially-selected data can be turned into charts or manipulated in pivot tables—a paradigm that should be familiar to the BI/data mining community.
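The pivot-table paradigm, cross-tabulating selected records into totals, can be sketched in a few lines of plain Python. The field and monthly production figures below are invented for illustration:

```python
# Pivot-table sketch: cross-tabulate records into field x month totals.
# The sample production figures are invented for illustration.
from collections import defaultdict

records = [
    {"field": "A", "month": "Jan", "boe": 100},
    {"field": "A", "month": "Feb", "boe": 120},
    {"field": "B", "month": "Jan", "boe": 80},
]

pivot = defaultdict(lambda: defaultdict(int))
for r in records:
    pivot[r["field"]][r["month"]] += r["boe"]

for field, months in sorted(pivot.items()):
    print(field, dict(months))
```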
A collaboration with AspenTech rolls in process control, real time historian data for (potentially) a vertically integrated IT infrastructure spanning upstream, downstream and sales, leveraging Oracle’s Enterprise Asset Management solution. This targets ‘reliability-centered maintenance,’ with applications in refineries, gas plants, pipelines, oilfields and offshore platforms.
Hyperion is also used to provide oilfield key performance indicators from data sources including CygNet SCADA and ARIES/TOW, while eWorkspace (Hyperion) is used for ‘pub sub’ daily reporting. This leverages a ‘Petroleum Essbase’* data cube architecture which decomposes into production operations and P&L cubes.
Occidental is another ODO implementer with a ‘sandface to sales meter’ project leveraging most of the above. Newfield has also leveraged ODO components in its ‘360 Portal’ whose components include an electronic well file, an AFE routing and approval workflow and a GIS front end. Newfield has also deployed the PPDM-based master data management/data warehouse ODO component.
But if there is a single poster child for the ODO it is likely Chesapeake whose ‘Insight’ program is leveraging the ODO to ‘transform discussions from technology to business value.’ Insight, which also spans the whole gamut of oil and gas operations, has Oracle executive management sponsorship. Insight has kicked off with two sub projects—one well focused (again leveraging PPDM) and the other a revamp of Chesapeake’s maintenance activity.
* Extended spreadsheet database (Hyperion).
Andrew Marks (Tullow Oil) recalled the days when Tullow was ‘sick and tired’ of local businesses operating on their own and initiated an IM program (before Marks joined) called ‘One Tullow IM’ with the aim of ‘unified knowledge sharing via the Tullow Intranet.’ At the start of the project, Tullow had dispersed teams and multiple reporting lines. Now, 18 months later, Tullow has ‘standards, policies, processes and procedures’ (SP3) under development, and has defined roles and responsibilities.
Marks, who left Lasmo 8 years ago, ‘when Finder was new,’ asks, ‘How far have we moved since then? Have core interpretation applications changed significantly?’ Marks appears to think not—although ‘other technologies’ have come along to help. One such technology is the GIS Portal, now ‘well established,’ such that you can ‘see anything anywhere.’ Desktop GIS should complement the traditional G&G lifecycle—so that you can grab a piece of data—interpret—and move on.
Tullow’s intranet portal is built atop LiveLink DMS, Open Works, Kingdom and Petrel. A GIS front end allows for selection of basic well information and download to Excel. Logs can be viewed in application viewers although much project technical data remains local to interpreters. In the past, teams were burdened by monthly reporting. Senior management needs to read 10 pages per day—much of which is duplicate information. So Tullow now publishes ‘journal type’ information. Traffic lights show how production is going—allowing for real time decisions rather than a wait for a monthly report 6 weeks late.
Alan Smith (Paras), who was interim OMV CIO last year, presented a paper authored by OMV’s Franz Schmidt on the IM aspects of OMV’s 2004 takeover of Romanian state oil company Petrom. In Romania, ‘nobody really knows how much oil is produced.’ Petrom has tens of thousands of producers with no detailed information, no SCADA, no networks. Tank levels and phone calls are ‘all you’ve got.’ OMV is now working on a global program to address data ownership, standards, quality and ‘anarchic unauthorized updating’ of error-prone systems. The aim by 2010 is to recognize data/information as assets and ensure data correctness and storage in the right place. The vast majority of Petrom personnel had never even seen a PC before, so training, language localization and just ‘keeping things simple’ are important. Petrom is moving from its legacy in-house software to TietoEnator’s production reporting system. Pipelines are being mapped and incorporated into network diagrams for roll-out in 2008. GIS is used as an integrator and SAP is now a major component of Petrom’s IM—used to match production information with financials. Applications management in OMV is also to be addressed—with a move to a ‘true data,’ single version of the truth paradigm.
Al Kok works in Saudi Aramco’s Exploration Data Management division providing quality assured data services and knowledge-based data management to exploration. The division collaborates with data producers for data capture, edit and QC and currently manages over 900 exploration and delineation wells, 9,500 development wells and 2,000 water wells. In 2007 Saudi Aramco drilled 600 wells and 1,070 wellbores using 128 rigs (up from around 50 in 2001). Keeping pace with the activity increase has been a ‘significant challenge’ for Kok’s department. Aramco’s well data environment is an Oracle corporate database (CDB) with a large data footprint. A separate database holds well log data. There are multiple data acquirers, owners and loaders—each owner does their own data loading. Drilling engineering and wellsite geology track loading and check for data completeness. The output from the CDB is quality assured, project ready data for Aramco’s interpreters. Well defined processes ‘prevent errors rather than fix problems,’ eliminating cross reference conflicts and ensuring that ‘employees understand what’s going on.’ Processes are documented, errors targeted and data is ‘continuously improved.’
Robert Best presented Neuralog’s work getting a handle on PDVSA’s million logs and thousands of seismic sections that represent 70 years of activity. In 2004, PDVSA kicked off a legacy log data management project with Neuralog. PDVSA wanted open standards and ‘open GIS.’ A Gerencia de Operaciones de Datos Departamente (GODD) project team was formed from data management and IT. Today, cleansed and QC’d digital data goes to the PPDM 3.7-based NeuraDB relational database. This is synchronized with PDVSA’s Finder. Oracle BLOB storage is used for bulk data and content and ESRI’s ArcGIS provides a GIS front end. The result adds value to the PDVSA dataset by enhanced physical and logical data security, improved data access and usability and better interoperability with other repositories and applications. The system is now being stress tested as many companies are giving back fields (and data) to PDVSA as they do not consider the new government terms acceptable.
Agustin Diz described Repsol-YPF’s IM effort, particularly in support of performing reserve estimation and portfolio analysis and often on a tight schedule. In the past the company often started studies over—without realizing that reviews of what was done before were available. Transferring such knowledge can avoid ‘costly mistakes.’ G&G data is generally stored satisfactorily. But reservoir data—like pressure build up tests and interpretations is stored (or not) ‘all over the place.’ Drilling data management is ‘so so.’ Production data is OK at the macro level but poor at detailed allocations. Document management is improving (especially engineering documents for facilities). But it is hard to deploy data and document management systems that support the workflow—‘There is no easy answer, data management has to be a part of everyday work.’ The current solution leverages Microsoft Sharepoint pending deployment of a workflow tool—candidates under evaluation include Orchestra and PointCross.
Han de Min introduced AspenTech’s ‘Operations Domain Model’ that orchestrates E&P ‘real time and right time’ processes and captures ‘enterprise configuration data.’ According to de Min, AspenTech’s Hysys flagship is used to design 75% of oil and gas facilities globally. De Min is unsure if real time reservoir modeling is achievable. What is required is a ‘sustainable scalable asset register over the full life cycle of a facility.’ ‘Handover is the problem.’ Today’s service-oriented architecture is the ‘most sustainable’ way forward. De Min envisages a standards-based publish/subscribe bus with data historians beneath and visualization and applications above. StatoilHydro is already using the system—‘Norway is ahead of the game here.’
Katya Casey thinks that BHP Billiton’s subsurface computing strategy has proved fruitful. BHP’s new president (from Exxon) is promoting ‘functional excellence’ in subsurface computing, which translates into global standards and a cross-discipline team charter. BHPB is consolidating corporate databases in headquarters, while maintaining local data ownership. BHPB ‘believes in metadata’: POSC’s process taxonomy and PPDM’s discipline taxonomy have been leveraged in a Verity-based E&P metadata catalog. A GIS portal was built with Google Earth atop ESRI SDE and Oracle Spatial. ‘ESRI doesn’t own GIS!’ BHPB is now sharing data management practices developed in exploration with its production engineers.
Casey calls for an open discussion of the state of vendor data management including datums, record quality and ISO standards metadata on each record. Casey deprecates the ‘inefficiency of bulk vendor data subscription updates.’ The solution also uses Schlumberger’s Ocean API, considered an ‘open’ development platform for E&P.
Marco Piantanida presented ENI’s web portal that acts as a front end and launcher for ENI’s many applications. The Portal uses PowerHub and DecisionPoint XML services. Landmark’s Team Workspace portal solution was extended with ENI’s own technical and scientific portal. ‘A database only gets noticed when it becomes a part of the portal.’ Is it easy? No. Piantanida was amused by the talk of ‘web services.’ This project relies on direct Oracle connections. Some tools can be invoked in context. But some monolithic applications don’t allow this and may require a high-end PC, making them unsuitable for most users. Contrary to the marketing pitch, web services and middleware do not simplify interoperability—the same dependencies mean that you have to update everything on upgrade.
Hans Tetteroo described how Shell uses ESRI to analyze, manipulate, collect and display data. But final maps (often PDF files) are stored in the Map Management System (MMS). Previously, map management was done on the desktop with an in-house developed tool, ‘Mercator.’ But support proved ‘unsustainable,’ performance inadequate and there was no global reference data system. In Shell, the majority of users have locked-down PCs. Most applications need to be GID-scripted (a major operation!). A ‘next generation’ map management study was undertaken in 2005. This came out in favor of a web-based solution.
Flare’s E&P catalog was identified as a potential solution requiring additional development. LiveLink is used for publishing and to provide an audit trail and version management. ‘Users don’t want to see GIS systems,’ so the aim is for users to be able to generate (for instance) an emergency response plan for a facility and have the system populate as much information as possible automatically before building the map. But the real grey hairs come when legacy data is included. The system has been successfully piloted in EP Europe and is now rolling out around the world. Global support is assured by Shell’s ‘GRASP’ global rollout applications and support program.
This article is taken from a 12 page report produced as part of The Data Room’s subscription-based Technology Watch Service. More from www.oilit.com/tech.
In the SMi panel session on E&P recruitment and retention, Deidre O’Donnell revealed that head hunters Working Smart received 300 applicants for a single junior geoscience position with a major—100 of them at MSc level. The problem facing the oil industry is not so much entry-level personnel as the fact that over half of first-year graduate employees in oil and gas leave to join other industries. For those that stay, a ‘mercenary’ approach and an awareness of self-worth is observed. This is particularly acute among the circa 15-year ‘mid-life crisis’ leavers!
Serge Brun (Schlumberger) suggested that companies and employees have to remember that ‘there is life outside of management.’ Schlumberger’s problem is the five-year syndrome—people are poached by clients after phase one training! Session chair Najib Abusalbi suggested that this could be considered an honor—‘a reflection on the quality of our training.’ In fact Schlumberger operates an ‘open door’ policy for such employees wishing to return to the fold.
O’Donnell stated that five to fifteen years experience is the ‘most sought after demographic.’ Companies should ‘look after them. They are the ones that are most likely to leave!’ The question of family life and mobility was raised. Thierry Gregorius noted that oil and gas required people to move about to get on. Brun concurred— ‘Oils will always require people to move, to work in harsh environments. You need to be even more cautious in recruitment.’
Addressing the thorny question of cycles and letting people go during industry downturns, Brun stated that Schlumberger has learned how hard it is to pick people up after even a short downturn. ‘It will not happen again.’ Gregorius agreed, ‘Shell has learned from its mistakes and is now one of the biggest recruiters.’
The CAPE-OPEN Laboratories Network (CO-LaN) is conducting interoperability tests between software products implementing the CAPE-OPEN interface.
Fugro-Jason has appointed Alistair Cunningham and Ashley Taylor as Middle East business managers and Tom Taylor as region manager, North America.
SolArc has hired Eric Johnson as VP, marketing. Johnson was previously VP marketing with Halliburton.
The Process Control Systems Forum (PCSF) is to hold a workshop on control systems cyber security from March 31—April 4, 2008 in Salt Lake City, UT.
Aker Kvaerner has appointed Dave Hutchinson as senior VP with its subsea business unit.
Martin Ferron has resigned as President and CEO of Helix Energy Solutions. Chairman Owen Kratz is to take over the CEO role.
A new US Congressional Research Service report, ‘Emergency Alert System and All-Hazard Warnings’ recognizes ‘widespread acceptance’ of the Common Alerting Protocol (CAP) OASIS Standard. More from www.oilit.com/links/1012.
Mark Bashforth has stepped down as MD, Roxar Software Solutions (RSS) and now occupies a position with Roxar in Houston. CEO Gunnar Hviding is interim MD RSS.
Marc Daverat is now MD at DataFlux’ new Southern Europe headquarters in France. Daverat was previously with Schlumberger, Cap Gemini and data quality consultants Solveo.
TOTAL and Saudi Aramco have joined the Energistics standards consortium. The Energistics membership community now consists of 89 global upstream organizations.
ABB CEO Fred Kindle has resigned following ‘irreconcilable differences’ on how to lead the company. CFO Michel Demaré is interim CEO.
Philip Behrman is retiring as senior VP exploration for Marathon Oil.
NetApp has promoted Tom Georgens to president and COO and Tom Mendoza to vice chairman.
Patrick Gannon has resigned as president and CEO of the OASIS standards body.
Russ Krauss is now VP marketing with Object Reservoir. He was previously with Knowledge Reservoir.
John Archer has joined Petris as product manager of PetrisWINDS Enterprise. Archer was previously with BEA Systems.
Paradigm has appointed James Lamb as US regional VP and Serge Sauvagnac to lead its ‘Premier Partners’ program.
Peter Goyne has been appointed director of oil and gas operations with SpectrumData. Goyne hails from Halliburton’s Landmark unit.
Jeff Soine has been appointed president of Woodside Energy (USA) Inc. He was previously COO.
Maurice Wilkins is to head up Yokogawa’s new global marketing centre in Dallas, Texas. Wilkins was previously with ExxonMobil and Honeywell.
Energy Insights has published an ‘IT Shortlist’ of suppliers of Energy Trading and Risk Management software.
GeoEye has appointed Mike Horn to its Board of Directors.
The Instrument Society of America (ISA) is in the process of drafting its ISA99 series of standards for cyber security of industrial automation and control systems. Part 1, terminology, concepts and models, was published late last year and will shortly be joined by Part 2, control system security.
ISA kindly provided Oil IT Journal with a copy of a new report* on the ISA99 standard. The 100 page report ($115 from www.isa.org) describes cyber security technologies, their pros and cons, expected threats and known cyber vulnerabilities. The report provides preliminary recommendations and guidance for cyber security deployment and countermeasures.
Industrial automation and control systems (IACS) include control systems used in refineries and geographically dispersed operations such as utilities, pipelines and petroleum production and distribution facilities. In the IACS context, security means the prevention of unwanted penetration, interference with operations, and access to confidential information. The standard covers computers, networks, operating systems and applications.
The report tracks the evolution of IACS from individual, isolated computers with proprietary operating systems and networks to interconnected systems and applications employing commercial off the shelf (COTS) operating systems and protocols. These are now being integrated with enterprise systems and other business applications. While increased integration has brought significant benefits in terms of information visibility, the COTS approach is increasing system vulnerability to the same software attacks as are present in business and desktop devices.
Also, joint ventures, alliance partners, and outsourced services have led to a more complex situation with respect to the number of organizations and groups contributing to the security of industrial automation and control systems. Conventional business information security focuses on the objectives of confidentiality, integrity and availability (CIA). But for IACS the priorities differ. Here security is primarily concerned with maintaining the availability of all system components and keeping the plant running—with emphasis on real time control. The CIA model is inadequate for a full understanding of the requirements for security in industrial automation and control systems. Here the ISA report advocates ‘defense in depth,’ with multiple countermeasures so that, for example, intrusion detection is deployed to signal the penetration of a firewall**.
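The ‘defense in depth’ idea—multiple independent countermeasures, so a breach of one layer is caught by the next—can be illustrated with a toy sketch. The layers and filtering rules below are invented for illustration and bear no relation to any real IACS deployment:

```python
# Toy defense-in-depth sketch: a packet must pass a firewall layer,
# and an independent intrusion-detection layer still inspects and can
# flag anything the firewall allowed through.
def firewall(packet):
    # First layer: simple allow-list of ports.
    return packet["port"] in {80, 443, 502}

def intrusion_detector(packet, alerts):
    # Second, independent layer: crude anomaly rule on packet size.
    if packet["size"] > 1000:
        alerts.append(f"suspicious packet on port {packet['port']}")
        return False
    return True

def accept(packet, alerts):
    # Both layers must pass; the IDS catches what the firewall misses.
    return firewall(packet) and intrusion_detector(packet, alerts)

alerts = []
print(accept({"port": 502, "size": 64}, alerts))    # True
print(accept({"port": 502, "size": 4096}, alerts))  # False
print(alerts)  # ['suspicious packet on port 502']
```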
The report discusses securing data historians, operating platforms, Distributed Control Systems, Programmable Logic Controllers, SCADA systems and conventional control system IT. ISA99 has backing from a veritable who’s who of oil and service companies—to name a few: ExxonMobil, BP, Chevron, Shell and Aramco, along with pretty well all of the process engineering supplier community. More from www.isa.org/standards.
* ANSI/ISA-TR99.00.01-2007 Security Technologies for Industrial Automation and Control Systems.
** For more on defense in depth and ‘deperimeterization’ we recommend The Data Room’s technology Watch report from the 2005 SPE Digital Security in Oil and Gas event—now a free download from www.oilit.com/links/1013.
Swedish enterprise asset management solutions provider IFS has launched a new solution for energy and utilities. IFS provides component-based business software developed using open standards. IFS’ components are optimized for ERP, enterprise asset management, and MRO.
IFS describes the new tool as an ‘integrated engineering collaboration/capital project/O&M solution,’ with support for a mobile workforce and document management. IFS’ applications for oil and gas service providers offer an alternative to custom solutions and ‘information islands’ in the context of engineering, procurement, construction and installation activities.
In a separate announcement, IFS inked a deal with geographical information systems (GIS) specialist ESRI to ‘bring GIS and enterprise asset management together.’ The integrated Energy and Utilities solution was launched at Energiforum in Oslo, Norway this month.
Seattle-based remote asset management and telematics specialist SARS Corp. has received ISO quality certification for its SARS Andronics liquid petroleum gas monitoring unit. Andronics’ UtilityEye LPG system monitors remote LPG tanks, both above and below ground, and sends a notification when a tank is filled or when fuel levels drop. The system communicates via low-earth orbit satellite for reliable monitoring in remote locations. ISO 9001-2000 was developed in conjunction with the American Petroleum Institute (API) as the basis of quality management systems for manufacturers and service providers.
SARS CEO Clayton Shelver said, ‘The rigorous certification program assures customer confidence and accelerates sales.’ Andronics currently has 4,000 LPG units deployed with some of Europe’s largest energy companies. The company also develops the LEOCATE GPRS-based vehicle tracking system that monitors fleet movements and ‘calls home’ with automated email reports on engine idle time, distance traveled, number of stops, hours worked and fleet utilization. Northern Ireland-based Andronics was acquired by SARS in December 2007.
Kongsberg’s oil and gas division has opened a new ‘integrated operations laboratory’ at its Carpus HQ in Norway. The ‘IO Lab’ will develop and test work practices and tools for collaboration between the company’s own engineers and customers. The Lab addresses new build projects, operations and modification and maintenance tasks. The IO Lab is equipped with advanced audiovisual systems, 3D well and reservoir visualization software and real time information systems for drilling and production operations. Advanced process simulators can be linked to the control system for testing and real time operations. Remote access to process control systems allows for inspection, diagnostics and parameterization. Time stamped records of control system parameters can be recorded in a ‘real time logistic model’ of the whole field.
Trond Weberg, Kongsberg Maritime CTO said, ‘The IO Lab allows us to remotely assist customers by bringing in the required experts without having to travel to a site. Communications and audiovisual technology provide offsite workers with access to the same tools. It’s as good as being on the spot!’ In October 2007 the IO Lab connected live to StatoilHydro’s Kristin platform. A test monitored the activity of five Kongsberg maintenance engineers working on Kristin’s process control systems.
Intergraph has released SmartPlant Enterprise for Owner Operators (SPO), a new component of its SmartPlant Enterprise suite. SPO offers predefined integrated work processes to help owner operators manage plant engineering design and to synchronize with operations and maintenance. SPO is said to increase plant lifecycle data interoperability and to assure fast-track implementation. SPO leverages integration technologies including SAP NetWeaver and Microsoft SharePoint. SPO can interoperate with SAP, Meridium and Maximo. SPO also interfaces with distribution control and enterprise content management (ECM) systems.
SPO promises cross-functional and cross-organizational information access through a role-based web portal. Teams can collaborate ‘seamlessly,’ data discovery time is substantially reduced and information quality and consistency is enhanced.
EnergySolutions is implementing its PipelineManager package on StatoilHydro’s gas lines and is also upgrading the software on the company’s liquids lines. StatoilHydro will use PipelineManager for leak detection, product tracking and modeling pipeline behavior.
Sigve Arthun, procurement advisor for StatoilHydro said, ‘We’ve been using PipelineManager on our liquids lines for several years with a good track record of customer support from EnergySolutions. We are building a new combined heat and power station and are therefore upgrading our PM installation to support this gas-powered unit.’
EnergySolutions’ operations director Clive Seaton added, ‘StatoilHydro can benefit from a lower cost of ownership by deploying PM on both liquids and gas pipelines. A single solution is easier to implement and provides more standard functionality—which is important for StatoilHydro’s long-term pipeline management strategy.’ PipelineManager is used worldwide for hydraulic modeling, inventory calculations, leak detection, product tracking, predictive simulations, DRA and scraper tracking, instrument drift and survival time calculations.
BP has deployed technology from VRcontext (VRC) to model and simulate the behavior of its Schiehallion floating, production, storage and offloading (FPSO) vessel. In a recent webcast, VRC CEO Marc de Buyl described how multi-vendor 3D CAD, laser scan, photo imagery and a digital terrain model of the seabed have been assembled into a virtual reality 3D model, leveraging VRC’s WalkInside flagship product. The model is linked to information systems, engineering and process simulators and has been used for real time monitoring, operator training and emergency response. The VR approach, according to de Buyl, allows owner operators to ‘capitalize on data transparency—across all disciplines and activities on deep complex offshore assets.’ BP used the model to help steer a remotely operated vehicle (ROV) around complex seabed equipment.
WalkInside’s link to the engineering world is through VRC’s ‘ProcessLife’ product that enables bidirectional links to real-time applications such as SCADA/DCS and process simulation. ProcessLife applications include computational fluid dynamics data visualization, tracking field personnel in 3D and training. WalkInside also embeds Noumenon’s XMpLant formulation of the ISO 15926 plant data standard. According to the company, a growth in standards take-up has led to an increased role for XMpLant. VRC has been nominated for a FIATECH award for its innovative use of real time dynamic tessellation for continuous rendering of the 3D model. The CETI award ‘celebrates engineering and technology innovations.’
Aker Kvaerner has acquired a majority stake in 3D drilling simulation and visualisation specialist First Interactive with an option to buy the remaining shares. First Interactive provides visualisation solutions to the oil and gas sector. The companies are to develop a ‘next generation’ drilling simulator for operator training and rig commissioning. Other solutions will target remote support for real-time drilling operations. First Interactive is located in Stavanger with an R&D department in Russia and Ukraine. 2007 revenues were NOK 25 million.
SGI (formerly Silicon Graphics) has acquired the assets of clustered high performance computing specialist Linux Networx. The deal was financed by a stock sale to Oak Investment Partners and Lehman Brothers. SGI CEO Bo Ewald commented, ‘We’ve grown orders more than 30 percent in each of the last two quarters. We’re in a position to acquire key technology and expertise to further power our growth. This represents the first of such key technology acquisitions and will help further the development of our software environment and support for our clustered systems. In addition, we are very pleased that Oak and Lehman Brothers have provided additional financing to the company to help speed our growth.’
Kuwait Oil Company (KOC) has selected Paradigm’s Geolog software to store its well log petrophysical data, interpretations and analyses. The multi year deal was done through Paradigm’s Earth Decision Sciences unit and includes additional software modules, training and technical support.
Eurotech has signed a deal with ARKeX for the provision of systems administration and network support for its Windows and Linux systems.
KBR unit Granherne has signed a frame agreement with StatoilHydro for engineering design services for the development of oil and gas resources in the Gullfaks Area of the Norwegian continental shelf.
The French Petroleum Institute (IFP) has signed a joint venture agreement with the French atomic energy authority (CEA) for the development of ‘Arcane,’ a high performance computing platform for numerical simulation. The Arcane project includes IT services for optimizing code for massively parallel computers. The IFP will use the codes to simulate geological storage of CO2 and for fluid flow modeling of oil and gas reservoirs.
CSE Global’s Semaphore unit has teamed with Industrial Defender (ID) to embed ID’s cyber security technology in its T-BOX and Kingfisher remote terminal units (RTUs). Semaphore’s T-BOX is an IP-based telemetry solution that integrates SCADA, control, and communications functionality in a rugged package. T-BOX uses web technologies and public networks for decentralized monitoring and control. ID’s Defense in Depth cyber security solutions for process control include network security professional services, technology and managed security services.
Semaphore claims its T-BOX products offer up to 50% less total installed cost per point versus traditional SCADA/ PLC systems and permit greater organizational access to data through automated reporting and browser software. T-BOX’s push and web technologies enable high performance, economical implementation and operation.
Web server technology with SMS* reporting and remote control provides real-time access over the internet. Operators also receive alarms and communicate with their sites remotely using a cell phone. Automatic alarm escalation allows key maintenance personnel to receive any unacknowledged alarms. Semaphore’s products are designed for monitoring and control applications in many verticals including oil and gas.
* Text messaging.
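Automatic alarm escalation of the kind Semaphore describes—an unacknowledged alarm passed up a chain of contacts until someone responds—can be sketched as follows. The contact list and the acknowledgment logic are invented for illustration:

```python
def escalate(alarm, contacts, acknowledged):
    """Walk the escalation chain until a contact acknowledges the alarm.
    Returns the list of contacts who were notified."""
    notified = []
    for contact in contacts:
        notified.append(contact)          # e.g. send an SMS to this contact
        if acknowledged(contact, alarm):  # stop once someone responds
            break
    return notified

contacts = ["operator", "shift supervisor", "maintenance manager"]
# Here only the maintenance manager acknowledges the alarm, so the
# alert escalates through the whole chain:
ack = lambda who, alarm: who == "maintenance manager"
print(escalate("tank low level", contacts, ack))
# ['operator', 'shift supervisor', 'maintenance manager']
```

In a real RTU the acknowledgment check would be a timeout waiting for an SMS reply rather than an immediate function call.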
Speaking at a keynote session at the DaratechPLANT2008 conference last month in Houston, Bentley CEO Greg Bentley identified key market trends in process plant creation and provided an update on Bentley’s technology. Bentley claimed that all owner-operators and engineering/procurement/construction (EPC) firms at DaratechPLANT2008 use Bentley software with, on average, over 20 Bentley sites per firm.
To manage project information across these distributed enterprises many design firms now use Bentley’s ProjectWise collaboration servers. Bentley poster child BP has now implemented its ten major capital projects across three continents using ProjectWise’s lifecycle plant engineering data management.
EPC workflows increasingly incorporate modular solutions, leveraging suppliers’ innovations. This means that suppliers are assuming more of the engineering and construction work as they design, configure, fabricate and just-in-time deliver modules of greater functional scope and scale for assembly on site.
Bentley supports this distributed engineering activity with bidirectional exchange of virtual work packages. Here, interoperability is the key but the days of ‘command and control’ plant creation and a ‘monolithic’ project software environment are gone. These have been replaced by a ‘uniquely robust’ plant data model, the ISO 15926 standard for the representation of process plant lifecycle data.
Though originally conceived to support data management over the decades of a plant’s lifetime, ISO 15926 fortuitously also solves the formidable challenges of distributed plant creation. Today’s services-oriented architectures also take full advantage of self-describing data, the hallmark of ISO 15926.
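The ‘self-describing data’ idea can be illustrated with a toy sketch: the payload carries its own class reference and units, so a consuming application needs no out-of-band schema. The class URI and property names below are placeholders, not real ISO 15926 reference data identifiers:

```python
# Illustrative 'self-describing' data item: the record carries its own
# class reference and units alongside the values themselves.
item = {
    "id": "pump-P-101",
    "class": "http://example.org/rdl/CentrifugalPump",  # placeholder URI
    "properties": {
        "design_pressure": {"value": 16.0, "unit": "bar"},
    },
}

def describe(item):
    """Render a human-readable summary using only the item's own metadata."""
    cls = item["class"].rsplit("/", 1)[-1]
    props = ", ".join(f"{k}={v['value']} {v['unit']}"
                      for k, v in item["properties"].items())
    return f"{item['id']} is a {cls} with {props}"

print(describe(item))
# pump-P-101 is a CentrifugalPump with design_pressure=16.0 bar
```

Because the class and units travel with the data, two applications that share the reference library can exchange such items without prior agreement on a private schema—the property exploited in distributed plant creation.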
Bentley originally used ISO 15926 to import plant data created in design systems from other vendors. But now the standard supports real-time interoperability between plant applications, including new acquisitions within Bentley’s growing application portfolio. The ‘culmination’ of Bentley’s ISO 15926 work is OpenPlant, a suite of applications that store data in the ISO 15926 data model.
Also at Daratech, Bentley announced that Norwegian design consultants NLI Engineering Oil & Gas and Gazprom unit TyumenNIIgiprogas have signed up to its Enterprise License Subscription (ELS) program. ELS gives companies unlimited access to the entire Bentley software portfolio for a fixed annual fee. More from www.bentley.com/openplant.
Norwegian research institute Norsar has just commercialized ‘SeisRoX,’ a tool for quantitative, model-assisted seismic interpretation. The package aims to facilitate quantitative interpretation with a ‘multi-domain’ model concept, integrating rock physics with seismic modeling technology. SeisRoX provides a direct means of visualizing the relationship between the rock properties and the seismic image and investigating the effects of uncertainty.
SeisRoX can be used to model pre-stack depth migrated seismic (PSDM) attributes at the reservoir scale. A ‘robust and flexible’ workflow guides the user from the generation of the rock physics model through to the simulation and analysis of the 3D PSDM seismic image. This approach is particularly useful for investigating the seismic sensitivity to geological properties, seismic properties and reservoir geometry.
PFC Energy has just published its succinct (8 pages) but informative rankings of the world’s largest listed energy firms. The combined value of the PFC Energy Top 50 companies has climbed by 45% since 2006 to a $5 trillion market capitalization—the largest jump in the list’s history.
The top performers in the Energy 50 are traded NOCs with a valuation of some 20 times trailing 12-month earnings, compared with a 12 times multiple for IOCs. Petrochina climbed from third to first place on the PFC Energy 50 in the run-up to its Shanghai IPO. Other NOC climbers include Sinopec, CNOOC, PTT and Petrobras. However, the PFC list excludes the really big fish—the non-traded NOCs. The lion’s share of the oil and gas sector’s value lies with Saudi Aramco, National Iranian Oil Company, Kuwait Petroleum Company, Abu Dhabi National Oil Company and Petronas.
The Energy 50 report notes that, in a year when WTI oil prices rose 57%, the combined capitalization of the majors increased only 15%, reflecting the increasingly difficult struggle to deliver production growth. The majors substantially underperformed the smaller IOCs such as Apache, BG, Devon, Anadarko and Woodside. The market also favored the oil service sector where the Top 15 are priced at an average 21 times trailing 12-month earnings. Services are the only sector where US-based companies still dominate, although the year’s strongest service company share price growth came from a Chinese service company and an Indian drilling company. The report is a free download from www.pfcenergy50.com.
Hi tech engineering specialist ISM International is leveraging experience gained in the US Homeland Security program to enter the oil and gas patch. According to ISM, demand from ‘several majors’ has led to the ‘GotchaGPS’ personal safety technology being embedded in new products aimed at the oil industry. GotchaGPS is powered by Millennium Plus*, a mobile tracking device attached to a vehicle that uses ‘a constellation of satellites and patented GSM technology’ to provide location information to a web browser, email, cell phone, or pager. The technology will be adapted to track personnel and vehicles in high security oil projects.
ISM CEO Mario Quenneville said, ‘Consumers and businesses want better security and the GotchaGPS device can provide this for large installations or individuals—these devices are the cutting edge in protection and security.’ Last month ISM acquired ‘exclusive distribution rights’ to GotchaGPS for the ‘American, Canadian and International markets.’
* This appears to embed technology from Falcom Stepp.
Aker Kvaerner has launched a new data acquisition system, ‘PodEx,’ said to be ‘a cost-effective solution to the challenge of maximizing recovery from mature subsea fields.’ PodEx adds new functionality and instrumentation to existing subsea control systems and requires minimal installation time. PodEx uses spare subsea circuitry and power to supply well and reservoir data to topside systems for analysis.
Aker VP Erik Taule said, ‘The new solution contrasts with previous, complex alternatives which have required either changing or redesigning the original control system. Many brownfield sites would benefit from better knowledge about their wells’ performance.’ The Aker system also integrates with control systems from other manufacturers.
Expro Group, now rebranded as just ‘Expro,’ reports successful deployment of its Cableless Telemetry System (CaTS) in a ‘challenging’ offshore well in the UK North Sea. CaTS was positioned in the W160 well on BP’s Mungo platform. The old cabled permanent gauge was failing and BP wanted to restore the flow of real-time reservoir pressure and temperature data without a workover. The CaTS system comprises a range of downhole instrumentation and control valves along with a battery powered, wireless data transmission system. This transmits low frequency radio waves, leveraging the natural waveguide of the well’s tubing or casing. CaTS provides pressure and temperature information for production monitoring and optimization of high-rate gas wells. Bi-directional communication can also be deployed for cable-free control applications.
Expro ingeniously used a failed permanent gauge and cable as a signal path to relay the CaTS signal to the surface. The CaTS system was deployed on slickline in November and is now interfaced to the platform’s SCADA system. Bottom hole pressure and temperature data is received in real time by BP onshore.