A lot is going on in the parallel universe of utilities with the advent of the ‘Smart Grid.’ The upgrade to the US electricity metering infrastructure promises, inter alia, fine-grained control of consumer devices and real-time price information exchange. Smart Grid blends e-business with ‘green’ business and is fast becoming a major conduit for US ‘stimulus’ funding.
Last month Microsoft jumped on the Smart Grid bandwagon with the announcement of a ‘Smart Energy Reference Architecture’ (SERA) that claims to address technology integration across the ‘smart energy ecosystem.’ SERA ‘supporters’ include Accenture, ESRI and OSIsoft.
The Smart Grid envisions a world where thousands of devices ‘plug and play’ into the grid through ‘common standards and interoperability frameworks.’ The battle for a utilities ‘framework’ is hotting-up with announcements from Siemens and Silver Spring of a ‘Smart Energy Network’ and from IBM, of its ‘Solution Architecture for Energy and Utilities Framework’ (SAFE).
In a separate announcement, Microsoft has joined the upstream oil and gas Energistics standards body. The press release states that Microsoft is to leverage its ‘proven experience in bringing technologies and solutions to the oil and gas industry to deliver the reference implementation [our emphasis] of Energistics’ standards such as WITSML and PRODML.’
As ‘reference implementation’ sounds pretty much like a ‘reference architecture,’ we checked out the position paper ‘Microsoft Smart Energy Reference Architecture’ to see what the future may hold for E&P.
This 130-page document includes a description of a ‘holistic life-user experience’ leveraging Microsoft SharePoint. The section on standards shows a smorgasbord of OASIS, ICE and NIST standards, ‘linked’ through an ‘ontology.’ The document offers illustrations of partner ‘implementations’ before enumerating just about every technology that Microsoft has to offer, from Complex Event Processing to the Azure data cloud. But there is scant evidence of an ‘architecture’ per se. While it is a given that almost all vendors deploy Microsoft technology in some form or other, SERA is currently more PowerPoint than protocol.
Where does that leave the E&P ‘reference implementation?’ We asked Energistics CEO Randy Clark to clarify the situation. Here is his reply, ‘Thanks for your interest in the Microsoft/Energistics press release. As to the term ‘reference implementation,’ simply put, Energistics does not designate any implementation from any member company with special or unique status.’
France’s computer champion Bull has made its first high performance computing (HPC) sale to the oil and gas vertical with the announcement that Petrobras’ CENPES research and development (R&D) unit is acquiring a 250 teraflop ‘Bull-X’ cluster. The new machine, from Bull’s ‘Extreme Computing’ product line, is a CPU/GPU* hybrid and is to be used in Petrobras’ seismic imaging research effort.
Bull upped the ante in its HPC effort a couple of years back with the acquisition of French HPC systems integrator Serviware. Serviware has implemented HP-based supercomputers for Total E&P and last year installed a 17 teraflop Linux-based Supermicro cluster at the French Petroleum Institute (IFP).
The Petrobras/CENPES machine would notionally appear in the top 20 of the TOP500 list of supercomputers—assuming that its quoted teraflops translate directly into a Linpack number.
The Bull-X supercomputer will be located in a Petrobras data center, currently nearing completion, on the campus of Rio de Janeiro University. Bull has installed six other supercomputers at Brazil’s federal universities. More from www.bull.com.
* Central/graphics processing unit.
What is a silo? We all know they are ‘bad,’ that their walls have to be broken down to encourage ‘collaboration’ and interdisciplinary understanding. But what exactly are they? How do you know when you are inside one, how do you know if you are breaking down the barriers, and most importantly, how do you know when you are building a new one?
Let’s start with some obvious ‘silos’ in the upstream. By and large they can be mapped to the main tradeshows and conferences. In the upstream, we have a geology silo, a geophysics silo and a petroleum engineering silo. Moving downstream we have a process silo, a pipeline silo, an engineering silo and so on. All could equally be described in terms of job function, or education and training with much the same divisions.
As an attendee of many of the tradeshow ‘silos,’ I can say for sure that the silo walls are already in very poor shape. While not exactly broken down, you do get to see the same people at different shows—perhaps wearing slightly different hats and with a slightly shifted discourse. But the ‘seismic story’ that is told at the SPE* these days overlaps and dovetails nicely with the simulation stories told at the SEG** or AAPG***. There really is a lot of interaction across the disciplines—and this is indeed a good thing. Should there be more? Undoubtedly. Should we break down the barriers further? Maybe and maybe not. The mapping between education, job function and Society has stood the test of time. The discipline-specific silo is probably a necessity and the boundaries are fuzzy enough to allow for significant information exchange.
But there is another way of carving up the world which turns the silos into less attractive propositions and that is capital allocation. The silos do not only tell us which tradeshows to attend but they are often used to divvy up available funds. While surfing the web the other day, I came across one of those instant ‘surveys’ that webmeisters like to put on their home pages. On the Nickles New Technology Magazine home page I was invited to vote for where R&D money should go. The choices were—Exploration, Drilling, Production, Environment and IT.
I found this rather curious. Is IT really a separate cost center? Nickles is not the only one to see the world like this. As we reported in our April 2009 issue, ExxonMobil’s Russ Spahr noted that technologies like the digital oilfield are ‘chasing the same barrels as other asset management processes.’
The idea of IT as a separate cost center and silo is salutary when you consider the lip service that is paid to breaking down the silo boundaries through ‘collaboration.’ You might think that an IT ‘silo’ was unnecessary because it is naturally a horizontal activity. Its toolset is shared across the industry and while it does have its local idiosyncrasies, a geophysicist and a refiner would probably be able to understand each others’ IT before they understood each others’ processes.
To go back to the ‘what is a silo’ question, we now have a few pointers. The existence of a dedicated conference is a good start. Some job descriptions referring to the specific discipline. Cost centers for same. And the ultimate accolade—a professional society. But before you get the idea that this editorial is a pitch for the latter—a Society of Petroleum IT (SPIT!), perhaps—I invite you to reflect on one of the drivers behind an embryonic silo. The first record we have of the ‘digital oilfield’ was in the December 2001 issue of Oil IT Journal where we reported on the first CERA study on ‘New Technology as Key to Petroleum Future.’ This was sponsored by Sun Microsystems. The date is interesting. Oil IT Journal had already been publishing for five years before then. And we were by no means the first. The AAPG produced Geobyte in the 1980s and Hart’s Petro Systems World was likewise in print some twenty years or so before the ‘invention’ of the digital oilfield. Both publications are long gone by the way.
What exactly happened in 2001? It was not so much that IT suddenly arrived at the oilfield, more that the horizontal vendors’ marketing departments were getting their acts together and going after juicy vertical targets like oil and gas.
Sun Microsystems’ position in oil and gas may be somewhat diminished these days, but Microsoft has picked up the digital oilfield baton and directs a significant amount of marketing effort at its ‘collaboration’ and ‘productivity’ solutions for oil and gas along with cash contributions (Microsoft was the main sponsor of the SPE Gulf Coast Section’s 2009 Digital Energy Conference for example). Silo building chez your client is a good move. If you are an IT vendor, your job gets a lot easier if you have someone on the other side of the table with all their checks already written out to ‘software vendor.’
To judge the necessity or otherwise of an IT ‘silo’ I invite you to compare and contrast the SPE’s various ‘digital energy’ initiatives and the SEG’s high performance computing (HPC) session—a report on this will appear in next month’s Oil IT Journal. On the face of it these are just two more ‘silos’ to contend with. But they are quite different. The SEG HPC ‘silo’ is very much bottom up—catering to a community with a pressing need (seismic imaging in the complex terrain of the Gulf of Mexico sub salt play) and fulfilling a real technical role of information dissemination. On the other hand much of the SPE IT Section’s discourse involves variations on the theme of the business benefits of collaboration—or worse, the tired old dogma of the ‘big crew change’ and the imminent personnel penury. C’mon! Hardly a day goes by without an announcement of more layoffs. If we get any more ‘productive’ there’ll soon be nobody left to ‘collaborate’ with!
* Society of Petroleum Engineers.
** Society of Exploration Geophysicists.
*** American Association of Petroleum Geologists.
Halliburton’s Landmark unit reports on Shell’s global roll-out of its OpenWorks R5000 release. We caught up with Landmark’s senior software and services director Chris Usher at the Houston SEG.
What is involved in the Shell deployment?
Shell has deployed our DecisionSpace integration environment and the R5000 data tier. The deal also involved migration to the new OpenWorks R5000 data model and the global roll-out. The system has now been running live for three months and has reduced data duplication. Shell is also leveraging the new single project model*. This has eased workflow development and allowed Shell to port its own applications to the new environment.
What are these applications?
Shell’s proprietary interpretation systems.
This is mostly a data management deal.
Yes. It involves enterprise data management across OpenWorks (OW) and the Engineering Data Model (EDM)—now both in the R5000 release. Shell also has the dev kit. Oh, and Shell’s real time operating centers are managed by Landmark so we can provide enhanced data support to drilling.
What does Shell’s workflow entail?
Daily project data from OW/EDM is captured to the PetroBank CDS**. Data in the CDS can also be pushed back into applications such as SeisWorks. The new environment supports the ‘classic’ apps and Shell’s taxonomy is embedded in the new system. The CDS acts like a ‘public’ PetroBank—but behind the firewall.
Does Shell deploy a ‘project world,’ a Google Earth for seismic?
This could be done with the new, on-the-fly cartographic transformations. But this is not how Shell does it. They do work with one very large Gulf of Mexico Project. GeoProbe is also connected to ION’s large trans-Atlantic ‘Span’ data sets.
Does Shell co-develop and productize stuff like Statoil?
Yes, the CDS was a funded development.
* See our ECIM report (OITJ October 2009).
** Corporate data store.
What’s Paradigm highlighting at the SEG?
At the EAGE last June we announced EPOS 4, our data infrastructure, and the suite of applications, our ‘Rock & Fluid Canvas.’ Now we are showing these tools in real world use, in asset-based problem solving. Typical workflows include sub-salt exploration in the Gulf of Mexico, Barnett shale frac jobs and West African turbidite investigations. Many presentations at the SEG are about seismic imaging with RTM*. Earth Study 360 is different—it is about local investigations. We get RTM quality or better—it acts like a seismic dipmeter for fractured shales.
How is Skua going?
We are optimistic. We think that 2010 will be the year for Skua. There is usually around a 3 year cycle from roll-out to take-up. Skua’s UVT transform models rock properties in a pre-deformation state. This can be applied to seismic data to validate their interpretation.
What is your development environment?
Today everything runs on Linux—but we are increasingly moving to Windows. There is a lot of interest in the Trolltech/Qt porting pathway. In HPC, everything runs on the latest Intel chips and on high end workstations. We are also using NVIDIA GPUs.
For graphics or number crunching?
Graphics.
Not CUDA?
We try to avoid having to re-engineer applications. Actually I’m rather skeptical about GPU-based computation, especially for vendors. We are more interested in trends like touch screens and 3D—which is driven by the arrival of 3DTV. I’ll be surprised if 5 years from now our software still looks the same. Ergonomics is getting more attention as young people are now getting RSI** too!
Is the industry ready to put time and money into radically new stuff?
That’s a good question—somebody will!
Maybe some startup will come up with something and get bought up.
Yes that is the usual route—we try to do both. And we continue to leverage the expertise of the Nancy School of Mines team.
How big is Paradigm today?
We have 850 employees.
* Reverse time migration.
** Repetitive strain injury.
Back in 2007, Nvidia’s graphics processing unit (GPU)-based accelerators were ubiquitous at the Society of Exploration Geophysicists (SEG) annual convention. Two years on, Nvidia continues to dominate the scene with its Quadro range of graphics accelerators and the Tesla GPU-based compute engines.
On the Landmark booth, a quad-screen display was showing a staggering 32 megapixels of seismic scenery, albeit with a rather ugly wide bezel.
Mechdyne’s Mosaic was showing a zero bezel display of 4 x HD (1920 x 1080) along with great 3D (with glasses—but these are getting much more wearable).
On the 3D front, Mechdyne was also showing a spectacular prototype Sony 3D LED backlit TV—a taste of things to come in the home cinema? Maybe there is a tipping point here as the 3D on these displays is of an immediacy that has been lacking previously—like the glasses, it is just less of an intrusion.
Down at the programming level, the 8.1 release of Visualization Sciences Group’s (VSG—previously TGS) Open Inventor toolkit, under the hood of many upstream applications, embeds Nvidia’s ‘CompleX’ scene-scaling acceleration engine. Open Inventor was used, inter alia, by CGGVeritas to show off its seismic library.
On the HP stand, a $5,000 laptop with a 3D Quadro FX3700M was demoing SMT’s Kingdom Suite—described as the ‘democratization’ of 3D.
Finally, Nvidia announced that Hess is deploying the HP Z800/Nvidia SLI multi OS workstation we reported on earlier this year (OITJ June 2009).
The Information Store (iStore) has just published a white paper* outlining the information challenge of the digital oilfield (DO) along with updated information about its PetroTrek solution. iStore advocates a ‘federated master data approach’ where data remains in place to be retrieved for decision support. PetroTrek was used by ‘one of the world’s largest integrated oil companies’ to track production from a data set of over three million wells in the continental US—feeding composite data from Oracle and other data sources to the major’s hundreds of joint venture partners.
iStore has developed the PetroTrek Scripting Language (PSL), a fourth-generation programming language specifically developed for petroleum enterprise data management. PSL, a proprietary language, was developed ‘to overcome the limitations and complexities of 3GLs like TCL, PHP and PERL for E&P solution development.’ PSL comes with a desktop IDE** for testing and deployment. The platform-independent solution currently supports UNIX, .NET, SharePoint, Microsoft Surface and Windows Azure solutions and provides data connections to mash up E&P data sets and drive visualization components such as Silverlight charts, log displays, seismic viewers and Bing maps. PSL is available to third-party developers. More from istore.com/PTK.
* A free download from www.istore.com.
** Integrated development environment.
Halliburton’s GeoGraphix ‘value brand’ has been upgraded, leveraging Microsoft DirectX graphics and GPU-based 3D scene rendering. Discovery 3D was on show at the SEG convention and included an intriguing ‘coal mine’ like display of the reservoir incorporating seismic data. But what attracted the crowds was the use of a Microsoft Xbox controller to navigate the 3D volume. Discovery 3D ‘makes high performance 3D visualization and interpretation tools accessible and affordable for all GeoGraphix users.’
GeoGraphix believes that younger geoscientists will ‘embrace and use this popular gaming technology.’
Under the hood of this new functionality is Direct3D, a component of Microsoft’s proprietary DirectX graphics API. Its latest manifestation, Direct3D 11, is a component of Windows 7 and introduces tessellation, GPU-based ‘multithreaded rendering’ and compute shaders. The latter, according to Wikipedia, ‘supports non-graphical tasks such as stream processing and physics acceleration’ and is ‘similar in spirit to OpenCL and Nvidia’s CUDA.’
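For the technically curious, here is a minimal sketch, our own illustration rather than GeoGraphix code, of how a Direct3D 11 compute shader is compiled and dispatched from C++. The HLSL kernel is a trivial stand-in and the buffer plumbing is elided.

```cpp
// A minimal Direct3D 11 compute shader dispatch (our illustration, not
// GeoGraphix code). Buffer creation, UAV binding and error handling are
// omitted. Link against d3d11.lib and d3dcompiler.lib.
#include <d3d11.h>
#include <d3dcompiler.h>
#include <cstring>

// A trivial HLSL kernel, compiled at runtime: doubles each buffer value.
static const char* kSource =
    "RWStructuredBuffer<float> data : register(u0);\n"
    "[numthreads(64, 1, 1)]\n"
    "void main(uint3 id : SV_DispatchThreadID) { data[id.x] *= 2.0f; }\n";

int main() {
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION,
                      &device, nullptr, &context);

    // Compile the kernel against the Direct3D 11 'cs_5_0' compute target.
    ID3DBlob* blob = nullptr;
    D3DCompile(kSource, strlen(kSource), nullptr, nullptr, nullptr,
               "main", "cs_5_0", 0, 0, &blob, nullptr);

    ID3D11ComputeShader* cs = nullptr;
    device->CreateComputeShader(blob->GetBufferPointer(),
                                blob->GetBufferSize(), nullptr, &cs);

    // Bind the shader and launch one 64-thread group per 64 elements.
    context->CSSetShader(cs, nullptr, 0);
    context->Dispatch(1024 / 64, 1, 1);
    return 0;
}
```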
Microsoft’s technology is potentially doubly disruptive, in terms of a possible displacement of the ubiquitous OpenGL and of Nvidia’s CUDA GPU-based number crunching. More from www.geographix.com.
Kelman Technologies unveiled its ‘next generation’ seismic data management toolkit at the Houston SEG. iGlass offers a table and map interface and can be hosted by Kelman from its Houston and Calgary locations. iGlass is built on PPDM 3.8, Oracle SDO and ESRI ArcSDE products. IBM’s Tivoli storage solution is currently supported but Kelman plans to de-couple iGlass from vendor-specific storage.
iGlass stores and manages geophysical data types, including seismic, LIDAR, VSP, micro-seismic and gravity. An iGlass Editor is currently under test. The system supports spatial and text-based search, tape header visualization and work order management. The latter enables FTP-based movement of SEGY files to clients’ sites. Detailed 3D and 2D survey/line information is available for editing. A ‘wizard’ helps users check for duplicate data sets. Role-based audit trails track editing activity and create aliases for lines where critical information has changed. ‘Interest sets’ can be created for ownership with enhanced performance over native PPDM. A future release will include the INT seismic trace viewer for online QC. Connectivity is provided to Fugro’s Trango system and AutoDesk’s mapping tools. Blue Marble is embedded to provide CRS management and conversion. More from www.kelman.com.
Speaking at the 2009 Nokia/Qt Developer Days last month, Midland Valley Exploration (MVE) CTO Mike Krus described how MVE has been using the Qt cross platform GUI toolkit to create advanced geological modeling applications for oil and gas. MVE has been using Qt for seven years to enable 2D and 3D geological modeling and visualization on Unix, Linux and various flavors of Microsoft Windows. MVE makes copious use of Qt technologies including sockets, widgets, graphics, WebKit, designer plug-ins and unit tests.
MVE’s latest product, 4D Move, used Qt from day one—the other packages have now all been ported to a cross platform common code base. 3D development uses Systems in Motion’s Coin/SoQt extension.
Krus demonstrated an impressive range of graphics—conventional 2D/3D modeling, a variety of data views and raster/vector 3D mapping combos. Qt’s WebKit enables JavaScript/C++ interaction for rich web-based functionality. QtConcurrent simplifies multithreaded code parallelization. According to MVE, Qt has provided ‘better mixed-platform integration, a shorter release cycle, and improved quality.’ Krus noted that ‘Qt makes hard things easy.’ More from http://qt.nokia.com/about/contact-us.
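As a flavor of the QtConcurrent pattern, here is a minimal sketch, our illustration rather than MVE code, that parallelizes a per-trace computation across all available cores with no explicit thread management. The Trace type and the gain function are hypothetical.

```cpp
// A minimal QtConcurrent sketch (our illustration, not MVE code): apply
// a function to every element of a container in parallel across a
// thread pool sized to the machine's core count.
#include <QtConcurrentMap>
#include <QVector>
#include <cmath>

// Hypothetical per-trace data; in a real application the samples would
// be loaded from a seismic volume.
struct Trace { QVector<float> samples; };

// The worker applied to each trace; a purely illustrative soft clip.
void applyGain(Trace& t) {
    for (int i = 0; i < t.samples.size(); ++i)
        t.samples[i] = std::tanh(t.samples[i]);
}

int main() {
    QVector<Trace> traces(10000);
    // blockingMap farms the calls out to the thread pool and waits.
    QtConcurrent::blockingMap(traces, applyGain);
    return 0;
}
```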
Exprodat has released V200 of Team-GIS Acreage Analyst with augmented workflow capability, improved analytics and reduced data preparation requirements. A new ArcGIS Desktop tool, Team-GIS Directory, provides ‘consistent, structured and intuitive access to spatial data’ and added functionality for ArcMap users. Finally, Team-GIS Segment Analyst provides a toolkit for building common risk segment maps for use in exploration play fairway analysis.
INT has released GeoToolkit 4.1, providing new graphics capabilities for developers of upstream oil and gas software. The C++ library supports seismic, contour and well displays, scaled displays and hardcopy. GeoToolkit provides cross-platform portability via Trolltech’s Qt.
NextComputing has teamed with Sharp Reflections (an EnVision/Fraunhofer ITWM joint venture) to deliver a hardware and software combo tuned for Fraunhofer’s Pre-Stack Pro software.
Acorn Energy unit Coreworx has teamed with Fluor Corp on an upgrade to its IMpart interface management software. The new solution targets nuclear and offshore oil and gas ‘mega projects.’
Epsis’ ‘TeamBox’ is delivered as a Windows PC pre-loaded with all hardware and software required for an instant collaboration room. The ‘plug-and-play’ package connects with existing equipment such as videoconferencing systems and laptops or PCs, adding functionality for display management and collaboration with remote users.
New Century Software has launched Facility Manager WebEdit, a lightweight pipeline GIS database editing tool. Built on the ESRI ArcGIS Server platform, the application combines web mapping functionality with pipeline data maintenance features.
Fusion Petroleum Technologies has announced the official release of its GeoPRO seismic processing and analysis system which includes Fusion’s proprietary ‘Cimarron’ statics technology.
TerraSpark Geosciences has launched InsightEarth WellPath, a well planning and geosteering solution. The company has also released an API to Insight Earth.
Invensys Operations Management has rolled out Wonderware Intelligence 1.0, a toolset for real time operational data collection. The system connects historian and operational data sources to dashboards such as Microsoft SharePoint, mySAP Enterprise Portal and Wonderware Information Server.
ISS has released BabelFish Sentinel Version 1.7 with a new ‘token-based’ pricing model that supports ‘cost-efficient’ entry-level monitoring. The system received an enthusiastic endorsement from an unnamed senior process engineer at BP’s Australian Kwinana refinery.
PAS has unveiled a new ‘Integrity Disaster Recovery’ solution to accelerate the restart of processing plants following a disaster. The services and software combo provide a centralized recovery mechanism for all automation systems, including legacy systems that are not supported by standard IT tools.
Peloton has announced WellView 9.0 with enhanced schematics, Excel templates for pivot tables and graphs and ‘time tracks’ for date- and time-based data.
PetroVR 7.3 from Caesar Systems sees new resource tracking functionality with ‘operational resource requirements’ describing resource usage in terms of the lifecycle of production and injection wells and facilities. Resource usage tracking has been enhanced to include capital and operating expenses and reallocation of durable resources.
A study by Q Associates on anti-vibration technology under development by startup Green Platform strongly endorses Green’s anti-vibration disk enclosures. Random write performance improvements ranged from 34% to a ‘startling’ 88%.
Troika’s Magma 4.0 seismic data management includes ‘Marlin,’ a spider that locates TIF, SEG-D, RODE and other formats. ‘Sequin’ reads any trace sequential or multiplexed format for loading to Magma.
Speaking at the French ESRI User Group last month, Alexis Mayet introduced Total’s ‘Tr@ce’ crisis management system. Prior to the Tr@ce deployment, crisis management data was dispersed across Total’s assets and required clean-up and harmonization. Data gathering kicked off in 2007 and now the system is operational at 21 subsidiaries along with the Paris HQ. Each site has a dedicated crisis management owner—and identical crisis management procedures are used throughout the company.
Total addresses crisis management at three levels—with a management plan at HQ, an emergency plan for each subsidiary and an intervention plan at each site. Tr@ce is designed to manage emergencies such as natural disasters, blow-outs and pollution incidents.
The system uses ESRI’s ArcIMS map server to publish geographical information and technical data on the web. The system was designed and built by French GIS specialist Mobigis which is also responsible for data validation, maintenance and support. Tr@ce helps Total organize its troubleshooting effort, manage security and provides decision support to management.
Along with cultural data, Tr@ce displays site-specific information on wells, pipelines, facilities and personnel locations. Documents can be attached to a geographical location. Tr@ce uses a central web server housing maps and data. The technology includes an Oracle/ArcSDE database and an internet map server based on ArcIMS 9.2. The system was developed with ESRI’s ArcGIS Web Application Developer Framework (ADF) for Microsoft’s .NET Framework. ADF gives Tr@ce a consistent ‘look and feel’ and assures consistent layer naming and symbology. Next year Total plans to extend the system with the inclusion of environmental data.
In the Q&A, it emerged that the choice of ArcIMS had been a difficult one and that a port to ArcGIS server might be considered in the future. More from www.mobigis.fr/en/.
In the excellent SEG Forum on the Road Ahead, Guus Berkhout (Delft/Delphi) argued that the current race for higher and higher trace counts and exponentially growing data volumes was both unsustainable and unnecessary. Instead of conventional shooting at a regular interval, Berkhout argues for ‘non-coherent’ seismic acquisition. The same number of shots gives better results if shot ‘incoherently.’ Overlapping shots are ‘de-blended’ in processing. The results as presented are spectacular. Further gains are to be had by migrating multiples—‘they are sources too!’ and from considering the underside of reflectors—an ‘inverse data set.’ Berkhout used the analogy of the Energy Internet (a.k.a. Smart Grid) to suggest that swarms of micro robots will be used to deploy sensors on the sea bed—or robot ‘dragonflies’ that will land briefly to record a ‘non-coherent’ shot!
Juan Meza (Lawrence Berkeley National Lab—formerly with ExxonMobil) noted that ‘computing is changing more rapidly today than ever before.’ 2004 saw the end of 15 years of exponential growth: while the number of transistors kept on growing, clock speeds flattened because of power dissipation considerations. Today it is the number of cores per chip that is doubling every 18 months instead of clock frequency. This is having a huge impact on supercomputing, whose whole architecture is about to change. The PC/COTS paradigm is no longer driving HPC, as witnessed by RoadRunner, a petaflop machine introduced in 2008 with 6,000 AMD Opterons and 13,000 IBM Cell BEs. Another petaflop machine, the Cray XT5 Jaguar, has 10,000 Opteron cores. The TOP500 graph shows exponential growth since 1994; by extrapolation, we may see an exaflop machine by 2020.

What does this mean for programming and applications? Another TOP500 metric is concurrency. As core counts rise we may see clock speeds decreasing, with chips handling millions of concurrent threads and offering inter- and intra-chip parallelism. With chips such as the IBM Cell, GPUs, Sun’s Niagara 2 and Intel’s Network Processor, ‘the processor is the new transistor.’

Meza forecasts that, following this period of rapid change, Intel will continue to be a market leader and HPC will stabilize on a new architecture and new programming techniques. The change will be like Exxon’s move from Cray to clusters. MPI will persist because of its installed base. But we will see ‘MPI+’ with the arrival of PGAS languages, CUDA and ‘auto-tuning*,’ programs that write programs to search across an ‘optimum space.’ Long term, we can expect Peter Kogge’s DARPA ‘Exascale’ program to pay off. But this will imply that problems such as the flattening off of current architectures and power consumption will need to be solved. Meza questioned whether 100MW of power for a billion node machine is possible—noting that as PC market investment declines, embedded processor investment is on the up. HPC salvation may come from iPhone/MP3 player technology—to minimize power consumption. More from https://hpcrd.lbl.gov/html/FTG.html.
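For readers wondering what ‘MPI+’ looks like in practice, here is a minimal hybrid sketch of our own (not from Meza’s talk): MPI provides the inter-node message passing while OpenMP threads exploit the cores within each node.

```cpp
// A minimal 'MPI+X' hybrid sketch (ours, not from the talk): MPI passes
// messages between nodes while OpenMP threads use the cores within each
// node. Build with e.g. 'mpicxx -fopenmp hybrid.cpp'.
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Each MPI rank owns one slice of the problem...
    std::vector<double> slice(1000000, 1.0);
    double local = 0.0;

    // ...and sums its slice using every core on the node.
    #pragma omp parallel for reduction(+:local)
    for (long i = 0; i < (long)slice.size(); ++i)
        local += slice[i];

    // Inter-node parallelism: combine per-node partial sums on rank 0.
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```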
Tim Keho of Saudi Aramco’s EXPEC R&D unit outlined a ‘new era’ for land seismic—addressing the near surface challenge. Exploring in Saudi Arabia involves low relief structures beneath near-surface karsts (up to 600 m), sand dunes and scarps. These are ‘hard, if not impossible to model.’ Elsewhere there are sand dunes up to 500 feet in height. Traditional solutions to static correction are passé—they don’t work. It is easy to ‘lose’ low relief structures in near surface ‘noise.’ Current approaches include fast autopickers and imaging—but ‘what can you image in the near surface?’ Keho claimed that Aramco’s new solution, which he is to present at next year’s SEG, ‘turns the problem around and treats the whole near surface issue as an imaging problem.’ Microgravity was an also-ran. Aramco’s most challenging problem is not PSDM but ‘statics,’ which now must be treated as an imaging issue. A similar approach was described recently by the University of Houston’s Arthur Weglein in the Houston Chronicle**!
Felix Herrmann (UBC Seismic Lab for Imaging and Modeling—SLIM***) agreed with Berkhout that acquisition costs and processing turnaround times are impediments in the modern seismic workflow. Moreover, ‘Moore’s law is coming to an end, we can no longer compute ourselves out of this mess.’ Today’s sampling is too pessimistic. Acquisition can be optimized with sparser shooting and ‘filling in the blanks’ by adjusting sampling to subsurface complexity. New maths, the Johnson-Lindenstrauss lemma**** and ‘incoherent’ random simultaneous source acquisition mean that we are on the cusp of a breakthrough in seismic imaging. Sparse is cheap and promises faster turnaround.
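For the curious, the Johnson-Lindenstrauss result can be demonstrated in a few lines of code. The toy sketch below (ours, not SLIM’s) shows that a random Gaussian projection approximately preserves distances between high-dimensional vectors, the property that underpins recovering densely sampled wavefields from sparse, incoherent measurements.

```cpp
// A toy illustration (ours, not SLIM code) of the Johnson-Lindenstrauss
// effect: projecting a high-dimensional vector through a random Gaussian
// matrix approximately preserves its length.
#include <cstdio>
#include <cmath>
#include <random>
#include <vector>

int main() {
    const int n = 4096, k = 256;  // ambient and reduced dimensions
    std::mt19937 rng(42);
    std::normal_distribution<double> g(0.0, 1.0);

    // The difference between two arbitrary high-dimensional vectors.
    std::vector<double> d(n);
    for (int i = 0; i < n; ++i) d[i] = g(rng) - g(rng);

    double d2 = 0.0;
    for (int i = 0; i < n; ++i) d2 += d[i] * d[i];

    // Project through a k x n Gaussian matrix, scaled by 1/sqrt(k) so
    // that squared distances are preserved in expectation.
    double p2 = 0.0;
    for (int j = 0; j < k; ++j) {
        double row = 0.0;
        for (int i = 0; i < n; ++i) row += g(rng) * d[i];
        p2 += row * row / k;
    }
    // The two norms typically agree to within a few percent.
    std::printf("original %.3f projected %.3f\n", std::sqrt(d2), std::sqrt(p2));
    return 0;
}
```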
Our virtual ‘talk of the show’ award goes to Mark Thomson (StatoilHydro) for his presentation on two decades of ocean bottom seismic experimentation. StatoilHydro has acquired 62 ocean bottom (OB) surveys since 1989. Acquisition began on Gullfaks and Statfjord, with increasingly technology intensive techniques—3D, 4D repeat surveys, shear wave sources and most recently focused seismic monitoring with fiber optics on the seabed. A survey on the Tommeliten field demonstrated the feasibility of imaging through a gas cloud with shear wave data. Surveys have identified wells located in the wrong reservoir compartment, pressure build-up and non-sealing faults. The ROI for the technique is ‘huge.’ Data volumes have risen steadily over the years, but processing time has stayed constant at about one year per survey. Focused seismic monitoring uses Optoplan’s sensors and is a facet of Norway’s push to ‘Integrated Operations,’ allowing for near real time use of the information. Ocean bottom acquisition is moving towards densely sampled seismic ‘carpets’—a digital fiber optic oilfield, a seismic ‘cloud’ of autonomous nodes. Ocean bottom techniques have informed conventional acquisition such as wide azimuth and dual streamer acquisition. Data from the seismic ‘cloud’ is now transferred to the office in hours and routed to stakeholders for QC and analysis.
Following Charles MacFarlane’s hugely entertaining jacket slicing act on the Schlumberger booth, Alex Ross provided an update on GeoFrame and Petrel interaction, providing insights into Schlumberger’s differentiation of the two toolsets. GeoFrame targets very big, multi-data type projects, illustrated by a 15,000 sq. km. project with 180GB of seismic accessible from a high-end workstation. A new ‘Send to Petrel’ option produces a Petrel ‘.zgy’ file of seismic data along with a zip file of the interpretation. This can be picked up as a complete project in Petrel. Ross concluded that GeoFrame has ‘many years before it,’ and Schlumberger is still adding new features. Integration with Petrel is easy—there is a reduced risk of changes in formats and units. Ross recommends ‘staying with a single vendor solution.’
Most other vendors would no doubt concur—although they may differ as to which single vendor to choose. SMT was showing the new Kingdom Geomodeling option leveraging JOA’s Jewel Suite patented gridding technology. This claims superiority to [Petrel’s] pillar gridding with better cell sizing, geometry and distribution. SMT’s new workflow includes interpretation in Kingdom, round-tripping to the Geomodeler and, optionally, via Jewel Suite into the simulator.
On the Petrosys booth Paul Jones provided an entertaining talk on ‘Maps in the age of Twitter.’ Jones showed how OGC-compliant web map services (WMS) can be used to produce a wide variety of maps on multiple devices. Petrosys offers a ‘publish to WMS’ button to enable such functionality which allows for industry specific maps to be mashed-up with public domain data such as OneGeology, the Gaia web map system, the Canadian Geoscience Knowledge Network or with commercial services such as Valtus.
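For the record, a published WMS layer is retrieved with a simple HTTP ‘GetMap’ request, so any OGC-aware client (or indeed a browser) can consume the mash-up. The example below follows the OGC WMS 1.1.1 specification; the server address and layer names are our own hypothetical placeholders, wrapped across lines for readability.

```
http://maps.example.com/wms?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap
  &LAYERS=acreage,onegeology_bedrock&STYLES=
  &SRS=EPSG:4326&BBOX=-95.5,28.0,-88.0,31.5
  &WIDTH=1024&HEIGHT=512&FORMAT=image/png
```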
Chris Liner (University of Houston) gave an insightful presentation on the geological sequestration of CO2. Worldwide energy consumption is measured in ‘quads’ (10^15 BTU). In 2005 some 436 quads were consumed, producing 27 gigatonnes of CO2. The forecast for 2030 is 680 quads—the number tracks population growth—with a concomitant rise in CO2 emissions to 43 GT. To put this in context, the CO2 vented in 2005 (roughly 14 trillion cubic meters at atmospheric conditions) is equal to around four times the world’s annual natural gas production by volume! There are therefore ‘huge economic/infrastructure costs to carbon capture and storage (CCS).’ The current Norwegian Sleipner CCS test is capturing around 1 million tonnes/year.
Henry Posamentier (Chevron) demonstrated the use of Paradigm’s Gocad to study the seismic expression of depositional systems and to predict lithology. Pattern recognition is the key—as Posamentier showed with spectacular comparisons of the Albertan Cretaceous and Mississippi flood plain aerial photos—‘taking Vail into the 21st Century.’ An impressive seismic firework display.
Bill Fahmy described ExxonMobil’s direct hydrocarbon indicator (DHI) best practices, noting that a) there are no silver bullets and b) an AVO anomaly does not necessarily mean hydrocarbons are there. DHI is all about the application of technology and fundamental scientific thought. Despite the caveats, the technique is hugely successful. Worldwide, prospects without DHI show a 30% probability of success; using the technique this jumps to 50%! ExxonMobil established internal guidelines for DHI in 1997 and these are continually revised in the light of experience. The company does its own controlled amplitude and phase processing with bandwidth balancing to compensate for offset-dependent wavelet changes. Prospects are evaluated with a DHI quality vs. confidence matrix, calibrated against analogs and historical data.
* links/0911_1.
** links/0911_2a and 2b.
*** links/0911_3.
**** links/0911_4.
This article is an abstract from The Data Room’s Technology Watch from the 2009 SEG Convention. More from www.oilit.com/tech.
In a webinar this month, Industrial Defender’s (ID) Chief Security Officer Andrew Ginter addressed the concerns that arise when connecting process control systems to enterprise IT.
Corporate IT is faced with the management of thousands of desktops, servers and a potentially large number of applications for ERP, CRM and more. In this context, standardization is everything, ‘standard is better than better.’ Security dovetails with the standard approach and has led to mature solutions for virus protection, intrusion detection and patch management. Corporate IT’s pecking order starts with confidentiality, next comes integrity and finally availability—‘CIA.’ It is preferable to shut down an e-commerce system than allow it to expose client credit card information.
Control systems are different. They leverage IT too, but the endgame is the management of large, dangerous physical processes like power generation, pharmaceuticals and refineries. Such systems are at risk of explosion, loss of life—and there are laws to control operations. In a sense the CIA paradigm is inverted. Control system availability is critical to safety and comes first in system design. Confidentiality is no longer the number one concern, even though trade secrets need protecting.
There has been a long history of attempts to bring the two worlds together. Initially the systems were kept apart, but this is no longer an option. It is now desirable to couple enterprise applications like SAP with the plant. But problems arise—inside the plant there is a lot of older hardware, software and unpatched stuff. Why? Safety is costly to achieve and certify. Once a system is certified, you don’t mess with it. Four levels of safety may be good news, but they make change hard to achieve. Moreover, plant managers are very conservative. They have developed a sophisticated understanding of risks and will reluctantly accept, say, a new screen—but shy away from a big new application. Even patch management is problematical as the safety situation after a patch may change. Testing on operations networks requires vendor cooperation and is carried out on dedicated test rigs. Even then, there are surprises. A patch once shut down the whole plant!
Port scan vulnerability tests and anti-virus software are problematical as they can slow down and break mission critical components. Many vendors do not support anti-virus. Reluctance to change means that plants run some antiquated communications protocols—many in plain text and hence vulnerable to sniffing and spoofing. Password sharing is rife and even essential to avoid undue delay in logging on to up to a dozen systems at crew change.
What’s the answer to security at the corporate IT/control system frontier? You need to pick and choose and be careful about what is deployed. Help is available—from a few specialists—including Industrial Defender of course. More from sales@industrialdefender.com.
Joshua Fredberg has joined Ansys as VP marketing. Fredberg was formerly VP of Parametric Technology Corporation.
Martin Durbin is the new Executive VP of Government Affairs of the American Petroleum Institute (API).
Rand unit ImaginIT is now a reseller for the AutoCAD Piping and Instrumentation Diagram (P&ID) package in Canada.
Jeff Heggedahl is now president and CEO of Avista Corp unit Advantage IQ. Heggedahl hails from Scantron.
Ken Blott has joined Caesar Systems as Chief Strategy Officer. Blott was previously VP at Shell’s Unconventional Resource Energy unit (SURE).
The University of Texas at Austin is getting a three-year $1 million grant from the Department of Energy to create a ‘skilled workforce’ for the emerging carbon capture and storage industry.
Seismic Image Processing has joined CDA as an associate member.
Jean-Georges Malcor, currently Senior VP at Thales, is joining CGGVeritas and will be CEO next July. Robert Brunck continues as Chairman of the Board.
Bill Barrett from the US Environmental Protection Agency has been designated leader for the CO-LaN Methods & Tools SIG. The Vishwamitra Research Institute of Illinois is now a CO-LaN member.
Keith Garza (ConocoPhillips) has joined the CygNet Board of Customer Advisors.
dGB Earth Sciences has announced the ‘Open Seismic Repository’ of free, public datasets at opendtect.org/osr.
Lord Jenkin of Roding has been appointed the first President of the UK-based Energy Industries Council (EIC).
Steve Hunt has joined Ikon Science as VP EAME. Hunt hails from Offshore Hydrocarbon Mapping. Ikon has also opened new offices in Kuala Lumpur and has named Brett Farquhar as Asia Pacific business development manager.
ION Geophysical is trialing ‘crowd sourcing’ via a Virtual Trade Fair platform—blog.iongeo.com/?p=874.
McLaren Software has teamed with eSolutions for sales and support of Enterprise Engineer in the Middle East.
Peter Chouquette has joined Moblize as Director of Global Product Management.
Department of Homeland Security secretary Janet Napolitano has just opened the new National Cybersecurity and Communications Integration Center. The Center is an ‘integrated incident response facility’ for cyber infrastructure risks.
OSIsoft has announced the opening of a sales and support office in Moscow.
The Research Partnership to Secure Energy for America (RPSEA) has elected Iraj Ershaghi to its Board of Directors.
The Society of Petroleum Engineers (SPE) and the Society of Exploration Geophysicists (SEG) have signed an ‘intersociety cooperation’ agreement ‘to work more closely together on the operation of conferences,’ although the release is unclear as to a possible combination of the societies’ annual tradeshows.
Rick Luke has been appointed Chief Financial Officer of Seismic Equipment Solutions. He comes from WellDynamics. SES has also named Jose Medina Senior VP, Technical Solutions and Gustavo Solorzano as VP, New Business Ventures.
SCM has released ‘Tips for writing [Petrel] Workflows*’.
Brian Grainger is now VP worldwide sales with Spectra Logic.
Zurich-based Spectraseis has launched a Joint Industry Project for low frequency seismic R&D. Participants include Cairn, Chevron, ExxonMobil, GDF Suez and Pemex.
Arne Helland is to resign his position as CFO of TGS-Nopec next May to pursue other opportunities.
Norwegian AGR Group has closed a $5.6 million transaction, selling its 50% share of Horton Deepwater Development Systems to Wison Heavy Industries.
FMC Technologies has completed its take-over of Stavanger-based Multi Phase Meters AS.
Mark Smith, chairman and CEO of pipeline mapping specialist Geospatial Holdings, has converted $2 million of capital invested in the company into equity, at a price of $1.00 per share.
Halliburton has acquired Geo-Logic Systems of Boulder, Colorado. Geo-Logic Systems provides structural interpretation, analysis and restoration software for complex geologic environments.
ION Geophysical Corp. is to receive a $175 million cash injection from China’s geophysical behemoth BGP. The deal heralds a new joint venture to provide land seismic products worldwide, combining ION’s land equipment business with BGP’s seismic operations expertise and a land recording system currently under development. BGP will hold a 51% interest in the joint venture, ION the remaining 49%; BGP also takes a 17% stake in ION itself. The deal also included $40 million of bridge financing arranged by BGP. After closing, ION expects to have over $100 million in liquidity from cash and spare capacity on its revolving line of credit.
Kongsberg Gruppen ASA has entered into an agreement to acquire the assets and business of Havtroll AS and Havtroll Teknikk AS. The companies will belong to Kongsberg Oil & Gas Technologies (KOGT) and will be integrated into Seaflex, an existing KOGT subsidiary.
Merrick Systems is to spin off its One Virtual Source (OVS) software into a new company, OVS Group. The newly formed OVS Group, headed by Jose Alvarez and Cheo Alvarez, comprises the original OVS team at Merrick. The spin-off will allow Merrick to refocus on its production software product line and its RFID-based asset tracking system. OVS is an optimization framework for petroleum engineering workflows and operations management.
Superior Well Services has priced its public offering of 6.0 million shares of its common stock at $10.50 per share. The Company intends to use the net proceeds from the offering to repay debt.
Chevron Australia has awarded Fastwave Communications a contract for the provision of a water quality monitoring system on its Gorgon offshore LNG megaproject. Satellite communications are to be provided by Iridium. The system will provide environmental monitoring of dredging and spoil operations near the gas processing plant on Barrow Island.
Fastwave’s system consists of underwater instrumentation modules positioned on the sea floor around the dredging and spoil disposal sites. Each subsea module contains turbidity sensors, data loggers, an Iridium short-burst data (SBD) modem and rechargeable battery packs. These are connected to small moored buoys which relay the data packets through the Iridium satellites to an environmental monitoring system onshore.
Fastwave Director Nick Daws said, ‘We designed our underwater monitoring systems to take advantage of Iridium’s global coverage, high network quality and low-latency, two-way SBD links, providing a robust solution capable of working reliably under extreme environmental conditions.’
All system instrumentation is housed in waterproof, hardened subsea capsules rather than in surface data buoys, providing extra protection in the cyclone-prone region. The modules are designed to function at depths up to 50 meters with endurance of at least four months between battery recharges. More from www.fastwave.com.au.
Oil and gas chemicals specialist Tetra Technologies is to deploy Merrick System’s ‘Diamond’ RFID tags to track its inventory during offshore operations. Tetra’s EPIC Divers & Marine and EOT business will use the tags to track equipment used in oil rig decommissioning and rework. Tags are used on a wide array of surface equipment including compressors, heavy tools, equipment skids, pressure chambers and containers. Tags are used in conjunction with a wireless handheld RFID scanning device to update a central equipment database.
Tetra project manager Kris Howard said, ‘In testing and evaluating different RFID tags for use in our field operations, Merrick’s was the only one that held up to the adverse field conditions and rework process that our equipment is exposed to. Our testing included vibration, sandblasting, painting and exposing the tags to subsea pressures. We also covered the tags with layers of tape and even cement to test the impact on scanning distance and were pleased to see that the Diamond tag performed satisfactorily under all conditions.’
Merrick Systems CEO Kemal Farid added, ‘Our low frequency passive tags have been gaining acceptance as drilling operators and equipment manufacturers use them to track high-value and high-maintenance downhole and surface components both onshore and offshore. Tracking asset location, use history, inspection and maintenance using the Diamond Tags helps companies significantly save operational time and materials, and manage their assets effectively.’ More from www.merricksystems.com.
At an ‘Executive Breakfast’ during last month’s Society of Exploration Geophysicists Convention, Schlumberger rolled out its Ocean Store, described as a ‘collaboration space’ for Ocean developers to market their Petrel plug-ins. The official store opening will be at the SIS Global Forum in London next May; meanwhile Schlumberger is soliciting contributions from plug-in developers to constitute a ‘showcase’ of independent software vendors at the Forum.
Another development in the Ocean 2009.2 release will be of interest to data managers who can now create data management workflows such that Petrel projects can be opened and saved programmatically—with Petrel running in ‘silent’ mode. Schlumberger claims 20,000 plus Petrel licenses in its global install base. The Ocean development environment is used by ‘4 of 5’ super-majors. More from www.slb.com.
OpenSpirit had a field day at the SEG with the announcement of four new takers for its vendor-independent data integration platform. Midland Valley Exploration has joined OpenSpirit’s Business Partner Program to OpenSpirit-enable future releases of its Move package. DownUnder GeoSolutions is to add OpenSpirit connectivity to its quantitative interpretation toolset.
Austin GeoModeling’s Recon 3D is also to leverage the framework. Finally, Denver-based data management specialist EnergyIQ has joined the Partner Program and will OpenSpirit-enable its flagship EIQ Loader—providing access to vendor data including the new IHS EnerdeqML global data feed. More from www.openspirit.com.
The Australian Government has awarded 3D-GEO a contract to build a geological ‘skeleton’ of the Gippsland Basin. The model will be ‘fleshed out’ by GeoScience Victoria geologists and used in natural resource exploration. The award is a component of a $5.2 million, four-year initiative.
ESM has formed a strategic alliance with Cleo Communications in a ‘full-service’ EDI offering to the oil and gas vertical.
Geotrace has deployed Isilon IQ’s network attached storage to power its seismic data processing effort. Isilon’s OneFS operating system has created a single global namespace for Unix and Linux-based clusters, reducing management overhead.
Dubai, UAE-based Tebodin Middle East, a consultancy and engineering firm, has selected the AVEVA Plant portfolio, including AVEVA PDMS and AVEVA Instrumentation, for its operations in the region.
BP has selected Stingray Geophysical to conduct two Life of Field Seismic (LoFS) feasibility studies on its UK Clair and Schiehallion developments.
Statoil has signed a ‘long term’ deal with CapRock Communications for the provision of satellite communications to a new drillship operating in the Gulf of Mexico. CapRock’s managed VSAT solution will provide data and voice comms to Statoil’s offshore personnel.
Shell Canada deployed CGGVeritas’ ‘SeisMovie’ 4D monitoring at its Peace River, Alberta heavy oil project. SeisMovie sources were activated over a three month period while buried receiver arrays recorded up to a terabyte of data per day.
Dresser Wayne announces that its iX Pay secure payment upgrade kits have been approved for deployment at Shell retailers in the US.
TGI, Colombia’s largest natural gas transporter, has selected Energy Solutions International’s (ESI) PipelineTransporter (PT) Gas Suite to support its gas management system of 3,700 km of natural gas pipeline and nearly 100 transportation contracts. PT will be integrated with TGI’s SCADA system, its SAP ERP and ESI’s PipelineManager real-time hydraulic modeling application.
Expro Group reports a successful ‘Well Cast’ from a Gulf of Mexico producing well. Expro’s ViewMax sideview camera was used to assist in a high-profile fishing operation—offering Houston-based completion engineers live, full motion video of the remote operation.
IHS is to bundle Labrador Technologies’ (LTI) eTriever web application with its Canadian Oil and Gas Critical Information offering. IHS has the exclusive ability to offer eTriever combined with any Oil and Gas data on a worldwide basis.
Royal Dutch Shell’s downstream unit is to deploy Invensys’ Wonderware IntelaTrac mobile workforce and decision-support solution as a component of Shell’s ‘Ensure Safe Production’ initiative—now rolling out at 29 of its global refineries.
The US Department of Interior’s Minerals Management Service has awarded a 5 year contract extension to TGS-Nopec for well log data management services.
Repsol YPF has purchased ABB’s CpmPlus Smart Client to improve operational efficiency at its Lube Oil Blending plant at the La Plata Refinery, Argentina.
Schlumberger has teamed with Rock Deformation Research to provide structural analysis and fault seal studies in Petrel, the partner plug-in to be released as a Petrel module.
Tieto has designed and built an automated production management system based on its Energy Components for Lukoil Overseas.
The US Department of the Interior’s Minerals Management Service (MMS) went live earlier this year with a new electronic system for invoice payment and reporting. MMS CTO Robert Prael described the new system’s functionality and the MMS’ technology stack to Oil IT Journal.
‘Our internal system is PeopleSoft which rides on an Oracle database. We use this to process companies’ monthly reports and payments. The MMS acts as a ‘pass through’ entity directing royalty revenue to stakeholders. We process reports and payments, providing statistics and audits. The new system provides companies with a simple invoice, along with supporting documents that can be analyzed using Access or Excel. Everyone agrees that this has greatly improved the process and we have saved on reams of paper.’
‘Before, invoices and associated reports were routinely printed and mailed to each industry reporter—often shipping boxes of paper files! Companies would annotate the paperwork and return it to us for further review. With the new system, all this information can be viewed online. The new ‘eSOA’ web site allows industry reporters immediate access to their statements and allows them to make annotations electronically. The system then sends the updates directly to the MMS servicing accountant for review.’
‘Most of our incoming production reporting processes are already on EDI. With the new portal, we are trying to get all outgoing processes electronic. We have been using Hyperion Brio Portal for years to provide industry with our reports and we are very satisfied with the product.’
‘Previously, a missing report, payment, or unpaid bill was sent out quarterly on paper. Now users log on and find out for themselves—this is real time data.’
‘Our data exchange format is PeopleSoft SPF. This can be read with a freely available viewer, you don’t need the software to use it. On the payment side, this is 99% electronic and we are pushing for more. E-Payment is safer than cutting checks. But it is expensive for a small ‘mom & pop’ shop. For these we are looking at Treasury’s www.pay.gov system which will be free. This is already being used in the Gulf of Mexico. The onshore situation is a bit different with lots of small companies paying rent.’ More from www.mms.gov.
Oracle’s ‘Digital Oil Field’ event heard from a triad of upstream standards bodies. PPDM CEO Trudy Curtis proposed a standards ‘time line’—running from ‘individual’ standards circa 1980 to ‘corporate’ in 2009. The dream of industry level standardization has yet to be realized—and ‘global’ standards are even further in the future. Concerning PPDM’s own current activity, Curtis highlighted the ‘What is a Well’ initiative and the ubiquitous PPDM 3.8 data model standard.
Energistics’ VP Business Development Jerry Hubbard enumerated the standards body’s flagship ‘ML’ standards: PRODML, WITSML and the emerging RESQML reservoir modeling protocol. Hubbard also proposed a division of labor on the development of geophysical standards, with the Society of Exploration Geophysicists (SEG) covering acquisition and processing and Energistics taking over on exchange standards for processed and interpreted data.
Ben Zoghi, Director of the Texas A&M-based RFID Consortium, listed a bewildering number of ‘supported’ standards impacting RFID including EPC-UHF, Energistics, DASH7 and ISO. Current RFIDC members include Dow and BP but Zoghi expects more majors to join in 2010. The RFIDC has a ‘live lab’ at the Brayton Fire Training Field, College Station, Texas—also known as ‘Disaster City.’ More from www.oilit.com/links/0911_9.
GE Oil & Gas’ Pipeline Solutions business has been awarded a ‘multi-million,’ six-year contract to supply Qatargas with pipeline integrity management services to enhance the monitoring and maintenance of the company’s liquefied natural gas (LNG) network. GE will build and deploy a custom pipeline integrity management system (PIMS) and will supply manuals and procedures for in-line inspection (ILI), software automation and engineering assessments.
Sheikh Ahmed Al Thani, Qatargas COO Engineering & Ventures, said, ‘A critical success factor in achieving our goal of a world-class LNG facility is the continued assurance of availability and safe operation of the pipeline networks from our offshore production sites to our onshore treatment facilities.’
GE Oil & Gas’ PII Pipeline Solutions offering will be tuned to Qatargas’ requirements at GE’s centers of excellence in Cramlington, UK (ILI) and Mission, Kansas (software management). The new integrity management contract extends coverage to additional offshore product lines and will leverage PII Pipeline Solutions’ extensive wet gas experience. Previously, GE Oil & Gas signed an 18-year customer service agreement to support Qatargas’ operations. Qatargas shareholders include QPC, ExxonMobil, Total, ConocoPhillips, Shell and Idemitsu. More from www.geoilandgas.com.
A post implementation report by Deloitte Australia on the deployment of Aveva Net on Woodside’s Angel platform provides an enthusiastic endorsement of the toolset. Angel, Woodside’s first unmanned platform, was commissioned in 2005. Today, there are over 2,000 Aveva Net users in the company. The toolset supports operations and maintenance, engineering design, training and in-field investigations and verification. Aveva Net acts as a portal for engineering information held in systems such as SAP, 3D CAD and intelligent P&ID systems.
Before Aveva, Woodside had over 250 ‘disparate’ applications and information sources. Now these have been trimmed down to 18 with a significant reduction in licensing fees, maintenance, training and support. Woodside’s Aveva Net deployment is known as the Asset Lifecycle Information System (ALIS) and is part of an established methodology that spans construction and handover. ALIS saved Woodside over $1.5m in handover costs on Angel alone. Read the full Deloitte report on www.oilit.com/links/0911_6.
Invensys’ Operations Management (IOM) division has announced InFusion SCADA 2.0 (IFS2), a hardware and software offering for oil and gas and other process industries. IFS2 distills Invensys’ 40-plus years of SCADA heritage and includes advanced integration, interface and control capabilities.
IOM SCADA products manager Chris Smith said, ‘Robust, reliable monitoring of remote operations can mean reduced downtime, efficient maintenance and improved security. The new SCADA software and remote terminal units (RTU) deliver monitoring, supervision and maintainability, along with simplified interaction and management.’
IFS2’s software components let SCADA developers create reusable objects and templates, manage HMI displays, assure data quality and handle equipment maintenance tags. Engineers can build new applications from repository objects, enforcing company standards as they go.
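Invensys has not published InFusion internals, but the template concept is generic enough to sketch. The following illustrative Python fragment, with entirely hypothetical tag names and alarm limits, shows how a reusable ‘well site’ template can stamp out standard, convention-compliant point groups for each new site.

```python
# Generic illustration of SCADA object templates (not InFusion code):
# a reusable 'well site' template instantiates standard tags so that
# corporate naming and alarm conventions are enforced automatically.
from dataclasses import dataclass, field

@dataclass
class Tag:
    name: str
    units: str
    alarm_high: float

@dataclass
class WellSiteTemplate:
    # Every instance of the template carries the same standard tag set.
    tags: list = field(default_factory=lambda: [
        Tag("TUBING_PRESSURE", "psi", 3000.0),
        Tag("CASING_PRESSURE", "psi", 2500.0),
        Tag("FLOW_RATE", "mcf/d", 1500.0),
    ])

    def instantiate(self, site_name: str) -> dict:
        # Prefix every standard tag with the site name, so each new
        # application object follows the same naming convention.
        return {f"{site_name}.{t.name}": t for t in self.tags}

points = WellSiteTemplate().instantiate("WELL_042")
print(sorted(points))
```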
A new SCD2200 RTU targets complex applications requiring central stations and redundancy, such as upstream oil and gas well monitoring, wet gas, oil and gas transportation facilities and pipelines, and high-level well control. The new system will be available in January 2010. More on Invensys in oil and gas in next month’s Oil IT Journal and from http://ips.invensys.com.
CygNet has extended its SCADA software offering with CygNet for Pipeline (C4P). C4P is claimed to be the ‘first packaged vertical solution for pipeline operators’ and includes implementation templates, native device connectors and applications for standardized gas operations. Darin Molone, SCADA manager at Atlas Pipeline, said, ‘With CygNet, Atlas was up and running with a new enterprise SCADA solution in about three weeks. Now I can tune the system myself as often as I want, usually without IT involvement.’
CygNet VP Steve Robb added, ‘Pipeline operators have been forced to patch together information from different vendors, hindering their efficiency and responsiveness. C4P gives them a standardized way to deliver a rich information landscape and eliminate the risks associated with one-off, ‘big bang’ software projects.’
Robb told Oil IT Journal, ‘C4P extends the user base of our Enterprise Operations Platform (EOP) to gas pipeline operators. The solution offers out-of-the-box connectivity to SCADA devices and minimizes customization. EOP provides unified data management with a specific schema for gas pipelines and a gateway to leading enterprise service buses including TIBCO, BEA and Oracle. We have also been working with PODS and the University of Houston to leverage their data standards.’ More from www.cygnet.com.
A new paper* by Shahab Mohaghegh, Professor of Petroleum and Natural Gas Engineering at West Virginia University and president of Intelligent Solutions (IS), provides an update on the application of artificial intelligence and data mining to real-time reservoir modeling and management. IS’ surrogate reservoir model (SRM) is a lightweight characterization of a field that enables compute-intensive techniques such as neural networks and fuzzy logic to run in real time.
SRM has been successfully field-tested on a giant Middle East oil field. Prior to SRM, a single simulator run on a million-cell full field model took some 10 hours on a cluster of 12 parallel CPUs. The SRM approach sped up modeling to the point that tens of millions of meaningful numerical experiments could be performed, ‘comprehensively exploring’ the reservoir model’s solution space and leading to a new field development strategy. The field’s operator has now decided to apply the technique to other assets.
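The paper does not include code, but the surrogate workflow can be sketched in outline: train a cheap statistical proxy on a limited number of expensive simulator runs, then use the proxy to screen scenarios en masse. The Python fragment below is a minimal illustration along these lines; the synthetic function standing in for the simulator, the network size and all variable names are our inventions, not IS’ methodology.

```python
# Illustrative sketch only: a surrogate model as a fast proxy trained
# on a limited set of full simulator runs. All names and data are
# invented; Intelligent Solutions' actual SRM method is proprietary.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Stage 1: a few hundred expensive full-field simulator runs supply the
# training set. Inputs stand in for development parameters (spacing,
# rates, completions); the output for a response such as cumulative
# production. A synthetic function plays the simulator here.
X_train = rng.uniform(0.0, 1.0, size=(300, 6))
y_train = np.sin(X_train).sum(axis=1) + 0.01 * rng.standard_normal(300)

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
surrogate.fit(X_train, y_train)

# Stage 2: the trained proxy is cheap enough to screen millions of
# candidate scenarios -- the 'numerical experiments' of the paper.
candidates = rng.uniform(0.0, 1.0, size=(1_000_000, 6))
scores = surrogate.predict(candidates)
print("Best candidate scenario:", candidates[scores.argmax()])
```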
Currently, SRM is a service offering, but a new ‘Intelligent Surrogate Modeling and Analysis’ (ISMA) software package is scheduled for release in 2010.
The paper also outlines ‘top-down reservoir modeling’ (TDRM), which Mohaghegh describes as ‘the hottest of our current workflows.’ TDRM builds its model of the reservoir from production data rather than via the conventional ‘bottom-up’ approach of first building a geological model. More from www.IntelligentSolutionsInc.com.
* www.oilit.com/papers/IntelligentSolutions.pdf.
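TDRM itself is proprietary, but as a loose analogy to modeling ‘from production data up,’ the sketch below fits an Arps hyperbolic decline curve directly to (synthetic) production history, with no geological model involved. Function names, data and parameters are invented for illustration.

```python
# Loose analogy to 'top-down' modeling (not IS' TDRM): infer behavior
# directly from production history by fitting an Arps decline curve.
import numpy as np
from scipy.optimize import curve_fit

def arps_hyperbolic(t, qi, di, b):
    """Rate q(t) = qi / (1 + b * di * t) ** (1 / b)."""
    return qi / (1.0 + b * di * t) ** (1.0 / b)

t = np.arange(60.0)  # months on production
rng = np.random.default_rng(1)
# Synthetic observed rates: a true decline plus 3% measurement noise.
q_obs = arps_hyperbolic(t, 1000.0, 0.08, 0.9) * (
    1.0 + 0.03 * rng.standard_normal(t.size))

(qi, di, b), _ = curve_fit(arps_hyperbolic, t, q_obs,
                           p0=(800.0, 0.05, 0.5), maxfev=10000)
print(f"Fitted qi={qi:.0f} bbl/d, di={di:.3f}/month, b={b:.2f}")
```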
Kiersted Systems (KS) presented its File Review technology and Early Case Assessment system at the IQPC e-Discovery in the Energy Industry conference in Houston this month. KS VP Linda Gordon noted, ‘There is great interest in the energy sector in litigation trends and the constraints faced by electronic discovery teams. Data volumes and tight timeframes are challenging.’
KS’ Dynamic Case Assessment (DCA) identifies data collection issues early in a project, provides quick-look early case assessment throughout the e-discovery process and supports strategy changes as new factors come to light. DCA provides graphical representations of logical groupings of information, letting operators decide whether and how to respond to pre-litigation demands and, once litigation has begun, whether to attempt early resolution.
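Kiersted does not disclose its algorithms. As a generic illustration of how documents can be gathered into ‘logical groupings,’ the sketch below clusters a handful of invented documents by textual similarity using TF-IDF and k-means; it is not DCA’s method.

```python
# Generic illustration of grouping documents by similarity (not
# Kiersted's algorithm): TF-IDF vectors clustered with k-means.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "drilling contract amendment for lease 1042",
    "lease 1042 drilling contract terms",
    "quarterly royalty payment statement",
    "royalty payment dispute correspondence",
]
X = TfidfVectorizer().fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for label, doc in sorted(zip(labels, docs)):
    print(label, doc)  # similar documents land in the same cluster
```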
A companion tool, K4 Rapid Review, improves the speed and accuracy of the expensive review phase with an enhanced workflow based on native file formats. KS clients include ExxonMobil and Shell Oil. More from www.kiersted.com.
Speaking at the Society of Petroleum Engineers’ Annual Technical Conference and Exhibition last month, John Blaymires described how Hess has built its Asset Management Platform (AMP) around Houston-based 3GIG’s Prospect Director 2.0 application. The application, currently in pilot, builds ‘information packages’ for field development plans across the Hess global asset base. Prospect Director connects disparate upstream technical data and workflows into a ‘story line’ for an asset development plan, and global portfolio information can be collected into a corporate knowledge base.
3GIG released Prospect Director 2.0 last year (OITJ May 2008). The web-based application supports asset team workflows, business and decision processes, lifecycle-based data, information and knowledge management, well and well work planning, AFE* and inventory management. Also last year, 3GIG signed a strategic alliance with Knowledge Reservoir to offer business process and well lifecycle management services leveraging Prospect Director. Hess is also a Knowledge Reservoir client. More from www.3-gig.com.
* Authorization for expenditure.