May 2007


Chevron moots object database

An object database from Objectivity may soon underpin Chevron’s Architecture for Refinery Real-Time Interoperable Systems (ARRIS). Mike Brooks outlines Chevron’s real time problem set.

The upstream tried object databases unsuccessfully back in the 1990s, notably with POSC’s Epicentre database, originally slated for deployment on HP’s ObjectSQL or UniSQL’s SQL/X. But, judging from a Chevron/Objectivity webinar this month, it looks like the technology may be resuscitated in the oil and gas vertical—with a shift in focus to federating disparate existing databases rather than building great big new ones.

ARRIS

In the webinar, entitled ‘The plant of the future: achieving actionable information through data design,’ Mike Brooks outlined Chevron’s Architecture for Refinery Real-Time Interoperable Systems, ARRIS, and described Chevron’s problems in supporting refining, primarily in developing new business process workflows and decision-making in the space between control systems and Planning/ERP systems. Brooks wants to stimulate vendors ‘to fill the space with competitive products.’

Decision support

To link management systems, SAP, Historian and sensor data, decision support needs to ‘live’ above everything else. One workflow addresses the fact that there are many new ways to trade intermediate products. Chevron can sell more lube oil than it makes, leading to a complex, changing supply chain, where there is money to be made by optimizing supply dynamics of water, hydrogen, sulfur etc.

Restrictive

Brooks is convinced that ‘yesterday’s tools will not fix today’s problems,’ that relational databases, SQL and ETL technologies are ‘far too restrictive’ and an obstacle to the composite workflows. Chevron wants to federate multiple data sources, to manage metadata in real time and support composite queries across assay, equipment and maintenance databases. Such queries are corralled into workflows for checking crude, updating maintenance or lining up blends—each of which can be assembled differently for different tasks and subject to Chevron’s IT governance. Chevron has begun implementing common functions like report writers leveraging Web 2.0 ‘mashups’ to blend information across maintenance and work order systems.

Objectivity/DB

Brooks told Oil IT Journal, ‘I believe that Objectivity/DB can fill the need for a more flexible, agile, speedy and federated data management tool. We tend to handcuff our future to older technologies such as ETL and the relational database. Objectivity/DB can be a part of the toolset, providing a framework for developing services that align with the business needs. These can then be reconfigured to meet new opportunities in evolving markets.’ Objectivity has a strategic relationship with Chevron. Partners in the Plant of the Future program include IBM and Yokogawa. More from www.objectivity.com/OIL-IT.asp.


OpenFlow announced

New software integration platform from French Petroleum Institute (IFP) leverages the Eclipse IDE and open source databases.

The French Petroleum Institute (IFP) has just announced ‘OpenFlow,’ a new infrastructure for ‘next generation software’ from the IFP and its marketing arm Beicip-Franlab. OpenFlow (previously called ICarre), comprises a toolset of basic components including a geoscience data model, database storage, data management, import/export, 1 to 4D visualization and workflow tools. OpenFlow also includes an application programming interface (API) for plug-in and algorithm development.

Open Source

OpenFlow leverages open source software, notably the Eclipse integrated development environment (IDE) and databases including PostgreSQL and MySQL (Oracle is also supported). Industry specific data formats including Rescue and OpenSpirit allow for interfacing with third party software.

PumaFlow

IFP tools for the new architecture include FracaFlow (fracture network characterization and modeling), CondorFlow (assisted history matching) and, real soon now, PumaFlow, the IFP’s new reservoir simulator.


Digital oilfields, silos, laggards and irritation

Oil IT Journal editor Neil McNaughton sees encouraging signs of silo boundary-breaking ventures from the vendors, but gets mildly irritated by some engineering old chestnuts.

Trawling through Schlumberger CEO Andrew Gould’s presentation to the Howard Weil Energy Conference last month looking for IT related gems, I was disappointed. IT barely got a mention. I think that this is a kind of Freudian slip, because notwithstanding the huge additional reserves that digitization of this and that is supposed to be generating out of thin air (q.v. CERA et al.), what really turns on Schlumberger’s investors is hardware of the wireline, seismic and production tool variety.

Silo

In fact there is a very considerable historical silo wall inside both major vendors’ shops, between their traditional revenue generating activity and their relatively poor relations, the software units—for Halliburton, Landmark, and for Schlumberger, Information Solutions. What makes the situation even more interesting is that a lot of software that is closely tied to the tools is kind of marooned on the wrong side of the silo boundary and may or may not be aligned with the software arm’s standards, visualization paradigms, or marketing.

Old chestnut

I have discussed this in previous editorials and if I am bringing up the old chestnut again, it is because, despite a lot of talk to the contrary, silos define the industry and crossing the silo boundaries in a meaningful way is an event worthy of note. In fact I witnessed two such events in the past couple of months, one at the AAPG and the other at the SPE Digital Energy conference in Houston this month.

Geosteering

In both cases the ‘killer app’ is the geosteering and measurement while drilling (MWD) combo which is getting a lot of traction as high tech applications like coiled tubing drilling and artificial lift are commoditized. There is a pressing need to see where the drill bit is in relation to the earth model. In other words, to mash up software from opposite sides of the intra-vendor silo wall.

Landmark

At the AAPG, Halliburton was showing a blending of workflows from its Sperry MWD unit with Landmark’s applications. This is leading to a rethink about cross-silo workflows and the likelihood that Landmark’s DecisionSpace will bring more logging and tool-specific applications into the Landmark software fold. The idea is to support complex drilling activity like dual over-under wells (tar sands) with targeted modeling tools.

Schlumberger

On the Schlumberger booth at the SPE Digital Energy show I spotted a similar approach to the same problem. A twin screen display showed a Petrel 2007 model on one large screen in ‘landscape’ mode with a second ‘portrait’ mode screen showing MWD log information streaming in from the Interact server (again on the other side of the silo wall). Of course what is really amazing about this is not so much the technology, but that this has taken so long. But that is part of the nature of silo walls. They are good at keeping things apart.

SPE

On the topic of silos, it seems churlish to address similar observations in the direction of the Society of Petroleum Engineers itself, whose Digital Energy conference does a good job of bringing in folks from further flung fields, from business intelligence to process control. But I have to say that it is a shame, and indicates how ‘siloed’ we still are, that there is so little recognition in the PE community of seismic and visualization technology. There is an irritating tendency to defer to other industries such as financial services and defence/intel, with the implication that ‘we’ are laggards.

Embarrassment

The ‘laggard’ pitch was popular a year or two ago, although I was never very sure why. Today it is a positive embarrassment as the PE community is having to turn its vest around and claim technology leadership in order to entice new grads away from, well, financial services inter alia! On the topic of irritation, I would like to suggest that it is time we stopped talking about the ‘digital revolution.’ The oil and gas business has been ‘digital’ for 40 years. In 1970, seismic companies were already donating their obsolete digital recording systems to my bewildered teachers. For IT excellence, engineers would do well to look at achievements in geophysics and even their own modeling community before business intelligence. For the digital oilfield, the communities to watch are plant and process control, themselves a positive maze of silos, cultures and complexity!


CERA’s Capital Cost Index rise slows

Six-monthly rise down from 13% to 7% as costs constrain activity.

The latest Upstream Capital Costs Index (UCCI) survey from IHS unit Cambridge Energy Research Associates (CERA) suggests that the ‘dramatic cost surge’ in the oil and gas industry may be slowing. The UCCI showed costs increasing 7% in the six months ending March 31, 2007, compared to a 13% hike in the previous six months. Since 2000 the index has risen 79% (mostly in the past two years), compared with a 16% rise in the non-energy and food index over the same period.
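
As a rough illustration of how these half-yearly rises compound, here is a minimal sketch; the 13%, 7% and 79% figures are the survey’s, while the base value of 100 and the derived numbers are our own assumptions for illustration.

```python
# Hedged illustration of how half-yearly UCCI rises compound.
# Percentages are from the CERA/IHS survey; the base value of 100 is
# a notional starting point, not a published index value.
base = 100.0                                  # assumed index value a year before March 2007
after_first_half = base * 1.13                # +13% in the first six-month period
after_second_half = after_first_half * 1.07   # +7% in the six months to March 31, 2007
rise_over_year = after_second_half / base - 1
print(f"Index after the two periods: {after_second_half:.1f}")   # ~120.9
print(f"Cumulative rise over the year: {rise_over_year:.1%}")    # ~20.9%
```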

Ward

CERA researcher Richard Ward notes, ‘In 2006, the annual rate of project inflation was 30 percent. If this trend continues, it is possible that a plateau may be reached in 2008.’ But Ward cautions, ‘We are not yet at the top of the cost increases. Cost relief may not be seen until 2008/09.’ The survey also notes constraints from the cost of experienced personnel and from project management issues, with projects on hold pending capital equipment availability.


Oil IT Journal Interview—Pat Kennedy, CEO OSIsoft

OSIsoft founder talks about the PI data historian and its growing role in the upstream.

How did the PI Historian start out?

Our initial goal was to put systems into refineries, paper mills—pretty well any industry. We spotted the need for ‘horizontal’ data management. Vertical applications are great at their specific task—like maintenance, reliability, etc. But data structures are horizontal. This is where PI provides a common base. The upstream has a very large number of suppliers, making it very compelling to have a single ‘discovery point’ for data users.

What exactly is a Historian?

You often hear that a Historian is for ‘real time’ data. I prefer to talk about ‘time series’ data. This could be a monthly meter or an urgent fire alarm—events that may trigger a complex chain of action. We don’t necessarily know what these are. In fact we stay away from applications. But our data capture experience goes back 30 years. In the 1980s an interface to a process control device cost $1 million to develop. Nobody wanted to do it except us! We developed interfaces for just about anything. Once this was done, our clients started getting incredible amounts of this type of data. Today, one turbine may generate 50,000 points per revolution, with optical pyrometers looking through the walls onto every blade—very important in detecting blade failure. The same pattern can be seen in all industries with more and more data to monitor and historize.

Why can’t you use a RDBMS?

You can’t! Some applications require huge amounts of data for analysis, but you can’t feed SAP with a million events per second! Once data is historized, you can filter it to make it look like a RDBMS, to report aggregate information or feed to Excel. We expose our data with Microsoft’s WebParts so that data can be leveraged by Portals like SAP’s iView or Microsoft SharePoint.
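
To make the ‘filter historized data so it looks relational’ idea concrete, here is a generic sketch using pandas with made-up tag data. It is not the PI System API, just an illustration of downsampling a high-frequency stream into report-ready aggregates.

```python
# Generic sketch (not the PI API): turn a high-frequency 'historized' stream
# into RDBMS-style aggregate rows for reporting or export to Excel.
# The tag name 'PT-101' and the data are invented for illustration.
import numpy as np
import pandas as pd

# Simulate one day of one-second readings for a hypothetical pressure tag
index = pd.date_range("2007-05-01", periods=24 * 3600, freq="s")
raw = pd.Series(50 + np.random.randn(len(index)), index=index, name="PT-101")

# Downsample to hourly min/mean/max - the kind of aggregate a report,
# a portal web part or a spreadsheet would consume instead of raw events
hourly = raw.resample("1h").agg(["min", "mean", "max"])
print(hourly.head())
# hourly.to_excel("PT-101_hourly.xlsx")  # optional: feed to Excel (needs openpyxl)
```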

How does this relate to the upstream?

It’s a great fit. Reliability engineering systems let you drill down through data without having to go offshore. BP uses the system to remotely monitor its wells on the Troika platform—PI fuels their modeling effort. In the end, it’s people, not apps, that need the data. Fuel them with the right data and let them do their jobs efficiently.

Do you see a cultural divide between upstream and process control?

There are two degrees of separation. One is between plant floor workers who know SCADA, PLC etc. and the consulting/engineering professionals who look at longer term solutions with reservoir and reliability engineering. The other divide is between IT and process—where convergence and security issues are interesting challenges.

Can you run the plant from a Historian?

Some transmission companies are run from the PI Screen but this is not without risk. It’s unlikely to happen in mission-critical fields like oil and gas but the method is OK for ‘supervisory control.’ It will not replace low level control systems.

What’s OSIsoft’s take on ProdML?

ProdML still needs some more work to make it a ‘maintainable’ standard. We are also very interested in the OPC Universal Architecture which is destined to become the standard for new process naming conventions. It should be possible to ‘pipe’ ProdML up to OPC UA.

Is this a Microsoft only play?

Microsoft has 95% penetration on the plant floor, less in the upstream. But service oriented architecture reduces the importance of the operating system. We work with Microsoft, Linux, Unix, whatever it takes.

More from cdugger@osisoft.com.


Chevron backs high performance computing startup

SiCortex is building a supercomputer from the ground up—targeting six teraflops in a single cabinet.

A start-up high performance computing company, with funding from Chevron’s Technology Ventures unit, has set out to change the face of high performance computing (HPC). Conventional wisdom has it that the PC cluster offers unbeatable price/performance in HPC. Indeed, the PC cluster has largely displaced high-end bespoke silicon from the likes of Cray and SGI from the TOP 500 HPC performance classification. The received view is that the microprocessor is now a commodity and that developing a competing architecture would be prohibitively expensive.

Interconnect

There are however a couple of flies in the PC cluster ointment. As node counts grow, power consumption and cooling requirements are embarrassingly high. But more significantly, connecting hundreds or thousands of PCs together creates interconnection problems that result in real CPU usage of perhaps only 10% of the theoretical maximum.

SiCortex

SiCortex’ plan to ‘engineer a cluster computer from the silicon up’ has involved taking a different slant on ‘commoditization.’ Today, building a ‘bespoke’ chip need not imply the multiple man-years of past generations. SiCortex’ John Goodhue told Oil IT Journal, ‘Our design team of 25 didn’t design the chip from the ground up, instead, we leveraged common off the shelf components (COTS). 80% of the new design is COTS. Entry costs in chip design and manufacture have fallen dramatically, driven by cell phone and router companies.’

Backplane

‘We designed our chip for cluster use and have been able to put a cluster onto the motherboard, reducing interconnect times and increasing performance. The system looks like a single machine with equal access to 8TB of memory from all processors. The interconnect performance barriers are down, you can get data in and out faster. This means the system has great potential for seismic processing where loading a multi TB data set to RAM can be a significant part of the total job time.’

6 processor node

Each chip contains six 64-bit, 600 milliwatt processor cores, multiple memory controllers, a high performance cluster interconnect and PCI Express connection to storage and internetworking. A node with DDR-2 memory consumes 15 watts of power, an order of magnitude less than a conventional cluster node.

Single cabinet

The new design shrinks a potential 6 teraflops of compute power into a single cabinet with a 5’ x 5’ footprint. Software is open source: Linux, GNU tools and QLogic’s PathScale compilers for FORTRAN, C and C++. SiCortex will soon ship its first beta products, the 5.8 teraflop SC5832, and the SC648 with a half-teraflop peak. We asked Goodhue where this would rank in the TOP 500. He replied, ‘Our market is more the bottom 10k than the Top 500! But we expect to make the bottom half of the Top 500 later this year.’
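
Working back from the figures quoted in this article (six cores per chip, roughly 15 watts per node, 5,832 cores and 5.8 teraflops for the SC5832), a quick back-of-envelope sketch follows; the arithmetic and the derived numbers are ours, not SiCortex’s.

```python
# Back-of-envelope arithmetic from figures quoted in the article.
# Derived values (node count, kW, GFLOPS/kW) are our own estimates.
cores_total = 5832          # SC5832
cores_per_node = 6          # six cores per node chip
watts_per_node = 15.0       # quoted power per node with DDR-2 memory
peak_teraflops = 5.8

nodes = cores_total // cores_per_node                # 972 nodes
compute_kw = nodes * watts_per_node / 1000           # ~14.6 kW for the compute nodes alone
gflops_per_kw = peak_teraflops * 1000 / compute_kw   # ~400 peak GFLOPS per kW
print(f"{nodes} nodes, ~{compute_kw:.1f} kW, ~{gflops_per_kw:.0f} peak GFLOPS/kW")
```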


Geotrace announces Diamond seismic processing package

Geotrace’s seismic processing and interpretation package designed as a ‘unified subsurface information system.’

Houston-based Geotrace has announced its new ‘Diamond’ integrated seismic processing suite. Diamond provides ‘fast and efficient’ processing of large seismic data volumes and integrates all E&P data types and formats.

Weigant

John Weigant, Geotrace VP of geotechnical apps said, ‘Diamond began as a redevelopment of our seismic data processing system. Our goal was a complete re-write for modern hardware and programming tools. But Diamond goes further. We have re-thought everything—from the way seismic data is stored and manipulated to how to optimize computer resources.’

Clusters

Diamond takes advantage of modern hardware architectures, such as Linux clusters and distributed data storage. Diamond integrates with Geotrace’s recently acquired Tigress geological and geophysical interpretation workstation, adding capabilities for petrophysical, geological, production and other downstream data types.

Stein

Jaime Stein, Geotrace’s chief geoscientist added, ‘Tigress provides the database and data management capabilities that make for seamless integration, closer to the ideal of a unified subsurface information system, from core to basin scale, with seismics as the ‘glue’ that ties it all together.’

Tigress

‘Diamond’s integration with Tigress underscores our intent to be the leading processor and integrator of E&P data.’

Phase II

The first phase of the software platform is capable of processing and integrating seismic data, well logs, core data, production data and reservoir models, so that they can be accessed and utilized through one software platform. Phase two, which will increase the software platform’s functionality, should be complete by year-end. More from www.geotrace.com.


Google Earth Enterprise—not quite what we thought!

Google’s enterprise-level mapping system eschews hosted data paradigm of public version.

In last month’s editorial, Neil McNaughton waxed lyrical about the Google Earth (GE) model of hosted data and thin client access. It turns out that, even though this is true for the public version of GE, Google has designed its Enterprise GE infrastructure rather differently.

Patel

Google’s Sanjay Patel put us right. GE Enterprise runs on a machine inside the company firewall. In fact it can be delivered as a Google ‘appliance,’ a hardware and software bundle that is ready to run. Users log on and connect to data on the Enterprise server from the client workstations.

GIS

It turns out that GE Enterprise, while not a full-blown GIS, has a deployment model that is closer to ESRI’s than the public GE. For an oil and gas company with a large GIS dataset, the data needs to be migrated to the Enterprise server. Once it is there, it is accessible from workstations using Google’s Keyhole Markup Language (KML). One GE Enterprise oil patch customer’s well data is connected dynamically to the system via KML. GE Enterprise is bundled with Blue Marble 500M imagery and global 1KM terrain.
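
For flavor, here is a minimal, hypothetical sketch of serving well locations as KML; the well names and coordinates are invented and this is not the customer’s actual feed.

```python
# Hypothetical KML generator for well locations (names and coordinates invented).
# A real deployment would serve KML like this dynamically from the well database.
wells = [
    {"name": "WELL-001", "lon": -95.37, "lat": 29.76},
    {"name": "WELL-002", "lon": -95.41, "lat": 29.81},
]

placemarks = "\n".join(
    f"    <Placemark><name>{w['name']}</name>"
    f"<Point><coordinates>{w['lon']},{w['lat']},0</coordinates></Point></Placemark>"
    for w in wells
)

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<kml xmlns="http://earth.google.com/kml/2.1">\n'
    "  <Document>\n"
    f"{placemarks}\n"
    "  </Document>\n"
    "</kml>\n"
)

with open("wells.kml", "w") as f:
    f.write(kml)
```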

Public data

We quizzed Patel as to how an ‘Enterprise’ user could access the massive public Google dataset. The answer is somewhat obscure. The Enterprise version is limited to data behind the firewall. The GE Pro client allows access to the public data set. But using Google’s petabytes of image data in a commercial context may mean cutting further deals with the data providers. Here, Patel points out that Google’s buying power should mean a better deal for the end user.

GIS or not?

Is GE a true GIS? The answer is yes and no. While GIS data analysis functionality is limited, contrary to our editorial, GE Enterprise shares one aspect of other GIS solutions—it adds a healthy dose of complexity to enterprise IT and data management!


Celoxica claims seismic benchmark for FPGA-based system

AMD ‘Torrenza’ field programmable gate array said to increase processing speed 28 fold.

At the AMD Torrenza Initiative seminar this week, Celoxica showed the results of a seismic processing benchmark conducted for an undisclosed client that achieved a claimed 28 fold performance improvement. Celoxica’s ‘accelerated computing solution’ consists of a ‘field programmable gate array’ (FPGA), a hardware number crunching co-processor and HyperTransport interconnect. Celoxica enables FPGA co-processing with an algorithm library and C compiler.

Jussel

Celoxica VP Jeff Jussel said, ‘We have demonstrated accelerated computing for customers in industries such as oil exploration, financial analysis and life sciences. We can now also quantify the financial advantages of our FPGA co-processing solution with a new calculator that figures ROI on hardware and support costs, power savings and improved revenue opportunity from increased computing performance.’

Goddard

AMD senior director Mike Goddard added, ‘HyperTransport and AMD64 provide a co-processor expansion capability that enables this FPGA-based acceleration. Celoxica’s integrated acceleration hardware and API demonstrate the success of our Torrenza initiative.’ AMD’s Torrenza initiative improves support for specialized coprocessors in 64bit AMD Opteron-based systems.


Software, hardware short takes ...

News this month from Madagascar, Aveva, CiDRA, Deloitte, EPS, Ikon, Innerlogix, RapidMind and Invensys.

A meeting of the Madagascar open source seismic processing community was held last month at the Bureau of Economic Geology at the University of Texas at Austin. Madagascar (originally RSF) is a comprehensive seismic processing infrastructure that was announced last year. The cross platform package supports C, C++, Python and MATLAB programmers. A key facet of Madagascar is a ‘reproducible research component’ that integrates processing flows and reports through the use of SConstructs (scons.org). More on Madagascar from rsf.sourceforge.net.
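
By way of illustration, here is a minimal Madagascar-style SConstruct (SCons build files are Python); the toy flow below is our own example, not one from the meeting.

```python
# Minimal, illustrative Madagascar SConstruct in the reproducible-research style.
# The flow itself is a made-up toy: generate a synthetic spike, filter it, plot it.
from rsf.proj import *

Flow('spike', None, 'spike n1=1000 k1=300')           # synthetic trace with one spike
Flow('filtered', 'spike', 'bandpass fhi=2 phase=y')   # bandpass filter it

# Result() registers the figure used in the 'reproducible' report;
# rerunning 'scons' rebuilds both the data and the figure from scratch.
Result('filtered', 'graph title="Filtered spike"')

End()
```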

AVEVA has teamed with INOVx Solutions to integrate INOVx’s RealityLINx product with PDMS, using AVEVA’s Laser Model Interface. RealityLINx renders laserscan point cloud data for use in 3D CAD plant models. These allow for real-world capture of ‘as-built’ facilities for subsequent virtualization of operations and maintenance.

CiDRA Corp.’s SONARtrac clamp-around process monitoring systems have received ATEX Class I, Zone 2 certification from UL International Demko A/S. The ATEX directive has been approved by the EU and is mandatory for electronic equipment use in explosion-hazard areas. SONARtrac systems can be retrofitted to existing flow lines without compromising the integrity of the production system.

Deloitte Petroleum Services has just released a new version of PetroScope, its Excel-based discounted cash-flow modeling framework, which includes some 80 fiscal regime models. PetroScope 2.0’s new features include expected monetary value analysis, enhanced screening and reporting, incremental economics and an improved user interface.

Weatherford unit eProduction Solutions has announced WellFlo 4.0, a new release of its production optimization package. The package was redesigned to be more intuitive and to better match well models with reality. An enhanced GUI ‘empowers users to do more with models and data.’

Ikon Science has announced a RokDoc modeling plug-in for Schlumberger’s Petrel, adding rock physics-based predictions to the seismic interpretation workflow. The ‘Modeling while Picking’ plug-in leverages Schlumberger’s Ocean API to provide interpreters with ‘interpretational insight.’ Events in Petrel update the 2D RokDoc model in real time.

Innerlogix has released QCPro 3.7 for the assessment, correction and synchronization of upstream data. The new release adds parallel processing to improve compute capacity and performance. Innerlogix estimates that by year end 2007, QCPro will be processing over a billion business objects per day.

RapidMind has released an upgrade to its compiler for the Cell BE and graphics processor units. RapidMind is currently testing the system on IBM’s Cell Blades hardware.

Invensys’ Tricon controllers have achieved Achilles Controller Level 1 cyber-security certification from Wurldtech Labs. Security testing is increasing in importance as process control moves from proprietary communications to open protocols such as Ethernet, TCP/IP, and OPC to integrate safety systems with distributed control systems.


PGS deploys vibrator tracking from Fleet Management Solutions

Satellite tracking links PGS seismic vibrators to head office for real time monitoring and supervision.

PGS Onshore has deployed a satellite-based fleet monitoring solution from San Luis Obispo, CA-based Fleet Management Solutions (FMS). The solution connects PGS’ seismic acquisition vehicles operating on Alaska’s North Slope to PGS’ head office for remote monitoring. Seismic vibrators in the field are monitored for engine and asset trouble codes, fuel consumption, miles per gallon, hard braking, power take-offs, engine RPM and oil pressure. Data is concentrated onto the FMS MLT-300i.

Crozier

PGS head of survey, Kevin Crozier said, ‘We now have a system which provides alarms via e-mail and text-enabled phones whenever a vehicle tries to enter a prohibited area, such as a pipeline right-of-way or well pad. All commands, events, reports and alerts are transmitted, received and available within seconds. This is an absolute requirement, as our crews work in some of the harshest conditions in the world.’

Henley

The link to PGS’ head offices in Norway is provided by Iridium Satellite. FMS CEO Cliff Henley explained, ‘FMS designed the system to take advantage of Iridium’s worldwide gap-free coverage and robust two-way, low-latency data links. Safety is key to PGS and they require mission-critical data delivery in seconds, anywhere in the world. Iridium, with its fully meshed network, has met these requirements.’


Robbins-Gioia teams with Landmark on project management

Deal to offer project and portfolio management services to Halliburton clients.

Halliburton unit Landmark has partnered with project management specialist Robbins-Gioia to offer project and portfolio management services to its oil and gas clients. The Robbins-Gioia/Landmark solution promises to ensure that exploration projects are completed on schedule and within budget. The solution also addresses the challenge of selecting which projects should be tackled first, leveraging Robbins-Gioia’s portfolio management methodology.

Marselle

Robbins-Gioia CEO Marselle said, ‘As we expand our presence in the energy sector, partnering with Halliburton ensures that our clients obtain the best in both subject matter and industry expertise.’

Meikle

Landmark VP Doug Meikle added, ‘Adding Robbins-Gioia’s hundreds of professional project managers to our team will bring its process-driven solutions to our clients and heralds new levels of efficiency for our industry.’


SPE 2007 Digital Energy Conference, Houston

The Society of Petroleum Engineers Digital Energy Conference included updates on Shell’s ‘smart fields,’ Chevron’s ‘digital oilfield,’ and reflections on the difficulty of recruiting the ‘renaissance engineer,’ a hybrid PE/IT specialist. BP showed a Microsoft Virtual Earth-based application in its Arkoma Basin assets and an ‘intelligent closed loop integrated digital system’ for artificial lift operations in the San Juan basin. Schlumberger presented emerging semantic techniques for securing remote operations and showed a mock up of a Petrel/Interact combo for geosteering.

Around 600 attended the Society of Petroleum Engineers’ (SPE) Digital Energy Conference in Houston last month. Shell Chief Scientist Charlie Williams’ keynote traced the history of ‘smart fields.’ These were initially more about communications than IT, with microwave links from offshore platforms to the Shell building in New Orleans. Early experiments in the mid 1970s with computer assisted operations equipment were abandoned and it was not until much later that the smart fields (SF) concept got traction. SF drivers include deep complex reservoirs, secondary recovery and the need for energy efficiency. Communications have made a lot of progress, but Katrina showed the limits of today’s infrastructure.

Hardware

Williams enumerated Shell’s SF technologies including smart well control valves and monitoring with permanent downhole gauges, flowmeters and distributed temperature sensors. Getting data to the surface involves a tortuous path through packers, tubing hangers and the well head and into the surface control system. But these techniques have enabled ‘smart snake wells’ to produce from Shell’s Champion West field in Brunei, long considered un-developable because of its stacked, faulted reservoirs. Champion West now contributes 25% of Brunei Shell’s production from wells 8 km long with 4km in the reservoir. A proof of concept on another Brunei field, Iron Duke, has involved ‘retrofitting’ smarts. This resulted in a 15% production hike and a two year reprieve on water breakthrough.

Future

Looking to the future, Williams sees computer assisted smart fields, proactively managed as a single dynamic system. ‘Discrete’ technologies are important enablers but it is the holistic management that is key. SF screening is important; Shell has a SF opportunities framing methodology. Williams warns of systems designed by engineers and management, but that operations can’t handle. SF is not about technology but more about closing the value loop with technology used by and embedded within people, process and tools.

Round Table

Randy Krotowski (Chevron CIO) thinks the digital oilfield is an idea whose time has come. Most of the challenges have been solved, with 4D seismic, visualization and decision support centers. What remains are the ‘people’ challenges—convincing them that this is a good idea and especially, getting operations and engineers on board. Chevron has iField engineers who are optimizing operations, data architectures, and making it easier to introduce new technologies. Chevron is leveraging the PPDM data model and ‘PPDM XML web services’ in its SEER data warehouse (OITJ April 07). Ricardo Beltrao described Petrobras’ in-house developed ‘GeDIg’ system for integrated digital field management. Petrobras uses its own software to optimize operations on the 2,000-well Alto do Rodrigues heavy oil pilot. The Barracuda-Caratinga collaborative offshore control center for digital operations (CGEDIG) is located in Rio, some 450km distant. Iraj Ershaghi, who works at the Viterbi School of Engineering at USC, commented on the ‘IT-ization’ of petroleum engineering. A lot of today’s buzz words and technology are not what engineers learn at school. Today we need ‘renaissance’ engineers with IT knowledge. The question then arises, do we train IT specialists in engineering or vice versa? Ershaghi answered with a medical analogy—where the same problem exists. Would you prefer to be operated on by an IT tech who had retrained in surgery, or by a surgeon trained to operate sophisticated equipment? Ershaghi also suggested that the SPE do more to communicate the fact that its technology is cutting edge. We need to sell better at universities where there is a huge problem with the low number of petroleum engineering students. The Chevron/USC CiSoft MS in smart oilfield technologies is part of the answer. In the Q&A, one perspicacious observer contrasted Ershaghi’s ‘cutting edge’ description of oil country technology with the SPE’s recent self-flagellation as an industry characterized as a technology laggard!

Time-based database

Mike Strathman suggested that we should look again at time-based (as opposed to depth-based) drilling data. The idea is to have a unified view of all real time data. By using a data historian, as deployed in the process control industry, all real time data is collected in one place. This allows for detailed analysis of current situations in the light of historical data, leveraging drillers’ expertise and allowing for a quick response to operational issues. The data Historian is much more performant than an RDBMS. Time based information supports queries along the lines of ‘what else was going on at that time?’ and ‘how long have we been drilling in this formation?’ AspenTech’s solution in this space is the InfoPlus.21/Web.21 combo of Historian and analytics. A new ‘WellTrends’ package displays log data and allows drag and drop of data streams to log tracks.
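
A generic sketch of the kind of time-window question described above (‘what else was going on at that time?’) follows, here against hypothetical drilling channels in pandas rather than any particular historian product.

```python
# Generic time-based query illustration; channel names and values are hypothetical.
# A data historian would answer the same questions against its own archive.
import pandas as pd

rig = pd.DataFrame(
    {
        "time": pd.to_datetime(
            ["2007-05-01 03:00", "2007-05-01 03:10", "2007-05-01 03:20"]
        ),
        "hook_load_klbs": [210, 260, 255],
        "standpipe_psi": [3100, 3900, 3850],
        "formation": ["Shale A", "Sand B", "Sand B"],
    }
).set_index("time")

# 'What else was going on at that time?' - pull a window around an event at 03:12
event = pd.Timestamp("2007-05-01 03:12")
window = rig.loc[event - pd.Timedelta("15min"): event + pd.Timedelta("15min")]
print(window)

# 'How long have we been drilling in this formation?'
current = rig["formation"].iloc[-1]
since = rig.index[rig["formation"].eq(current)].min()
print(f"Drilling in {current} since {since}")
```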

Co-visualization

Mike Weber (BP North America) explained how BP’s Arkoma Basin unit was co-visualizing Pipesim and SCADA data in a Microsoft Virtual Earth-based solution from IDV Solutions (OITJ February 07). Phase I of the project resulted in map-based visualization of Pipesim model data. Phase II extends this to data from OSIsoft’s PI Historian over the 520 wells in the Red Oak field. Microsoft SharePoint services and Web Parts also ran as did Schlumberger’s Avocet production data server. The plan is to offer ‘evergreen’ models, to perform ‘what if’ operating scenarios and in general, to ‘break down silo boundaries’ with visualization. A movie showed Virtual Earth with check boxes turning map layers on and off and bubble maps of production. Data can be exported to Excel for further analysis. An animated pipeline display warns operators when pigging needs to be done. GIS data is ‘piggy-backed’ on the Virtual Earth data server, limiting data and bandwidth requirements on site.

Semantic web

Schlumberger Fellow Bertrand du Castel gave a suitably erudite and rather obscure presentation on the application of semantic web technology to homogenize security across various digital oilfield subsystems. The Internet, SCADA systems, and ISO 27001 all expose different security models. What is missing is a common security ‘ontology,’ a problem that extends across other facets of ‘remoting’ operations, real time, automation and ‘augmentation.’ Citing Thomas Sheridan’s book, ‘Humans and Automation*,’ du Castel described the ‘prize’ as ‘goal-oriented, distributed workflows’ enabled by ontology, process model, interaction and ‘interstriction’ models. Bayesian logic leveraging Norsys’ Netica also ran. All will interact through a services-oriented semantic web, OWL, UML and BPEL. Du Castel showed pilot smart fields where composite business process modeling has been deployed.

Closed loop

Peter Oyewole’s presentation covered an ‘intelligent, closed loop integrated digital system’ as deployed on BP’s prolific San Juan Basin coal bed methane field. The system manages tubing flow control, plunger lift and other artificial lift systems and has resulted in increased gas production, better equipment reliability and in the development of an efficient, cheap deliquification process. Remote operations connect RTUs to Invensys’ Industrial Application Server. Data is fed to Maximo for work order generation, E-Choke (for choke analysis) and FDA (field data analysis). Multiple SCADA interfaces are integrated through an abstraction layer to Modbus. The system allows BP to switch to and from plunger and artificial lift as appropriate and to control tubing flow. Condition-based soap injection and other remedial actions are also enabled.

Operations Support Center

Schlumberger was showing its Operations Support Center—a crossover solution combining Petrel’s new real time capability with the wireline division’s Interact Server, Perform toolkit and real time logs. A compelling mock-up showed a twin head display with a portrait mode screen for the log/real time data and a landscape view of the Petrel model.

* John Wiley, 2002.

This article is a summary of a longer, illustrated report produced as a part of The Data Room’s Technology Watch Service. For more information please email info@oilit.com.


SPE DEC07 high performance computing special session

Shell, BP and the US Council of Competitiveness outline trends in oil and gas HPC.

The special session on high performance computing (HPC) began with a video from the Kafkaesquely-named ‘US Council on Competitiveness’ (USCoC—compete.org). The Dreamworks-produced video showed how HPC is essential to weather forecasting, the US Navy, medicine, the entertainment industry and, naturellement, seismics. Curiously, the video was narrated by a penguin, a reference no doubt to Linux’ ubiquity in HPC, but this must be the first time Linux is considered ‘the OS that dares not speak its name!’

Tichenor

The USCoC’s Suzy Tichenor believes companies need to ‘out-compute to out-compete’ and in this context, HPC is an ‘innovation accelerator.’ HPC provided critical compute horsepower for Chevron’s Jack development. Barriers to take-up include lack of talent, lack of scalable production software and cost/ROI issues. These are compounded by what Tichenor describes as a ‘bi-modal’ market—with missing mid-range machines.

Shell

Shell’s Jim Clippard enumerated some ‘petascale’ problems such as ‘seeing’ (seismic) and ‘draining’ (reservoir modeling) the earth. Compute-intensive reverse time wave equation migration ‘makes the invisible visible’ in the sub salt section of the Gulf of Mexico. Achieving such compute horsepower involves power and heat issues. Shell’s facility costs $20k/year in electricity. For Clippard, the future is parallel, even though programming such machines is a challenge. Bottlenecks such as memory and interconnect latency differ for different jobs and machines. There is a need to manage heterogeneity: ‘IT folks hate this!’

BP

Keith Gray has ‘one of the most fun jobs in the company,’ managing BP’s 100 TeraFlop HPC installation. BP’s focus is on subsalt seismic imaging, which has ‘delivered results and shown breakthroughs to the industry.’ BP’s compute capability has grown one thousand fold in the last eight years. The seismic machine now sports 14,000 cores and 2 Petabytes of storage. All of which implies a significant effort in data management, code optimization and parallelization. There is also a need to strike a balance between systems that let R&D develop its ideas while production users have the scale they need. Some very large memory systems offer straightforward FORTRAN programming for researchers. Gray believes we may have pushed too hard towards commoditization and are seeing fewer breakthrough technologies.

Software vs. hardware

A debate ensued on the need to progress application software as well as the hardware. Some opined that this should be left to the compiler writers to avoid the need to rewrite application code. On today’s clusters, there may be only one in four systems actually working while the other three hang around waiting for data. Steve Landon (HP) agreed that too much money was going into hardware over software. If this is not fixed, the yawning gap will stretch and the software for the Petabyte machine ‘will not be there.’ On the subject of architecture, BP’s decision three years ago to opt for a cluster over shared memory ‘irritated both its R&D and its parallel processing communities equally!’

Open Source

Developers working in the Open Source movement complained of the lack of feedback and collaboration in the industry. BP is very interested in Open Source. All BP clusters run Linux with a mix of open source and commercial debuggers and job schedulers. Shell is in the same boat but expressed caution regarding un-maintained code. BP is willing to ‘try any model—maybe to pay for open source development.’


Folks, facts and orgs ...

Ziebel, Absoft, Aker, API, Deloitte, Energy Solutions, SolArc, Halliburton, ESRI, Oildex, Schlumberger & more.

Stavanger-based well service company Ziebel AS has acquired Knowledge Reservoir (KR) of Houston. Ian Lilly is to head-up KR’s new Asia-Pacific HQ in Kuala Lumpur, Malaysia.

UK-based Absoft has won a £120,000 contract from Centrica Energy for SAP support on its Morecambe Bay gas fields.

Erik Wiik heads-up Aker Kvaerner’s new purpose-built facility in Houston, providing operational support of drilling equipment, well intervention technology and subsea systems.

The American Petroleum Institute has just made its safety standards freely available from its website, api.org. API RP 49 concerns drilling and H2S hazards, RP 76 covers safety and third party personnel and RP 67 deals with explosives safety.

Ethan Cheng has joined Deloitte Petroleum Services’ Singapore team as support and development engineer for PetroScope clients in Asia.

The US Department of Energy has released its archive of two decades of unconventional gas research data from the Office of Fossil Energy’s National Energy Technology Laboratory. The DoE has also just established an Ultra-Deepwater Advisory Committee to advise the Secretary of Energy on technology R&D for ultra-deepwater and unconventional resources.

Energy Solutions reports sales of its oil and gas pipeline management package, PipelineStudio, to PetroChina and Sinopec inter alia. Other new clients include Sirte Oil Co., Libya, Techint (Argentina), Mott MacDonald (United Arab Emirates), Snamprogetti, Total and PT Erraenersi Konstruksindo of Indonesia.

Oklahoma-based Enogex has selected SolArc’s RightAngle solution to manage its natural gas liquids supply, transportation and marketing.

Graeme Philp has been elected vice-chairman of the Fieldbus Foundation’s EMEA Advisory Council. Philp is Chief Executive of UK-based MTL Instruments Group.

FileTek has named William Loomis, CEO and Philip Pascarelli, President.

GE and BP have teamed to develop clean fuel power plant technology. Five US power plants will burn fossil fuels to generate hydrogen and CO2 which is to be sequestered in underground reservoirs and in some cases, ‘may result’ in enhanced oil recovery. The hydrogen is used to generate electricity.

Halliburton and the Tyumen State Oil and Gas University are to open a new training center in Tyumen, Russia. Halliburton’s drilling and formation evaluation division has acquired Vector Magnetics’ active ranging technology for steam assisted gravity drainage applications.

Robert Brook has joined ESRI as pipeline industry solutions manager. Brook was previously with New Century Software.

Claude Joseph has joined Oildex as product manager for SpendWorks.

Thierry Pilenko has been appointed Chairman and CEO of Technip, succeeding Daniel Valot who is retiring. Pilenko was previously CEO of Veritas, now part of CGG. The group also announced that Guy Arlette has been named president of operations.

WellDynamics has appointed Derek Mathieson as acting president and CEO.

Schlumberger has acquired UK-based geomechanics software and consulting boutique VIPS. VIPS is henceforth the ‘Schlumberger Reservoir Geomechanics Center of Excellence’ under the direction of VIPS founder Nick Koutsabeloulis. VIPS’ flagship ‘Visage’ package uses finite element modeling to investigate reservoir stress changes during production and injection. Schlumberger plans to develop links from Visage to ECLIPSE and Petrel. Schlumberger has also bought Insensys Oil and Gas, a provider of fiber-optic measurements services for integrity surveillance of subsea production systems.

Hyperion has delivered an operator training simulator to the ExxonMobil/Saudi Aramco Luberef II project.

The EU has just published Directive 2007/2/EC concerning the establishment of an Infrastructure for Spatial Information in the European Community (INSPIRE).

The Pipeline Open Data Standards organization (PODS) has submitted the final draft of its external corrosion direct assessment (ECDA) data interchange standard to the National Association of Corrosion Engineers (NACE) for review and ballot.

The US Census Bureau has just published a report on Information and Communication Technology (ICT) for 2005, covering e-business infrastructure spend in the US.

According to the US Department of Energy, a new ‘generalized travel time inversion’ seismic technique developed by the DoE and Texas A&M University will increase recovery of ‘up to 218 billion barrels of by-passed oil in domestic fields.’ Total project cost was $890,000, which gives the project the greatest ROI of all time. In celebration, we hereby award the DoE copywriters the first ever Oil IT Journal Hype Award!


P2ES new BI tool leverages Business Objects

Excalibur Report Studio announced and sales chalked-up to EXCO and Taylor Energy.

Petroleum Place Energy Solutions (P2ES) has just announced the Excalibur Report Studio (ERS), a data mining interface to P2ES’ Excalibur Energy Management System (EEMS). EEMS is an integrated financial and operational solution built on IBM’s UniData embedded database technology (OITJ March 03). ERS is built on the Microsoft .NET framework, extracting and transforming data in the EEMS database for use in third-party reporting packages such as Business Objects’ Crystal Reports.

Eikermann

P2ES senior VP development Mark Eikermann said, ‘Crystal Reports is one of the most popular reporting tools on the market. It made sense to choose it as our preferred reporting front end.’ ERS offers direct interaction with the underlying UniData database, bypassing the ODBC layer and allowing for enhanced connectivity to modern reporting applications.

EXCO

P2ES also reports recent EEMS sales to Dallas-based EXCO Resources and Taylor Energy of New Orleans. EXCO has seen rapid growth through acquisitions (140 since 1997) and deployed EEMS to replace multiple systems and overlapping processes with a single environment for reporting and analysis. P2ES hosts and manages EXCO’s real time production environment from its data center in Calgary, Alberta. Taylor chose Excalibur to replace an ‘obsolete and unsupportable’ legacy system.


FIATECH Annual Technology Conference

Washington meet focuses on data interoperability with the ISO 15926 ‘Work in Progress’ standard.

The US-based FIATECH organization held its Annual Technology Conference in Washington last month, the highlight of which was the release of a ‘Building Information Model’ (BIM) for the process industry. The BIM was largely driven by increased activity levels in the offshore oil and gas industry and completes earlier work by FIATECH, the Norwegian POSC/CAESAR organization, Det Norske Veritas and the USPI organization.

ISO 15926

This transatlantic collaboration has resulted in a functional ‘Work-in-Progress’ (WIP) database derived from the ISO 15926 plant data standard which is freely accessible on the fiatech.org website. The WIP is comprised of the Reference Data Library (RDL), which contains the ISO-approved core library set of class descriptions and Object Information Models (OIMs), as well as proposed classes and model extensions.

Petronas

Yusoff Hjsiraj announced Petronas’ backing for the new standard as a component of the Petronas Information Management (PCIM) Data Model which is now mandatory for all vendors and contractors. Contractors are free to choose the applications that work best for them—so long as they leverage the ISO 15926 RDL. A successful pilot on the Angsi field (a joint venture with ExxonMobil) led to the adoption of a single data model for design, construction and handover. ISO 15926, RDL, and the ISO Capital Facilities Information Handover Guidelines (CFIHG) are now core components of Petronas’ IM strategy.

Interoperability

Adrian Laud described Noumenon Consulting’s work with Bentley Systems and Aveva on ISO 15926-based interoperability on major capital projects. The current shortfall in engineers ‘can only be solved through interoperability.’ According to a NIST report, the lack of interoperability costs the US $15.8 billion per year (exactly!). Over 50 major commercial projects are currently addressing this through ISO 15926 deployment. ISO 15926 is the neutral delivery standard for all project data. A claimed 20-30% reduction in engineering man-hours has resulted, along with reduced structural steelwork costs and, in general, better, safer plants. Case studies of successful ISO 15926 use include BP’s Greater Plutonio development and the CSPC Nanhai Petrochemicals Project, a CNOOC/Shell joint venture. Woodside has also deployed the standard to access engineering design data in a brownfield environment.

IDS Project & WIP

Magne Valen-Sendstad’s presentation focused on the well-endowed Norwegian Integrated Operations (IO) Project. This leverages ‘Intelligent Data Sets’ to support new collaborative work processes within and between organizations and to support data through a 25-75 year lifecycle. The three year project runs through 2008 with a budget of $2.7 million.


DTS Standard mooted as optics extend to offshore platform

SensorTran and J-Power systems propose IDOPTS standard for distributed temperature sensing.

At the recent Subsea Fiber Optic Monitoring (SEAFOM) conference, SensorTran and J-Power Systems announced a new initiative to establish standards for fiber-optic distributed temperature sensing (DTS) equipment. The International Distributed Optical Performance Testing Standards (IDOPTS) working group will develop standards for performance and specifications to help potential buyers evaluate DTS solutions.

Kalar

SensorTran CEO Kent Kalar said, ‘Optical monitoring standards will provide much needed clarity in the market. The industry suffers from a lack of performance specifications, creating confusion as to different solutions’ capabilities. IDOPTS will enable companies in the oil and gas, energy, utility and environmental industries to accurately evaluate distributed optical monitoring products.’ IDOPTS standards will target performance definitions, testing and performance evaluation.

ODI

In a separate announcement, SensorTran has teamed with Ocean Design, Inc. (ODI) to offer fiber optic sensor solutions for offshore facilities. The deal combines SensorTran’s DTS offering with ODI’s subsea interconnect systems and targets simplified deployment of DTS for offshore engineers.

Backscatter

DTS uses backscattered light from a laser beam to provide a temperature profile along thousands of meters of fiber. The technique is used to continuously monitor downhole temperature, a key element of well surveillance. The DTS standard is not related to previous POSC work in this space.


BearingPoint implements Total E&P Canada’s SAP

Consultants align oil sands financial and supply chain processes.

Technology consultants BearingPoint report the successful implementation of an SAP solution for Total E&P Canada’s oil sands operations in Alberta, Canada. BearingPoint implemented Total’s corporate SAP template to align its financial and supply-chain processes with other Total operations around the world. BearingPoint’s Canadian and French teams collaborated with Total’s Paris-headquartered parent company.

Grillot

BearingPoint Canada MD Michel Grillot said, ‘Total selected BearingPoint because of our track record of on-time, on-budget SAP implementations. Cross-culture and cross-border teamwork was a critical factor in the success of this project, demonstrating our global capability.’ Total plans to invest from $10 to $15 billion over the next decade in Alberta’s oil sands.


Hydro awards WellDynamics well steering framework contract

Five year NOK 540 million deal covers ‘SmartWell’ intelligent completion technology and services.

Hydro has awarded WellDynamics, a joint venture between Halliburton Energy Services and Shell Technology Ventures, a framework contract for the provision of SmartWell intelligent completion technology and services. The seven year contract is valued at around NOK 540 ($90) million and concerns SmartWell activity on Hydro’s North Sea Grane, Oseberg and Brage fields.

Longorio

WellDynamics’ CEO Phil Longorio said, ‘This contract extends our long standing relationship with Hydro leveraging SmartWell completions to improve reservoir control and increase ultimate recovery.’ Hydro has drilled wells in its North Sea fields with as many as six laterals. The new SmartWell completions will enable fine control over fluid flow from branches within a single well.

Zonal isolation

Hydro believes the technology will prove critical to its production optimization effort. WellDynamics’ intelligent completion technology includes solutions for subsurface flow control, zonal isolation, and real-time monitoring and control from surface facilities.


Decision Dynamics’ Oncore for major Alberta oil sands project

Project tracking and cost control package deployed in million dollar deal.

A large Canadian Engineering, Procurement and Construction joint venture (EPC) has awarded Decision Dynamics (DD) a $1 million contract for the deployment of its ‘Oncore’ software for project tracking and cost control of its Fort McMurray, Alberta oil sands development.

Real time

Oncore automates invoice generation from contract terms, streamlines validation and provides visibility of real time, line-by-line spend to managers. Oncore will also track progress of major reimbursable contracts during the construction phase.

Zinke

DD president Justin Zinke said, ‘Oncore will add value by reducing the effort of managing the complex reimbursable contracts involved on this project. By eliminating manual calculations and providing real-time visibility of project costs across multiple contractors, Oncore provides tight financial control at every stage of construction. This contract continues our expansion in the oil sands industry and demonstrates Oncore’s value to mega-projects in the construction and services segments.’

Time Industrial

Oncore, formerly known as Time Industrial, tracks labor, equipment, materials and other costs for capital and operations/maintenance projects by line item and provides robust analytics for contractor performance monitoring (OITJ June 06).


Neoris delivers SAP portal to Pemex Básica gas unit

Visualization and integration portal leverages xApps for Manufacturing Integration and Intelligence.

IT consultant Neoris and SAP have teamed on a process visualization and integration portal (VIP) for Pemex’ Básica gas and petrochemicals unit. The portal provides decision support across Básica’s production, planning and operations.

Pemex Gas

Básica operates 10 gas processing complexes and several hundred thousand tons of storage. A large number of industrial IT systems made information integration, reporting and analysis problematical. A study group of operational and business specialists identified SAP’s NetWeaver-based xApps for Manufacturing Integration and Intelligence (xMII) as the solution to the integration issue and outsourced portal development to Neoris.

Muruzábal

Neoris’ CEO Claudio Muruzábal said, ‘SAP xMII allows daily or real time integration of data from SAP and non-SAP sources.’ The VIP portal combines information from field systems with back office tools for decision support, sample management, the PI data historian and several SAP modules. VIP dashboards serve in-context data to production, planning, quality, maintenance and HSE specialists. Key performance indicators can be accessed from any web browser and ‘mini dashboards’ have been developed for information delivery to PDAs and cellular phones. Neoris is headquartered in Miami.


Marathon Technologies partners with Transpara on visual KPIs

everRun fault tolerant server delivers key plant performance information to the mobile workforce.

Transpara has teamed with Marathon Technologies to enable process and utility workers to minimize downtime and assure 24/7 real-time access to operational data.

everRun

The deal combines Transpara’s Visual KPI package with Marathon Technologies’ fault tolerant everRun server. Visual KPI delivers operations data to browsers, smart phones, PDAs and Blackberry devices. everRun synchronizes two standard Windows servers to create a single application environment running across both servers simultaneously. If a component or a server fails, the Windows application continues uninterrupted. The process is transparent to the user, the application and to the operating system.

Phillips

Marathon Technologies president Gary Phillips said, ‘Transpara has hit on an important trend—aggregating data from multiple sources and delivering it through mobile devices. With everRun, we add a cost-effective answer for automated availability, reliability and data protection.’


PlantWeb to enable digital automation of North West facility

Oil sands ‘upgrading’ facility to deploy Emerson’s ‘intelligent’ digital control system.

North West Upgrading (NWU) has awarded Emerson Process Management a contract to digitally automate an oil sands bitumen upgrader to be constructed 28 miles northeast of Edmonton, Alberta. The $2.6 billion facility will have a 231,000 bopd capacity when complete in 2015.

Bitumen

The upgrader will transform bitumen from Alberta’s tar sands into light, low sulfur products suitable for use by refineries to produce gasoline. Emerson’s PlantWeb digital architecture will form the backbone of the facility’s ‘intelligent digital’ technology control systems.

Pearce

NWU president Robert Pearce said, ‘Finding ways to reduce the high cost of both construction and oil sands processing is a key objective of our project planners. By standardizing on Emerson’s PlantWeb technology, we expect to achieve a consistent and durable process. This technology will be an important factor in reducing costs while supporting plant reliability and productivity.’ Emerson will also supply engineering and project management services throughout construction, commissioning of automation systems and ongoing service support.


SL Corp. and OSIsoft offer Java API for PI Historian data

Enterprise RTView exposes OSIsoft’s PI Historian data to Linux/Unix server-based applications.

OSIsoft has partnered with Sherrill-Lubinski Corp. (SL) to offer a Java/Unix interface to the PI System data historian. The partnership offers users a customizable interface to PI data from Java apps running on UNIX and Linux servers. SL’s Enterprise RTView real-time information delivery platform lets users deploy custom dashboards, alerts, and reports. RTView connects to multiple enterprise data sources, including the PI System.

Garner

OSIsoft director Patricia Garner said, ‘The partnership offers users a customizable solution for connectivity to UNIX servers. By partnering with SL, we can now offer rapid development of dashboards and decision support tools.’

HA PI

The deal extends OSIsoft’s High Availability PI System (OITJ Jan 07) which enhances PI functionality by enabling access to data during scheduled and unscheduled down times without requiring special hardware or clustered environments. More from cdugger@osisoft.com.


Honeywell to upgrade MOL’s European facilities

Ten UniSim operator training simulators to roll out over three year period.

The Hungarian national oil and gas group MOL has awarded a contract to Honeywell for the provision of automation solutions for MOL facilities in central Europe. The ‘strategic business relationship’ sets out to reduce production costs and increase yields through the deployment of 10 UniSim Operator Training Simulator projects at MOL’s Duna and Slovnaft refineries over the next three years. A further five existing Honeywell TDC 3000 control systems will be upgraded and the current Integrated Service Agreement will also be extended to 2014. Additional projects will be defined each year according to MOL’s budgets and business priorities.

Fekete

MOL senior VP Laszlo Fekete said, ‘Long term strategic partnerships with key suppliers help us strengthen our business. Honeywell has been a trusted supplier for over two decades, and because of the company’s deep knowledge of our business issues and needs, it was the ideal partner for this project.’


Roxar rolls out FracPerm 2.0

‘Data-driven’ modeling approach claimed to reduce risk in fractured reservoir management.

The latest release of Roxar’s fracture modeling package, FracPerm, promises simplified workflows and reduced uncertainty in fracture modeling. FracPerm 2.0 enables geologists and reservoir engineers to create permeability maps for use in fluid flow simulation and history matching of oil and gas reservoirs.

Irap RMS

FracPerm operates alongside Roxar’s Irap RMS geological modeling package, adding geological properties, geo-statistics and geo-history in a ‘data-driven’ approach, designed to maximize data use and model QC. FracPerm 2.0 includes a redesigned interface and a new plug-in structure for improved integration with other software.

Esslemont

Roxar CEO Sandy Esslemont said, ‘Two-thirds of the world’s proven reserves lie in areas with acknowledged issues of fracture-affected recovery. When FracPerm was launched in 2005, it brought fracture modeling from a niche discipline to a widely used tool helping geologists and geophysicists improve their reservoir models.’ FracPerm users include Hydro, Lukoil, Saudi Aramco, ADMA, OMV, MOL, Surgutneftegas, PetroChina, Pertamina, and CuuLong Vietnam.


Autonomy/Virage to digitize BP’s film library

‘Intelligent’ XML tags to make ‘rich content’ historic video resource searchable.

BP has awarded a contract to Autonomy unit Virage for the digitization of its extensive video library. Virage’s VS Archive technology will automate capture, encoding and indexing of BP’s 11,500 recordings, some dating back to 1915.

3,500 hours

BP has 3,500 hours of footage documenting a century of global exploration, including events of international significance and the Oscar-winning short film ‘Giuseppina.’ By digitizing this content BP will ensure its preservation and leverage internal and external value by making footage available to employees, broadcasters and film makers via an internet portal.

IDOL

Virage’s VS Archive adds XML tag information to rich content, leveraging Autonomy’s Intelligent Data Operating Layer (IDOL). Video and rich media are thus searchable along with other enterprise content. VS Archive also embeds techniques for scene change detection and speech analysis.
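Virage does not publish the VS Archive tag schema, but the kind of XML record such a system attaches to a recording, a title, time-coded scenes from scene change detection and transcript snippets from speech analysis, can be illustrated with a short Python sketch. All element names and content below are hypothetical and do not reflect the actual product.

import xml.etree.ElementTree as ET

# Hypothetical metadata record for one archive recording. Element and
# attribute names are invented; they do not reflect VS Archive's schema.
asset = ET.Element("videoAsset", id="example-recording-001")
ET.SubElement(asset, "title").text = "Example archive recording"
ET.SubElement(asset, "duration").text = "00:18:32"

# Scene change detection yields time-coded segments...
scenes = ET.SubElement(asset, "scenes")
ET.SubElement(scenes, "scene", start="00:00:00", end="00:02:10", label="opening sequence")

# ...and speech analysis yields a searchable transcript.
transcript = ET.SubElement(asset, "transcript")
ET.SubElement(transcript, "utterance", start="00:02:15").text = "Placeholder transcript text."

print(ET.tostring(asset, encoding="unicode"))

Once records of this kind are indexed by IDOL, a text query on scene labels or transcript content returns the footage alongside other enterprise content.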

Rig 20

Something worth looking out for in the BP Archive is the 1951 blockbuster of a fire at the Naft Safidi oil well in Iran and its extinguishing by Myron Kinley. A classic!


Alyeska Pipeline renews SAIC’s pipeline IT contract

Support contract for 1,800 desktops, applications and 100 servers renewed in $35 million deal.

Alyeska Pipeline Service Co. has renewed Science Applications International Corp.’s (SAIC) IT outsourcing deal for a further five-year period. The fixed-unit-price contract is valued at $35 million. SAIC has provided these services since 2002. The contract contains two one-year options for recurring services for an estimated value of $14 million. If all options are exercised, the total value of the contract could reach $49 million.

Barnes

Alyeska CIO Erv Barnes said, ‘At the end of the original contract, SAIC was challenged to provide even better service and support along with cost efficiencies. This competitive process resulted in Alyeska and SAIC entering into a new five year contract.’

1,800 desktops

IT services include local and wide area network support, security, application maintenance and enhancement, and cross functional services. SAIC will also support approximately 1,800 desktops, approximately 100 servers and multiple storage area networks.

Trans-Alaska

The contract will be performed in Anchorage, Fairbanks, Valdez, and along the 800 miles of the Trans-Alaska Pipeline.


Leica Geosystems Geospatial Imaging acquires ER Mapper

Geospatial image processing solution extends Leica’s enterprise software offering.

Leica Geosystems’ Geospatial Imaging unit (Leica) has acquired Australian geospatial software house ER Mapper for an undisclosed amount. ER Mapper sells geospatial image processing solutions, in particular, a high performance Image Web Server (IWS) for the management of very large image datasets.

Morris

Leica president Bob Morris said, ‘The ER Mapper acquisition brings leading enterprise technology to our portfolio of solutions, supporting our strategy of providing geospatial information to larger, enterprise-wide markets. We are also gaining market access to industries like mineral and oil and gas exploration, complementing our own natural resources expertise.’

Shell

Last year (OITJ June 06) Shell rolled out a bespoke ER Mapper-based solution to expose its twelve terabyte satellite imagery archive. In another deployment, Anadarko showed how ER Mapper could be used alongside VoxelGeo and StratiMagic for innovative seismic geomorphological investigations (The Data Room, Technology Watch AAPG 2005).


MOL uses Palisade @Risk in enterprise risk management system

Hungarian oil and gas company uses Monte Carlo optimizer for investment decisions.

In a recent webinar, Peter Saling of Hungarian oil and gas company MOL Group, showed how the company is using Palisade’s @Risk to perform ‘risk aggregation’ across all of its business units. MOL’s Enterprise Risk Management (ERM) methodology measures, manages and reports financial, operational and strategic risks using a common methodology.

ERM

MOL’s ERM model is used in strategic decision making—adding quantified risk analysis to the previous NPV-based project comparison. The ERM model tracks some 80 different kinds of risks using @Risk’s Monte Carlo simulation.

Goal-setting

Saling outlined the procedure as follows. First, an NPV probability distribution for each asset is generated in @Risk, along with an estimate of Value at Risk (VaR). Next, RiskOptimizer is used to optimize MOL’s asset portfolio using efficient frontier analysis. New opportunities can then be evaluated in the light of the existing portfolio and the analysis updated to include new assets. MOL’s ERM application is now being extended to support capital allocation decisions, performance management and KPI goal-setting.
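Neither MOL nor Palisade publishes the model itself, but the first step, building an NPV probability distribution per asset and reading off a Value at Risk, can be sketched in a few lines of Python. The distributions and figures below are invented for illustration and are not MOL’s.

import numpy as np

# Minimal sketch of a Monte Carlo NPV / Value-at-Risk calculation of the kind
# @Risk automates in a spreadsheet. All figures are invented for illustration.
rng = np.random.default_rng(seed=1)
n_trials, years = 10_000, 8
discount_rate = 0.10          # assumed discount rate
capex = 100.0                 # assumed up-front investment, $ million

# Uncertain annual revenue and operating cost, drawn per year and per trial.
revenue = rng.normal(loc=40.0, scale=10.0, size=(n_trials, years))
opex = rng.normal(loc=15.0, scale=3.0, size=(n_trials, years))

# Discount each year's net cash flow and sum to one NPV per trial.
discount = (1.0 + discount_rate) ** -np.arange(1, years + 1)
npv = (revenue - opex) @ discount - capex

print(f"mean NPV            : {npv.mean():7.1f} $ million")
print(f"P5 NPV (VaR proxy)  : {np.percentile(npv, 5):7.1f} $ million")
print(f"probability NPV < 0 : {np.mean(npv < 0):7.2%}")

The portfolio step then searches for asset weightings that trade expected NPV against the aggregated risk measure, the efficient frontier analysis that RiskOptimizer performs in Saling’s workflow.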

