Oil IT Journal interview—Harpreet Gulati, Invensys

Invensys has just signed a ‘multi-million’ contract with Shell for simulation software. Harpreet Gulati talks to Oil IT Journal about how ‘common reconciled’ data has laid the foundation of enterprise simulation in the downstream, how the approach is moving upstream, and the big ‘gotcha.’

Following the signing of Invensys Operations Management’s ‘multi-year, multi-million dollar’ contract with Shell for the provision of simulation solutions to its global upstream, downstream and petrochemicals operations, Oil IT Journal interviewed Invensys’ director of simulations and optimization, Harpreet Gulati.

Oil IT—Tell us about the deal with Shell.

This agreement covers the heritage SimSci technology (Romeo, Data Reconciliation, PipePhase and Pro/II). SimSci has some 45 years of history in refining and is now a global standard for design and optimization used by Exxon, Shell and now Total. Shell in particular has been both customer and partner for over 25 years. By partner I mean that Shell technology and IP feeds into a product’s development. The key philosophy is that quality, validated basic data about what is happening in the refinery is the foundation of all monitoring and optimization. Great effort is placed on ensuring the accuracy of basic data. This ‘common reconciled data’ approach is the focus of a joint Shell/Invensys effort.
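Data reconciliation in this sense is a standard chemical-engineering technique, and a minimal sketch can show the idea, assuming a single unit with one feed and two outlet streams and hypothetical meter uncertainties (illustrative Python, not SimSci’s implementation): the measured flows do not quite close the mass balance, so each is adjusted, weighted by its assumed uncertainty, until the balance holds.

```python
# Minimal data reconciliation sketch (illustrative only, not SimSci code).
# One unit, one mass balance: feed = product + residue. The measured flows
# do not close exactly, so each is adjusted by weighted least squares, with
# the adjustment weighted by that meter's assumed variance.
import numpy as np

m = np.array([100.0, 64.0, 38.0])     # measured flows: feed, product, residue (t/h)
sigma = np.array([2.0, 1.5, 1.0])     # assumed meter standard deviations (t/h)
A = np.array([[1.0, -1.0, -1.0]])     # balance constraint: A @ x = 0

V = np.diag(sigma ** 2)               # measurement covariance matrix
# Closed-form reconciled estimate: x = m - V A^T (A V A^T)^-1 (A m)
x = m - V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ m)

print("raw closure error :", (A @ m).item())   # -2.0 t/h
print("reconciled flows  :", x.round(2))       # [101.1, 63.38, 37.72]
print("new closure error :", (A @ x).item())   # ~0
```

The same weighted least-squares principle extends to plant-wide flowsheets with many units and constraints, which is the scale at which the ‘common reconciled data’ foundation described above operates.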

Is this stored in a real time database?

We do have our own RTDB (Wonderware). But we integrate with what the customer has in place. This may be OSIsoft PI or Honeywell’s PHD. Whatever is there.

And what about the IT infrastructure for interoperability between these point applications?

Well, it is not really for me to talk about Shell’s IT infrastructure. But Shell has now standardized on our simulation tools across all upstream, downstream and petrochemicals divisions. These are indeed stand-alone applications. But this significant commitment means that Invensys is now a member of Shell’s IT infrastructure design team. Shell is trialing a lot of new IT trends and is for instance a heavy user of virtualization. This means that we have packaged our tools for deployment in a virtualized environment.

Of course Invensys has its own data integration strategy. Our software is modular and shares, for instance, thermodynamics and Excel drag and drop connectivity. This ensures interoperability and data consistency. But Shell uses a lot of other applications and its focus is different to ours. We can always pass data back and forth between applications. In fact most use cases only require a limited subset of data interchange.

Is the Sim4Me portal in the mix?

Yes, Sim4Me is used by non-specialists such as operators and planners to access the simulators. The portal also acts as a bridge to other environments like mechanical and control engineering. Shell has not been using Sim4Me for long, but initial feedback is good and we expect take-up to increase as new versions are rolled out.

Is this dynamic or steady state simulation?

Dynamic simulation is a part of the deal but the main focus today is steady state simulation for design and process optimization. Shell is heavily into design, revamp and improving operations with analysis and decision support. One Romeo module, automated rigorous performance monitoring (ARPM), was developed with input from Exxon and Shell. ARPM provides model-based advice for predictive monitoring and optimization. It shows trends and correlations and provides KPIs to the historian and dashboards. The key issue for predictive monitoring is that it can tell you not just where you are now, but where you should be. You can perform ‘what if’ analyses to see what would happen if you clean a heat exchanger or compressor blades. It is all about operating close to the ideal efficiency level. This can be done by tracking the ‘delta’ between ideal and actual performance over time and using modeling to support the decision making process.
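The ‘delta’ bookkeeping can be pictured with a short sketch, using hypothetical duty figures for a single heat exchanger (not the ARPM module itself): the model supplies the ideal, clean-condition duty, the historian supplies the actual duty, and the trend of the gap drives the cleaning decision.

```python
# Illustrative "ideal vs. actual" delta tracking (hypothetical figures,
# not the ARPM module). The model gives the duty a clean exchanger would
# deliver; the historian gives the measured duty; a widening gap is the
# signal to schedule cleaning.
history = [
    # (day, actual duty MW, model-predicted ideal duty MW)
    (1, 11.8, 12.0),
    (30, 11.2, 12.0),
    (60, 10.5, 12.1),
    (90, 9.9, 12.1),
]

LOSS_THRESHOLD = 0.10   # assumed: act once more than 10% of ideal duty is lost

for day, actual, ideal in history:
    delta = ideal - actual            # duty lost to fouling
    loss = delta / ideal              # fractional efficiency loss
    advice = "schedule cleaning" if loss > LOSS_THRESHOLD else "ok"
    print(f"day {day:3d}: delta {delta:4.1f} MW ({loss:5.1%}) -> {advice}")
```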

Do you take the AI approach with canned scenario-based model comparison?

No, we do not use the correlated modeling/AI approach. Almost all of our modeling is first-principles science based on sound chemical engineering principles.

This is all very well in the refinery, but the upstream is a different kettle of fish with more unknowns in the well bore and reservoir.

Indeed, and the upstream has different skill sets and culture. The traditional upstream is not so concerned about efficiencies in the produce, deplete, abandon process. But this is changing, especially in Shell, which is increasingly using the optimization techniques of the downstream. This is happening at more mature producing assets as well as on complex assets such as FPSOs, which look very much like refineries anyhow, with heat exchangers and columns. The upstream is getting more sophisticated. Mature assets benefit from ‘what if’ modeling to investigate possible upgrades to surface facilities as more water is produced.

But to get back to the well bore and the skill sets, do you plan any partnerships with the ‘upstream upstream’ to better integrate the well bore and reservoir?

Yes. We are working closely with Computer Modeling Group of Calgary to incorporate its GEM fluid flow modeler. This targets a SAGD oil sands development, a field where traditional reservoir modeling tools fail. Other partnerships in the upstream are in the offing.

Another issue in the up/downstream divide is that in the refinery you can always solve a problem with more measurement. This can be hard in the upstream context.

Sure, but the upstream is changing. There are more three-phase meters deployed and fields are more and more instrumented. We are moving on from the days of periodic well tests. Today there is more real PVT and mass balance measurement in the oil field. But while the upstream is getting more sophisticated, there is one big ‘gotcha.’ It is relatively easy to develop a tool and get it to work, initially. It is much harder to ensure that it is still running and delivering benefits a couple of years down the line. Will people still use it? Will they trust the data? This is where our strategy of sustainable integration comes in, automating as much as possible and eliminating data re-entry. We are back to the data infrastructure and the importance of validated, accessible data. But what goes for data is even more true of software. This is an endemic problem: folks buy an application, deploy it and use it for a while, then the excitement goes, usage declines, and the system gets neglected and unsustainable.

Vendors are at fault too, with upgrades and changing interfaces...

Absolutely. Part of our offering is ensuring that new releases are not disruptive. More from Invensys.


© Oil IT Journal - all rights reserved.