OITJ Interview—Sandy Esslemont, CEO Roxar

Roxar’s CEO talks to Oil IT Journal about the change of ownership, its plans for the future, the shortage of geoscience personnel and the IT cultural tussle centered on the digital oilfield.

Describe Capita’s acquisition of Roxar.

Our previous owner, Lisme, was formed to take Roxar off the stock exchange. Lisme acquired the outstanding 94% of Roxar shares—but always with an exit in mind. This past year has been particularly interesting for us, ending in a very welcome change of ownership.

Was it a good deal for Lisme?

The deal is valued at $200 million, which represents a very good return on their investment. They are happy! And so are we. We needed someone to help us move to the next level.

Can you update us on your financials?

Turnover was $130 million in 2005, with 69% from metering. About 25% of total revenues are from our subsea business, up from 10% in 2003. Subsea includes multi-phase metering, wet gas and sand monitoring, putting on the seabed what we already do on the surface. Software has grown to stay at around 30% of revenue—in other words, 10-15% annual growth. But we are now seeing a very hot market for personnel, as operators take a lot of our people. This is not very helpful in the long run, but it is what happens every time. We had planned to hire 100 people in 2006, half in our software division, but in the circumstances this will be a challenge. There is no problem with the technology; in fact we are turning down opportunities because of the people shortage. Geoscience and reservoir engineering are the real problem areas. The hardware side of the business is different, as there is a much bigger engineering labor pool to hire from. Software developers are no problem either.

This must make it hard to develop your consulting business?

That’s true for geoscience, but we are also targeting production and process with our IRPM offering* and our real-time model update. We are doing more and more in this space with WITSML and live MWD/LWD data, moving to ‘close the circle’ and proactively leverage more live data. This is not necessarily the same stretched market as geoscience.
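To give a flavor of what consuming live WITSML data involves, here is a minimal sketch that extracts curve mnemonics and data rows using only the Python standard library. The XML fragment is an illustrative, simplified WITSML-style log; real WITSML documents are namespaced and schema-versioned, and the element names used here are assumptions for the example.

```python
# Minimal sketch: pull curve mnemonics and data rows out of a
# simplified, WITSML-style log document. The fragment below is
# illustrative only, not a schema-complete WITSML log.
import xml.etree.ElementTree as ET

WITSML_FRAGMENT = """
<log>
  <logCurveInfo><mnemonic>DEPT</mnemonic><unit>m</unit></logCurveInfo>
  <logCurveInfo><mnemonic>ROP</mnemonic><unit>m/h</unit></logCurveInfo>
  <logData>
    <data>1500.0,32.1</data>
    <data>1500.5,30.8</data>
  </logData>
</log>
"""

def parse_log(xml_text):
    """Return (curve mnemonics, data rows as mnemonic->value dicts)."""
    root = ET.fromstring(xml_text)
    curves = [ci.findtext("mnemonic") for ci in root.iter("logCurveInfo")]
    rows = [
        dict(zip(curves, (float(v) for v in d.text.split(","))))
        for d in root.iter("data")
    ]
    return curves, rows

curves, rows = parse_log(WITSML_FRAGMENT)
print(curves)          # ['DEPT', 'ROP']
print(rows[0]["ROP"])  # 32.1
```

In a live MWD/LWD feed the same pattern would run against incremental log updates polled from a WITSML server, rather than a static string.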

Is Dacqus* still relevant here?

Actually we have split into a data acquisition and flow measurement group and the RSS (software) unit. RSS includes our FieldWatch group, which is working on a field-wide real-time data acquisition package, ResView 2.0, that we acquired in Russia last year. This will be rolled out as FieldWatch in the next year or so, and we are planning to roll out more real-time data acquisition and analysis tools.

That still represents a bit of a gap to Irap?

Yes but we can do live data to Tempest (our reservoir flow modeler) and even if real time data integration with Irap is not feasible now, this is what we are striving for.

So you’re not thinking of an Irap spin-off?

We are still putting a huge development effort into Irap, with teams in Russia and California, near Stanford. We are also re-engineering our software to allow for development of tools outside of RMS—not modules as before. RMS is a fully integrated suite, but FracPerm, which we released last year, was developed as a stand-alone Windows-based product. We now have four development teams working on FieldWatch (real-time production optimization), FracPerm (fractured reservoir characterization), Tempest (fluid flow simulation) and Irap RMS (geo-modeling).

How do these tools interconnect?

FracPerm for instance is a Windows-based product and is not closely coupled with RMS. Over time these tools will be integrated as a suite. But we don’t want to have everything in the same product. It takes too long to develop and deploy an Irap module. A Petrel user can buy FracPerm, an Eclipse user can buy Tempest.

Schlumberger is pushing Petrel as a development environment for plug-ins. Is this of interest?

No. Schlumberger usually ends up developing its own stuff! We want Petrel customers to move to RMS!

Are you involved with PRODML?

We are keeping an eye on it. We definitely subscribe to this activity.

We have written before about a culture clash between the upstream and the process control communities. Do you see different ‘solutions’ to the same problem in WITSML/PRODML and SCADA S95 etc?

We have already seen this in the hardware space. Downhole temperature and pressure measurement may be specified by the reservoir community but the equipment is more process-engineering based. More often than not the two cultures don’t communicate. There is a big divide at asset level between production and reservoir.

How is this going to impact the digital oilfield (DO)?

That’s a good question! Some think of the DO as facilities, some think it is reservoir. We have one person working full time studying DO solutions, just trying to figure out what they are and what the business model will be. In fact even operators are challenged within their own organizations. For us, reservoir engineering should sit in the middle of the DO. I see the DO as a big ‘de-bottlenecking’ exercise. Petroleum and reservoir engineering have a lot to learn from process. In the downhole domain, there is very little automation in the form of closed-loop decision making. We are still deploying systems with a single valve controlled from the surface. This is laughable compared with what happens in a plant. It’s like things were 30 years ago. In the end all this will only work if both camps work together (as we do). Today the DO is just one big disconnect. It will be very interesting to see what the IT picture of the DO looks like in five years’ time. But before then, I believe that the people crisis will force technology adoption.

* See our previous interview with Sandy Esslemont in the April 2004 issue of Oil IT Journal.

This article originally appeared in Oil IT Journal 2006 Issue # 3.
