Reporting from as many conferences as we do means frequently listening to presentations that repeat themselves. Several presentations at this month’s ECIM data management conference in Haugesund, Norway addressed the problem of software interoperability and data exchange across the enterprise. This issue has been with us since we began reporting back in 1996, and it was something of an ‘old chestnut’ even then. Why then are we still talking about interoperability, silos and applications as data thieves? One reason is that the early promise of an architectural approach to the problem, built around a database (Finder was doing pretty well at the time and POSC, now Energistics, was getting a lot of attention with its Epicentre data model), has largely failed. Meanwhile, stand-alone apps, the Petrels, the Kingdoms, proliferating Excel spreadsheets and innumerable tools for production data have actually increased in prominence. Statoil’s Lars Olav Grøvik related that in one joint venture, the operator reported real oil production from a non-existent well, and that looking for data still delays projects. Unitization is a pain point even though modern information systems ‘should be able to produce an error-free unitization database’ (page 6).
While such issues persist, I would like to turn to some of the solutions that are available, particularly as this issue of Oil IT Journal is full of interoperability-oriented offerings. If you are into upstream applications, most can now trade data with each other. Boutiques like Petris, iStore and others have developed buses and data translators that let you play one app to the tune of another. In other fields like process control, tools like OSIsoft’s PI System (see page 5 for our report from the 2011 EU regional seminar) have more data adapters than you can imagine, as do systems from companies like ISS (see our interview with CEO Richard Pang on page 3). If you are of a strong disposition, you may still hanker after a central database solution to the problem, in which case the folks from Teradata would love to talk to you about their work with ConocoPhillips and Western Refining (page 7). On a smaller scale, this is the approach that Geologic Systems is taking with its PPDM-in-a-Box solution (page 10). Your integration needs may go beyond the upstream. What if you want to roll in data from your ERP system? There are plenty of solutions here too, from the aforementioned companies and, as of this month, from Stonebridge, which has announced a master data management solution tuned to P2 Energy Solutions’ software line-up. In next month’s issue we will be reporting on a new offering from Tibco’s OpenSpirit unit, which has added ERP integration leveraging Tibco’s ActiveMatrix BusinessWorks architecture.
There are plenty of tools out there, so why haven’t we got further along the road to interoperability? IBM data guru Sunil Soares presented an ECIM keynote on data governance and the enterprise architecture question (page 6). Soares believes that it is both desirable and feasible to bring the whole enchilada into a single IT framework. His use case was the question ‘how many employees do we have?’ This simple question seemingly defies current IT systems. Do you only count full time staff in the HR system? What about part time consultants in the badging system? Or some other category in SAP? Anyone who has worked on an integration effort will realize that this sort of question crops up all the time. If life were as simple as mapping from N_EMP in one system to EMPCOUNT in another the ‘problem’ would have been solved way back. Perhaps the real problem is the problem itself. If you can’t define what ‘an employee’ is, then no information system in the world will give you an unequivocal answer to the question ‘how many do we have?’
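A minimal sketch of the problem Soares describes. All system and field names here are invented for illustration; the point is that a naive field-to-field mapping cannot reconcile systems that each embed their own definition of ‘employee.’

```python
# Hypothetical illustration of the 'how many employees?' question.
# Three systems each hold people records, but each counts a different
# population, so each gives a different 'correct' headcount.
hr_system = [
    {"name": "Alice", "status": "full-time"},
    {"name": "Bob", "status": "full-time"},
]
badging_system = hr_system + [
    {"name": "Carol", "status": "consultant"},  # badged, but not in HR
]
erp_system = badging_system + [
    {"name": "Dave", "status": "contractor"},   # paid via ERP only
]

def headcount(records):
    """Each system's own notion of a headcount: just count its records."""
    return len(records)

# Mapping N_EMP in one system to EMPCOUNT in another reconciles nothing,
# because the underlying definitions differ.
answers = {
    "HR": headcount(hr_system),
    "badging": headcount(badging_system),
    "ERP": headcount(erp_system),
}
print(answers)  # prints {'HR': 2, 'badging': 3, 'ERP': 4}
```

Until the business agrees on which statuses count as ‘an employee,’ no amount of data plumbing will make the three answers converge.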
There is another difficulty with the EA approach: the extent to which you have to throw away your existing applications and infrastructure to make it work. The modeling approach is fine if you can be sure that your applications will be reading data from the EA, which may or may not be possible. If your solution is of the middleware/SOA variety, will it be able to talk to your existing data stores? How much of your data resides in the applications themselves? Geologic Systems’ Wes Baird describes the problem of what happens ‘here’ when you update a well ‘there’ (page 11). Grøvik also touched on this issue, calling for better separation between applications and data stores.
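The ‘update a well here, query it there’ problem can be sketched as follows. The well names, identifiers and attributes below are hypothetical; the pattern, two stores holding the same physical well under different identifiers with no shared master record, is the general one.

```python
# Hypothetical sketch: a project store and a corporate store hold the
# same physical well under different identifiers. An update to one
# silently leaves the other stale.
project_store = {"WELL-15/9-F-12": {"status": "producing", "td_m": 3205}}
corporate_store = {"15/9-F-12": {"status": "drilling", "td_m": 3100}}

# An application updates 'its' copy in the project store...
project_store["WELL-15/9-F-12"]["status"] = "shut-in"

# ...but with no shared master identifier and no synchronization
# mechanism, the corporate store still reports the old state.
print(project_store["WELL-15/9-F-12"]["status"])  # prints shut-in
print(corporate_store["15/9-F-12"]["status"])     # prints drilling
```

This is why calls like Grøvik’s for cleaner separation between applications and data stores keep coming back: as long as each application owns its own copy of the data, every update is a potential inconsistency.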
Which leads me to another Norwegian data effort, ‘integrated operations’ in its latest ‘high north’ (IOHN) flavor, which was highlighted at the 2011 Semantic Days conference in Norway earlier this year (page 7). This interoperability effort is based on the premise that all data sources and all applications will be retooled around semantic web standards, or will at least see semantic web wrappers developed for them. This is not going to happen any time soon.
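To see what the ‘semweb wrapper’ premise entails, here is a deliberately minimal sketch. The namespace URI and field names are invented; the idea is simply that a wrapper re-expresses an existing record as subject-predicate-object triples using shared identifiers, without touching the underlying store.

```python
# Hypothetical 'semweb wrapper': expose a plain legacy record as
# RDF-style (subject, predicate, object) triples. The namespace is
# invented for illustration.
NS = "http://example.org/upstream#"

def as_triples(well_id, record):
    """Wrap a record as triples, one per field, keyed to a shared URI."""
    subject = NS + well_id
    return [(subject, NS + key, value) for key, value in sorted(record.items())]

legacy_record = {"operator": "Acme", "status": "producing"}
triples = as_triples("well-123", legacy_record)
for triple in triples:
    print(triple)
```

Multiplying even this trivial exercise across every data source, application and vocabulary in an operator’s portfolio gives a sense of why retooling everything around such standards is a long-haul proposition.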
Let me now add another requirement from the design team. What if we need to roll in data from all our systems, ERP/SAP and so on, and share entitled data with our partners? On the face of it this is quite a big ‘other’ requirement. We have already had to rewrite all our data servers and all our apps, and now all our partners have to do the same? This is not going to happen either, largely because for every joule of energy spent investigating a ‘standard’ approach, a kilojoule goes into developing and marketing commercial systems. Witness last month’s lead, where we reported on the OSIsoft/Industrial Evolution solution to partner data sharing in the Gulf of Mexico.
To wind up on a lighter note, I will leave you with an observation from Landmark’s Janet Hicks, who thinks that data management is like dusting: ‘you may have done it today, but you will have to do it again tomorrow.’ Which leads me to the corollary of the repetitive presentation, the repetitive editorial. I plead guilty.
© Oil IT Journal - all rights reserved.