Different philosophies of data management were outlined as the heavyweights slugged it out on pretty well all fronts simultaneously. Landmark's offering, a multi-vision presentation of Robert Peebler's vision of the "Square-eyed Enterprise", has been given by so many people at so many locations that we will not trouble you with it here. For those who wish to know how Landmark will cure their premature baldness and generally improve their sex life, you can read all about it in the October edition of The Leading Edge.
Geoquest - aided and abetted by Geco-Prakla - gave an altogether more prosaic version of how their wares will help an E&P shop to manage its data. From the bottom up, the Geoquest view of the data management universe starts with the SeisDB, LogDB, AssetDB etc. range of data-store products, which are all run through a master catalogue resident in a "PPDM compliant" database within the Finder product. Atop all this sit the applications, themselves wired together via GeoFrame, a "POSC compliant" data model used for project databases. (The relative positioning of the POSC and PPDM data models is an amusing exercise in arm-waving around a minefield of misrepresentation. For some, POSC is the data repository, while PPDM is the data delivery product in the project database. Others still actually have a sandwich whereby POSC sits on PPDM, which sits on POSC.)
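The bottom-up arrangement described above can be caricatured in a few lines of code: a master catalogue records, for each dataset, which specialist store actually holds the bulk data, and applications go through the catalogue rather than to the stores directly. This is our own illustrative sketch - the field names and locator strings are hypothetical, not Geoquest's actual schema.

```python
# Hypothetical sketch of a Finder-style master catalogue sitting over
# specialist data stores (SeisDB, LogDB, ...). All names are illustrative.
from dataclasses import dataclass


@dataclass
class CatalogueEntry:
    dataset_id: str   # key the applications know
    data_type: str    # "seismic", "log", ...
    store: str        # which product holds the bulk data
    location: str     # opaque reference understood by that store


catalogue: dict[str, CatalogueEntry] = {}


def register(entry: CatalogueEntry) -> None:
    """Each data store registers its holdings with the master catalogue."""
    catalogue[entry.dataset_id] = entry


def locate(dataset_id: str) -> CatalogueEntry:
    """Applications ask the catalogue first, then fetch from the named store."""
    return catalogue[dataset_id]


register(CatalogueEntry("survey-001", "seismic", "SeisDB", "/seisdb/vol/001"))
register(CatalogueEntry("well-A7", "log", "LogDB", "logdb://wells/A7"))

entry = locate("survey-001")
print(entry.store, entry.location)
```

The point of the pattern is that the catalogue only holds metadata and pointers; the bulk data never passes through the "PPDM compliant" database itself.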
Geco-Prakla have been dragged in on the data management act, but only to say that they too use Finder to manage their data - surprise, surprise. What would have been really interesting would have been to hear how a processing shop really handles the workflow of a multi-terabyte-per-month throughput of data. We might learn something. According to John Kingston, before Finder this was done on spreadsheets. If this is the case, it would seem like an "evolution" running against the current. Given the choice, would you rather manipulate your data with Excel or Oracle forms? It's a strange world! Schlumberger are now positioning GeoFrame as fully POSC compliant, but to what end, and to what extent, is it compliant? Like almost all real-world implementations of the POSC model, GeoFrame steers clear of POSC when it comes to actual data storage.
Survey of Data Management
As Neil McNaughton (who he? ed.) put it in his talk on The Survey of Data Management in the E&P Business, the performance limits of the relational database are such that storing bulk data within the database is a non-starter in the commercial world. Citing a paper given at the EAGE in Amsterdam by Bril and de Groot, McNaughton illustrated this with the following statistic: A 1000 x 1000 point grid extracted from the relational projection of Epicentre took 3 hours. Reading the same data from a binary file took 3 seconds. So it is common knowledge that bulk data is not stored in Epicentre, and Geoquest readily admit this. But why are we using Epicentre?
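The three-hours-versus-three-seconds gap is easy to see in miniature: a grid normalised into relational tuples must be reassembled cell by cell, while a flat binary file yields the same grid in a single contiguous read. The sketch below is ours, not Bril and de Groot's benchmark; the grid values and file layout are invented for illustration.

```python
# Illustrative contrast: reading a 1000 x 1000 grid as one binary block
# versus rebuilding it from (row, col, value) tuples, a caricature of a
# relational projection. Data and layout are hypothetical.
import os
import tempfile
from array import array

NROWS, NCOLS = 1000, 1000
grid = array("f", (float(i % 97) for i in range(NROWS * NCOLS)))

# Bulk-file route: one contiguous write, one contiguous read.
path = os.path.join(tempfile.mkdtemp(), "grid.bin")
with open(path, "wb") as f:
    grid.tofile(f)

with open(path, "rb") as f:
    bulk = array("f")
    bulk.fromfile(f, NROWS * NCOLS)

# "Relational" route: a million tuples, reassembled one cell at a time,
# with per-cell addressing overhead on every single value.
tuples = [(i // NCOLS, i % NCOLS, grid[i]) for i in range(NROWS * NCOLS)]
rebuilt = array("f", bytes(4 * NROWS * NCOLS))
for r, c, v in tuples:
    rebuilt[r * NCOLS + c] = v
```

Both routes recover the identical grid; the difference is purely in how many round trips stand between the application and its data, which is exactly why real-world stores keep bulk data out of the relational model.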
Wasn't there some promise of interoperability? How can applications interoperate if the data, their very life blood, is stored in proprietary formats behind a POSC-compliant facade? Privately, Geoquest admit that interoperability within GeoFrame is limited to companies who acquire their (proprietary) API. They further admit that it is very unlikely that Landmark would either want to, or be allowed to, acquire the GeoFrame API, so that the problem we all first thought of, that of trading data between Landmark and Geoquest apps, will not be solved by this route. Notwithstanding these philosophical observations, Geoquest made a valiant attempt to re-package their product line as a data management system, while Landmark are on something of another plane right now - hence the above scoreline.