March 1997

Schlumberger companies win the contract prizes from Triton and Mobil (March 1997)

GeoQuest and Omnes have signed world-wide contracts with Triton Energy and Mobil for the provision of a single source solution for communications and IT. Landmark announces the first 'synchronized release' of its software line – Release97.

Two crucial three-year contracts were signed in the last month by GeoQuest. They cover the provision of worldwide software, hardware and support to both Triton Energy Limited and Mobil Technology Company. Both contracts have been awarded to GeoQuest together with Omnes, a joint venture between Schlumberger and Cable & Wireless plc. This venture was initiated to "provide a single source for communications and information technology solutions to the energy industry worldwide." Omnes, through seven regional offices, operates in over 50 countries. These agreements show the increasing importance of communications in geoscience computing. It is no longer sufficient to be "just" a software house; a solutions provider must be able to provide and support secure communications over the extended network that makes up the modern international E&P company. The Omnes joint venture is reminiscent of Landmark's alliance with Ross Perot's EDS information services group, featured in PDM's launch issue way back in July 1996.


Triton Energy awarded GeoQuest a three-year master agreement to provide geoscience applications, data management software, desktop computer hardware and software support, and field communications to some 18 Triton offices worldwide. "Quality software and timely, effective IT support are critical to the efficiency of our exploration and production teams," stated David Martin, Triton's director of information technology. "The unique combination of GeoQuest's advanced technologies and Omnes' global IT service capability will help facilitate the development of Triton's significant resource base and our assessment of new ventures." GeoQuest will provide four of its industry-leading reservoir characterisation and data management software applications to Triton offices worldwide.


Omnes will provide IT and communications support including remote desktop management and software deployment, data connectivity to remote Triton sites, and full-time on-site support at Triton sites in Dallas, Bogota and London. Omnes' ServiceDesk will provide a single point of contact 24 hours per day, 365 days per year for all Omnes and GeoQuest products and services to Triton users worldwide. "Together, the GeoQuest/Omnes team will deliver maximum value and service quality to meet Triton's global support needs," stated Magne Sveen, GeoQuest vice president of operations. "By outsourcing the geoscience and engineering information technology infrastructure, E&P companies worldwide are focusing on their core businesses while reducing support costs." Outsourcing non-core activities, such as the IT infrastructure, is a cost-effective method to successfully manage the continually increasing information load.

Finder the key

Triton Energy Limited is a Dallas-based international oil and gas exploration company primarily focused on high-potential prospects around the world. Triton has exploration activities underway in Colombia, Ecuador, Malaysia-Thailand, China, Guatemala, Italy and Oman, and is negotiating oil and gas opportunities in other countries. For Mobil Technology Company, GeoQuest is to enter into a "worldwide enhanced supplier relationship" under which GeoQuest will provide its industry-leading Finder data management system, including the production management and AssetDB server extensions, to 13 Mobil locations in seven countries. Finder will enable Mobil to "maximize the value of their data assets, ensuring efficient data storage, secure access and rapid delivery of the data to Mobil's geoscientists and engineers".

Perfect partner

Speaking at the Stephenson and Associates E&P Data Management '96 conference, held in London last September, Mike Mrozek of Mobil North Sea Ltd. described the processes involved in making this decision in his paper "Finding your Perfect (data management) Partner". Mrozek described Mobil's prime business objective as a 50% cut in Mobil's IT costs, to be achieved by drastically reducing the number of Mobil's suppliers. The prime technical objective was "to define requirements for a Master Data Store environment and recommend a single vendor partner for Mobil E&P globally." The decision as to the ideal partner was made on the basis of four criteria: business fit, economic fit, technical fit and strategic fit. In the last category the vendor was required to "share Mobil's vision, philosophy and near and long term plan for technical computing in E&P".


One crucial element of Mobil's philosophy has historically been a strong commitment to POSC, and POSC membership (at least) was a specific requirement for Mobil. GeoQuest's strong recent "POSCturing" has been a gamble that has paid off in this context. Talking of the business drivers for the deal, Larry Bellomy, the IT business manager for Mobil Technology Company, stated that "The growth in the amount of data and information continues to pose challenges for Mobil. This partnership leverages standard tools and data management best practices through our partner GeoQuest, while allowing our people to focus on what they do best – using the data to develop new business opportunities." Services to be provided by GeoQuest will include data management consulting services, installation of software and migration of data from legacy systems, as well as on-site support for data loading and database administration. Extensive training will be provided to Mobil's data administration and user personnel. "We are looking forward to building an enhanced supplier relationship with Mobil personnel worldwide," said Magne Sveen, GeoQuest vice president of operations. "Using the Finder data management system, Mobil's geoscientists and engineers will be better able to access and exchange information and data as they focus on E&P opportunities worldwide." Mobil Technology Company is responsible for providing research and development and consulting services for the global upstream and downstream business of Mobil Oil Corporation.


What media for the Petabyte? (March 1997)

PDM has invited manufacturers of high density media (HDM) to supply information, from which we have compiled the following analysis of capacity and cost. The current E&P favourite, IBM's Magstar, is also the most expensive medium on the market.

As seismic surveys acquire more and more data at an ever-increasing rate, the conventional technologies for data recording have been showing the strain over the last few years, and recently the debate has been hotting up as to the probable replacement for the 3480/3490 cartridges of the '80s. A modern seismic survey can generate around 2 Terabytes of data. Written to the last-generation industry standard 3480 cartridge with 200 MB capacity, this survey would take up 10,000 tapes. Double that for dual recording and you can see that tape handling is starting to become a serious problem.
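
The tape-count arithmetic above is easy to reproduce. A minimal sketch in Python (the 2 TB survey and 200 MB cartridge figures are those quoted above; the function name is ours):

```python
import math

def cartridges_needed(survey_bytes, cartridge_bytes, dual_copy=True):
    """Cartridges required to hold a survey, rounded up, per copy."""
    copies = 2 if dual_copy else 1
    return math.ceil(survey_bytes / cartridge_bytes) * copies

TB = 10**12
MB = 10**6

# A 2 Terabyte survey on 200 MB 3480 cartridges, dual recorded:
print(cartridges_needed(2 * TB, 200 * MB))  # 20000
```

The same sum shows why a higher capacity cartridge eases the handling problem: on a 10 GB cartridge the same survey needs only 400 tapes, dual recorded.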

Some history

The race for a replacement has been taking place over quite a time span, and solutions have been cropping up from a variety of sources. Firstly the 3480, just like the 9-track before it, underwent considerable development itself, with increased tape length and number of tracks. The top of the line 3490E pushed the capacity to 800 MB, and illustrated a general point: each time a new tape appears, it undergoes a series of developments and usually sees its life span extended as its own technology is pushed to the limit. With the 3490E, the last generation of cartridges reached the end of its effective extensibility. This was in 1991/92. About that time the seismic industry was beginning to demand much higher capacities, and it was easy to see then that a change of media would be necessary to keep pace. Interestingly, IBM was testing the next generation, the 3590, even before this date, in 1990, but such is the product development cycle and the rigorous nature of the testing program that the 3590 was not issued until 1995.

Window of opportunity

This left quite a gap in the tape market and represented a window of opportunity for IBM's competitors. This competition came mainly from the data versions of the digital broadcast video marketplace, with a serious challenge from Emass using the D2 tape designed and manufactured by Ampex. Some geophysical contractors installed these machines in the early '90s, and today, with the mature D-D2 version of the technology, Ampex holds the record for capacity per tape and also for the lowest tape cost per megabyte. Notwithstanding this initial success, the D2 systems do not appear to have totally convinced the seismic industry. The rugged conditions of field seismic acquisition are far removed from a bank's data warehouse, and during the last year or two a new battle has been raging between IBM's 3590, which was finally issued in early 1995, and a new offering from StorageTek, again a derivative of a video broadcast format, the DD-3. Other players in the game were Sony, with two data formats derived from digital video, ID-1 and DTF, Quantum with its DLT, and Cybernetics with DTF.

Bad timing

As ever, in such matters timing is everything, and the DD-3 format came out just a bit too late to change the geophysicists' minds, which by mid-1996 were pretty well made up in favour of IBM. Despite a relatively low capacity of 10 GB (in its first incarnation) and probably the highest tape cost per megabyte of all, the IBM 3590 has won the battle to be the next industry standard for acquisition. Does that then mean that it will prevail throughout the industry? Certainly, once you have cornered the acquisition marketplace, the processing house is yours, as indeed is the tape archive - at least the field tapes. This is starting to look like total domination! However, the chain of recording, processing, interpretation, trading and storage of seismic data involves a considerable amount of movement and reformatting of the data en route. The data will probably move from one medium to another - generally from tape to disk - several times during its lifetime. This means that more than one medium may be used in the process, so there may be a place for the competition yet: for instance on a company's data server, where the higher capacity of the helicoidal technology may be a persuasive argument. Indeed PECC use STK's DD-3 technology in their Queensland National Data Bank in Australia, but this is naturally not the case for the IBM-dominated PetroBank and GeoBank sites.

Record every geophone?

Another force for change would be a quantum jump in the data to be recorded. Supposing that someone decided that we should record every geophone? That would put the cat among the pigeons, and might open up the race again. In this issue we look behind the commercial arguments involved in this battle of the tape formats into some of the technical issues that have driven the seismic industry to their decision. We have polled the major manufacturers of tape solutions and offer our readers an opportunity to compare for themselves the merits of the various offerings. Graph 1 shows the capacity for the different formats. Almost all of the media are well below the 50 GB mark, with the 3590 a skimpy 10 GB. Having said that, this format is well suited to a modern 3D seismic boat, which turns over a leisurely 5 cartridges per day for a 6-streamer boat. Indeed the massive capacity of the largest DD-2 drive is achieved with a fairly substantial physical cartridge size, perhaps 10 times the volume of the 3590 cartridge. Our second graph is of tape cost per Gigabyte. This should be treated with caution, since media costs are akin to state secrets and are very dependent on time, place and market conditions. What is true is that the 3590 is one of the most expensive at over $5 per Gigabyte. This price is anticipated to fall as volumes rise, and if and when IBM allows a second manufacturer in on the act. This survey has been something of a new venture for PDM and we would appreciate your feedback. Some of the issues are still quite contentious and we hope, if nothing else, to spark off some debate.
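
The cost-per-gigabyte comparison in our second graph reduces to a one-liner. The cartridge price below is purely illustrative (media prices being, as noted, akin to state secrets), chosen only to reproduce the over-$5/GB order of magnitude cited for the 3590:

```python
def cost_per_gb(cartridge_price_usd, capacity_gb):
    """Media cost per gigabyte of native (uncompressed) capacity."""
    return cartridge_price_usd / capacity_gb

# Illustrative assumption: a 10 GB 3590 cartridge at around $55
print(round(cost_per_gb(55, 10), 2))  # 5.5, i.e. over $5 per GB
```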


More information on Petabyte storage (March 1997)

The results of the 'Petabyte' survey go beyond what we could reasonably include in an issue of PDM. If you are interested in more information on this subject, please contact The Data Room, 7 Rue des Verrieres, 92310 Sevres, France. Phone and Fax (33) 1 4623 9596, email


IBM 3590 (Magstar) (March 1997)

PDM provides some technical background to the IBM Magstar High Density tape storage system.

IBM claim a worldwide market in the oil business of around 1 billion dollars. IBM considers the exploration market segment a powerful driver in storage systems because of the high demands that seismic recording puts on volumes, I/O bandwidth and reliability. Consequently E&P gets an early look at most of IBM's cutting edge technology, which appears to have served IBM well in the battle for the High Density Media format for the current generation of seismic recorders. It is estimated that the E&P sector installs around 10% of the worldwide number of tape drives, but this market share increases to 25% of the world's total when the number of media units is considered. Apart from IBM's 9-track (a.k.a. 3420) tape credentials (early SEG tape formats actually specified an IBM hub, and the proprietary EBCDIC SEG header is a relic of this blatant favouritism), the later 3480/3490 drives are claimed to be the most widely installed tape device in the world.

Form factor

The adoption of the 3480 form factor (jargon for the physical dimensions of the tape itself) for the new 3590 cartridge was a further argument in favour of the latest IBM offering.

Magstar - as the 3590 is also known - allows for multi-volume files and multi-file volumes. Furthermore the drive hardware is scaleable, from a single tape drive, through a single drive with autoload, to full-blown robotics systems. IBM claim the cost/performance of the 3590 even makes it a candidate for on-line storage, with drives theoretically being able to replace disks in high performance computing environments. Such drives may even be striped, i.e. data can be written simultaneously across several different drives, effectively multiplying I/O bandwidth by the number of drives in use.
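
The striping idea can be sketched in a few lines. This is an illustration of the round-robin principle only, not of any IBM implementation:

```python
def stripe(data, n_drives, block_size):
    """Deal fixed-size blocks round-robin across n_drives; aggregate
    write bandwidth then scales roughly with the number of drives."""
    drives = [bytearray() for _ in range(n_drives)]
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    for i, block in enumerate(blocks):
        drives[i % n_drives].extend(block)
    return drives

def unstripe(drives, block_size, total_len):
    """Reassemble the original byte stream from the striped drives."""
    out = bytearray()
    offsets = [0] * len(drives)
    i = 0
    while len(out) < total_len:
        d = i % len(drives)
        out.extend(drives[d][offsets[d]:offsets[d] + block_size])
        offsets[d] += block_size
        i += 1
    return bytes(out)

data = bytes(range(256)) * 4          # a 1 KB sample "record"
striped = stripe(data, n_drives=4, block_size=64)
assert unstripe(striped, 64, len(data)) == data
```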


In the past, drive hardware specifications have been, or have become, public domain information, allowing other manufacturers to compete with IBM. The 3590's innovative magnetoresistive head technology is, for the time being, a closely guarded secret. Maybe IBM have learned something from their "giveaway" attitude to the PC technology they pioneered. As is usual with new technologies, manufacturers manage the expectations of their clients and the savvy of their engineers carefully. Not too much performance at first, a bit more later on, then a bit more, until the last drop of potential has been squeezed out of the technology (and the last bean has been extracted from the clients). Thus Magstar capacity, today 10 GB uncompressed per cartridge, is set to double, and then to quadruple, over the lifetime of the product. The 3590 would appear to have won over the hearts and minds of the seismic acquisition and processing industry. This is a highly significant coup for both IBM and 3M, who manufacture the tapes that will become de-facto standard issue in the estimated 300 seismic processing centres around the world.


The War of the Wear (March 1997)

Tape wear is presented by competing vendors as a market differentiator – IBM stresses the inherent advantage of linear recording, but StorageTek does not agree.

IBM make great play in their marketing strategy of the simplicity of the linear tape drive. They claim that the complex tape path involved in helical scan recording causes excessive head wear and potential damage to the tape. They further argue that in the Magstar the tape is not even in physical contact with the head, reducing wear on head and tape alike. StorageTek retort that an independent study by researchers at Carnegie Mellon University has shown that the Data D-3 media used in the Redwood cartridge "meets or exceeds the life of existing high-end data recording technologies". Other tests involved baking the tape in a high humidity environment, and a "shoe shine test" involving destructive testing by reading, reversing and re-reading the media. Results of this test showed reliability in excess of 20,000 passes. A final gruelling test involved stopping the tape while the scanning head continued to rotate, with the tape under tension. Data loss was only observable after some 40 hours of this torture.


Helicoidal (Video Derived) Tape Formats (March 1997)

Some fundamental differences exist in how data is written to magnetic media, PDM attempts to explain the pros and cons of linear and helicoidal techniques.

The technology used for the storage of digital data on tape falls into two camps, linear and helicoidal. The lineage of the former is well known to the geophysical industry, which has been a major consumer of linear tapes since the early days of 21-track, 1-inch tapes, through the evolving 9-track tape with densities increasing from 800 bits per inch to 6250 bpi. In the '80s the round reel tapes began to be superseded by the still-linear cartridge tapes of the 3480 format, and today, from the same stable, we have the 3590, which shares the 3480's external physical characteristics.

The demands for bandwidth in the last decade or so have pushed the scientific world to look for higher capacity drives than current linear tape technology could provide, and there has been considerable development of non-linear, helicoidal devices from a variety of manufacturers. These devices all come from the world of video.

VCR technology

The helicoidal technique is used in home video recorders and, for those who have peeked inside their VCR, involves dragging the tape out of its normal path, over a skewed, spinning head, which reads and writes the data diagonally across the tape. More recently the broadcast world has gone digital, and it is actually sampled (digitised) data that is written to tape. This data is written in frames, or individual pictures, and the technology is designed around the writing of one frame per scan of the head. The helicoidal technique effectively multiplies the useable length of the tape, while allowing the linear speed of the tape to be kept within reasonable limits. The idea of recording data to these devices was first spotted by the US Navy, whose sonar recordings from their submarine detection arrays were pushing the linear tape technology beyond its limits. The seismic industry, whose activity involves a different kind of underwater recording, followed close behind.
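
The speed argument can be made concrete: the head writes at the drum's peripheral speed, almost independently of how slowly the tape itself advances. A sketch with purely illustrative numbers (no particular product's drum size or rotation rate is implied):

```python
import math

def head_to_tape_speed(drum_diameter_m, drum_rpm):
    """Peripheral writing speed of a helical scan head:
    drum circumference times revolutions per second."""
    return math.pi * drum_diameter_m * drum_rpm / 60.0

# e.g. a 62 mm drum spinning at 9000 rpm writes at ~29 m/s,
# while the tape itself need only move at a few cm/s.
print(round(head_to_tape_speed(0.062, 9000), 1))  # 29.2
```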

To blip or not..

The passage from analogue to digital video potentially offered the data storage world a free lunch, with the advent of a new, cheap (because of the high volumes) digital technology. Things were not quite so simple. The video industry, even when recording high quality for broadcast, can put up with a certain amount of noise on the image, partly because no one is going to be killed by, or lose money from, a blip on a video, and partly because such blips are rather amenable to on-the-fly "error hiding" processing. Digital Video, which has been developed and standardised under various ANSI classifications, appears in a variety of manifestations: D1 (Sony), metal oxide tape, high quality for broadcast; D2 (Ampex/Sony), metal particle tape, lower spec but higher capacity; and D3 (Panasonic), although the latter has not gained widespread acceptance in the broadcast industry and Panasonic is now rolling out a D5 format.

Confusion labelling

It should be noted that, contrary to received ideas, the D1, 2, 3 etc. nomenclature does not necessarily represent an evolution towards greater sophistication and capacity. In fact the highest specification drives in the video industry are based on the D1 technology. In what follows we refer to the initial video specs with a DV prefix. The move from digital video to data has involved considerable redesign of the hardware to reduce the error rate to acceptable levels for the computing business. This has generally been done by third parties. Emass and Ampex developed their "DD2" format (also known as DST) from DV2. StorageTek acquired Panasonic's DV3 helical scan technology and produced their "D3" Redwood drives for a rumoured $100 million investment. Meanwhile Sony produced a digital tape format, ID-1, from the DV1 spec, and is now shipping a new format, DTF, based on the Digital Betacam format. This format is used in Sony's PetaSite robotics with a mind-boggling 2.5 Petabyte maximum capacity.


The War of the Robots (March 1997)

IBM and StorageTek are currently competing both on tape formats (with STK's D3 and IBM's Magstar) and on robotic systems (with IBM's 3494 library and STK's Powderhorn and other tape silos).

Both the competing tape formats share the same external physical dimensions, theoretically allowing, for instance, 3590s to be used in STK silos. Of course the new tapes would require new drives to be installed in the silos, which would require some degree of co-operation between the robotics and the drive manufacturers. This co-operation has been conspicuous by its absence until recently, mainly because of STK's large share of the robotics market worldwide. This made STK less than enthusiastic about IBM installing their drives and tapes into the large STK installed base, particularly during the roll-out of the Redwood D-3 cartridge. Arm-twisting by STK's main clients in the geophysical industry at the last SEG convention has eased the situation, to the extent that STK no longer consider it a criminal offence to mix and match the technologies.


Bandwidth and I/O (March 1997)

Bandwidth – the speed at which data can be moved to and from the tape drive – is a crucial factor in choosing a tape storage system. Next generation interfaces are set to push data transfer rates from today's 10 MB/sec to over 100 MB/sec.

As processing demands increase, what looked yesterday like white hot technology has a sort of cool air to it, and the physical limitations of an I/O spec can become painfully real. Hardware cannot be considered in isolation, and particularly if a distributed, multi-vendor computing environment is to be supported, thought should be given to what type of interface is, and will be, supported by a drive. Bandwidth is not the only consideration; issues such as the maximum distance between peripherals and the server, and the physical cable media required, all come into play. The general issues of networking hardware will be looked at in a future edition of PDM; in the meantime we'll just take a quick look at some of the options for connecting tapes and disks to the workstation.

The current standard interface is SCSI which in its latest manifestations (WIDE and ULTRA WIDE) supports cable lengths of around 1.5 metres, and maximum bandwidth of 40 Megabytes per second (MB/s). Next in the pecking order comes Quantum's Low Voltage Differential SCSI which has been pushed to give 80 MB/s over a 25 metre cable. Fiber Channel is the ultimate specification today, offering 160 MB/s over 500 metre distances, and thus being well suited to a file serving rôle on a large departmental server.

Exponential growth

Fiber Channel was initially approved as a standard in 1993, and in 1994 Sun and HP formed the Fiber Channel Systems Initiative. The rationale behind this alliance was the exponential growth in demand for storage capacity. For 1998, an IDC estimate projects some 500,000 hosts using Fiber Channel, for a market of 2 billion dollars worldwide. Drivers in the design of Fiber Channel were the aforementioned needs for capacity and greater physical distance, particularly in the context of networked multi-media, video on demand and Inter/Intranet. The ultimate potential of a Fiber Channel network is staggering. The scaleable architecture offers Gigabit bandwidth over 10 kilometers and uses a DB9 type 4 wire connector. Networks and peripherals can share the same physical medium. Ancor are very active in this field and have a Fiber Array product consisting of up to 112 drives of 9 GB capacity, giving around 1 TB per controller. A neat safety device is a battery backed-up cache of 128 MB, so that if a controller goes down, the cache can be removed and the controller swapped out. This functionality is of considerable importance in transaction processing, where your bank is, for instance, crediting your account.
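
The Fiber Array capacity quoted above is simple arithmetic, reproduced here as a sanity check (the figures are those in the text):

```python
def array_capacity_gb(n_drives, drive_capacity_gb):
    """Raw capacity of a drive array, ignoring any parity overhead."""
    return n_drives * drive_capacity_gb

# Ancor's quoted configuration: 112 drives of 9 GB each
print(array_capacity_gb(112, 9))  # 1008 GB, i.e. around 1 TB per controller
```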


Bandwidth – what is it anyhow? (March 1997)

For those of you who may be new to the game, or need a reminder of that long-forgotten course in information theory, bandwidth is the capacity to move information around.

Bandwidth can be measured in the frequency domain, so that a high capacity ATM link will be capable of transmitting some 155 Megabits per second. Note a potential confusion here: the computing world thinks in bytes (8 bits), whereas the telecommunications world is more economical with its bandwidth, meting it out in individual bits. The information content of a message is the product of bandwidth and time, so that you can overcome bandwidth limitations if you are prepared to wait. This may be acceptable for archival purposes, but on a seismic boat, recording has to be performed at high speed in real time.
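
The bits-versus-bytes trap and the bandwidth-times-time trade-off can both be captured in one small function (the link and survey figures below are taken from this issue; the function itself is ours):

```python
def transfer_time_seconds(data_bytes, link_megabits_per_sec):
    """Time to move data over a link, converting the computing
    convention (bytes) to the telecoms one (bits per second)."""
    bits = data_bytes * 8
    return bits / (link_megabits_per_sec * 10**6)

TB = 10**12
# Moving a 2 TB survey over a 155 Megabit/s ATM link:
hours = transfer_time_seconds(2 * TB, 155) / 3600
print(round(hours, 1))  # about 28.7 hours - fine for archive, not for real time
```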


The Other Tape Contenders (March 1997)

The small systems world (PC's and workstations) has its own culture of backup media, some of which have been extended to make up quite credible robotic systems of multi-terabyte capacity.

Again the helical vs. linear argument rages, with a variety of linear contenders: QIC and Quantum's Digital Linear Tape (DLT). On the helical side of the equation we have Exabyte, derived from the 8mm video standard, and digital systems using 4mm Digital Audio Tape. The place of these systems in the geophysical scheme of things is hard to evaluate. Exabyte is quite widely used in departmental backup and even in smaller processing shops, but these smaller systems have not really caught on in the mainstream seismic recording business. Is this prejudice?

An interesting contender also comes from IBM in the form of the "mini" Magstar MP (featured in the December 1996 edition of PDM). While IBM understandably do not wish to complicate matters by comparing the MP directly with the 3590, the price performance of the MP is fairly astonishing. A US retail price of $13,500 for the entry-level 20-tape mini robot offers 100 GB of uncompressed storage.


Over 800 attend Landmark's Worldwide Technology Forum (March 1997)

Landmark Graphics Corporation (LGC) held the 1997 Landmark Worldwide Technology Forum in Houston in February and it proved to be both a popular and successful event.

First and foremost a platform for an exchange of technical information from both Landmark's clients and their own technical team, the Forum also provided a platform for the "vision" of LGC's president and CEO, Bob Peebler. Citing industry-wide "revitalised optimism about the dynamics of finding and producing oil and gas", Peebler went on to describe LGC's position as "a participant in the new age of knowledge based oil companies that are finding and producing oil and gas reserves that were previously impossible to detect and uneconomic to produce". Peebler stated LGC's goal as being "to deliver the industry's broadest, most integrated and innovative suite of solutions to increase the asset teams' productivity across the oil field life cycle". We have listed some of the contributions likely to interest readers of PDM below; please contact LGC for further details.


Integration and data exchange between OpenWorks and third-party products was featured, covering fault seal analysis with FAPS and geostatistics with ISATIS. Data management issues were addressed in various talks. Amoco Canada described how a single shared data repository reduced a project's data management overhead from 70% to 30%, while PanCanadian described their sophisticated 10 terabyte seismic data store built around Landmark's Common Access Interface, utilising third-party products such as Panther's SDMS and running on IBM's ADSM archive software. Many case histories described advanced utilisation of LGC products; of particular interest was the contribution from Amoco describing their Post Appraisal and Archival methodology. This rigorous approach to post mortem analysis of drilling prospects illustrates how far we have come since the days when a dry hole was regarded as an embarrassment best forgotten.

800 attendees

Prior to LGC's Technology Forum we speculated as to the wisdom of dividing the conference scene into LGC and GeoQuest events, and indeed when it comes to debating general G&G issues, the AAPG, SEG etc. provide us with quite adequate forums. The 800 attendees at the LGC event have proved that there definitely is a market for these product focused events, and indeed the polarisation of the marketplace makes separate conferences an inevitability. Boy we have moved on some from those heady days when everyone was going on about interoperability!


Epicentre Exchange Workshop (March 1997)

Prism have announced a Workshop on Epicentre Exchange to be held Wednesday March 5th in POSC's Houston offices and again on Thursday March 27th at the Shell offices, Rijswijk, Netherlands

The following topics are on the agenda: Epicentre Exchange Manager – Solving Business Problems; Technical Overview; Usage Scenarios; Database Lifecycle; Epicentre Exchange Manager for POSC/CAESAR models; Future Plans – Multi Model Data Exchange (MMDTI).

To register please E-mail:


New G&G Website (March 1997)

A new G&G website called the Petroleum Exploration Internet Resources is 'a voluntary site providing access to prime internet resources of interest to petroleum geologists, geophysicists and geochemists in the business of hydrocarbon exploration worldwide.'

Check it out at!~poppet!home.htm. It's probably a better way of finding what you are looking for than Yahoo!


GeoBanking on a roll as PGS acquires TTN (March 1997)

Petroleum GeoServices now owns all of Tape Technology and has the major stake in PetroData, the prime contractor for Norway’s DISKOS National Data Repository.

Just before the turn of the year, PGS (Petroleum GeoServices) of Norway acquired a one hundred percent interest in Tape Technology Norge AS (TTN). PGS thus now holds the dominant two-thirds interest in PetroData as, the company which operates the PetroBank for the DISKOS Group in Stavanger. The remaining one third is still in the hands of IBM.

More specifically, the acquisition represents a significant increase in PGS's capacity and experience in the preparation of data prior to loading to a GeoBank. Effectively, it confirms PGS as the leading player in data management worldwide, with two wholly owned GeoBanks (one in the UK, the other in the USA) in addition to its majority interest in PetroData.

GeoBank-USA

GeoBank-USA, launched at the 1995 SEG in Houston, has been in business for about a year now, with almost 1 Tb of data already loaded to its 27 Tb capacity robotic tape system. Generally speaking, customers gain access to their proprietary data on-line, or purchase speculative data on-line from seismic data brokers such as Diamond Geophysical, Jebco or Seitel.

On this side of the Atlantic, the hardware and software now constituting GeoBank-UK were installed in PGS Data Management's new offices in Maidenhead (just west of London) in the middle of July, 1996. After bedding-in the system, database loading began in earnest at the beginning of October. By year-end 1996, PGS Data Management (UK) Ltd had already loaded more than 250 Gb to their 20 Tb capacity tape system. The data currently being loaded comprise PGS Exploration's comprehensive and rapidly expanding portfolio of multi-client 3D surveys.


Concurrent with these developments, PGS are implementing a World Wide Web site affording eager data hunters the means of realising "Geodata-on-Demand" without the need to invest in costly high speed telecoms. From any place on earth, an experienced geo-surfer with a high-speed modem or, better still, ISDN connection to the Internet will soon be able to stop by PGS's web site and browse on-line the geophysical equivalent of "thumb-nails" of seismic data that may be of interest. In addition, there's a full description of acquisition and processing parameters, and the survey's size.

Naturally, the geo-surfer won't be able to download the actual dataset over the Internet, for apart from anything else this could tie up the phone line for days. Neither will there be any risk of hackers gaining access to the actual data residing on the database. An effective "firewall" making this impossible will also handle questions of licence and copying charges, which will be clearly spelled out on-line together with an interactive order form ensuring rapid processing of all valid requests.


PESGB kicks off Data Management Group (March 1997)

The interest in last year's Data Management Seminar organised by the Petroleum Exploration Society of Great Britain (PESGB) was such that it has spun off a more permanent sub-group, which met for the first time in early March in London.

Around forty participants debated how best to approach the problems of data management in the UK Oil Industry, which would seem to be as intractable as ever, despite the best efforts of standards organisations, vendors and national data banks. As Jill Lewis from Troika put it, the multiple solutions currently used for data basing need integrating, new larger datasets need new approaches to their management, and guidelines, or at least some strategies, need developing within companies to decide when to jettison obsolete data. Citing the hypothetical case of a small company with 10,000 seismic lines to archive, Lewis pointed out that if about ½ hour was allotted to deciding what to do with each item, the data manager was looking at a potential 10½ years' worth of work. Both Lewis and Mark Wilson, the chairman of the PESGB Data Management Group, appealed to the assembled audience for case histories of data management projects with nitty-gritty details of the problems involved, and how they had been resolved.
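Lewis's arithmetic can be sanity-checked: 10,000 items at half an hour each is 5,000 hours. To arrive at the quoted 10½ years, one has to assume the data manager has only around 480 hours a year free for such triage, roughly a quarter of a working year; that assumption is ours, not stated in the talk. A minimal sketch:

```python
# Back-of-the-envelope check of the archiving estimate.
# HOURS_PER_YEAR is an assumed figure: time actually available
# for archival triage, alongside the day job.
LINES = 10_000            # seismic lines to archive
HOURS_PER_LINE = 0.5      # half an hour to decide each item's fate
HOURS_PER_YEAR = 480      # assumed hours per year free for the task

total_hours = LINES * HOURS_PER_LINE    # 5,000 hours of decisions
years = total_hours / HOURS_PER_YEAR    # roughly 10.4 years

print(f"{total_hours:,.0f} hours is about {years:.1f} years of work")
```

Whatever the exact availability assumed, the order of magnitude makes the point: item-by-item triage of a legacy archive is a multi-year undertaking even for a small company.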

More standards?

The group was more circumspect about the suggestion that it should play some sort of rôle in setting standards. As Wilson aptly put it, the idea of yet another standards organisation "makes you feel faint just thinking about it". Notwithstanding such understandable reticence, there was strong support for the group playing a rôle in what was euphemistically termed "conventions", i.e. recommended practices for the naming of countries, companies and the like. These are problems which, although eminently simple to resolve using ISO standards and the like, still plague the daily life of the data manager trying to index an old dataset or load a file from a trade associate. Future meetings are planned: an informal monthly gathering, coupled with more formal quarterly presentations by invited speakers. Great thought was given to the venue, with protagonists of a pub slugging it out against the "serious" meeting room brigade. In an unusual spirit of inter-operative co-operation, for data managers that is, a compromise was quickly reached: the chairman was mandated to seek out a pub with a meeting room. If all such decisions are reached so easily, this group will be a powerful force in data management indeed.
