January 1997


CGG conjures up its Stratimagic 'revolution' in seismic interpretation (January 1997)

New seismic interpretation software from CGG will integrate existing environments such as OpenWorks using novel CORBA based techniques.

CGG will release the first commercial version of Stratimagic in January. This innovative product is destined to replace Interpret, the seismic interpretation module of CGG's Integral Plus integrated suite of E&P software tools, but more significantly it is also being sold as a "plug-in", first to OpenWorks and later to GeoFrame-based products. CGG is bullish about the take-up of the product, having sold beta versions to Amoco and Shell, as well as to two other US majors and to one of its strongest competitors in the geophysics field. The Stratimagic name is a three-way play on words, overlapping the concepts of stratigraphy, image processing and, of course, a touch of magic!

 

PC?

First shown at the Denver meeting of the SEG in November, Stratimagic is a radically different product in many ways and represents a bold departure both from the conventional user interface of most E&P products and from the "politically correct" solutions to data management promoted by POSC and others. The user interface is "object oriented", i.e. sparse, with simple menus, consistent presentation and context-sensitive icons located on the menu bar to save pixel space. Data is retrieved using what CGG term a data server, but we prefer to call this technology middleware. Stratimagic works without a data model, and can even work without its own data at all. Accurately spotting the niche market for plug-ins to the market-leading integration platforms, CGG has developed a CORBA-based technique for getting data out of OpenWorks and, soon, GeoFrame, using techniques reminiscent of those developed by Panther Software. In fact CGG are talking to Panther about a co-operative effort in this domain.

 

seamless

To the end user, this technique allows, for instance, a request for a list of available surveys within an area to go out across the network and seamlessly interrogate applications on other platforms, retrieving an assembly of data which may reside partly in SeisWorks, partly in Charisma and of course in Integral Plus. The actual data can be manipulated in the same way, irrespective of its country of residence! Of course, because the product can operate in a stand-alone mode, Stratimagic has its own internal data structures. But in another break with convention, these are proprietary, and do not even call upon a relational database system for their manipulation. As regular readers of PDM will know, this has significant advantages in terms of performance; moreover, given that the interoperability issue has been resolved using the middleware approach, the need for standard data models has largely been circumvented.
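To make the mechanics a little more concrete, here is a minimal sketch of the broker-and-adapter pattern the middleware approach implies: the application asks one "data server" for the surveys in an area, and the server fans the request out to an adapter per underlying store. Class and method names are our own invention for illustration only, not CGG's actual CORBA interface.

class SurveyAdapter:
    """One adapter per underlying store (e.g. SeisWorks, Charisma, Integral Plus)."""
    def __init__(self, name, catalogue):
        self.name = name
        self.catalogue = catalogue      # {survey_name: (xmin, ymin, xmax, ymax)}

    def surveys_in(self, area):
        return [s for s, box in self.catalogue.items() if _overlaps(box, area)]

class DataServer:
    """The middleware broker: one request, fanned out to every adapter."""
    def __init__(self, adapters):
        self.adapters = adapters

    def surveys_in(self, area):
        found = []
        for adapter in self.adapters:                  # in the real product each of these
            for survey in adapter.surveys_in(area):    # calls crosses the network via CORBA
                found.append((adapter.name, survey))
        return found

def _overlaps(a, b):
    """Axis-aligned bounding boxes given as (xmin, ymin, xmax, ymax)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

# e.g. DataServer([SurveyAdapter("SeisWorks", sw_cat), SurveyAdapter("Charisma", ch_cat)]).surveys_in(area)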

 

chameleon

Object-oriented programming seems to be bearing fruit, both in terms of the robustness and the user-friendliness of the product. A variety of intuitive drag-and-drop functions allow, for instance, a seismic display to be dropped onto a base map, which then flashes up the line location. Similarly, dragging a horizon display over to a seismic line causes the appropriate horizon to be displayed with the same color coding as the map. For those circumstances where drag and drop is not appropriate, a more conventional pick list can be used, but even this is modeled around business object technology and presents the user with a logical, context-sensitive tree list. The motor inside Stratimagic is Elf's Sismage product, developed by PetroSystems in 1991 under contract to Elf, who have now licensed the product back to CGG. While the stratigraphic technology is that of Elf's Sismage, the code has been completely rewritten to integrate with CGG PetroSystems' new object-oriented platform. CGG understandably emphasize the fact that the Sismage motor has been in use for nearly five years and hence has benefited from considerable debugging.

 

patent

But "what does it do?" I hear you cry. Well it does quite a lot of "normal" things for a seismic workstation. Its' autopick routine is innovative, based on sign bit cross correlation of the first derivative of the seismic trace (is that clear?). Before you all rush out and try to code this yourselves let us inform you that this technique, like the other sensitive parts of StratiMagic's anatomy are protected by a variety of Elf's patents (this is quite a vogue!). One of these patents covers the actual stratigraphic component of Stratimagic. In this context it might be useful to say a few words about what is and is not meant by stratigraphy in the context of Stratimagic. For the seismic stratigraphers amongst you, this product does not incorporate the Vail-esque aspects of seismic stratigraphy covering eustacy. It is more in the Brown-ian camp of geometrical stratigraphers. So it offers functions such as picking terminations, with a coding of their nature (onlap, downlap, toplap etc.) and also the possibility to pick directly 3 dimensional closed forms such as the envelope of a channel or plug.

 

neural net

Another function allows for the mapping of seismic character within a given interval. Sophisticated neural net algorithms divide the signal within the chosen interval into a number of bands of similar character. These can be either auto-determined, to offer a machine-derived interpretation of the seismic stratigraphy, or guided by a deterministic breakdown of seismic response, such as that due to a pinch-out, where tuning effects can be modeled. Having described the stratigraphic component, it must be said that some aspects of the product fall short of what the stratigrapher might expect. For instance, there is currently no integration between the conventional pick and the stratigraphic pick, so even if you have already picked the upper and lower boundaries of a channel using the rich autopicking routines, these must be re-picked to define the channel itself using a rather primitive point-and-click manual technique.
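The general recipe here is waveform classification: the samples of each trace within the interval become a feature vector, similar vectors are grouped into a small number of classes, and the class numbers are mapped. The article only tells us that the classifier is a neural net; the sketch below uses k-means purely as a stand-in, and the array shapes and names are our own assumptions.

import numpy as np
from sklearn.cluster import KMeans

def classify_waveforms(cube, top, base, n_classes=8):
    """cube: (inline, xline, sample) amplitudes; top/base: 2D maps of sample
    indices bounding the interval. Returns a 2D map of waveform class ids."""
    n_il, n_xl, _ = cube.shape
    length = int(np.median(base - top))            # fixed window length for simplicity
    feats = np.empty((n_il * n_xl, length))
    for i in range(n_il):
        for j in range(n_xl):
            t = int(top[i, j])                     # edge effects ignored in this sketch
            feats[i * n_xl + j] = cube[i, j, t: t + length]
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(feats)
    return labels.reshape(n_il, n_xl)              # one class per map location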

 

Voxanne

Now while this is disappointing, it is after all a beta, and what is important is that this product can walk well in its infancy; it is quite reasonable to think that when it grows up it will run fast. I say this both metaphorically and with reference to performance. Stratimagic incorporates a 3D visualisation engine, Voxanne, which performs credibly on a Sparc 10. No need for special graphics processors or supercomputing workstations for this baby. This looks like a solid platform which will grow. A joint venture with the French Petroleum Institute will bring geostatistical functionality to Stratimagic in late 1997. The image processing aspect of Stratimagic is currently limited to another Elf patent called MixMap. Here the technique is to provide a map of one attribute in false relief, simultaneously color coded with another. An illumination widget, like those found in satellite imagery products such as Earth Resources Mapping, allows the color coding and relief to be shifted to locate correlations between the attributes on display. Programmed entirely in C++, this looks to be a pretty solid product. The three-hour demo given exclusively for PDM went without a hitch and included a variety of non pre-programmed autopicks and displays. Currently the product works with multiple 3D surveys; 2D integration is planned for late 1997.
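To picture what MixMap does, here is a rough sketch of the same trick in modern terms: one attribute supplies false relief, shaded as if lit from a chosen azimuth and altitude, while a second attribute supplies the color. matplotlib's LightSource stands in for the illumination widget; the attribute names and defaults are hypothetical.

import matplotlib.pyplot as plt
from matplotlib.colors import LightSource, Normalize

def mixmap(relief_attr, color_attr, azimuth=315, altitude=45, cmap="viridis"):
    """Color one 2D attribute and shade it with the false relief of another."""
    rgb = plt.get_cmap(cmap)(Normalize()(color_attr))[..., :3]   # color from attribute 2
    light = LightSource(azdeg=azimuth, altdeg=altitude)          # the illumination "widget"
    return light.shade_rgb(rgb, elevation=relief_attr)           # relief from attribute 1

# e.g. plt.imshow(mixmap(coherence_map, amplitude_map, azimuth=45)); plt.show()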



CGG finalizes PECC acquisition (January 1997)

CGG are in the process of acquiring the remaining 49% of UK-based Petroleum Exploration Computer Consultants (PECC), completing a deal initiated in 1994.

During the bidding process for the Sonatrach database project in Algeria, PetroSystems, a wholly owned subsidiary of CGG, acquired 51% of UK-based PECC. This acquisition was strategic for CGG in that it brought them both industrial-strength remastering capability and a state-of-the-art database repository in PECC's PetroVision product. The deal was for a two-phase acquisition, with the second phase subject to certain conditions to which PDM is not party. (We don't know everything after all!) In any event CGG seem well pleased with the progress of the deal, having elected to exercise their option to complete the acquisition of the remaining shares. Completion is scheduled for the 1st of January 1997, at which point the PECC name will disappear and the unit will be integrated into CGG's E&P Data Services Division.



1996 in retrospect and some resolutions to make 1997 a vintage year. (January 1997)

PDM’s editor Neil McNaughton thumbs through his 1996 jottings to provide some lessons from the past year and to suggest some New Year resolutions.

While PDM cannot yet claim a year's existence, we may as well jump on the bandwagon of end-of-year festivity and reflection and try something along the lines of "1996, how was it for you?", and perhaps suggest some New Year's resolutions for the data manager. Now turning data management into a festive subject, or even lending it a festive slant, is not the easiest of tasks, but we'll try. We will also profit from our short life to date to extend our rearward look to encompass, in fact, the whole history of data, managed or otherwise, from the beginning of time.

 

landfill

Data is funny stuff; most of its uses are not what it was intended for. What may have been recorded as the definitive survey over the hottest prospect in the best basin, say 20 years ago, is almost certainly just taking up space today. If the data is lucky, its space may be that of an air-conditioned vault with smoke detectors and inert gas fire control systems. Less lucky data may be taking up space in a salt mine, perhaps alongside containers of low-level nuclear waste. Data whose luck has finally run out may be "re-cycled", burned, or still taking up space, but in a landfill site. It could be argued that these last categories are, in reality, the only ones where data is actually serving a useful purpose. So why do we hang on to data? Some data types, such as cores, while undoubtedly making the best candidates for landfill, do have a special, permanent raison d'être. To acquire them anew would cost a lot, as in the case of new seismics, but unlike seismics they do not date; you would not expect a new core to give the orders-of-magnitude improvement one sees from a spanking new 1000-trace 3D survey. So how about this for the first PDM New Year's resolution: let's integrate the landfill site into our corporate workflow (note the essential jargon, rendering credible and sweetening a bitter pill). It doesn't matter what that 1976 survey was worth when it was acquired; if you have more recent data, a depleted oilfield, or anything else that makes you sure you will never use it again, then throw the stuff away!

 

dead wood

Now let's work forward through the legacy data chain, but with a slightly different approach. If data - and I am thinking particularly about interpretation results, maps, picks, backups etc. - have not been rigorously indexed and stored in a manner that will allow meaningful re-use, then throw them away too. This is not an entirely negative exercise. If you do not adopt a radical approach to trimming the dead branches, you will soon not be able to see the wood for the trees. Searches will bring back so much junk that finding the valid datasets amongst the dross will become impossible.

The next resolution is just the corollary of the first. Implement a policy governing what data is to be kept and how it is to be indexed. Be radical: keep only top-level interpreted data, and throw away all those backups, intermediate processing tapes and velocity analyses - all that stuff you just know will never be looked at again. Be even more radical: tell your contractors, no thanks, we don't want all those intermediate tapes and paper. After all, if you don't QC it when it's done, what's the point in being able to find out where it went wrong when it's too late? To my knowledge, no-one has ever been to court over a mis-picked velocity analysis!

 

silly

If you are not sure about any particular type of data, there is an acid test to see whether it is worth anything: try to sell it! Offer some samples of your legacy data to a broker. If he turns up his nose at it, then junk it! But please, please, under no circumstances give it away to a university. They might just accept it (they are usually desperate for real-world data) and then another generation of students will, as I did, have a totally distorted and out-of-date picture of what data is like outside academia. Having cleared out your cupboards, you will be able to turn to another source of material which in the meantime is attempting to fill them up again. This is the Brave New World of simulated data. If we were to think of the most solid, reliable data that is generally acquired, we would have to think of a core (again). Close behind would be a 3D seismic dataset, and so on through what I propose to call the silliness spectrum, measured by the Silliness Index (SI). The core then has an SI of 0, and something really silly, like a geostatistical simulation of inter-well pore space, would naturally have an SI of 1 (or 100%).

 

bit bucket

More and more of our data has a very high SI as our trendy technology allows us to visualise, model and manipulate the hypothetical. The head of research of a major oil co. has been touring the world telling eager researchers that the future of the reservoir lies in our being able to wander through the pore space wearing a Virtual Reality (VR) headset. The space will be generated by stochastic simulation of course, i.e. made up. What does this mean to the guardian of the data warehouse? It means that data storage should depend on the SI. Data with an SI in excess of around 80% should be stored in the bit bucket (/dev/null, UNIX's ready-made "virtual data store"). Above 50% we might like to consider storing some rules for generating the data; less than 50% SI data will be admitted for cleansing prior to storage.
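For the record, the policy boils down to three thresholds, which can be written in a few tongue-in-cheek lines (the thresholds are those given above; the wording of the actions is ours).

def storage_policy(silliness_index):
    """silliness_index: 0.0 for a core, 1.0 for simulated inter-well pore space."""
    if silliness_index > 0.8:
        return "/dev/null"                                   # the bit bucket
    if silliness_index > 0.5:
        return "store the rules that generate the data"      # not the data itself
    return "cleanse, index and store"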

 

cleanse thy data!

Which leads us to the next resolution. Thy data shall be cleansed! Having attended many conferences concerned with data management, and having seen behind the scenes at quite a few data loading and transcription shops, I can tell you a secret. The data management community likes to talk about data models, but the real problem of data today is not that it is unmodelled, but that it is unclean. A scan through back issues of PDM (there are six as of today!) will show how issues such as entitlements, line names, well names etc. are the real banes of the data manager's life. The world would certainly be a better place with all our data correctly named and indexed in one big flat file than it is in today's all-singing, all-dancing relational world.

 

know thy formats

Next resolution: know thy formats. It doesn't really matter whether formats are standard or not. Everyone knows that a standard (say SEG-Y) is really just a theme upon which the vendors' and contractors' musicians will produce their variations. This doesn't really matter so long as you know what they are. Get formats from application vendors and seismic acquisition contractors and store them along with an example of a data dump in ASCII on a 3 1/2" floppy. This will mean that when you need to know, or when someone else needs to know, the information will be there. In this context, a special warning about a new breed of formats which is hitting the streets. SEG's RODE, MADS and the API's RP66 are all manifestations of a new super-format type which can best be categorized as Object Oriented.
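By way of example, and staying with SEG-Y, the short sketch below dumps the textual header and a few binary header fields, the kind of minimal format record worth keeping alongside the data. It assumes the classic layout (a 3200-byte EBCDIC textual header followed by a 400-byte big-endian binary header); as the paragraph above warns, individual vendors' variations may differ.

import struct

def dump_segy_headers(path):
    """Print the textual header and a few binary header fields of a SEG-Y file."""
    with open(path, "rb") as f:
        text = f.read(3200).decode("cp037", errors="replace")   # EBCDIC card images (some vendors write ASCII)
        binhdr = f.read(400)
    print(text)
    interval, = struct.unpack(">h", binhdr[16:18])   # sample interval in microseconds
    nsamples, = struct.unpack(">h", binhdr[20:22])   # samples per trace
    fmt_code, = struct.unpack(">h", binhdr[24:26])   # data sample format code
    print(f"sample interval {interval} us, {nsamples} samples per trace, format code {fmt_code}")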

 

Flexible

These novel formats have the new hot property of Flexibility. Readers of our Quotes of the year piece in this PDM will spot the data manager's F-word and be warned. These formats need special treatment and extra information must be extracted from their authors unless you want your data to be completely lost to posterity in a few years.

Well, I could go on, but this is getting to be a bit of an unseasonable harangue. Most important, have a good year of successful data management. You should do: this is where the action is, the oil price is up and data is everything except in short supply! May your relations be normal, your formats flourish and your bytes burgeon.



How was it for you? (January 1997)

Data Management Quotes of the Year.

"…processes that result in documentation." Michael Ring, POSC, London 1996.

"Another problem we have is the failure of Open Systems in the battle with Microsoft." John Sherman, Landmark, Houston 1996.

"Any GeoFrame application will run immediately on Open Works when Open Works is POSC compliant." Bertrand du Castel, Geoquest, Denver 1996.

"Better IT could mean savings of $3-5 billion." Michael Ring POSC, Amsterdam 1996.

"Current analyses using Rate of Return (ROR) on investment are flawed because if you do nothing, you may get infinite ROR. You get what you measure, if you measure cost savings, you end up with cheap systems." Bob Warren, Mobil, Houston 1996.

"DAE to Epicentre requires a 15 times expansion in the number of calls from a native application." Bertrand du Castel, Geoquest, Paris October 1995

"DISKOS was intended to be POSC compliant but this was not possible at the time". Gunnar Sjogren, Statoil, Amsterdam 1996.

"Don't archive on novel media!" Charles Hewlett, Lynx, London 1996.

"Don't let data specialists run the show!" Steven Spink, Getech, London 1996.

"Flexibility is the F word for data exchange and applications." Bill Quinlivan, Geoquest, Amsterdam, 1996.

"For the 6500 wells in the UKCS there are around 300,000 tapes & 3 million sepias. All this has cost around $50 million in reprographic and tape copying costs." Fleming Rolle, Paras, Houston 1996.

"In Norway, cumulative data acquired to date is 1 million magnetic tapes and 300 terabytes of data (through 1993)." Richard Koseluk, PGS, Houston 1996.

"Landmark rarely loses a sale because of lack of standards compliance". John Sherman, Landmark Graphics Corp., Denver 1996.

"Never replace your original data with a compressed version." John Kingston Geco Prakla, Amsterdam 1996.

"Non E&P standards bodies such as EDIFACT CALS AIAG/PI-STEP have all faltered through lack of support from end users. They fail because they promote connectivity, application interoperability and data exchange. They should promote the business case and business process." Jim Noble, Cap Gemini, Paris October 1995.

"POSC has tried to lift the discussion above the relational plane but has failed to come up with a decent object model." Albertus Bril, De Groot Brill, Amsterdam 1996.

"POSC Spatial model is wrong!", Bertrand du Castel, Geoquest, Paris October 1995

"POSC will go on forever, managing standards." David Archer, POSC, Paris October 1995

"PPDM is not Epicentre, this needs to be fixed!", Bertrand du Castel, Geoquest, Paris October 1995

"Preference will be given to POSC compliance in procurement policy." Bob Stevenson, Shell, Paris October 1995.

"Relational technology is not good enough for what you are trying to do." Doug Benson, Oracle Corp. Calgary 1996.

"Standards are about data management not data usage." Albertus Bril, De Groot Brill, Amsterdam 1996.

"The Data Management Trap - because Data Management doesn't make money, it doesn't get money." John Sherman, Landmark, Houston 1996.

"The ideal number of people needed to develop a data model is one." Helen Stephenson, Stephenson and Associates, Houston

"The 'logical' data model is one that is in a book, not one that is implemented." John Sherman, Landmark, Amsterdam 1996.

"The Network Computer is a castrated PC", Bernard Vergnes, Microsoft France, Paris 1996.

"There is a board level attempt underway to bring POSC & PPDM together. This could be a merger or redefined role to produce one model to enhance performance" Paul Maton, POSC Europe, London 1996.

"Total spend on E&P IT is estimated at $2 billion per year.", Bill Bartz, POSC, Paris, 1995.

"Use a database system to manage data, not to further database skills!" Steven Spink, Getech, London 1996.

"We are still looking for the Rosetta stone of Data Management." Steven Spink, Getech, London 1996.

"We believe very strongly in Discovery." David Archer, POSC, Denver 1996.

"We don't want data model's, we want integration." Anonymous user quoted by Stuart McAdoo, Denver 1996.

"We will probably never have a single data model, we'll still need a department of re-formatting for the next 10 years or so, what with versioning, tweaking by vendors and change. " Helen Stephenson, Stephenson and Associates, Houston

(On computer security) "For GECO the problem is that clients will not let Geco Prakla onto their networks! Passwords are no barrier to a determined hacker or saboteur." John Kingston, Geco-Prakla, London 1996.



ESRI announce new Internet Mapping Solutions (January 1997)

ESRI's latest Internet toolkit, MapObjects, and its add-ons promise out-of-the-box web publishing of ArcView data.

Offering the possibility of publishing your maps on the web, ESRI have announced "simple-to-use" solutions for deploying maps on Intranets or on the World Wide Web. This can be in the form of a custom solution built with the MapObjects Internet Map Server toolkit or an out-of-the-box GIS and mapping solution using ArcView Internet Map Server. Products such as GeoQuest's Data Management Information Server, a Web browser front end for Finder, demonstrate the immediate usefulness of this type of approach to distributing GIS-based information on an Intranet. ESRI's technology goes beyond simply viewing static maps: users can browse, explore, and query active maps. The product line comes in two basic flavours: MapObjects Internet Map Server, designed for Windows developers, and ArcView Internet Map Server, which allows the ArcView GIS to be used "out-of-the-box" to put mapping and GIS applications on the Internet. Applications built with the MapObjects Internet Map Server extension can access spatial data formats supported by MapObjects such as shapefiles, coverages, SDE layers, and many graphic images.

 

Microsoft IIS support

In addition, MapObjects Internet Map Server includes a Web server extension that works with Netscape Server, Microsoft Internet Information Server, and other server products that support NSAPI/ISAPI web server extensions. The Web server extension provides a framework for request management and load balancing that delivers fast, efficient, and scalable map serving. MapObjects Internet Map Server is scheduled for release in December 1996. The ArcView Internet Map Server extension makes publishing maps on the Web "almost as easy as printing a map". It includes a built-in setup wizard and a ready-to-use Java applet to help you publish your data quickly. Interactive maps can be created from a number of different types of spatial data including shapefiles, coverages, SDE layers, DWG, DXF, DGN, and a variety of graphic images. In fact, any map you can make in ArcView can be easily published on the web. ArcView Internet Map Server works with Netscape Server, Microsoft Internet Information Server, and other server products that support NSAPI/ISAPI web server extensions. ArcView Internet Map Server is scheduled to start shipping in the first quarter of 1997 and requires ArcView GIS Version 3.0 running under Windows 95, Windows NT, or UNIX.



Geoquest insists 'We will open GeoFrame to all-comers'. (January 1997)

Geoquest announce that GeoFrame will offer third-party data access and claim POSC 'compliance'.

Francis Mons of Schlumberger Geoquest, speaking at the POSC European annual shindig, held this year in Cap Gemini's chateau at Behoust, France, stated that the GeoFrame Software Integration Platform (SIP) is now available to anyone who wants to attach their product to the GeoFrame data bus. Possibly in response to PDM's probing (we were skeptical about Geoquest's real desire to allow interoperability with OpenWorks, Landmark Graphics Corporation (LGC)'s competing integration platform), this clarification opens the door to an approach from Landmark, although none has been forthcoming as yet. For Geoquest, the GeoFrame SIP represents a major change in computing architecture. Applications will no longer have their own databases, but will run off a common POSC compliant database at the heart of the GeoFrame architecture. This will be populated from the corporate data store at project inception.

 

Mexican stand-off

While GeoFrame is painted as "pure POSC", GeoQuest are also backing project Discovery, the POSC PPDM merger project. The intent is to migrate Finder and SeisDB using the results of Discovery when they become available. Subsequently, as Discovery gobbles up more and more of Epicentre, GeoQuest "will remain active, implementing the results of Discovery as available and following testing and acceptance by the POSC and PPDM boards". Is that quite clear? The current situation is a bit of a Mexican stand-off, with LGC offering up their SIP to third parties to enable integration around their platform, and GeoQuest doing likewise with GeoFrame. Perhaps someone should step in with a mega bit of middleware (a mega bit, not a megabit, ed.) to bridge the gap?



Elf-led consortium awards LightSIP to Prism Technologies and IBM. (January 1997)

Elf, Shell and Statoil have awarded the development of a 'Light' (that should be 'Lite'!) version of the Petrotechnical Open Software Corporation (POSC)'s Software Integration Platform (SIP) to Prism Technologies of Newcastle, UK, and IBM.

The LightSIP project will result in "the first commercially available POSC Data Access and Exchange Facility (DAEF) implementation on a published POSC relational implementation". In other words, according to IBM, this is the first time that a truly POSCian data management solution, albeit a trimmed-down version, has ever been developed. Other "POSC compliant" solutions are generally stand-alone products with applications accessing the database directly. This "politically correct" POSC implementation will have data access through the DAEF, which is there to insulate applications' data access from changes in the underlying database technology, in particular the long-awaited migration from relational to object-oriented databases (will it ever happen?). The POSC DAEF will be "a key technology to enhance data sharing within multi-discipline geoscience teams". IBM claim that future adoption of the DAEF by E&P application providers will enable widespread use of the new DAEF POSC standards (once everyone has mastered the acronyms, that is!).
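Stripped of its acronyms, the DAEF is essentially a data access facade: applications code against an abstract interface, and the storage technology behind it (relational today, object-oriented tomorrow) can change without touching them. The sketch below illustrates that pattern only, with invented names; it is in no way the POSC DAEF API.

from abc import ABC, abstractmethod

class WellStore(ABC):
    """What applications see: an abstract data access facade."""
    @abstractmethod
    def get_well(self, well_id): ...

class RelationalWellStore(WellStore):
    """Today's implementation, backed by a relational database connection."""
    def __init__(self, connection):
        self.conn = connection
    def get_well(self, well_id):
        row = self.conn.execute(
            "SELECT name, total_depth FROM well WHERE id = ?", (well_id,)).fetchone()
        return {"name": row[0], "total_depth": row[1]}

class ObjectWellStore(WellStore):
    """Tomorrow's implementation, backed by a (hypothetical) object database."""
    def __init__(self, object_db):
        self.db = object_db
    def get_well(self, well_id):
        return self.db.lookup("Well", well_id)

# Applications only ever hold a WellStore, so swapping the implementation
# underneath does not ripple into application code.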

 

critical path

Philippe Chalon, methods and standards department manager at Elf Aquitaine, says "Elf considers that the availability of a POSC DAEF component is on the critical path of POSC take-up. We will be recommending our E&P applications providers to adopt the LightSIP product when it becomes available, as POSC standards are central to our technical target architecture." For their part, IBM intend to implement the LightSIP DAEF atop their PetroBank data management product suite.



Father Christmas visits the DOE. (January 1997)

Ultra supercomputing arrives at the Sandia National Laboratory with a teraflop computer.

One frequently reads postings on Usenet from IT hard men boasting about how nothing but a twinhead Sparc 100 with 2 Gigs of main memory will do for their personal needs, or that their disk is bigger than the next guy's. Well, the United States Department of Energy's Sandia National Laboratory and Intel Corporation have just announced the "Ultra" supercomputer, which is likely to have the IT bully boys rushing out for fresh quiche in panic. The bottom line of this mega machine is the teraflop: one trillion operations per second. This mark was established using the highly respected Linpack measurement method. The achievement of 1.06 teraflops (trillions of floating point operations per second) shatters the previous performance record of 368.2 gigaflops (billions of operations per second) by over 250 percent. "Today's accomplishment is computing's equivalent to breaking the sound barrier," said Dr. Craig R. Barrett, Intel executive vice president and chief operating officer. "Just a few years ago, a teraflop was an intellectual barrier that nature dared us to cross. Now that we've surpassed that barrier, we have the computing horsepower needed to address the Grand Challenges of Science. We could be at the threshold of robust scientific discovery, triggered by access to teraflop-level computing performance."

 

Teraflop

According to the DoE, teraflop computing will not be applied to E&P. But the list of what it will do makes suspicious reading. To make things sweet, the DoE has announced that the new beast will be applied to solving what are termed "The Grand Challenges of Science", which include "issues in Applied Fluid Dynamics, Meso- to Macro-Scale Environmental Modeling, Ecosystem Simulations, Biomedical Imaging and Biomechanics, Molecular Biology, Molecular Design and Process Optimization, Cognition, Fundamental Computational Sciences, and Grand-Challenge-Scale Applications". In other words, a down-play of what "Ultra" is really for: modeling nuclear tests now that real-world testing is no longer cool!

Other suggestions as to what this power may be applied to are "simulation of disease progression that could help doctors and scientists develop new medicines and drug therapy for debilitating diseases such as cancer and AIDS; severe weather tracking to minimize the loss of human life and property; mapping the human genome to facilitate cures for genetic-based diseases or birth defects; car crash and highway safety; and environmental remediation methods to clean up and reclaim polluted land". So if the DoE is just painting a rosy ecopicture, then maybe it is really for E&P modeling too. Anyone out there with an inside track on this?

 

9,000 Pentiums

Here is the bit for the IT machos: besides its 9,200 200 MHz Pentium Pro processors, the beast is kitted out with a mind-boggling 573 gigabytes of system memory and 2.25 terabytes of disk storage. (But we know that will fill up just as quickly as the 64K of the old Commodore PET, so what the heck!) Peak power consumption is estimated at 850 kilowatts, the whole system weighs in at a staggering 44 tons and it requires 300 tons of air conditioning to cool it. The current record was achieved with "only" 7,264 of the planned 9,200 Pentium Pro processors which will equip the final production version. The Intel/Sandia Teraflops Computer is currently under construction at Intel's Beaverton plant. The system will be installed in stages at Sandia National Laboratories in New Mexico in the first half of 1997. At completion, the computer will have 9,200 Pentium Pro processors and is expected to perform at sustained rates of 1.4 teraflops and peak rates approaching two teraflops. Just 25 years ago, Intel introduced the world's first microprocessor, which delivered 60,000 instructions per second. Wow, even that's pretty quick!

 

tax dollars

U.S. Secretary of Energy Hazel R. O’Leary said, "This achievement firmly re-establishes U.S. computer industry leadership in developing high-end systems. Four years ago the U.S. government, industry and academia set a goal. At that time it was not clear how or even if a trillion-operations-per-second computer could be achieved. Now thanks to U.S. innovation, it’s not only possible, it’s being done." "The Intel/Sandia teraflop computer is built from commercial, off-the-shelf products and technologies including the same Pentium Pro processor in many of today’s workstations and servers," said Ed Masi, general manager and vice president of Intel’s Server Systems Products Division. "Using commercially available technology has enabled the government to utilize the R&D muscle of the marketplace, focusing tax dollars on combining these standard building blocks into the world’s most powerful computer."

 

Linpack

The Linpack measurement method is the most widely recognized single benchmark for measuring sustained floating-point performance of high-end computers. It gives an accurate picture of the performance of a given computer on applications that require the solution of large, dense linear systems - a category that includes a very wide range of technical applications. A useful fact supplied by Sandia is that by the time a speeding bullet travels one foot, the computer will have completed 667 million calculations. Or, in other words, in the 1/50th of a second or so it takes you to blink, the computer will complete 40 billion calculations. With 86 cabinets, it's as big as a good-sized starter home (1,728 square feet, counting the space between the aisles and leaving a 4-ft space to walk around the machine). This "ultra" computer is part of the department's Accelerated Strategic Computing Initiative (ASCI), which is developing simulation technologies to ensure the safety and reliability of the U.S. nuclear deterrent without underground testing.

 

$55 million

The announcement of DOE's "ultra" computer follows President Clinton's signing of the Comprehensive Test Ban Treaty on September 24, 1996. The $55 million machine is to be installed at Sandia National Laboratories in New Mexico and will also be used by Los Alamos National Laboratory, also in New Mexico. In addition to simulating aging effects on the nuclear weapons stockpile, this "ultra" computer and others like it will provide the power for medical and pharmaceutical research, weather prediction, aircraft and automobile design, industrial process improvement and other quality-of-life research. This massively parallel architecture switched the trillion-operations-per-second speed objective from the impossible to the doable. Although other countries are now widely copying U.S.-designed parallel computers, the U.S. remains the world leader in building and developing big, fast "ultra" computers. So, PDM readers, what would you do with a teraflop? Any good ideas will be printed in a future PDM.



Groovy Website of the month (January 1997)

New website offers comprehensive list of on-line Earth Science Journals.

If you still have any fuddy-duddies in your organisation who doubt the usefulness of the world wide web, point them in the direction of the website maintained by Dr. Daniele L. Pinti of the Graduate School of Science, Department of Earth and Space Sciences, Osaka University. The URL is http://psmac5.ess.sci.osaka-u.ac.jp/Journal.html (note: no www prefix). Here you can directly access the web pages of 258 journals related to the Earth Sciences. We paid a visit and were astounded at how many journals have online versions today. For example, just in the field of petroleum geology you can access the following: AAPG, Basin Research Institute Newsletter, Bulletin of Canadian Petroleum Geology, China Oil and Gas, Electronic Oil Exploration, First Break, Hydrocarbon Processing, International Petroleum Abstracts, Journal of Petroleum Science and Engineering, Oil and Gas Journal, Oilfield Review, OPEC Review, Petroleum Chemistry, Petroleum Geoscience, Pipeline, The Oil Chronicle and Petroleum Abstracts. Daniele is on the lookout for any more online journals to include in this impressive compilation.



Western acquires EnTec (January 1997)

Western Atlas International Inc. (WAII) acquires U.K.-based EnTec Energy Consultants Limited, a world leader in the growing market for integrated oil and gas reservoir description services.

EnTec has pioneered development of techniques to integrate seismic, geological and well data to produce high-resolution reservoir description volumes that enable oil companies to better evaluate complex hydrocarbon reservoirs. EnTec has been developing quantitative integrated reservoir description techniques for 17 years. Its multidisciplinary approach to integration has been applied successfully on reservoir description projects worldwide by major, national and independent oil companies, as well as government agencies. EnTec will operate as a wholly-owned subsidiary of Western Atlas International.



Faraguna back with Western (January 1997)

John Faraguna has rejoined Western Atlas Logging Services to assume the position of vice president, business development.

In this position, Faraguna will be involved in marketing, strategic alliances, and the development and rollout of new technology. He will be based at the Western Atlas headquarters in Houston. Faraguna first joined Western Atlas Logging Services in 1984 after obtaining a B.S. degree in geology and geophysics from Yale University. During a 5-year tenure at Western Atlas, he helped develop several software applications for interactive formation imaging and analysis, and also obtained an M.S. degree in geology from the University of Houston. Faraguna left Western Atlas in 1989 to join the M.B.A. program at Stanford University. After obtaining his degree in 1991, he joined the consulting practice of Arthur Andersen & Co., where he worked for 5 years as a management consultant to the energy industry.



© 1996-2021 The Data Room SARL. All rights reserved.