Geophysics has an apparently unlimited appetite for science and scientific computing. Digital recording began in the mid-1960s and you might have thought that everything that could be done in seismic processing had been tried already. Quite the opposite, in fact. Super-deepwater, sub-salt targets like ‘Jack’ in the Gulf of Mexico have brought seismic processing and high performance computing into the limelight.
At the New Orleans SEG (page 6) such issues were debated in Ebb Pye’s Visualization Theatre (page 7). The processing community has already figured out what it would like to do in the way of pre-stack depth migration, inversion and so on, but has lacked the compute horsepower to run many of these algorithms. Today with pressure from discoveries like Jack, and the power of the supercomputer, the high-end techniques are back.
One procedure, something of a holy grail of seismics, is the idea that you could go straight from the recorded data to the 3D volume. Without passing Go, sorry, I mean without all the laborious ‘velocity analysis,’ model building, etc. Art Weglein of the University of Houston’s Mission-Oriented Seismic Research Program (M-OSRP) described a technique that may soon invert seismic field data into a multi-dimensional cube with all the data about the rocks that you could wish for.
This is a seductive notion which I came across in Paradigm’s marketing literature which asks, one hopes not rhetorically, ‘Why not routinely convert seismic cubes into meaningful reservoir property volumes?’ This is such a good question that it makes you wonder why we have been futzing around so long with stacking ‘velocities,’ seismic ‘attributes’ and a sometimes bewilderingly large number of seismic data cubes.
At the SEG I heard tell of one company which had accumulated 1,200 versions of the same seismic data cube. No doubt each one had some geo-logic behind it. The particular combination of ‘instantaneous phase,’ ‘spectrally decomposed,’ ‘bump mapped’ data meant something to the interpreter when it was originally derived. But the next day, month, year? How do you categorize and manage 1,200 sets of the same data?
Bye bye AVO?
This got me thinking: if we can go straight from recorded data to the image without producing velocities, all the popular ‘pre-stack’ techniques go out of the window. This is actually a good thing. Instead of all these pseudo-physical attributes, seismic imaging would just be giving us its best shot at what it knows about the rocks and the fluids therein. Which brings me to something of a poser: ‘What is the minimum number of seismic data cubes that need to be stored to capture all the information that is actually there?’ Your answers are welcome. My own intuition is that the correct number is much closer to 12 than to 1,200.
Looking back over 2006, I think one of the key events of the year was the SPE Gulf Coast Section’s Digital Energy conference in Houston (OITJ Vol. 11 N° 4). For me this meet was a kind of problem-setting exercise for the ‘digital oilfield of the future’ (DOFF). Two opposing views were expressed. On the one hand, the ‘upstream upstream’ engineering community tends to be rather dismissive of the technology deployed on the oilfield today. SCADA is referred to as either ‘legacy’ or ‘commodity.’ The other view, expressed by some vendors and consultants, is that the grass is in fact greener on the process control side of the fence. These folks opine that all we have to do is operate the oilfield ‘like a factory’ to achieve huge benefits.
We got the opportunity this month to check this out with an invitation to the Invensys Process Systems User Group meeting in Dallas. The IPS-UG will be the subject of one of The Data Room’s Technology Watch reports and will also feature in the January 2007 issue of Oil IT Journal. But I have to say already that this conference marks what I think will be a turning point for Oil IT Journal, because the picture that is emerging from the process control world is a lot more complicated than the views expressed above.
To explain how things are more complicated than they seem I will first hazard an analogy. If you came from Mars and learned that humankind was into communications, you might imagine that it would be easy to tap into them and perhaps ‘optimize’ either dialog or destruction (depending on the Martian psyche). But then our Martian is confronted by both wired and cordless phones, Ethernet, the internet, WiFi, and soon WiMax. It’s an ugly picture but one we are all familiar with, and it is unlikely to change.
The picture is similar in the ‘factory.’ Process control has to contend with SCADA, but also DCS, PID loops and optimizations at various stages in the plant’s lifecycle. The process control community is also getting exercised about wireless—with digital ‘canopies’ or ‘umbrellas’ over refineries and plants. So our Martian’s communications problems will get layered on top of all this. Oh, and as you probably guessed, there are those in the process control industry who argue that the grass is greener on the E&P side of the fence—particularly with ProdML!
Finally, a word of advice about an anti-spam service called ‘Sorbs’ which is causing us some grief. A small amount of our mail is bounced back to us from the recipient with a message that Sorbs considers it to be spam. Sorbs’ database contains many bona fide companies whose ISPs have been used at some time in the past by spammers. If you are using Sorbs, or if your anti-spam service does, you might like to Google ‘sorbs sucks’ for more. Which, by the way, is a rather good formulation for getting a quick contrarian viewpoint on just about anything, from hardware to software to possible Christmas presents.
© Oil IT Journal - all rights reserved.