When we started out producing Oil IT Journal’s ancestral Petroleum Data Manager way back in 1996, the material that constituted the newsletter was basically anything we could lay our hands on. What you saw was what we got! Today, as the Journal gets better known, we receive input from many different sources. We also now know where to look and who to talk to for news and views. Our workflow has evolved from scraping the bottom of the barrel each month to, well, editing! Making a lot of calls as to what you want to read about, what’s hot and what’s not.
Over the last couple of months you will have noticed how our overflow coverage is moving into our ‘Folks Offices & Orgs’ section. Sometimes I think that we could fill two issues a month but I suspect that for most readers, once a month is enough. It sure is for me!
Deciding what goes in and what to leave out can be difficult. Our coverage started out in the exploration sector, but has evolved to encompass oil and gas e-business, corporate IT, GIS and pipelines. As the reservoir model is increasingly the focal point of the e-field, it seemed natural to check out the SCADA arena, to see how computerized data acquisition and control systems are being used to ‘drive’ assets harder and maximize value.
But as we follow the SCADA thread, we end up in the refinery! Now I must admit that I have always considered refining to be off-limits to Oil IT Journal. All I know about refineries is that they look pretty at night and they smell bad. I used to think that while some domain boundaries might be blurred, refining at least was well demarcated and probably a good thing to avoid.
So while deciding ‘what goes in and what stays out’ of this month’s edition, I initially had no difficulty in redirecting one press release to the bit bucket. The fact that BP had signed with Aspen Technology for its simulation and optimization suite was conveniently ‘off topic’.
And yet, reading more closely, this software is not limited to refinery engineering, but supports ‘model-based decision making across upstream, refining and chemicals’. I thought this was rather noteworthy. After all, there is no reason to have one kind of decision support software for the upstream and another for refineries. Fluid flow in pipes obeys the same physics whether the pipes are vertical wellbores or the lines running to, within and from a refinery. So the Aspen/BP deal story made it in (see page 9).
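To make the ‘same physics’ point concrete, here is a minimal sketch using the standard Darcy–Weisbach relation for frictional pressure drop in a pipe. The function and all the numbers are illustrative assumptions of mine, not anything from the BP/Aspen software; the point is simply that one formula serves a wellbore tubing string and a refinery transfer line alike.

```python
def darcy_weisbach_dp(rho, v, length, diameter, f):
    """Frictional pressure drop (Pa) from the Darcy-Weisbach equation:
    dP = f * (L/D) * rho * v^2 / 2
    rho: fluid density (kg/m3), v: flow velocity (m/s),
    length/diameter: pipe dimensions (m), f: friction factor."""
    return f * (length / diameter) * rho * v ** 2 / 2.0

# Illustrative numbers only: a 1000 m section of 4-inch (~0.1 m) pipe
# carrying oil at 2 m/s. The call is identical whether we label the
# pipe a 'wellbore' or a 'refinery line'.
wellbore_dp = darcy_weisbach_dp(rho=850.0, v=2.0, length=1000.0, diameter=0.1, f=0.02)
refinery_dp = darcy_weisbach_dp(rho=850.0, v=2.0, length=1000.0, diameter=0.1, f=0.02)
print(wellbore_dp)   # 340000.0 Pa, i.e. 3.4 bar
assert wellbore_dp == refinery_dp  # same inputs, same physics, same answer
```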
Our big take-home from the IBC EU SCADA show was the growing significance of commercial off-the-shelf (COTS) hardware. SCADA control points are in the process of migrating from proprietary systems to internet-enabled, COTS-based components. As we saw at last year’s SPE, wireless SCADA is a powerful new technology for tying in outlying data sources. All you need is COTS, the internet and electricity: the ‘COTS, ping and power’ of our title.
Carnegie Mellon’s Coke machine
Once all your SCADA control points (and your subsea geophones, downhole meters and fiber optics) are accessible through their own IP addresses, IT suddenly takes on a different aspect. I discovered this for myself a while back when experimenting with Visual Basic and some OCX controls for internet access. Believe it or not, with about three lines of Visual Basic, it is possible to discover the ‘status’ of the Coca-Cola machine in the IT faculty of Carnegie Mellon University! This functionality has been available since the mid-seventies, making it perhaps the earliest use of ‘web services’. Data sources with their own IP addresses are set to revolutionize corporate IT. Engineers and programmers need no longer be concerned as to whether systems are ‘integrated’. If they are IP-based, they are accessible, particularly to another kind of COTS: component software, whether in the form of development tools or the good old internet browser.
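For readers who would like to try the idea without Visual Basic, the sketch below mimics the interaction in Python: a minimal finger-style query (plain text over a TCP socket, the kind of protocol the Coke machine answered) run against a local stand-in server, since I cannot vouch for the real machine’s address today. The server, its reply text and the query name are all invented for illustration.

```python
import socket
import threading

def fake_coke_server(sock):
    """Stand-in for the vending machine: answer one finger-style
    query with a made-up status line, then close the connection."""
    conn, _ = sock.accept()
    conn.recv(64)                      # read the query ("coke\r\n")
    conn.sendall(b"Slot 1: COLD\r\nSlot 2: EMPTY\r\n")
    conn.close()

def finger(host, port, user):
    """Minimal finger-style client: send 'user\\r\\n', read the reply
    until the server closes the socket."""
    with socket.create_connection((host, port)) as s:
        s.sendall(user.encode() + b"\r\n")
        chunks = []
        while True:
            data = s.recv(1024)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode()

# Demo against the local stand-in (the real machine answered on port 79).
srv = socket.socket()
srv.bind(("127.0.0.1", 0))             # grab any free port
srv.listen(1)
threading.Thread(target=fake_coke_server, args=(srv,), daemon=True).start()
status = finger("127.0.0.1", srv.getsockname()[1], "coke")
srv.close()
print(status)                          # prints the two fake status lines
```

The design point stands regardless of language: once a device has an IP address and speaks any text protocol, ‘integration’ reduces to opening a socket.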
As a final witness in the demarcation debate, I would like to call on IBM. IBM has always refrained from giving its various upstream practices a very high profile, preferring to classify oil and gas as part and parcel of the ‘process industry’. Now if you are a geophysicist or a driller, such a holistic view of the business is very hard to achieve.
But once you are in production, you are part of the process—downstream from the field, everything is tied together. Follow one pipe and it leads to the treatment unit, on to the refinery and thence to the consumer. Gas production and distribution systems are particularly hard to demarcate as town-dwelling consumers are linked directly to offshore production facilities.
The move from reservoir model to the e-field may open a Pandora’s box for upstream IT as it becomes a part of the ‘process’. Fortunately, legacy systems elsewhere in oil and gas look ready for a spring clean too—according to the newly released study from Cambridge Energy Research Associates—see page 12 of this issue. COTS, ping and power are poised to breathe new life into the silo-less processes of the future. And Oil IT Journal will keep on tracking these technologies—even if we do steer clear of refining!
© Oil IT Journal - all rights reserved.