Katherine Frase, VP research, IBM, provided the keynote to this month’s Digital Plant event in Houston. Frase noted the massive amount of data ‘at rest’ in databases along with the exploding amount of data ‘in motion’ from sensors and real time feeds. Such ‘big data’ can fuel three types of analysis: descriptive (rear view mirror stuff), predictive (what if) and prescriptive (control). IBM’s conventional toolset (SPSS, Cognos, Maximo, and Ilog) already provides ‘stochastic process optimization in the face of noisy data.’ Examples include equipment failure prediction, truck route optimization and minimizing unplanned maintenance of offshore platforms.
But the next frontier is the 80% of information that is ‘unstructured,’ i.e. text. Enter natural language processing (NLP) and IBM’s Jeopardy-winning ‘Watson’ with its ‘DeepQA’ software. Watson is a ‘massively parallel probabilistic evidence-based architecture.’ The system was primed with multiple information sources including old Jeopardy questions and answers. Watson builds multiple possible answers which are evaluated when all processing is complete.
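By way of illustration only, here is a minimal Python sketch of the evidence-based candidate ranking idea that DeepQA is described as using: generate candidate answers, score each against several evidence sources and merge the scores into a confidence. The scorers, names and weights below are our own inventions, not IBM code.

```python
# Toy evidence-based answer ranking in the DeepQA spirit: candidates are
# scored by several independent evidence sources, then scores are merged
# into a single confidence. All names and weights are invented.

def score_candidates(question, candidates, evidence_sources, weights):
    """Return candidates ranked by weighted evidence score."""
    ranked = []
    for answer in candidates:
        # Each evidence source returns a score in [0, 1] for this answer.
        scores = [source(question, answer) for source in evidence_sources]
        confidence = sum(w * s for w, s in zip(weights, scores)) / sum(weights)
        ranked.append((confidence, answer))
    return sorted(ranked, reverse=True)

# Hypothetical evidence scorers: keyword overlap and a dummy prior.
def keyword_overlap(question, answer):
    q, a = set(question.lower().split()), set(answer.lower().split())
    return len(q & a) / max(len(a), 1)

def popularity_prior(question, answer):
    return {"Toronto": 0.3, "Chicago": 0.6}.get(answer, 0.1)

print(score_candidates("Its largest airport is named for a World War II hero",
                       ["Toronto", "Chicago"],
                       [keyword_overlap, popularity_prior],
                       weights=[0.4, 0.6]))
```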
Watson uses the Apache Unstructured Information Management Architecture (UIMA) as a ‘standardized approach to handling information.’ Actually the version that played the game was below the skill level of top Jeopardy players. The tie at the end of the first round was considered a good result by IBM’s researchers.
IBM has tried Watson on healthcare, replacing Shakespeare and the Bible with Gray’s Anatomy and the Merck Index. Early attempts to play the American College of Physicians’ Doctor’s Dilemma challenge were not successful. But IBM has tweaked the system and demonstrated Watson’s merit as an aid to diagnostics.
Frase made a rather soft sales pitch to oil and gas. In the future, Watson may help ingest and preserve knowledge of operating procedures from a retiring workforce. Oil and gas, with its multiple information sources such as seismic, well and real time data, should be a good candidate for big data analytics.
Comment—UIMA apart, Frase failed to make clear the contribution of open source software to Watson. According to Heaton Research, Watson runs on Linux and Hadoop. IBM reported a $1 billion/year investment in open source back in 2000 and produces the excellent IBM developerWorks resource.
The UK regulator, the Department of energy and climate change (DECC), has issued clarification to operators and offshore drilling contractors in the light of the Deepwater Horizon accident. The new guidance covers environmental statements, emergency plans, reviews and inspections. Well naming and numbering rules have been clarified and are now coupled with the oil pollution emergency plan (OPEP).
Environmental statements must include a discussion of accidental events that could give rise to a hydrocarbon release, including ‘worst-case scenarios’ such as a blow-out. Such releases must be modeled to determine beaching locations and assess potential environmental impacts. Pipeline operators are likewise required to model potential releases at key locations. Model outputs are evaluated against environmental sensitivities to inform a response strategy.
Stochastic models are needed to study uncontrolled flow occurring at different water depths. These must run long enough to identify likely directions of travel and areas at risk. The 20 page document includes instructions for organizing a control center where the Secretary of State’s representative and the emergency manager can oversee operations. More from 0201.
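For the curious, here is a minimal Monte Carlo sketch, in Python, of the kind of stochastic trajectory run the guidance calls for. All drift rates and distributions below are invented for illustration; real spill models add bathymetry, oil weathering and met-ocean statistics.

```python
# Minimal Monte Carlo sketch of stochastic spill-trajectory modeling:
# release particles at the well location, drift them under randomly
# sampled current/wind, and histogram the directions at risk.
# All rates and distributions are invented for illustration.
import math
import random
from collections import Counter

def simulate(n_runs=1000, n_steps=240, dt_hours=1.0):
    sectors = Counter()
    for _ in range(n_runs):
        x = y = 0.0                   # release point (km)
        u = random.gauss(0.2, 0.3)    # mean eastward drift (km/h), per run
        v = random.gauss(0.1, 0.3)    # mean northward drift (km/h)
        for _ in range(n_steps):
            x += (u + random.gauss(0, 0.2)) * dt_hours
            y += (v + random.gauss(0, 0.2)) * dt_hours
        # Bin the final position into a 45-degree sector of travel.
        angle = math.degrees(math.atan2(y, x)) % 360
        sectors[int(angle // 45) * 45] += 1
    return sectors

for bearing, count in sorted(simulate().items()):
    print(f"sector {bearing:3d}-{bearing + 45:3d} deg: {count / 10:.1f}% of runs")
```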
This time last year I reported on the ‘bumper year’ for mergers and acquisitions that was 2010. Well, A&M carried on at a fair pace in 2011, although tempered by what could be considered another bumper year of share buy-backs.
Our January issue was something of a data modeling special, with a continued interest in ‘horizontal,’ i.e. not oil and gas-specific, techniques. At the IRM-UK conference, Shell, Statoil and BP presented on data modeling and enterprise architecture.
Our February issue was, after the fact, dedicated to the ‘cloud,’ which featured at the Microsoft Global Energy Forum (although MURA did not and remains something of an obscurity). Standards were in the limelight as the National Commission on the Deepwater Horizon accident castigated the API for its dual role as pro-industry lobbying group and safety standards-setting body. As far as I can tell, nothing has changed here: the API continues to lobby vocally against anything coming out of Washington while maintaining its role as standards setter. The API home page promises ‘over 160 standards at your fingertips.’
In our March issue we reported on Santos’ move to the ‘cloud’ even if this ‘cloud’ could have been mistaken for what used to be called a ‘mainframe.’ Shell reported trials of Microsoft’s Azure cloud for Office software. We also noted data ‘issues’ getting attention from the regulator, with BOEMRE finding ‘significant problems’ with BP’s engineering documentation on Atlantis. Vendors have been scaremongering about the likelihood of SOX and other regulations impacting data managers for a while now—sometimes you have to be careful what you wish for...
April saw P2 Energy Solutions on a buying spree with the acquisition of Explorer and WellPoint Systems. In our cyber security round-up, McAfee reported a ‘dramatic increase’ in cyber attacks on critical infrastructure. And there was, ironically, also a successful hack of McAfee’s own website!
May brought more cloudiness—as Baker Hughes announced reservoir simulation in the Azure cloud. The inexorable rise in data volumes was confirmed with Sercel’s announcement of a ‘million channel’ capable seismic recording system. At the Digital Energy conference we heard of the unstoppable rise of iPhones and iPads and much agonizing over Stuxnet.
In June we reported on Shell’s plans for standards-based identity management, also in the cloud, leveraging Covisint’s Energy Ecosystem. The annual Fiatech conference confirmed the rise of the ISO 15926 standard for engineering data management with a flagship deployment on Statoil’s Snohvit LNG project. ‘Deployment’ is perhaps a bit strong. Aveva leveraged ISO 15926 in migrating over 2,000 DGN files to some 120 PDMS databases. At the International Digital Oilfield Conference we learned of the use of Siemens’ ‘XHQ’ as the basis of Saudi Aramco’s enterprise monitoring solution, which runs its refineries, pipelines and export terminals. SAP’s strength in oil country IT was confirmed with Baker Hughes’ ‘Odyssey’ corporate-level HS&E development and a mega land project, also for Aramco.
We led our July-August issue with a great story from Industrial Evolution, which managed to solve the production data exchange problem between multiple partners and facilities in the Gulf of Mexico. There was the A&M deal of the year as IHS acquired SMT for an eye-watering $500 million. We also reported on ‘reproducible’ open source seismic code—something we expect to hear more of in the future. And our exclusive report on VSG/Comsol’s shale poroperm analysis for ExxonMobil brought a significant hike in our website traffic following a LinkedIn post—thank you that person!
In September we reported on yet more cloudy stuff as the San Diego Supercomputer Center leveraged Hadoop/big data technology to index seismic and Lidar data sets. In our report from the OSIsoft EU regional meet, we heard how Total has squared the circle of real time/alarm/event detection and ‘production intelligence.’ ISO 15926 was in the limelight again at the Norwegian Semantic Days event. But a clear presentation of its business benefits remains elusive. No doubt all will be revealed when the $15 million ‘Integrated operations in the high North’ (IOHN) project concludes in 2012. IOHN turns on the ‘application of semantic models in ISO 15926’ for ‘proactive monitoring and management of production critical sub-systems in collaboration with external expert centers.’ We also spotted some amazing modeling technology at the ‘Big Data and Big Computing in the Geosciences’ Cyberinfrastructure Summer Institute for Geoscientists.
October brought more reflections on data and regulations, with the Office of Pipeline Safety citing ‘poor data integration within integrity management programs’ as a contributing factor to recent high-profile spills and explosions. Microsoft announced ‘Chemra,’ a MURA for chemicals, but revealed little as to its inner workings.
In November we had another scoop with our carefully researched piece on BP’s enterprise architectural success leveraging technology from Composite Software and Netezza. But before you run off with the idea that this problem has been ‘solved,’ it is worth reflecting that the high end solution has so far connected 32 out of BP’s 3,000 applications.
A highlight for December might be POSC/Caesar’s proposal to port ISO 15926 from Express to RDF/OWL. The Norwegians have to be complimented on their semantic stamina.
~
Somehow in this wrap-up of our 16th year of publication I feel that I have omitted the essential. I write Oil IT Journal thinking of folks who will devote a couple of hours per month to reading it cover to cover. To my mind, along with the headline-grabbing big stories, the little stuff—like a GPS-controlled spigot on a tanker, or a clip-on wireless device that ‘digitizes’ a remote oil well—is just as interesting. We will try and keep you supplied with the tidbits as well as the big picture through 2012. Happy new year.
Speaking at the annual meet of the professional petroleum data management association (PPDM), member services manager Bruce Smith investigated the data management requirements of the booming hydraulic fracking business.
While fracking has been used by the oil and gas business since the 1950s, the shale gas boom has increased scrutiny of the activity with, as Smith reports, a slug of new requirements from the regulator.
Texas and other states revised hydraulic fracturing rules during 2011; Colorado and Pennsylvania are to require disclosure of frac fluids in a ‘queryable’ database; and in Canada, British Columbia is to require online registration of frac information starting in January 2012. The Interstate Oil and Gas Compact Commission (IOGCC) and Ground Water Protection Council (GWPC) have set up a website for frac fluid reporting. As of October 2011, some 5,400 wells had been registered by 44 companies. Reports, in PDF format, contain voluntarily-supplied information on frac fluid chemistry.
The GWPC’s own ‘risk-based data management system’ (RBDMS.NET) is used by some 20 US regulators for management of well, production, injection, surface facilities, permitting and more. The RBDMS has been extended with a frac data model covering treatment dates, chemicals, water sources, samples and disposal techniques. RBDMS includes fairly comprehensive modeling of stimulation activity and lifecycle frac fluid use across multi-stage treatments, including CAS and MSDS chemical tags. Another significant contribution is Energistics’ Witsml ‘StimJob’ object for exchange of post-stimulation data between service company and operator. Operators’ requirements may be more detailed, with a need for ready access to treatment information from offset wells such as proppant, number of stages, pressure and horsepower data. The current PPDM well treatment table offers summary-level data on most of the above but, Smith concluded, ‘There have been sufficient changes in frac reporting and in information supplied by service companies that PPDM should take a fresh look at the model.’
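For a flavor of the data at stake, here is a hypothetical sketch of a post-stimulation summary document. The element names are simplified inventions for illustration; real integration work should consult the actual Energistics Witsml schema.

```python
# Sketch of the kind of per-stage, post-stimulation summary a
# StimJob-style exchange might carry. Element and attribute names
# are invented, not the Energistics schema.
import xml.etree.ElementTree as ET

job = ET.Element("stimJob", well="42-123-45678", treatmentDate="2011-10-14")
stages = [(250000, 8200, 12000), (310000, 8650, 14500)]
for i, (proppant_lbs, avg_psi, hhp) in enumerate(stages, start=1):
    stage = ET.SubElement(job, "stage", number=str(i))
    ET.SubElement(stage, "proppantMass", uom="lbm").text = str(proppant_lbs)
    ET.SubElement(stage, "avgPressure", uom="psi").text = str(avg_psi)
    ET.SubElement(stage, "hydraulicHorsepower").text = str(hhp)
    # CAS-tagged additive, echoing the RBDMS approach to chemical disclosure.
    ET.SubElement(stage, "additive", casNumber="7732-18-5").text = "water"

print(ET.tostring(job, encoding="unicode"))
```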
PPDM CEO Trudy Curtis suggested that the subject could be treated in the context of a wider investigation of the data management specifics of non-conventional resources in general. These include tar sands mining, coal bed methane and shale gas/oil. Some of these issues are already addressed in the upcoming 3.9 release of the PPDM data model.
The subject of ‘non-conventional’ data management also cropped up in another presentation from Bruce Smith, on well identification—an extension of the earlier ‘what is a well’ project. In complex wells with long multiple completions and environmentally-sensitive treatments, the unambiguous naming of well bores and laterals has added pertinence. PPDM is working to update the API well numbering standard and bring it into the modern world.
Chuck Smith (Divestco) introduced the petroleum education task force (PETF), a group of data management specialists from, inter alia, Total, Hess, Chevron, BP, RasGas and Devon, who are working to build an education program and to ‘capture domain knowledge before it is lost.’ The PETF is working to ‘align’ PPDM education with the DAMA data management body of knowledge and Common Data Access’ competency map. A framework for PPDM certification is under development, as is ‘Dacum’ (developing a curriculum)-based content. A new online course on US land survey systems will probably have launched by the time you read this. Live classroom sessions are planned covering architectural principles, ‘what is a well,’ geochemistry and PPDM implementation.
Steve Cooper’s (EnergyIQ) presentation went ‘beyond the well,’ with an overview of three projects that extend PPDM’s traditional subject area. One company has leveraged PPDM ‘spatialization’ to support handheld field data capture. Field-captured data is tied into upper-level PPDM modules including facility, equipment and HSE incident. Spatial data can be mapped into PPDM’s ‘SP_’ tables, even though many deployments store spatial data in the GIS system. 95% of the required spatial attributes were mapped directly to PPDM 3.8, with the rest added as new tables in what Cooper described as a ‘robust, fully-integrated’ solution.
Other projects studied involved oil and gas meter management and AFE/drilling cost tracking. Cooper concluded that PPDM is a flexible and powerful data model. The ‘Component’ tables provide excellent support for integration projects—allowing complex relations to be made across equipment, sites and support facilities. However, it takes patience and a thorough understanding of the problem to build out the model correctly.
Neuralog’s Robert Best advocates ‘standardizing’ your PPDM implementation. Best showed how this was achieved in a recent joint development undertaken by Neuralog, OpenSpirit, Volant, Oilware and several clients. Standardization works at different levels: on natural key values for reference data, consistent use of API well numbers (for the US) and a global UWI for international work. Denormalized latitude/longitude data can be embedded in the Well table for performance. For multiple sets of data, the Active_Ind flag should be used to indicate which is the preferred set. Likewise, a records management strategy requires careful thought—again maximizing the usefulness of the natural key. While binary large objects (Blobs) can be stored in PPDM, most implementations use linked files. Here both original (source) location and current database copy locations need tracking in the records management module. Best recommends following OGP Geomatics guidelines. To date, vendors take different approaches to ‘standardizing’ PPDM. Best suggests that standardizing the standards will provide better inter-vendor interoperability.
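By way of illustration, here is a minimal sketch of two of Best’s conventions in action, using Python’s built-in SQLite. The table is a drastic simplification of the real PPDM well table.

```python
# Natural-key UWI plus an Active_Ind flag marking the preferred of
# several overlapping data sets. A drastic simplification of PPDM.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE well (
    uwi TEXT, source TEXT, latitude REAL, longitude REAL,
    active_ind TEXT CHECK (active_ind IN ('Y', 'N')),
    PRIMARY KEY (uwi, source))""")
db.executemany("INSERT INTO well VALUES (?, ?, ?, ?, ?)", [
    ("300012345600", "VENDOR_A", 31.9686, -99.9018, "N"),
    ("300012345600", "VENDOR_B", 31.9689, -99.9021, "Y"),  # preferred set
])
# Consumers read only the active (preferred) version of each well.
for row in db.execute(
        "SELECT uwi, source, latitude, longitude FROM well "
        "WHERE active_ind = 'Y'"):
    print(row)
```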
Earl Amankwah outlined how IHS provides support for multiple PPDM versions. This proved necessary after IHS migrated its own data services to 3.7—while several clients’ older releases required support for in-house applications and scripts. Implementing backwards compatibility proved tricky. Challenges of code consolidation, primary key changes and new PPDM columns in 3.7 were solved thanks to coordination with clients’ teams and IHS’ ‘proven’ methodology. Download the presentations from PPDM.
The Society of Petroleum Engineers has just published updated guidelines for users of the Petroleum Resources Management System (PRMS), a joint development between the SPE, the World Petroleum Council, the American Association of Petroleum Geologists and the Society of Petroleum Evaluation Engineers. The PRMS was used by the US Securities and Exchange Commission as the basis for its latest reserves booking regulations.
The new Guidelines align an earlier publication with the latest flavor of the PRMS, with new chapters on unconventional resource reporting. A revised chapter covers probabilistic estimation procedures. Other topics that are either new or have been updated are resource aggregation across multiple projects, ‘commercial’ evaluations and public disclosure and reporting.
The non-conventional guidelines are considered ‘work in progress’ with some details on coal bed methane and shale gas which will be expanded at a later date to include heavy oil, bitumen, tight gas and gas hydrates. Other refreshed chapters cover production measurement and operations, entitlement and ownership and a revamped list of reference terms. More from 0601.
World wide web consortium (W3C) blogger Ian Jacobs recently interviewed Chevron retiree and standards luminary Roger Cutler on Chevron’s use of semantic web technology. Cutler explained that Chevron’s focus, back in 2000, was on XML and web services. Chevron later sought to exploit the ‘expressiveness and reasoning’ achievable with the web ontology language, OWL. This program was a technical success, but has not as yet produced a significant business benefit. A second effort focused on ‘challenging integration problems’ spanning equipment used in major capital projects, involving ‘tens of thousands of objects: flanges, pumps, blowout preventers and so on.’ Chevron again leveraged OWL to pull all this information together. Again, a technical ‘success’ has not yet seen at-scale deployment. Cutler reported, ‘We’re [still] at the stage of learning and experimenting with the technology.’
Cutler remains positive about the promise of semantic technology to bridge ‘different organizations with different data models and systems.’ Such an approach ought to enable system-wide optimization as opposed to current domain-specific work. Technology of interest includes an ‘upper ontology’ that defines general concepts like units of measure. OWL is also a promising alternative to relational technology. In one example, some 15 lines of comprehensible rules were equivalent to over 1,000 lines of complex relational code. However, the ‘reasoning’ bit has proved intimidating—‘It is daunting to figure out how to gain the organizational capability to support a technology that is so difficult to understand and use effectively.’ Read the full interview on 0701.
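To illustrate the rules-versus-joins point, here is a small sketch using Python’s rdflib of a transitive ‘part of’ query over an invented plant hierarchy. The relational equivalent would need recursive SQL or application code.

```python
# A transitive partOf relation queried with rdflib's built-in closure.
# Plant names are invented; requires the rdflib package.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/plant#")
g = Graph()
g.add((EX.impeller, EX.partOf, EX.pump1))
g.add((EX.pump1, EX.partOf, EX.waterInjectionSkid))
g.add((EX.waterInjectionSkid, EX.partOf, EX.platformA))

# Everything the impeller is (transitively) part of, in one call, no joins.
# Note that transitive_objects also yields the starting node itself.
for assembly in g.transitive_objects(EX.impeller, EX.partOf):
    print(assembly)
```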
The 2011/12 edition of CAST Research Labs’ report on application software health analyzed code from 745 applications submitted by 160 organizations—a total of 365 million lines of code, a threefold hike from last year’s study. Nearly half the code is Java-EE, with other applications written in .NET, ABAP, COBOL and Oracle Forms. Legacy Cobol applications do best in security, while .NET applications ‘received some of the lowest security scores.’ In the ‘energy and utilities’ segment, 65% of code is written in Java, 8% in .NET and none in Cobol. Cobol’s strong security showing is attributed to its highly regulated environment.
Java-EE did less well on performance—receiving significantly lower scores than other languages. Energy and utilities showed one of the lowest (best?) scores for component complexity. Government applications were much higher in complexity than all other verticals. Another finding was that there was no difference in code quality between in-house developed and outsourced applications. Another interesting fact is the cost of fixing bad code—estimated at $3.50 per line. More from 0801.
A new report from IBM’s global business services unit looks forward to ‘Oil and gas 2030.’ The opinion survey of energy industry executives (63% from oil and gas) finds that, despite increasing attention to alternative energy sources, the world ‘can’t forget about oil and gas.’ The biggest challenge will be ‘dealing with technology progress.’ Next come ‘environmental concerns and government influence.’ Executives anticipate an increase in strategic partnering for R&D, which will be ‘too complex and expensive for any one company to manage on its own.’ Companies expect to conduct 38% less research in-house and 44% less through outsourcing. Inter-company R&D is expected to double.
R&D will be targeted—on ‘supercomputing and sensing data acquisition for 4D seismic processing and reservoir modeling’ (in other words, on what we are doing today!). The study might have reached a different conclusion if 63% of the sample had been, say, ‘air-air’ geothermal technology providers. More from 0901.
OSIsoft has announced PI Coresight, a web-based viewer and analytics system for PI System data—1001.
Austin Geomodeling’s Scenario Manager is an interpretation version control tool that lets geoscientists analyze multiple interpretation scenarios simultaneously. GeoSM tracks changes to the model throughout a project’s lifecycle—1002.
SPT Group’s Mepo 4.0 claims to provide ‘rigorous reservoir flow model optimization under uncertainty’ along with enhanced data management and a new ‘3DViz’ visualization module—1003.
Blueback Reservoir has added a seismic reservoir characterization (AVO) module to V11.0 of its Petrel toolbox—1004.
The V9.0 release of Petris’ DataVera promises a 5-10x performance hike, better ETL functionality and usability. Multi-source merge can blend ‘best’ values from diverse sources into a master set. Connectors have been added for Peloton, PODS and P2 Energy Solutions’ data sources. V9.0 includes preconfigured models and rules verified by Petris—1005.
PortVision’s new Fleet Management System extends its AIS-based SmartOps vessel-tracking system. An activity logger installed in the wheelhouse records events and delays, along with invoicing and reporting information—1006.
Blue Marble Geographics has enhanced support for SEG-Y formatted seismic data in the V7.6 release of its Geographic Calculator, with a more user-friendly interface that tags columnar seismic files in the point database—1007.
The V9.0 release of Kongsberg Oil & Gas Technologies’ SiteCom data server improves on scalability and adds ‘pro-active operational support’ of service company data streams. Discovery Web has been extended with an optional package of new applications for better use of real time operational data—1008.
The 8.7 release of IHS’ ‘Kingdom’ interpretation package enhances direct hydrocarbon indicator analysis on post-stack data. EarthPak includes new tools for identification of gas-rich shales as well as resource evaluation with Monte Carlo uncertainty analysis. VuPAK includes new microseismic functionality for frac studies. IHS acquired Kingdom’s developer Seismic Micro-Technology (SMT) earlier this year—1009.
StreamSim’s StudioSL V7.0 offers workflow integration with third-party simulators such as Eclipse, improved 3D visualization and remoting of simulation jobs. Production data can be imported from OFM, geoSCOUT and Accumap—1010.
Badleys is planning a new 6.1 release of TrapTester with conversion of TrapTester structural models to Eclipse-style corner point grids. Geocellular models can be populated with reservoir attributes to generate 3D petrophysical cubes. These can leverage deterministic (kriging) or stochastic (sequential Gaussian) simulators. A ‘totally revamped’ OpenSpirit link permitting cross-platform data transfer, and a centralized data manager, have been added—1011.
Speaking at the recent GITA-backed GIS for oil and gas pipelines conference in Houston, David Rogers reported on a GIS/GPS combo that his company, ESC Engineering, developed for BP’s San Juan operations to mitigate rising road vehicle accident rates amongst employees and contractors. ESC brokered a licensing agreement with Garmin for a custom version of MapSource and converted BP’s detailed roads and facilities data for use with the dashboard navigation units. The system provides operators with turn-by-turn driving instructions, resulting in a decrease in incidents and improved operator productivity.
David Supp described EQT Corp.’s GIS-coupled enterprise asset management (EAM) system, used to manage its 11,670-mile pipeline network. EQT’s system combines an ESRI APDM-based GIS with IBM/Maximo EAM. Global Information Systems’ ‘GForms’ were used as a general-purpose data entry and reporting interface (including GIS) to the EAM. Using ESRI as a central repository for Maximo required a major data cleansing effort. But this has paid off in a wider audience for a single source of live spatial data, shared by all stakeholders via the pipeline operations dashboard.
Blue Marble Geographics’ president Pat Cunningham provided a refresher course on geospatial data management with a twist. Along with the niceties of geoids, projection systems and so forth, there is the issue of when GIS data was recorded. US intra-plate movement of several centimeters per century means that, since WGS 84 was established, ‘your coordinates aren’t where you left them!’
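A back-of-envelope illustration of the point. The drift rate below is an assumption for the sake of the arithmetic, not site-specific geodesy.

```python
# Coordinates recorded against a datum drift as the crust moves.
# The rate is a parameter; plug in whatever applies to your site.
def datum_drift_m(rate_cm_per_year, epoch_year, now_year):
    return rate_cm_per_year * (now_year - epoch_year) / 100.0

# E.g. a hypothetical 2 cm/year of plate motion since the 1984 WGS 84 epoch:
print(f"{datum_drift_m(2.0, 1984, 2011):.2f} m")   # 0.54 m
```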
Tracy Thorleifson (Eagle Information Mapping) reflected on pipeline data governance, observing that ‘your pipeline database is not the real pipeline.’ The database may not represent individual pipe joints, older systems may not have information that is mandatory for modern reporting and information may not be current. Thorleifson advocates using manufacturing processes like six sigma and ‘lean’ to minimize data defects in the pipeline database. Data edits need tracking at the attribute level—not just on a per-record basis as supported by current models like PODS and APDM.
John Jacobi of the pipeline and hazardous materials safety administration (PHMSA) tempered an earlier message, observing that ‘the sky is not falling in!’ The long-term trend in pipeline incidents with death or major injury shows a steady, 3% per year decline since 1986. Spills with environmental consequences are also on the decline. Notwithstanding the good news, the world is changing as a recent rash of incidents has sparked media interest, fear and ‘elephants’ wading into the debate. These are influencing the congressional reauthorization of the pipeline safety program, which will set the regulatory agenda for the next four years. The industry has its work cut out in the face of the multiple commissions and enquiries underway. Presentations on 1103.
Around 900 attended Invensys’ EU ‘OpsManage’ event held in Paris last month. In his opening keynote presentation, Sudipta Bhattacharya, CEO of the operations management division, painted a picture of a fast-morphing process industry as the promise of real time data is fulfilled. Bhattacharya contrasted the old world of ‘centralized ERP control’ with the real-time empowered future. ‘Real time’s place is not at the CEO level—its true use is at the edge, in the hands of the operators. The technology already exists but not the empowerment.’ There is great potential for combining real time data with social networking-type tools. But these may cross the human/machine barrier—‘what if machines can tweet?’ Some of these ideas have been leveraged in a Chevron-backed refinery of the future proof of concept. Here system performance is monitored in real time, emailing operators with suggestions on how to increase profits and/or keep to schedule. This kind of efficiency analysis ‘used to take hours in Excel.’
Leen de Graaf, alarm management expert with Netherlands-based UReason, described a major alarm management initiative carried out at Total E&P Netherlands. The project set out to reduce operator workload on an offshore platform through advanced alarm management. Total’s alarm philosophy is described in a document of ‘required’ or ‘recommended’ alarms—along the lines of ISA 18.02. Alarms must relate to stuff that is relevant and over which an operator has control. For instance, a malfunctioning temperature sensor should not appear in the control room alarm list; the information needs to be routed straight to maintenance. Alarms need to be prioritized as to the required reaction time and the seriousness of the event. UReason’s OASYS-AM software displays KPIs such as the number of alarms per hour, the most frequent ‘filtered’ alarms and a most-occurring discrepancy list. Results have been good—the system has reached its target of around 12 alarms per hour per platform. The software also supports advanced alarm handling and can be used to detect abnormal process or equipment/environment conditions. The aim is for early warning of impending incidents.
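The KPIs described boil down to counting a time-stamped event log. Here is a toy Python sketch; the log format and tag names are invented and this is not OASYS-AM’s code.

```python
# Alarms per platform per hour and most frequent alarms, from a simple
# event log. Sample data is invented.
from collections import Counter
from datetime import datetime

log = [  # (timestamp, platform, alarm tag)
    ("2011-11-01 00:05", "P1", "PT-101 HI"),
    ("2011-11-01 00:20", "P1", "TT-204 LO"),
    ("2011-11-01 00:25", "P1", "PT-101 HI"),
    ("2011-11-01 01:10", "P2", "LT-310 HIHI"),
]

per_platform_hour = Counter()
per_tag = Counter()
for ts, platform, tag in log:
    hour = datetime.strptime(ts, "%Y-%m-%d %H:%M").strftime("%Y-%m-%d %H")
    per_platform_hour[(platform, hour)] += 1
    per_tag[tag] += 1

print("alarms per platform-hour:", dict(per_platform_hour))
print("most frequent:", per_tag.most_common(2))  # rationalization targets
```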
Tyler Williams (founder of IT security specialist WurldTech, currently working for Shell) offered insights into process/IT security in the light of Stuxnet. While this brought a lot of attention to SCADA security, Williams warned of the ‘swarm effect,’ whereby everyone runs towards a fix for Stuxnet and loses focus on why such things happen. Security is really a subset of quality. Automation vulnerabilities may exist, but most are probably very hard to access and may or may not have a serious consequence. Another issue is ‘work group bias,’ as there are multiple stakeholders and ‘competing’ standards—like Hart vs. ISA. IT security is moving fast and automation is following, with application white listing just getting started. But the chatter from some intrusion detection systems may overwhelm operators. To cut through the various obstacles, a Shell-led group has set out to leverage ‘people and ideas’ from CFATS, DHS, ISA and NIST. The International Instruments Users’ Association (WIB) is working to get suppliers to provide more secure systems. The WIB’s security requirements document for vendors has been proposed as an ISA standard. The WIB is also working on end users’ security. In the Q&A, Williams opined that while it may be relatively easy to disrupt a plant, the WIB aims to make it very hard to do serious damage. The easiest attack may be a denial of service on the router—especially with wireless systems in the plant. One speaker doubted that this was the case, pointing out that disrupting a modern wireless system requires sophisticated spread-spectrum equipment operating in proximity to the plant. Another commentator ‘considered the IT department as a security risk!’
Invensys’ Peter Martin returned to the theme of real time and the operational ‘edge,’ observing that efficient plants are not necessarily profitable. Enter profit control with, for instance, continually adjusted contracts for energy, feedstock and product. Here front line personnel are well placed to act, provided they have the right profitability metrics in real time. Because every set point impacts value, we need to change the ingrained culture that bases a set point on the fact that ‘we’ve always done it that way.’ Such a holistic approach needs to overcome the continuous tension between operations and maintenance. ‘These are the two groups which collaborate least. They don’t like each other because we measure them independently and pit them against each other. We need to measure both on winning the race—not on availability and utilization.’ Because plants are usually constrained by safety considerations rather than capacity, Martin advocates real time measurement of safety risk that will allow an operator to ‘expand’ a constraint if it is OK to do so.
Sven Matthiesen, responsible for safety automation systems at Statoil’s Kalundborg, Denmark refinery, presented Statoil’s technical integrity management program (TIMP). TIMP sets out to ensure complete control of technical integrity in the face of multiple information sources and non-uniform work processes. Statoil’s TIMP tool provides quantitative risk assessment, e.g. with a map of reflected pressure exposure across the plant. Daily examinations of plant state are carried out with a scoring system. This ranks risks from none or insignificant, through ‘substantial issues’ requiring communication to the plant manager, to ‘fail,’ where immediate action is required. Performance standards specifically target safety. All are rolled up into a TIMP dashboard of safety information on structures, containment, ignition control, power, communications and more. Statoil’s performance standard 15 sets out design principles for explosion barriers and provides a scorecard for the whole plant. Statoil, like others, was very concerned by Stuxnet and has been updating its safety and automation systems using the Norwegian OLF 104 information security baseline requirements.
Thierry Guillaume (Invensys) provided a primer on the state of the art in process simulation, particularly in factory acceptance testing in compliance with IEC 61508 and IEC 61511 functional safety management. The Triconex TriStation simulates multiple systems in an offline environment and automates safety instrument testing. The Excel-based tool allows for scripting of tests and connects to PLC/DCS emulators via various protocols. Output can be directed to a real hardware controller or, with the SIM4ME executive, to software emulators. The system has also been used in a training setting to introduce a fault, see how operators react and then show how the fault originated. The system can also perform dynamic process simulation with Dynsim—testing startup, shutdown and running in fast or slow time, rewinding if something ‘funny’ happens—watching signals fire red and green as you step through the process.
Leen de Graaf (UReason) was back, addressing the subject of security and stating that ‘the banking industry is twenty years ahead of us in cyber security!’ But the situation is improving, with clear processes to fix vulnerabilities such as stack overflows in ActiveX controls. Microsoft’s security development lifecycle approach has been helpful, as have the trustworthy computing program and the ISA Security Compliance Institute. Things get more difficult when considering legacy software and devices. The SEE Framework is useful—but the threats are constantly increasing. In the period 2006-2009, around 50-60 vulnerabilities were identified. In 2010 and 2011 there have been 150. ‘It is a real problem.’
Niels Aakvaag’s company, GasSecure, has developed what is claimed to be the first wireless infrared gas detector, with backing from Statoil and ConocoPhillips. Wireless offers good coverage with reduced engineering. The downside hitherto has been high power consumption. GasSecure gets around this with two detectors—a low-power ultrasonic time-of-flight detector senses changes in ambient gas composition. When a change is detected, the second, optical sensor (which uses more power) kicks in. Other smarts make the meter calibration-free and intrinsically safe. A prototype has been successfully tested at Statoil’s Kaarsto gas plant and offshore at Grane. ConocoPhillips has placed an order for its Tor development in 2012.
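The power-saving trick is essentially a duty-cycling trigger: the cheap sensor watches continuously and only wakes the power-hungry one. A toy sketch, with all thresholds and readings invented:

```python
# Two-detector scheme: an ultrasonic sensor watches for a shift in the
# speed of sound (gas changes ambient composition); only then does the
# power-hungry optical sensor switch on. All numbers are invented.
def monitor(sound_speed_readings, baseline=343.0, threshold=5.0):
    for speed in sound_speed_readings:
        if abs(speed - baseline) > threshold:
            yield "optical sensor ON", confirm_with_optical()
        else:
            yield "low-power watch", None

def confirm_with_optical():
    # Stand-in for the infrared measurement that draws the real power.
    return "CH4 concentration: 12% LEL (simulated)"

for state, detail in monitor([343.1, 342.8, 351.6]):
    print(state, detail or "")
```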
Pablo Rey presented Tecnatom’s virtual reality application—first developed to train and practice operations in hazardous areas such as a nuclear plant. The system uses laser scans and photo-realistic draped imagery and can be coupled to a 3D TV monitor. A variety of activities and emergency situations have been developed. More from Invensys.
At the IRM-UK conference held in London last month, Shell group data architect Andrew Schulze teamed with Accenture’s Duncan Slater to present Shell’s foundation for enterprise master data management (MDM). The project spans Shell’s global upstream and downstream businesses with standardized terminology and a high-level enterprise data catalogue. A 2010 proof of concept (PoC) project extended Shell’s SAP master data management system with Microsoft’s master data services. Now Shell has a further six MDM PoC projects underway. Even at the PoC level, MDM has not been easy. The golden rule? There are no golden rules! MDM practitioners have to balance realizable projects with the requirement of enterprise deliverability.
Statoil’s lead information architecture advisor, Eldar Borge, has been studying data management methodologies. Oil data management differs from the mainstream with the ‘normal’ breakdown of 20% structured and 80% unstructured practically reversed—mainly due to large seismic data volumes. Statoil has a long history of leveraging upstream data methodologies and evolving standards. Its ‘Score’ project was effectively an enterprise architecture well before the concept was invented! Statoil is now attempting to reconcile its framework with mainstream information management concepts. Borge compared DAMA, IBM’s information governance and The Open Group’s TOGAF approach. Statoil has settled on the DAMA methodology for concept and function definitions. This is proving useful in documenting information domains and their relations with process, governance and data stewardship. DAMA is now being used as the basis for an enterprise master data pilot project. But, Borge warns, ‘DAMA describes what to do, not how to do it!’
Human resources data management architect Mark Smith traced Shell HR’s journey towards information quality ‘top quartile performance.’ The move to an integrated, global ‘HR Online’ system for Shell’s 93,000 employees has seen a major data clean-up effort. Shell HR’s information quality ‘community’ spans talent, training, pay and other stakeholders who now share accountability for the data framework. HR Online includes data definitions, a business glossary and data quality KPIs. Shell is still adding to the system and enhancing its data verification capabilities. More from IRM-UK.
Seddik Boulemkahel is Corpro’s new Resident Manager for Algeria. New North Africa manager is Mohamed Chahtour, who hails from Baker Hughes.
Roxar has appointed Frode Sedberg regional manager for Brazil. He will be based in parent company Emerson’s Rio de Janeiro office.
Senergy has appointed Vivien Broughton as VP of HR. She was previously with Transocean.
Phil Neri has been appointed VP marketing with Terraspark Geosciences. He was formerly with Paradigm.
Filipe Soares Pinto is to head-up Aspen Technology’s Latin American operations in Sao Paulo.
Antoon De Proft has been named to Barco’s board, replacing Urbain Vandeurzen.
Richard Sandler has been appointed VP global contracts and commercial management for CSC.
Dassault Systèmes has opened its new North American headquarters, the Dassault Systèmes Boston campus, a 20,000 sq. ft. technology lab, data center and virtual reality center.
Former BP executive Patrick Dixon has been appointed chair of the office of carbon capture and storage at the UK Department of energy and climate change.
Brian Neil Robertson has been appointed CEO of Identec Solutions. He hails from DSP Group.
Vickie White has joined Exprodat as training and business development manager. She hails from ESRI UK.
Invensys has appointed Mike Caliel as president and CEO of its operations management division, replacing Sudipta Bhattacharya.
ION Geophysical has named Gregory Heinlein as senior VP and CFO. Heinlein hails from Genprex.
Scott Barrett has joined the management team of KSS Fuels as VP customer pricing and planning management. Barrett hails from BP.
Lloyd’s Register has appointed John Wishart to lead its global energy team. He was formerly with Noble Denton.
Idox’ McLaren Software unit has named Mike Cawsey as VP Asia Pacific. He joins McLaren from Metastorm.
Baker Hughes’ Malissa Boudreaux has joined the OFS Portal board.
Co-founder of McLaren Software, David Parry has joined Cadac Organice.
Rick Piacenti has joined Quorum Business Solutions as VP finance and chief administrative officer. He was formerly with Milagro Exploration.
Dan Skibinski has joined Reservoir Group as director of Canada Tech. Julia McGlashan is manager, HR.
F.H. Schreve has been appointed to Fugro’s supervisory board.
Aker Solutions is hiring engineers for its new engineering office in Tromsø, Norway.
Bill Barrett Corp. has appointed former ConocoPhillips VP E&P Kevin Meyers as an independent director.
Retired US Coast Guard Admiral Thad Allen has been named senior VP at Booz Allen Hamilton, USA.
Bill Spurgeon is president and CEO of Dover Corp.’s new Energy unit.
Puneet Mahajan is now VP and chief risk officer at GE Corporate.
Mark Vergnano has been elected to the Johnson Controls board.
Jan Hackett has joined Rolta to manage the new P2ES practice. Hackett was previously with Computer Sciences Corp.
Innovation Norway has awarded a ‘public grant’ of 20 million NOK toward Badger Explorer’s autonomous drilling demonstration program.
Dirivera Investments is to acquire ‘substantially all’ of Forbes Energy Services’ assets in Mexico for $30 million cash.
GeoDigital International has purchased the US assets of Norwegian Powel AS.
McLaren Software’s parent company, UK based IDOX plc., has acquired CTSpace from the Sword Group.
Lufkin Industries has completed its acquisition of the assets of Quinn’s Oilfield Supply Ltd. for $311 million in cash. Lufkin also completed a $350 million term loan to finance the acquisition.
Maxim Partners has acquired Pipe Maintenance, a provider of inspection, repair, maintenance and asset management services for oil country tubular goods and down-hole tools.
P2 Energy Solutions has acquired Beyond Compliance, a provider of hosted compliance management solutions, notably its Integrated Compliance Management System suite.
Technip has consolidated its wholly owned subsidiary Genesis Oil and Gas Consultants with its Houston-based deepwater engineering team into a new unit, ‘Genesis,’ now with 1,000 engineers in Houston, London, Aberdeen, and Perth.
TerraSpark Geosciences has received an additional $4 million capital injection from Lime Rock Partners. The monies will serve to ‘accelerate growth in the seismic interpretation marketplace.’ Lime Rock’s total commitment now stands at $10 million.
Following the reception of a second ‘Nasdaq letter’ last month as a result of its failure to file a 10-Q for the three months ended September 30, 2011, Recon Technology has engaged Friedman LLP as auditor, replacing Marcum Bernstein & Pinchuk (MBP). Recon states, ‘There have been no disagreements with MBP on any matter of accounting principles or practices, financial statement disclosure or auditing scope or procedures.’
The EAME edition of Honeywell’s user group, held recently in Baveno, Italy, heard from Total’s Luc de Wilde on the state of alarm management. Using numbers from the Abnormal Situation Management (ASM) consortium, a Honeywell-sponsored group with membership from several oil majors, de Wilde traced the history of alarm management from the early days of enthusiasm to the advent of alarm ‘flooding’ from multiple alarms in large control rooms, believed to have been responsible for several accidents.
Early attempts at alarm rationalization met with limited success. In an ASM benchmark study, peak alarm rates showed little correlation with alarm rationalization and 60% of consoles showed peak rates of one alarm every six seconds. de Wilde argues that alarm management should be seen as a continuous improvement commitment and needs to go beyond application of minimal best practices. Today, alarm system performance is an ‘unsolved problem.’
Ian Pinkney reported on a continuous improvement project targeting alarm management across BP E&P. BP has used ‘mind mapping’ as a preliminary to its rationalization work, performed by teams of alarm specialists chaired by a non-specialist manager. The team works to build a database of alarm severity, causes, consequences, response and other parameters, which are grouped into prioritization tables. The aim is for compliance with BP company specs and/or EEMUA 191.
Dominique Desplebin described how Honeywell’s OneWireless mesh solution has been deployed at a Total polystyrene unit in Gonfreville, France. Various communications solutions were studied. A pager was eliminated because of the lack of precision in its messages. SMS messages were not chosen because of their dependency on the GSM phone network. A dedicated 5 GHz Motorola solution was deemed unreliable and offered insufficient coverage. Total turned to Honeywell for a solution based on a secure WiFi network and PDAs for alarm visualization.
Fouad Al-Saif described process monitoring across Saudi Aramco’s 15,000 PID loops and 130 advanced process control applications as like ‘finding a needle in a haystack.’ Al-Saif observed that performance monitoring software has become a key tool for the control engineer—particularly for pinpointing under-performance for preventive maintenance. Saudi Aramco is now working to deliver metrics that compute the financial benefit of advanced applications and key PID loops, along with an indication of the lost opportunity cost of sub-optimal operation.
Saipem’s Luigi Pedone made a case for an ‘integrated main automation contractor’—a.k.a. Saipem’s ‘iMac.’ The iMac concept was developed for Sonatrach’s Arzew grass roots LNG project. Under iMac, Saipem issued all the bids to Honeywell and third-party suppliers. iMac was claimed to eliminate the risk of multiple vendor interfaces, offering Sonatrach a single procurement contract.
Sasol’s Hugo van Niekerk focused on the convergence of control systems and mainstream IT, now blending into a ‘control critical information system.’ Sasol’s Synfuels unit houses one of the largest Honeywell systems in the world, with some 37 DCS systems interconnected with 450 Windows-based nodes and 200 Cisco devices. The infrastructure is managed by a central platform for process optimization, access control, anti-virus and file services. The system has been successfully deployed and now provides a highly secure and scalable environment.
Veselin Kutsarov revealed that Lukoil has implemented a virtualized environment, including a OneWireless infrastructure, at a large oil refinery at Burgas, Bulgaria. A PHD-based manufacturing execution system has been ‘virtualized’ with VMware, resulting in better IT resource use, scalability and reduced downtime. The OneWireless showcase deployment has brought ‘unmatched’ interoperability across all plant equipment, exposing the same unified infrastructure to Lukoil’s diverse user community. Request presentations on the Honeywell user group website.
The joint IT subcommittee (SC 27) of the International Standards Organization (ISO) and the International Electrotechnical Commission (IEC) has just released Technical Report (TR) 27008, designed to assist companies and organizations engaged in a review of information security controls. TR 27008 describes IT risk assessment and sets out guidelines for IT system documentation, continuous review and formal compliance.
A ground-up approach begins with an information security risk information gathering exercise—a literature search on previous incidents and near misses. The aim is to scope out the audit with a checklist of topics and a framework for future ‘fieldwork.’
Fieldwork itself consists of tests of systems in place that verify compliance with regulatory obligations, standards and best practices. Anti-virus checks will include, for instance, verification that protection is in place and refreshed across all computing platforms. Statistical sampling will likely be used if resources are limited.
A three phase—review, interview and test—approach is recommended, with in-depth drill down as appropriate. Interviews should include a representative sample of users with special attention to key stakeholders. Testing will span software, hardware and processes and will include access control, backups, contingency planning and (much) more. Testing can include blind, double blind and ‘grey box’ techniques. The TR concludes with recommendations for analysis and reporting and lengthy appendices of detailed procedures. The 44 page document is available for purchase.
BP has awarded ABB a $33 million, three-year frame agreement for maintenance, modification and service of the safety and automation systems of all of its oil field operations on the Norwegian continental shelf. ABB will provide system expertise for the safety and automation systems installed at BP’s main field operations in Norway, Valhall and Ula, as well as remotely operated Hod and Tambar. Telecommunications systems at Skarv are also covered by the agreement.
The North Caspian operating company has contracted with Paradigm for the provision of onsite technical services at its data centers in Astana and Atyrau, Kazakhstan. Services include back up and disaster recovery, geological and geophysical application support, and infrastructure, data management and IT consultancy services.
Paradigm has also been awarded a contract from Geopro Technology for the provision of its Sysdrill drilling engineering solution. Sysdrill will be used to model ‘challenging’ wells to reduce cost and improve safety.
Advanced EPM Consulting has achieved an Oracle partner network oil and gas industry ‘specialization’ for deployment of Oracle’s Hyperion enterprise performance management solutions.
The department of petrophysics and borehole geophysics at the Leibniz Institute for Applied Geophysics reports use of Visualization Sciences Group’s Avizo Fire toolkit for visualizing the results of its ‘digital rock physics’ micro-CT scanning studies. Avizo’s ‘skeletonization’ module is used to investigate porosity micro inhomogeneities and pore space tortuosity.
Aveva and Faro have signed an interoperability agreement covering integration between Faro’s laser scan hardware and Aveva’s engineering and design software.
Exprodat has attained Esri ‘gold tier’ status and has secured a new relationship with Esri UK to jointly provide petroleum GIS solutions.
Forum Energy Technologies has signed with DOF Subsea AS for the provision of a ‘VisualSoft’ digital video and pipeline data processing suite for its new multipurpose construction and anchor handling vessel, Skandi Skansen.
Total has completed the roll-out of a prospect assessment solution based on the GeoX system from GeoKnowledge. The new system replaces a legacy in-house developed system. The initial 2011 contract sum is in excess of NOK 8 million. The contract includes bespoke extensions to GeoX and migration of Total’s prospect database.
Hess Corp. has embedded GasBuddy’s OpenStore application in a new ‘Hess Express’ mobile app to provide consumers with deals, coupons, real-time gas prices and traffic information.
Rolta International has integrated its OneView operational analytics suite with P2 Energy Solutions’ Enterprise Upstream suite.
Acorn Energy unit DSIT Solutions is to supply an unnamed Asian client with underwater security systems including diver detection sonar to protect offshore oil platforms, terminals and vessels against underwater intrusion and sabotage. Contract value is estimated at $12.3 million.
The Norwegian POSC/Caesar Association (PCA) is inviting participation in a project that sets out to cast the ISO 15926 facilities engineering standard in the web ontology language, OWL. The current ISO 15926 reference data library is implemented in the Express data modeling language. A move to RDF/OWL would better align the standard with current ‘semantic web’ data initiatives. Previous RDF mappings of the standard are deemed to have shortcomings and fail to align with ‘commonly recognized methodologies.’ The initiative has backing from PCA, EPIM and DNV. Interested parties should contact PCA on 2001.
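For the flavor of such a port, here is a minimal sketch, again using Python’s rdflib, of how an Express entity with a supertype might land in OWL. This is our illustration, not PCA’s actual ISO 15926 mapping.

```python
# An EXPRESS entity with a supertype becomes an owl:Class with an
# rdfs:subClassOf axiom. The namespace is a stand-in for the PCA RDL.
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

RDL = Namespace("http://example.org/rdl#")
g = Graph()
# EXPRESS:  ENTITY centrifugal_pump SUBTYPE OF (pump); END_ENTITY;
g.add((RDL.CentrifugalPump, RDF.type, OWL.Class))
g.add((RDL.Pump, RDF.type, OWL.Class))
g.add((RDL.CentrifugalPump, RDFS.subClassOf, RDL.Pump))
print(g.serialize(format="turtle"))
```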
At a recent meeting of IFLEXX, the EU data exchange community, members resolved to harmonize IFLEXX 2.0 with the latest release of the API’s PIDX standard for the exchange of commercial data. The agreement will see a gradual merger of the International FiLe EXchange XML protocol and Petroleum Industry Data Exchange standard. IFLEXX is used by 17 petroleum companies, operators and service providers in Germany—2002.
A three-day gathering this month saw the International Standards Organization (ISO) debate ‘how IT can make standards development simpler, faster and better.’ Nicolas Fleury, ISO marketing director, stated, ‘XML is the key to answer the new challenges ISO has to face in its role as a publisher. That is why ISO Council gave high priority to the implementation of an ISO publishing system based on XML. We have begun the conversion of the entire catalogue of ISO standards to XML format.’ Current standardization processes are perceived as ‘too slow and complex.’ More from 2003.
The Open Geospatial Consortium is inviting public comment on its ‘Gazetteer’ best practices document. The best practice describes the Gazetteer service application profile of the OGC web feature service standard. More from 2004.
The RDF Web Applications Working Group of the W3C has published a first draft of RDFa Lite 1.1 and an updated Working Draft of the RDFa 1.1 Primer. RDFa Lite addresses criticism of RDFa as having ‘too much functionality.’ The minimalist RDFa Lite provides a jump start into the structured data world—2005.
The new version of ISO 19011 has been expanded to reflect the complexities of auditing multiple management systems. Target systems include quality, environmental, IT services and information security—2006.
Kuwait Petroleum Corp. has replaced its legacy Symantec NetBackup software with CommVault’s ‘Simpana’ backup and recovery package. KPC, a unit of the Kuwait State, was experiencing rapid growth with data doubling every year. The company needed a centralized solution to improve operations and compliance. KPC’s IT team also relies on Simpana to manage its 15,000 Microsoft Exchange mailboxes.
KPC now safeguards critical applications, including Oracle ERP and an expanding VMware environment, with Simpana. Data recovery times have been reduced from ‘hours or days’ to ‘minutes.’
CommVault’s embedded deduplication software has been used to cut storage requirements by 60%. Simpana works seamlessly across KPC’s physical and virtual servers—the latter making up 70% of the environment.
KPC senior systems analyst Qais AlDoub said, ‘Prior to CommVault, backing up and restoring data was a nightmare. Simpana’s intuitive GUI makes backup and recovery worry-free.’ KPC senior systems analyst Khaled Al-Faili added, ‘Simpana suits our needs perfectly. We have now consolidated multiple operations under one umbrella. We expect to double our storage capacity in the near future and are confident that Simpana will scale and continue to increase our operational efficiency.’
CommVault’s unified management platform has meant faster daily backups while its integrated reporting streamlines compliance with KPC’s evolving information governance requirements. IT administration has been improved and according to KPC, productivity has been increased, since disk-based data backups, including individual emails, can be quickly and easily restored without requiring help-desk intervention. More from CommVault.
Moore Industries reported recently on the deployment of its Net Concentrator System (NCS) at ENI Petroleum’s Devil’s Tower platform in the Gulf of Mexico. US government regulations mandate an open line of communications between the control rooms of the platform and drill ships working on submerged pipelines more than 100 km away. ENI also required warning of emergency situations such as a ‘dropped object’ on a subsea pipe. NCS provides a multi-protocol gateway to new and legacy sensors, instruments and control systems that offers computer-based monitoring and supervision. The device can concentrate hundreds of process signals onto a single digital data link such as existing Ethernet cable. Moore integrated the NCS with ENI’s communications system in what is now described as a ‘reliable method for dealing with potential emergency situations that meets new federal regulations and reduces the possibility of false shutdowns.’ The new system is said to be cost-effective and ‘simple enough for control room operators to use with minimal training.’ More from Moore.
AspenTech has introduced Aspen Search for rapid retrieval of relevant models of plant and process data. Aspen Search is said to help locate the right models and supporting data and to improve collaboration between engineering teams working on process optimization.
Dupont process engineer Kunle Ogunde said, ‘Users of AspenTech’s simulation tools should find Aspen Search useful in getting a head-start on new work by rapid identification and evaluation of existing models. Users can track their own models and package them for access by others. The tool bundles process data, information and domain expertise and ensures that we can re-use this institutional knowledge across the organization.’
Aspen Search helps manage the corporate knowledge asset in the face of a changing workforce. Engineers can leverage simulations from subject matter experts and maintenance and sharing of institutional knowledge is made easier. Aspen Search is available immediately for the latest V7.3 releases of Aspen Hysys and Aspen Plus. More from AspenTech.
UK-based Cleveland Process Designs (CPD) has developed a suite of modeling tools to assist fire fighters, hazmat teams, HSE managers and others involved in emergency management. CPD’s ‘iResponse’ emergency management software offers emergency pre-planning, training and response support. CPD is now offering entry-level thermal and dispersion modeling for emergency planning. Emergency and evacuation plans and environmental impacts can be investigated across a range of accident scenarios.
iResponse Thermal models fires in tanks, pools and bunds, incorporating atmospheric inputs such as wind and temperature, and providing a display of heat radiation contours. A ‘burn down’ calculator indicates how long it takes for a product to burn itself out.
iResponse Dispersion adds plan and profile views of product dispersion to the atmosphere, providing distances and areal coverage of a release. CPD operations director Ross Coulman said, ‘We have worked with fire and HSE experts to develop iResponse which fills a need for easy to use, stand alone, thermal and dispersion modeling applications.’ Both apps are available ‘for a limited time’ at €160. More from Cleveland.
One of Ensyte’s Gastar clients recently acquired a company with several thousand wells on its books. Well data resided in multiple legacy systems, including Microsoft Excel and Access, a check filing system and a risk management tool. As more data sources were uncovered, Ensyte partnered with Open Source Business Consulting (OSBC) to evaluate migration options. OSBC recommended Talend’s data migration toolset based on its previous migration projects. Talend offers ‘drag and drop’ functionality across disparate data sources and shaved ‘weeks, even months’ off the project. OSBC also provided data cleansing and validation services.
The project included data integration of Gastar with the client’s SAP ERP and customer management system. The release claims that the skill sets of OSBC and Talend constitute a ‘competitive offering’ to natural gas companies looking to integrate and streamline business processes. It is interesting to observe that OSBC’s and Talend’s specialist knowledge of open source methodology was required to migrate data from ‘Excel and Access’ into Ensyte’s Microsoft software stack. More from Ensyte.
4D Imaging has patented a system for monitoring pipeline integrity with magnetic ‘response’ imaging (MRI). The system provides a non-invasive, pig-less means of providing operators with a real-time picture of pipeline health, checking for fractures and corrosion. MRI Pipeline works by wrapping the pipe in wire coils. One set of coils magnetizes the steel pipe, the other monitors the induced magnetic field. As steel corrodes and degrades, its magnetic properties are affected, indicating possible corrosion or other damage.
Coils record their data one at a time in sequence along the length of the pipeline. It takes three seconds to thoroughly test a segment of pipe. Once the check has been performed, the data is sent back to a computer and can be plotted against a schematic of the pipe, showing which areas might require attention. The 4D Imaging team includes emeritus UC Berkeley professor Jerome Singer, co-inventor of the magnetic resonance imaging used in hospital radiological departments. But 4D’s MRI has nothing to do with magnetic resonance—Singer prefers the term ‘magnetic response imaging’ for the MRI Pipeline tool. To us it sounds like a magnetometer—but what do we know! More from 4D Imaging.
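The sequential-coil scheme amounts to comparing each coil’s reading against a baseline signature for the pipe. A toy sketch with invented numbers:

```python
# Each coil's induced-field reading is checked against a baseline;
# deviations flag possible wall loss. All values are invented.
BASELINE = 100.0   # nominal induced-field reading (arbitrary units)
TOLERANCE = 0.05   # flag anything more than 5% off baseline

readings = [99.8, 100.3, 93.2, 100.1]   # coil 3 reads low

for i, value in enumerate(readings, start=1):
    deviation = abs(value - BASELINE) / BASELINE
    status = "INSPECT" if deviation > TOLERANCE else "ok"
    print(f"coil {i}: {value:6.1f}  {status}")
```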
The Fieldbus Foundation, a not-for-profit grouping of process industry end users and automation suppliers has announced a foundation for remote operations management (ROM), a ‘unified digital infrastructure’ for process automation. ROM targets management of remote assets such as tank farms, terminals, pipelines and offshore platforms. The technology offers connectivity with leading industrial protocols such as Wireless Hart, ISA 100.11a and wired H1. An electronic device description language provides an abstraction layer for developers. ROM also gives access to discrete and analog field I/O from the control room, integrating machinery health monitoring, safety interlocks, fire and gas detection systems, and video surveillance.
Fieldbus global marketing manager Larry O’Brien commented, ‘Foundation for ROM is important because it is the first time these different protocols have been accessible in a single, standard environment. More importantly, it is one that does not sacrifice diagnostic capabilities of the existing wireless devices. Instead, we map these capabilities into our block structure to provide a standard environment for data management and quality. We can now eliminate current solutions which are highly customized and much more costly to maintain.’ Opco partners in ROM include BP, Chevron, ExxonMobil, Saudi Aramco and Shell. Sell-side members include Emerson, Endress+Hauser, GE, Hart, Honeywell, Invensys, Pepperl+Fuchs, Siemens, and Yokogawa. More from Fieldbus Foundation.
An article in the Norwegian magazine Teknisk Ukeblad has cast doubt on the viability of the recently formed EqHub (Oil IT Journal November 2011). EqHub was to streamline the upstream supply chain, standardizing and simplifying the exchange of product documentation. However, EqHub director Ove Ryland told TU that ‘Right now it seems that most industry players are sitting on the fence.’ So far only five of the fifteen targeted operating companies have ponied up the entry fee of up to 72,000 NOK per year, depending on turnover.
Statoil’s Ole Anders Skauby told Teknisk Ukeblad, ‘We have not yet decided on membership. We currently have an observer role and recently signed an agreement to purchase services from EqHub on the Gudrun project. This will give us experience with the concept and let us check out the gains in time and quality.’
Aker Solutions’ Terje Simonsen added, ‘We require that suppliers must provide documentation through EqHub but that’s easier said than done. Resistance to change is great and there is skepticism. The fee that vendors must pay to join is an additional barrier. The oil companies at the top of the food chain are the ones that save time and money here. They should be footing the bill initially.’ More from Teknisk Ukeblad (in Norwegian).