November 2011

BP’s data virtualization

Composite Software and IBM/Netezza database appliance provide stellar performance to BP’s upstream. ‘Super-fast’ hardware/software combo has also slashed development costs.

BP’s bet on a software/hardware combo from Composite Software and Netezza (Oil IT Journal September 2008) has paid off big time, according to Stuart Bonnington, BP America’s head of information architecture, upstream IT and services. Bonnington’s team of data modelers and architects work on MDM, mapping, search and business intelligence.

BP has some 3,000 applications in its upstream portfolio and wanted a ‘single logical source for all data access, conformant to a common business model.’ The data virtualization solution comprises a stack of connectors to 40 upstream data sources, a data virtualization layer and applications on top. To date, 32 applications are connected covering a range of production, well data and financial needs. The data virtualization layer provides a single point of access for some 600 ‘common canonical’ entities, modeled with the Embarcadero Studio tool.

A critical component of the system is the ‘semantic’ metadata model. Bonnington explained, ‘We ensure that all data conforms to a semantic structure. Applications query the record of reference in the virtualization layer rather than the data source.’ Total query throughput on the system is nearly 50,000 queries per day.

The IBM Netezza data warehouse appliance provides data storage inside the virtualization layer. Data in source systems is copied to the Netezza staging database. Refresh intervals are determined by data administrators according to the business case. Some real-time data sources may bypass the staging process. Bonnington added, ‘The Netezza appliance is what makes data virtualization work for us, from a scalability and performance perspective. Netezza is essentially a massive engine underneath Composite that makes it run faster.’

In addition to the raw power of Netezza’s 200 processors, the appliance offers a more sophisticated alternative to Composite’s native caching capability by handling refresh schedules and dependencies.
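A minimal sketch of how such refresh dependencies might be sequenced (all table names below are hypothetical, not BP’s actual schema) is a topological sort over a dependency map:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each staged table lists the tables it
# is derived from and must therefore be refreshed after.
dependencies = {
    "well_header": set(),
    "production_monthly": {"well_header"},
    "financials": {"well_header", "production_monthly"},
}

def refresh_order(deps):
    """Return a refresh sequence that respects inter-table dependencies."""
    return list(TopologicalSorter(deps).static_order())

order = refresh_order(dependencies)  # well_header refreshes first
```

A real scheduler would also attach the per-table refresh intervals described above, and skip the staging step entirely for real-time sources.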

The system has been running for two years with only three hours downtime. Its 15 terabytes of data require minimal support—around 10 minutes of DBA time per day for the Netezza and one and a half full-time employees for the whole environment.

Results have been spectacular. Bonnington said, ‘The whole process is so fast that there is no need to create any data marts. The partnership between Composite and Netezza is super powerful. Our development costs have been cut by 40%.’

Bonnington received Composite’s ‘Virtualization Champion Award’ this month for ‘achieving and promoting data virtualization across his organization and in the broader data integration space.’

Upstream middleware

ETL Solutions Transformation Manager gets a facelift and traction as clients migrate from legacy Finder databases to PPDM-based MDM solutions.

ETL Solutions has revamped its E&P middleware solution for oil and gas data migration, targeting the niche of business continuity during and after master data management solution deployment. Version 5.0 of ETL’s Transformation Manager middleware includes a NetBeans Java development platform. ETL’s Karl Glenn told Oil IT Journal, ‘We are increasingly involved in providing data migration services to larger organizations which are migrating from legacy Finder data stores to PPDM-based master data environments. Our adapters do the heavy lifting in mapping to these complex data sources.’

ETL’s NetBeans components, ‘DataPorts,’ can be dropped into the IDE and integrated with enterprise frameworks. OpenSpirit is an enthusiastic user of ETL’s technology (the companies have historical ties). ETL consultants are working with OpenSpirit on the development of the new release of the OpenSpirit PPDM Data Connector, with support for PPDM 3.7 and 3.8. ETL Solutions has also just achieved 100% ‘gold’ level compliance for its PPDM DataPort. ETL upstream clients include BHP Billiton, Schlumberger, PetroCanada and ExxonMobil.

The semantic web—no not again!

Oil IT Journal editor Neil McNaughton notes an uptick of interest in things semantic, with the establishment of the World Wide Web Consortium’s Oil, Gas and Chemicals Business Group. An opportunity to take stock of the developments in oil and gas and offer his 2¢ of semantic wisdom.

As regular readers will know, I am more of a skeptic than a proselytizer. There are, after all, enough people writing uncritically of the industry if that is what you want. I have always felt that my role has been to question and try to pick things apart. But I have a confession to make. I have, in my own sweet way, proselytized for some time about the semantic web—in particular as it manifests itself in the oil and gas software space. This proselytizing has come in two forms: a couple of papers presented at conferences and quite extensive coverage of the technology in Oil IT Journal. This issue of Oil IT Journal has no fewer than three semantic-ish articles—our report from the EU Fiatech meeting, from Norway’s EQHub and a reprise of last month’s review of the ISO 15926 Primer.

With the slow-burn liftoff of the World Wide Web Consortium’s (W3C) Oil, Gas and Chemicals Business Group, I think it is time to revisit the semantic web and give you a very personal take on where it is in oil and gas and in the world at large.

The semantic web was dreamed up over a decade ago by Tim Berners-Lee as an attempt to add structure to data on the web. One simple usage might be to note that in, say, a PowerPoint presentation, mention of a particular scientific paper could link seamlessly through to the conference where it was first given and, why not, from there to the data set that supported the research. The possibilities are, or should have been by now, endless.

To offer folks a universal way of representing data items requires a general purpose data modeling language. The W3C, in its wisdom, opted for the Resource Description Framework (RDF). This bare-bones language sees everything as a ‘triple’ of subject, predicate and object, with all three uniquely nailed down as a web resource or ‘URI.’ More, if you are interested, on Wikipedia.
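Stripped of the surrounding machinery, the triple model is easy to illustrate. In this library-free sketch (all URIs are invented for illustration), facts are (subject, predicate, object) tuples and the scientific-paper example above becomes a chain of link-following lookups:

```python
# Each fact is one (subject, predicate, object) triple; subjects and
# predicates are identified by URIs. The URIs here are hypothetical.
triples = [
    ("http://example.org/paper/42", "http://purl.org/dc/terms/title",
     "Seismic inversion revisited"),
    ("http://example.org/paper/42", "http://example.org/presentedAt",
     "http://example.org/conf/seg2011"),
    ("http://example.org/conf/seg2011", "http://example.org/hasDataset",
     "http://example.org/data/survey-7"),
]

def objects(triples, subject, predicate):
    """Return every object asserted for a (subject, predicate) pair."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# From a paper, to the conference where it was given, to the data set.
conf = objects(triples, "http://example.org/paper/42",
               "http://example.org/presentedAt")[0]
data = objects(triples, conf, "http://example.org/hasDataset")
```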

As you can imagine, the promise of a web of intermeshed, usable data was extremely exciting to many. This led to great expectations, to much hype and almost as much disappointment and criticism.

First, the hype. In the Fiatech Introduction to ISO 15926, the familiar claim of machine to machine interoperability via semantic technology is made. An almost magical quality is evoked that would have us believe that somehow, without changing our systems, they can be rendered interoperable. This is of course pure hype. Systems either need rewriting to conform to an RDF view of the world, or they need to expose a semantic interface.

In the case of a web page containing snippets of potentially reusable information this is not too hard to imagine. In fact the W3C has produced a flavor of RDF, ‘RDFa,’ that embeds triples in a page of HTML. The page can be viewed as normal in a browser, but those who want more structured information can get it by parsing the document for its hidden RDF goodies. This might be a way, for instance, of adding reference metadata to a web page—say a well’s UID.

True believers may however want to go the whole hog and expose all their data in RDF. This leads to a duality of resources—for example, Wikipedia is available as regular HTML or as machine readable RDF on DBpedia. Those in the know can query data in DBpedia as if it were a database using the RDF query language SPARQL.
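The idea behind SPARQL can be sketched as a toy pattern matcher over an in-memory triple list (the example data is invented, and this is not a live DBpedia query):

```python
# Toy SPARQL-style pattern matching: tokens starting with '?' are
# variables to be bound. Example triples are invented.
triples = [
    ("Brent_field", "locatedIn", "North_Sea"),
    ("Ekofisk_field", "locatedIn", "North_Sea"),
    ("Brent_field", "operator", "Shell"),
]

def match(pattern, triples):
    """Yield one variable-binding dict per triple matching the pattern."""
    for t in triples:
        binding = {}
        if all(p == v or (p.startswith("?") and binding.setdefault(p, v) == v)
               for p, v in zip(pattern, t)):
            yield binding

# Roughly: SELECT ?f WHERE { ?f locatedIn North_Sea }
fields = [b["?f"] for b in match(("?f", "locatedIn", "North_Sea"), triples)]
```

A real SPARQL engine adds joins across multiple patterns, filters and aggregation, but the core mechanism is this same bind-and-match over triples.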

Turning Wikipedia into a database was helped by its relatively simple and consistent structure. Things get harder if you are modeling more complex stuff. Because RDF is such a simple construct, modeling even relatively simple objects (like PCA’s now famous pressure transducer) leads to pretty horrendous models. There is a feeling that this is problematic because no two modelers see the world the same way. Simple constructs, like adding units of measure, involve convoluted modeling.

The problem is that RDF does not encourage an object approach to modeling like XML does. The world is flat and made up of triples. Fiatech has got around this by promoting the ‘Facade,’ hiding the complexity of the underlying ISO 15926 model. This is OK in that it enables interoperability. But the Facade is an obstacle to a ‘pure’ semweb approach. Users need to understand and unpick the Facade instead of letting their semantic tools loose on a raw RDF dataset.

The early hype surrounding the semantic web also gave birth to a different kind of ‘modeling’ altogether. After all, ‘semantic’ has to do with language, doesn’t it? So we have another community that is fussing over knowledge representation, meaning and other high-falutin’ concepts. Again, boatloads of hype here, although the simple knowledge organization system, SKOS, looks interesting.

Finally, another gotcha. Tim Berners-Lee’s take on his own brainchild has evolved from the original semantic web to one of freely available ‘linked data.’ In the greater scheme of things, it is probably more important to decide if you should be sharing your data freely than to worry about what format it needs to be in. Such considerations are very much à propos in regard to ISO 15926. Owner operators would benefit from freely shared plant information. Other stakeholders may be less enthusiastic. Equipment manufacturers may have IP to protect. Software vendors may amass market share in part through obscure formats. And there are already companies which make a living from sorting all the mess out, with or without RDF. A decision on how much data will be freely available has never really been taken. The issue goes way beyond IT and standards. But things are moving here. The POSC/Caesar Association has just opened up a gateway—or in the jargon, an endpoint—to the current ISO 15926 equipment catalog. You can test drive it in a browser, or perform semantic queries against the machine readable endpoint.

My personal feeling is that oil and gas should be using RDF more—especially where it is easy to deploy. If you are writing a seismic spec or the next version of Witsml then it might be a good idea to include some of your metadata in RDF. It won’t cost anything and folks living in a future semantic world may thank you for it.

ABB on ATAM, USAP and the experience factory

ABB’s researchers report on how to build ‘sustainable’ software leveraging methodologies from Carnegie Mellon’s Software Engineering Institute. New EU-funded Q-ImPrESS project announced.

Writing in the current issue of ABB Review, Aldo Dagnino, Pia Stoll and Roland Weiss describe the software architectural principles that guide ABB’s developers in what is described as a ‘fantastically complex chain of technological wonders [that] transport an oil molecule from subsea reservoir to the local gas station.’ ABB’s developers have been working for several years with researchers from Carnegie Mellon’s software engineering and human-computer interaction institutes on software architectural principles. These broadly divide into two areas: ‘functional,’ i.e. describing the essential actions or services provided by the software, and ‘non-functional,’ i.e. aspects such as quality, usability and performance.

The authors also stress the importance of ‘sustainability’ in software development. For ABB, sustainability is not (just) a metaphor. Sustainability can be evaluated in terms of technological longevity, organizational and social impact, finance and the environment. Technical sustainability means that systems support both immediate use cases and provide a platform for future maintainability and evolution. This includes issues such as developers’ skills and compatibility with other companies’ products. Financial sustainability means assuring a decent return on investment from the developed software while minimizing re-work and avoiding the cost of poor quality. Software architectures can contribute to environmental sustainability by, for example, limiting energy consumption of both the product and the processes it is controlling. Social sustainability is achieved by ‘simplifying the developers’ work, stimulating and motivating them.’

ABB distinguished four typical scenarios for software development requiring different approaches. The first scenario involves a software revamp, starting with the evaluation of existing legacy tools using the Carnegie Mellon ‘architecture tradeoff analysis method’ (ATAM). ATAM looks at how different facets of software and the business case interact. In one trial, ATAM was used to evaluate a code-generating tool. It was not immediately clear whether the tool was optimizing for performance or code portability. ATAM demonstrated the tool was producing more performant code at the cost of reduced portability. Trade-off analysis determined that this was acceptable in view of the customer’s business case.

A different approach, ‘usability-supporting architecture patterns’ (USAP), is used to develop new software tools. USAP stems from work done at Carnegie Mellon along with NASA and the US Department of Defense. Like ATAM, USAP is all about trade-offs. A software system can have multiple conflicting aspects; for instance, security and usability may pull in different directions. ABB has developed a web-based USAP delivery tool to visualize a software package’s components as they interact. The tool acts as an ‘experience factory’ holding reusable architectural knowledge for different system/environment interaction scenarios. Six hours use of the tool saved five weeks of effort on one project by clarifying usability priorities early on.

ABB is carrying the torch for architectural principles by participating in the EU-funded research project Q-ImPrESS. The project has developed a Java/Eclipse-based integrated development environment for design-time quality impact prediction.

2011 Schlumberger Ocean user group

New ‘Fluent’ interface, parallel processing, INT’s GeoToolkit, RDR on getting the best from the API.

Speaking at the 2011 Schlumberger Ocean user group held recently in Houston, Evgeny Lykhin showed off the new Microsoft ‘Fluent,’ a.k.a. ‘ribbon,’ interface along with other usability enhancements to the dev kit for Petrel, Schlumberger’s software flagship. The Ocean framework has been bolstered with unit tests for plug-in ‘sanity,’ an ‘on-demand’ data loading option and more control over software licensing. Ocean now offers a stratigraphy API and more flexible custom plots for charts and map views. A new seismic attribute control improves interaction and a ‘virtual cropped volume’ API provides custom ‘probe’ volume functionality. 2D and pre-stack data can now both be displayed and manipulated through the Ocean API—including write back of pre-stack data to disk.

Ashley Kelham (Rock Deformation Research) gave a straight-talking presentation on real-world Ocean development, sharing some of RDR’s best practices for getting the most from the Ocean API. Ocean is a highly specialized environment used by many from a non-programming background. Such folks tend to use the API like a ‘first class member’ of the C# language. The reality is that while Ocean hides a lot of complexity, developers should not assume the API is always ‘correct.’ Ocean does not have the tens of thousands of active users who iron out defects in mainstream development tools. Moreover Schlumberger’s rate of change makes it ‘almost impossible’ to be sure what is stable.

Kelham recommends checking that the API ‘does what you expect’ and enclosing questionable API calls in a try/catch block. Performance is enhanced by treating API calls as if they were network calls, avoiding use inside loops and favoring ‘chunky’ over ‘chatty’ communications. To protect itself from Schlumberger’s frenetic release schedule, RDR wraps Ocean calls in its own classes to minimize code changes and ensure ‘efficient and stable code.’
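The wrapping pattern can be sketched in Python (the real Ocean API is C#, and the class and call names below are hypothetical stand-ins, not Schlumberger’s actual interfaces):

```python
# Wrap third-party API calls in our own class so that a vendor version
# change, or a misbehaving call, touches exactly one place in the code.
class WellLogSource:
    def __init__(self, api):
        self._api = api

    def curve_values(self, well, curve):
        # Treat the call like a network call: one 'chunky' request for
        # all values, guarded rather than trusted to be 'correct.'
        try:
            return list(self._api.get_all_values(well, curve))
        except Exception:
            return []  # degrade gracefully; a real wrapper would log here

# Stand-in for the vendor API, for demonstration only.
class FakeApi:
    def get_all_values(self, well, curve):
        if curve == "GR":
            return [15.0, 20.5]
        raise KeyError(curve)

src = WellLogSource(FakeApi())
```

When the vendor API changes, only `WellLogSource` needs updating; client code keeps calling `curve_values` unchanged.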

Paul Schatz presented INT’s .NET GeoToolkit which powers Petrel’s new integrated 2D Windows environment. The first deliverable is the 2011.1 well section window. INT is working with Schlumberger to re-develop the multi-well view with support for OpenInventor and GPU-accelerated rendering. Albert Lu described how compute-intense applications benefit from parallelization with the managed ThreadPool API and Microsoft’s parallel extensions for .NET 3.5, although there are some potential pitfalls with threading overhead, possible deadlocks and limited speed-up scalability. Clay Burch presented Syncfusion’s .NET controls which offer a range of user interface, data management and reporting components for most Windows platforms. The presentations from the 2011 Ocean user group are available online.

Seismic data management system for SQL access to trace data

Westheimer Integrated Seismic Data Solution offers ‘row level’ access to pre and post stack data.

At the recent Society of Exploration Geophysicists conference in San Antonio, Westheimer Energy Consultants introduced its integrated seismic data solution ISDS. ISDS is built around Filetek’s StorHouse (Oil IT Journal December 2001) data store with an ESRI GIS front end, data manipulation tools from Troika and connectivity from OpenSpirit. ISDS hides the complexity of the ‘traditional’ hierarchical storage management approach with fully relational data access to pre and post stack trace data in StorHouse. ISDS also provides security, backup, data collocation and accounting.

For programmers, SQL data access supports single-row retrieval, multiple table joins, sort and other utilities. The system supports direct access across disparate storage devices and is host platform independent—currently available on 64 bit Windows, Linux and Solaris. Troika’s toolset adds tape handling and reformatting across pretty well all known seismic data formats and vintages. OpenSpirit provides connectivity to third party data stores and workstation applications.
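What row-level SQL access to trace data might look like, sketched here with SQLite and a deliberately simplified, hypothetical trace table (the actual StorHouse schema will differ):

```python
import sqlite3

# Hypothetical, simplified trace table: one row per seismic trace.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE trace (survey TEXT, line INTEGER, cdp INTEGER, "
            "samples BLOB)")
con.executemany("INSERT INTO trace VALUES (?, ?, ?, ?)",
                [("NS-01", 120, c, b"\x00" * 8) for c in range(100, 105)])

# Row-level retrieval: fetch only the traces an application needs,
# rather than restoring a whole tape or file from hierarchical storage.
rows = con.execute(
    "SELECT cdp FROM trace WHERE survey = ? AND line = ? "
    "AND cdp BETWEEN ? AND ? ORDER BY cdp",
    ("NS-01", 120, 101, 103)).fetchall()
```

The point of the relational approach is exactly this granularity: a WHERE clause replaces bulk file restores.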

Westheimer MD Jeffrey Maskell said, ‘SQL-based row-level access is the seismologist’s holy grail, offering a level of granularity that will simplify and speed data delivery to processing and interpretation systems.’ The system is claimed to scale to petabytes of data and billions of indexed CDPs.

API University e-learning program gets shot in arm

Training specialist General Physics Corp. to market API HSE and maintenance online courses.

The American Petroleum Institute has engaged General Physics Corporation (GP) to deliver and market API University (API-U) branded eLearning courses. API-U provides training programs and continuing education for oil and gas professionals and also hosts entry-level courses for individuals interested in working in oil and gas. The courses are fee-based. While they do not offer formal certification or degree qualification, continuing education units are available.

GP already hosts API-U’s 140 online courses. The new marketing deal sets out to give more visibility to the API-U training portfolio. This comprises 66 maintenance-focused courses and 72 standards-based courses covering general industry standards awareness and onshore oil and gas safety.

Jim Parish, senior VP of GP’s oil and gas solutions team said, ‘The technology-intensive demands of the oil and gas industry require a high level of personnel training. The API-U eLearning courses provide learners with the skills to work effectively and safely.’ More from API and GP.

Paradigm and Beicip-Franlab join hands at SPE

Uncertainty-based geological modeling meets automated production history matching.

At last month’s SPE Annual Technical Conference and Exhibition in Denver, Paradigm and French Petroleum Institute (IFP) unit Beicip-Franlab presented the results of a joint study on improved production forecasting with geologically-constrained history matching. The study has resulted in a commercial offering, combining history matching technology from Beicip’s OpenFlow with Paradigm’s Skua interpretation flagship.

The history matching engine inside Beicip’s OpenFlow suite is Condor—a.k.a. constrained description of reservoirs. Condor was originally developed by an IFP-led consortium. Condor eschews the ‘classic’ history matching approach of manual iteration on selected parameters. Instead, a geostatistical inversion technique is used to match dynamic flow data with different iterations of the static geological model in Skua.

Condor identifies and adjusts key parameters contributing to model uncertainty such as PVT, porosity, permeability and fault transmissibility. Parameter variability is propagated through the workflow to ensure consistency between Skua and the dynamic reservoir model. Paradigm CTO Jean-Claude Dulac said, ‘We have combined Skua’s geological uncertainty modeling with OpenFlow to allow for coupled workflows between the geological and engineering models.’ More from Paradigm and Beicip.
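The principle of automated history matching can be reduced to a toy: sweep an uncertain parameter and keep the value whose simulated rates best fit observation. The decline model, numbers and single parameter below are invented for illustration; Condor itself uses geostatistical inversion over many parameters simultaneously.

```python
observed = [100.0, 90.0, 81.0]  # invented production-rate 'history'

def simulate(perm_mult):
    # Crude decline-curve proxy: rate falls faster for lower permeability.
    rate, rates = 100.0, []
    for _ in observed:
        rates.append(rate)
        rate *= 0.8 + 0.1 * perm_mult
    return rates

def mismatch(perm_mult):
    # Sum of squared errors between simulated and observed rates.
    return sum((o - s) ** 2 for o, s in zip(observed, simulate(perm_mult)))

# Sweep the permeability multiplier from 0.0 to 2.0, keep the best match.
best = min((m / 10 for m in range(21)), key=mismatch)
```

The manual equivalent is an engineer iterating on the multiplier by hand; the automated version simply makes the objective function explicit and searches it.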

London Geolsoc deploys Elsevier Geofacets search

Geolsoc seeks ‘wider access’ to Lyell Collection—through Elsevier pay wall.

The venerable London Geological Society (Geolsoc) has engaged Elsevier to georeference its Lyell Collection of online Earth science literature. Elsevier’s Geofacets technology (Oil IT Journal October 2010) will be used on the project to identify geographic place names and geological terminology and to index collection items for context-based retrieval. The combined Elsevier/Geolsoc collection amounts to some 165,000 geological maps, 75% of which have already been georeferenced. The offering targets the oil and gas industry—Geofacets already includes IHS’ petroleum basin glossary and outlines.

Geolsoc executive secretary Edmund Nickless said, ‘What we do as geologists has little value unless we make it as widely accessible as possible.’ Even wider access might have been achieved if the Lyell Collection and the Geofacets service were not restricted to fee-paying subscribers.

Software, hardware short takes

Baker Hughes, Solomon Associates, ITTVis, Austin GeoModeling, Schlumberger, CartoPac, dGB Earth Sciences, Geoinfo SRL, OpenIT, Petris, Rolta, Trace International.

The latest version of Baker Hughes’ WellLink real time 3D visualization service, powered by Dynamic Graphics’ CoViz, provides a ‘synopsis’ of the drilling environment. Well data can be viewed in its geological context for real-time wellbore placement and maximum reservoir contact.

Solomon Associates has been awarded a US Patent for its ‘control asset comparative performance analysis system.’ The tool is used in Solomon’s Advanced Process Control Performance Analysis.

ITTVis’ E3De provides an interactive geospatial software environment for the creation of photorealistic 3D models and feature extraction from LiDAR data.

The 4.1 release of Austin GeoModeling’s Recon flagship includes ‘ReConnect,’ a data mining connector for OpenWorks, and a new Petrel plug-in. A new image depth calibrator allows core descriptions, photographs and raster logs to be depth referenced and used in 2-D views.

Schlumberger’s Avocet 2012 now connects directly to simulators such as PipeSim and Oil Field Manager for shortfall root cause analysis. Avocet 2012 was developed using industry standards to support audits, security and regulatory compliance.

CartoPac’s new ‘Jumpstart’ is a rapidly deployable mobile enterprise platform for oil and gas pipeline operators. Jumpstart embeds Esri ArcGIS Server.

dGB Earth Sciences and Geoinfo SRL have released Computer Log Analysis Software (CLAS), an OpendTect plug-in for petrophysical interpretation.

The 6.0 release of OpenIT’s eponymous software usage metering system features a new GUI and end user connectivity from Excel. Users can build Excel dashboards for live data reporting and ad-hoc queries.

Petris claims a 5-10x speedup for its DataVera 9.0 quality toolset, which adds an enhanced merge module and GIS support for data quality mapping in Esri ArcGIS or Google Earth. Connectivity enhancements add support for Peloton’s tools, PODS, PetrisWinds Enterprise and P2 Energy Solutions’ data sources.

Rolta’s OneView business intelligence solution for process industries, including upstream and downstream oil and gas, has achieved SAP Business Objects certification. OneView’s ‘worst actors’ web intelligence report provides information about assets that have shown repetitive failure or have high costs of maintenance. The solution has been deployed at Chevron’s Pascagoula, Mississippi refinery.

Trace International has announced its gifts and hospitality tracking software targeting compliance with the US Foreign Corrupt Practices Act (FCPA). Trace’s package helps determine when a gift becomes a bribe. The software enables member companies to track key information about gifts and hospitality made and received by their employees. Two-way tracking meets the requirements of the FCPA, as well as the new UK Bribery Act, which criminalizes not only the offer or payment of a bribe, but also accepting or agreeing to accept a bribe. Use of the software is free to Trace members.

Jim Crompton sketches-out digital oilfield IT stack

Barriers to digital energy roll out—poor change management and lack of standards.

Chevron’s senior IT advisor Jim Crompton was in a provocative mood at the SPE Digital Energy Study Group in Houston this month. Speaking on the topic of the digital oilfield IT stack, he identified two barriers that emerged from panel discussions at the SPE ATCE in Denver a couple of weeks earlier: a) the problem of change management and b) the need for a standard infrastructure and architecture. Standards have been overlooked in the expanding scope and role of IT as it becomes increasingly important in refining and oil and gas production. Another interesting issue is the need to cater for the expectations of the new generation of IT consumers. These comprise the first generation of oilfield workers to have better IT infrastructure in their homes than at work!

In many cases, the first mile remains a problem. An offshore field may benefit from a comprehensive fiber optic network, but the link to the onshore office is still reliant on a low bandwidth microwave link. How can we leverage the hundreds of thousands of sensors on a new greenfield platform and move from a ‘run to failure’ mode to proactive failure detection and avoidance? Crompton reported on digital oilfield predictive analytics, Statoil’s experiments with injected nano sensors that report back on reservoir conditions, distributed sensors for real-time optimization and new mobility platforms for field workers.

One new idea is to borrow sensor mesh architectures from agricultural and military applications to go beyond current de-bottlenecking workflows, leveraging the advanced analytics used by electrical engineers in their instrumentation. Crompton suggested that such a robust and cheap architecture pattern might be one of maybe half a dozen that an IT group like Chevron’s could deploy to provide semi-customizable solutions.

One frustration has been that Chevron’s best Visual Basic programmers are petroleum engineers using Excel. Such folks are more in touch with Microsoft’s roadmap than the IT group. They are also upset that the next version of Excel will see the end of Visual Basic as they know it.

Chevron now has over 20 terabytes of digital data under management. Its ‘information pipeline,’ like the real thing, needs to be protected from ‘leaks’ to unmanaged environments like Excel. Digital dashboards provide a balance between real time surveillance and advanced modeling, blending mapping and reporting services, moving the organization up the business intelligence maturity model. Crompton wound up with a nod to the promise of Hadoop, ‘big data’ and a dig at ‘creative solutions that only solve when the creator is present.’

Institution of Mechanical Engineers—Process Safety, London

UK-based institution hears from process safety specialists on avoiding major disasters like the UK’s Buncefield tank farm fire. Models, like James Reason’s ‘Swiss cheese’ layered security, guidelines and best practices are part of the solution along with constant vigilance and training for all.

The one day event, ‘Process Safety, are you doing enough to avoid major disasters?’ held last month at the London-based Institution of Mechanical Engineers kicked off with a keynote from Ian Travers, who heads up the Chemical Industries Strategy Unit of the UK’s Health and Safety Executive. For Travers, process safety should be in your blood. If you lose control of a process, what happens next may be up to sheer luck. As luck would have it, there were no fatalities at Buncefield, but this major incident shaped thinking around process safety. One outcome is the Chemical Industry Association’s best practice guide to process safety leadership. Why leadership? Because the ‘C suite’ needs to understand risk and ensure that process safety is managed in a systematic way. The Buncefield Report is essential reading and explains the management system failures. Companies should also use the UKPIA tools for self assessment. Process safety is shorthand for how major hazard risks are controlled. Root causes are common across all organizations and the system is only as good as its weakest link. Process safety management determines what hazards are present and what their potential impact on a plant is. The Center for Chemical Process Safety’s tools should be used.

However, once everything is OK, it starts to go wrong! So you need to constantly monitor and adapt, recognizing that people are the weakest link—not the kit in the plant. Human error occurs throughout the organization starting at the top. Senior executives often don’t understand risk. They absolutely trust the system design and are shocked and upset when things go wrong. Managers are more receptive to messages about ‘success,’ and focus too much on outputs, thinking that ‘somebody else’ is in charge of safety. Front line staff suffer from complacency and don’t believe the consequences even when they are spelled out. They give priority to production and tend to deviate from agreed procedures.

Around 25% of plant in the UK is in an ‘unacceptable’ condition and 50% needs improvement. Are we over-egging it? It is up to the regulator to decide, but we are not where we want to be. Management needs to act on aging plant. Travers also observed that ‘people are fixated on near miss reporting.’ More focus is required on overall challenges to safety. Any adverse outcome needs to be captured—for instance, repeated unintentional overfilling of a tank.

A common theme in several talks was the work of psychologist James Reason, whose ‘Swiss cheese’ model of accidents visualizes multiple lines of defense comprising alarms, physical barriers, automatic shutdowns, operators and procedures. These protect assets and the environment from hazards, but each has weaknesses—the holes in the cheese. Accidents occur when holes momentarily line up, opening a trajectory of ‘accident opportunity.’ Safety procedures aim to identify and eliminate the holes and/or to eliminate links in the accident chain.

Phil Scott of the Chemical Industries Association believes that we need to ‘instill a chronic sense of unease in managers.’ While there is no silver bullet for enhancing process safety, there are common causes across industries—hence the cross-industry Process Safety Forum with representation from chemicals, nuclear and oil and gas. Scott says underinvestment in safety is a false economy. Companies need to go beyond compliance and seek out problems—pipe work and non-metallic equipment is often overlooked, and tank supports, bridges and bunds need attention. If you have to, shut down and inspect—there are no short cuts. A culture of ‘look, listen and report’ is needed. Check out the CIA guide. Process safety training has been a bit ‘ad hoc’ in the past. Scott also emphasized that management walkabouts are a good thing but observed that there was one on Macondo just before the accident.

Guy Gratton (head of the UK’s Facility for Airborne Atmospheric Measurements) observed that while aviation approaches safety in a similar way to other industries, there are areas where it leads the field. Aviation benefits from transparency in worldwide accident reporting and cause analysis. Most accidents (64%) are caused by ‘human factors.’ These are not just ‘pilot errors.’ Aviation prefers to look at the interfaces between software, hardware, environment and ‘liveware.’ Air accident investigators make safety recommendations rather than apportion blame. The ‘no blame’ approach encourages participants to share information but complicates insurance and may be at odds with mainstream legal culture. This was addressed in the 1952 Rome convention, which has it that the operator carries the can, irrespective of blame. The most interesting concept to have emerged from air safety training is crew resource management (CRM). Several accidents can be attributed to poor communications between crew members. In 1989, a British Midland 737 crashed when the pilot shut down the wrong engine—even though passengers had told the stewardess as much. The problem was that the established pecking order meant that stewardesses were afraid to tell the captain he was doing the wrong thing. The point is that ‘everybody here has valid input’ and the secret is collaboration. CRM training now happens on a regular basis, includes crew, ground staff and management, and teaches a young, smart copilot how to tell a grumpy old captain, ‘you are about to kill us all.’ More on CRM from

Paul Taylor (Network Rail, UK) observed that despite abundant safety procedures and equipment, the same accidents happen over and over again. All rail risks are known somewhere in the business, either consciously or unconsciously. One problem with current safety procedures is that they make for proliferating documentation and sometimes for ludicrous control measures. Network Rail has problems with maintenance worker fatigue while driving. But this cannot be countered by entreaties not to drive when tired. You need to stop putting workers in a position where they may be at risk, and avoid measures that make safety managers feel good but that will not stop people driving home in the middle of the night when tired.

Phil Graham described how Linde Group’s major hazards review program (MHRP) set out to raise process safety at its 2,000 sites around the world. Senior managers tend to think that a major disaster could not happen. Legislation is all very well but it is not enough to discharge corporate responsibilities. MHRP is a consistent process, including audits and local accountability, that ensures on- and off-site risks are managed to acceptable safety levels. A staged process starts with site data collection and moves through hazard and consequence evaluation, site categorization, risk mitigation and compliance to final site certification. Some Linde plants have been shut down because getting risk to an acceptable level would cost more than the plant was worth. Plants may initially be located away from populated areas but as towns spread, risk rises. Process safety is now on the Linde board of directors’ agenda. A process safety dashboard is under development to visualize where risks exist and drill down for more information.

Mark Harrison (SABIC) returned to the theme of aging plant. Plants degrade physically, but equally as knowledge is lost and through ‘creeping change.’ Very small risks at design time can be amplified by vibration-induced stresses over time. Hazard reviews often assume a fit-for-purpose asset as their starting point. Programs should address removal and modification of components vulnerable to vibration.

Graeme Ellis outlined how ABB has implemented performance safety metrics. These need to go beyond injury rates to embrace both leading and lagging indicators. You need to focus on high risk areas with a ‘manageable’ number of indicators. Indicators should be ‘SMART,’ i.e. sufficient, measurable, accurate, reliable and targeted. ABB has screened these down to 8-10 indicators.

John Armstrong (E.ON) described the ‘Abilene paradox,’ a kind of groupthink whereby a consensus position is reached that is actually something nobody wants to do. The consequence is that bad, risk-prone decisions can be taken because nobody involved in the process has a strong opinion. Examples include the RBS/ABN Amro deal, eventually a €72 billion write-down, which had been evaluated at 18 management meetings with nobody questioning it. Morton Thiokol’s role in the Challenger disaster was another case where self-censorship led to a wrong decision. Jenny Clucas’s company, Cogent, is working to rectify executives’ lack of process safety understanding with a dedicated training program. More from IMechE.

FIATECH European user meet

Plant standards group hears from Aveva, USPI-NL, GlencoIS on ISO 15926 implementation.

Fiatech, the Austin, Texas-based standards body for the building and construction industries, held its 2011 EU meet in conjunction with SPAR Europe in The Hague this month. Fiatech’s Nicole Testa Boston described the various projects in progress—data handover, interoperability, a vendor-neutral 3D CAD model and a Chevron-backed materials management interoperability project. Fiatech also operates a radio frequency identification (RFID) test ground near Houston airport to test materials tracking with smart devices. A Fiatech flagship project is the ISO plant and process information modeling standard, ISO 15926.

Neil McPhater outlined Aveva’s comprehensive plant lifecycle roadmap—from design to commissioning and handover with support from a digital information hub. This typically involves many software systems. Aveva uses Noumenon’s XmPlant/Proteus schema and ISO 15926 to map from CAD files to ‘smart’ design databases. This lets Aveva provide a single portal on data in multiple information sources. Woodside was cited as an enthusiastic user of ISO 15926. When Woodside sold its Otway gas plant to Origin Energy, the Aveva plant model went with the deal.

Paul van Exel, who leads the Dutch USPI-NL standards body, observed that we still suffer from the ‘bubble problem’ of a large number of sometimes competing standards to choose from. Oil and gas is a relatively small player in the construction arena and standards are less focused than in other verticals. Implementation depends on good test data sets and critical mass. Van Exel cited the EU Orchid WS project and ISO TC67 offshore equipment standards. There remains a lot of work to be done transforming these into IT standards. More from

Ian Glendinning (GlencoIS) provided an overview of POSC/Caesar’s ISO 15926 projects focusing on ‘interoperability through reference data.’ The Fiatech iRING project created a lot of interest but the reference data did not support all use cases—this is being addressed in the JORD project. Most users want compliant, authoritative reference data. The good news is that JORD means that the critical path no longer depends on a few specialists. If you are thinking of using 15926, you need JORD—check out the endpoints for humans and machines.

There was considerable debate on the feasibility of using reference data as a path to interoperability. Not all valve equipment needs to be ISO Certified and not all vendors want to share their CAD model formats! The issue of who pays was raised—and the need to have operators on board.

Fiatech’s Neill Pawsey showed how high accuracy positioning systems are changing the building site. Better than centimeter accuracy is needed to avoid, say, drilling into an electric cable in a wall. This is where wireless location sensors come in, such as Trimble’s ‘reverse RFID’ system, with passive tags around a facility providing pinpoint 3D location. Virtual reality is increasingly used in construction—with, for example, Applied Research Group’s blend of video and model for ‘augmented telepresence.’
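Under the hood, pinpoint location from ranges to fixed tags reduces to trilateration. The Python sketch below solves the 2D case in closed form; it is purely illustrative, as real systems such as Trimble’s use many tags, 3D geometry and least-squares error handling.

```python
def trilaterate(p1, r1, p2, r2, p3, r3):
    """2D position from ranges r1..r3 to three fixed reference tags p1..p3.

    Subtracting the circle equations pairwise linearizes the problem
    into a 2x2 system, solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        raise ValueError("reference tags are collinear")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

With tags at (0, 0), (6, 0) and (0, 8) and a range of 5 to each, the solver returns the point (3, 4).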

Folks, facts, orgs...

Energistics, OGP, API, Atwood Oceanics, Berkeley Lab, Boardwalk Pipeline, Bracewell & Giuliani, Cameron, Columbia Industries, Dril-Quip, FTS, Jee, Mainland Resources, Neuralog, Norse Energy, OPIS, Pace Global, Palantir, Peregrine Midstream, RSI, SIGMA3, Southcross, Spectra Logic, XCXP.

Randy Clark has stepped down as president and CEO of Energistics. COO Jerry Hubbard has replaced him in both roles.

Sam Phillips has left the Oil and Gas Producers (OGP) association. His position as EU affairs manager in Brussels is now filled by Christine Ravnholt-Hartmann, previously with DONG Energy.

Charlie Williams (Shell Oil) now leads the governing board of the API Center for Offshore Safety.

Atwood Oceanics has appointed Phil Wedemeyer to its board. He recently retired from Grant Thornton.

Berkeley Lab’s Computational Research Division has named John Bell head of mathematics and computational science,  Esmond Ng head of applied math and scientific computing, John Shalf head of computer and data sciences and Deb Agarwal head of advanced computing for science.

Boardwalk Pipeline Partners has appointed Patrick Giroir President of its subsidiary, Boardwalk Field Services. He hails from Eagle Rock Energy Partners.

Mark Lewis is now managing partner of Bracewell & Giuliani’s Washington, DC office. Lewis is a member of Bracewell’s energy practice.

Cameron has elected Rodolfo Landim to its board.

Saj Shapiro is now president and CEO of Columbia Industries. He was formerly with Patterson-UTI.

Mike Walker has retired as Dril-Quip chairman and CEO. He is replaced by John Lovoi (chairman) and Blake DeBerry (president and CEO). James A. Gariepy is senior VP and COO.

FTS International has appointed Ross Philo VP of IT. Philo was previously CIO of Maersk Oil.

Jee has hired Lead CAD Designer Chris Phillips to complement the engineering team.

Mainland Resources has appointed George Chilingar to its advisory board.

Neuralog has named Larry White as Senior Vice President of Sales & Operations.

Norse Energy Corp has named Nazir Ali executive VP operations and reservoir management. Ali hails from BP.

Donna Harris is now editor of Oil Express, OPIS’ publication for gasoline and diesel wholesalers, marketers and c-store operators.

Pace Global Energy Services has named Sam Elia as VP of its consulting practice.
Palantir Solutions has appointed Paul Douglas and Dana Hilliker to its business development team in Canada and Anna Tregub and Julia Hayward to its development team in London.

Darrell Poteet is now executive VP Pipelines and Surface Facilities with Peregrine Midstream Partners. Mark Fullerton is executive VP reservoir development and management. Jim Ruth is executive VP general counsel. Ruth was previously with Falcon Gas Storage.

RSI has appointed Alan Cohen as Chief Geophysicist. He hails from Royal Dutch Shell/Shell Oil.

Randy McKnight has joined SIGMA3 as VP Reservoir Geophysics and Geohazards. He was previously with ConocoPhillips.

David Ishmael has joined Southcross Energy as VP Engineering and Technical Support. He was formerly with AB Resources.

Jon Benson and David Trachy, formerly with StorageTek, have joined Spectra Logic.

XCXP Operating has recruited Rick Himbury as a partner and VP Operations. He was formerly with ConocoPhillips.

Done Deals

Aker Solutions, X3M, Blue Marble, Global Mapper, Cameron, LeTourneau, Dawson, TGC Industries, GE Energy, Lightfoot Capital, Hertz Equipment Rental, Delta Rigging, Mott MacDonald, Mouchel Energy, Hunting, Specialty Supply, IHS, Purvin & Gertz, Recon, Reservoir Group, Technip.

Aker Solutions is to acquire X3M’s well intervention technology business for $8 million. X3M’s main owners are the Middle East-based Catalyst Private Equity Fund and its founders. X3M’s operating revenues for 2010 were $6 million.

Blue Marble Geographics has acquired Global Mapper. Mike Childs, the software developer behind Global Mapper, has joined Blue Marble.

Cameron has closed its purchase of LeTourneau Technologies’ drilling systems and offshore products divisions from Joy Global for approximately $375 million in cash.

Dawson Geophysical has abandoned its attempt to acquire TGC Industries as Dawson’s average stock price fell outside of the range specified in the merger agreement.

GE Energy Financial Services has acquired a 58% interest in Lightfoot Capital for $85 million from an investment vehicle managed by an affiliate of Magnetar Capital.

Hertz Equipment Rental Corporation has acquired Delta Rigging & Tools’ offshore equipment rental division.

ITC Global has completed its acquisition of the satellite operations of Broadpoint, provider of satellite communications services to the oil and gas sector in the Gulf of Mexico.

Mott MacDonald has acquired Mouchel Energy, a specialist in the UK gas sector.

Hunting has completed its acquisition of Specialty Supply for $31.0m cash plus adjustments for working capital and performance.

IHS has acquired global advisory and market research firm Purvin & Gertz. Financial terms were not disclosed.

Recon Technology is in violation of Nasdaq rules for failure to file its annual 10-K. Nasdaq has given the company 60 calendar days to submit a plan to regain compliance.

Reservoir Group has announced the acquisition of Oklahoma-based GeoSearch Logging Inc. The unit will be integrated into Reservoir Group’s surface logging services under the Empirica brand.

Technip is in negotiations to acquire the share capital of Cybernétix. The 45.7% stake is worth €14.1 million.

EqHub user meet, Stavanger

Norway’s offshore equipment e-commerce hub hears from Total, Aker and POSC/Caesar.

EqHub, the Norwegian oil country equipment portal was launched last year as an OLF initiative, owned by EPIM with support from Achilles, Sharecat Solutions, PCA and Det Norske Veritas. The portal is an information repository for operators and suppliers providing pre-qualified, classified and quality assured information. Speaking at the recent EqHub conference in Stavanger, Ida Kastrud described implementing EqHub on Total Norge’s Hild North Sea development. Hild is an innovative engineering project with remote operations and real time equipment diagnostics in support of condition-based maintenance. Total is using a ‘simple, modular and standardized’ design approach and a ‘replace offshore, repair onshore’ strategy that has allowed it to minimize offshore manning. Total uses EqHub as a gateway to its vendors and has included compliance in its contracts. Implementation was challenging—particularly mapping to Total’s internal specifications and systems. But Kastrud reports that the benefits of standard, quality-controlled information available to all stakeholders are a win-win for manufacturers, vendors and the operator.

CIO Jann Kåre Slettebakk provided insights as to how EqHub has been incorporated into Aker Engineering’s vision for standard information from its suppliers for ingestion by engineering design tools. While many suppliers have understood the benefits and joined the initiative, challenges remain with package suppliers using third party equipment from vendors with limited Norwegian involvement. The benefits of a ‘proper’ EqHub implementation include pre-validated contract documentation, web access with professional IT support, timely availability of documentation with reduced risk of delayed projects and more time available for design.

Nils Sandsmark reported on POSC/Caesar’s attempts to align the current EqHub implementation, which uses ShareCat’s technology, with the ISO 15926 reference data library. The mapping was successfully concluded with limited resources. Despite being well received by Bechtel and Emerson, the project failed to achieve buy-in from the EqHub membership. This contrasts with the ‘real progress’ made by the Fiatech JORD project, which recently went live with an equipment triple store endpoint. A demonstrator developed for the North West Redwater Partnership will be on show at the upcoming Digital Plant conference in Houston. Oil IT Journal will be there! More on EqHub from

ISO 15926 review revisited

Final edition of Fiatech’s ‘Introduction to ISO 15926’ improves on draft reviewed last month.

Last month we reviewed a pre-publication edition (the ‘Primer’) of Fiatech’s Introduction to ISO 15926, concluding that it was heavy on business benefits and light on technology. The final version, the Introduction, goes some way to rectifying this and is a considerable improvement.

While not exactly guff-free, the Introduction embeds much more hard information in the narrative. The section on the history of ISO 15926 is greatly enhanced and has expanded into a detailed and authoritative history of plant information standards around the world. The section ‘How does it work’ is improved, but just as you think you are getting into the technicalities, it flip-flops back into metaphor, explaining stuff that is more or less self-evident.

The ‘Getting started’ section is more or less unchanged. ‘Detailed implementation’ remains beyond the scope of the Introduction. In our opinion, much of the space taken up by analogy, metaphor and business case could have been used to provide a blow-by-blow account of something like the iRING experiment, with instructions on how to code your own Facade. The question remains: is there a compelling case for ISO 15926 as the silver bullet for interoperability? The Introduction repeats the brave claim that the protocol permits information exchange ‘without knowledge of each other’s systems.’ This should perhaps read, ‘if the other systems comply with ISO 15926,’ which is a different proposition altogether. Download the Introduction from

Knowledge Reservoir, USA repurpose NASA control rooms

NASA’s operations center expertise to support pipeline industry’s safety push.

Knowledge Reservoir is teaming with United Space Alliance (USA) to provide pipeline operations center personnel with training and support. Houston-based USA supports NASA with, most recently, a $50 million extension to its contract with the Kennedy Space Center. The companies are offering assistance with an accelerated schedule for compliance with control room management regulations from the US Department of Transportation’s Pipeline and Hazardous Materials Safety Administration (PHMSA).

Knowledge Reservoir CEO Ivor Ellul commented, ‘We are combining our oil and gas expertise together with USA’s operation control knowledge to help the pipeline industry meet the new regulations. Many of the issues noted in the PHMSA control room management directive reflect the same issues that the space program has been dealing with for decades.’ The joint venture will offer operator training/certification, quality and risk management, compliance assessment, safety assessment and failure mode analysis—  (K-Systems) and  (USA).

Sales, contracts, partnerships and deployments

Aker Solutions, WellDog, Aveva, SempCheck Services, FileTrail, CartoPac, SynerGIS, Intergraph, Epsis, SkyBitz, IFS Applications, Paradigm, Venture Information Management, Verdande, NVIDIA, VMware, CorDEX, VRcontext, Sensics, IBM Maximo.

Aker Solutions has signed a NOK 700 million contract with Lundin for the engineering, procurement and construction of a subsea production system for the Brynhild project on the Norwegian continental shelf—

WellDog has won an A$5 million-plus contract from Arrow Energy for the provision of downhole pressure gauge systems—

PDS Protek is to use Aveva Plant as its preferred oil and gas engineering design tool—

Black Elk’s HSE program, based on SempCheck Services’ ( safety management system has passed an e-Audit conducted by the Bureau of Safety and Environmental Enforcement. Black Elk is also implementing FileTrail ( for SharePoint.

CartoPac ( and SynerGIS ( are teaming on spatially-enabled field data capture and a management dashboard. CartoPac is now the Americas’ distributor for SynerGIS’ WebOffice.

DORIS Engineering’s Brazilian Engenharia unit has selected Intergraph SmartMarine Enterprise for work on eight FPSO topsides for the pre-salt play—

Epsis has won a three year contract from Eni US for provision of its TeamBox collaboration solution—

Oil and gas drilling and rental equipment provider GP II Energy has selected SkyBitz’s GLS asset tracking solution for more than 600 of its frac tanks and other equipment—

MicroSeismic has selected IFS Applications as its ERP system of record—

Spectrum has selected Paradigm’s Echos as its seismic processing standard—

Venture Information Management is migrating to Microsoft’s Office 365 cloud service in a phased deployment of SharePoint, Email and Lync—

Petroleum Development Oman has deployed Verdande Technology’s DrillEdge on its fleet of 36 rigs—

NVIDIA is teaming with VMware on 3D graphics technology for ‘workstation-class’ virtual desktops and applications—

Weatherford is now using CorDEX Instruments’ ToughPIX 2300 XP camera in hazardous environments—

VRcontext is now offering Walkinside on Sensics’ zSight head-mounted display—along with an Xbox 360 controller—

Kenya Petroleum Refineries has deployed an IBM Maximo-based solution for ‘data-driven’ maintenance management at its East Africa operations—

Standards stuff

PODS looks to merge its pipeline data model with the GTI’s gas distribution model. ExxonMobil releases .NET dev kit for Witsml and Prodml under Apache license. OGP and Barents 2020 announce ISO/TC67 subcommittee on environmental standards for Arctic operations.

The Pipeline Open Data Standard association (PODS) has kicked off a gas distribution model (GDM) work group. The GDM was developed under the auspices of the Gas Technology Institute to assist gas utilities meet distribution integrity management program (DIMP) regulatory requirements. The PODS Board believes that the pipeline industry would be best served by a proven, robust, pipeline data model. The PODS GDM Task Force has outlined a strategy to engage with GTI on incorporating GDM into PODS. The result will be a modularized data model supporting all pipeline assets, from wellhead to burner tip—  (PODS) and  (GTI).

ExxonMobil has placed its .NET dev kit for developers working with Energistics’ Witsml and Prodml in the public domain under an Apache open source license. The Standards DevKit, developed by ExxonMobil Technical Computing Company, offers developers access to Energistics’ flagship data objects without requiring an in-depth understanding of their ‘extensive’ XML structure. Objects can be manipulated directly in .NET and translated into XML for storage or web-based data transfer. Representing Witsml and Prodml as .NET objects means that developers benefit from the ‘convenience’ provided by Microsoft’s .NET, including ‘IntelliSense.’ The DevKit hides details of web service communication with backend servers and provides synchronous and asynchronous methods for Witsml and Prodml web calls. A wrapper for WMLS adds secure password management and error handling. Comment—it is nonetheless curious that adding this ‘open source’ front end to Witsml and Prodml restricts their use to Microsoft’s proprietary systems!
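To see the sort of boilerplate such a dev kit removes, here is a sketch of hand-parsing a Witsml-like well document into plain typed objects, the job the DevKit automates for .NET developers. This is Python rather than .NET, and the XML fragment is a simplified invention, not the real, namespace-qualified Witsml schema:

```python
import xml.etree.ElementTree as ET

# Simplified Witsml-like fragment (invented for illustration; real Witsml
# documents are namespace-qualified and far more extensive)
WITSML_DOC = """<wells>
  <well uid="w-001">
    <name>Demo Well 1</name>
    <operator>Acme Energy</operator>
  </well>
</wells>"""

class Well:
    """Plain object standing in for the typed objects a dev kit generates,
    so client code never touches raw XML."""
    def __init__(self, uid, name, operator):
        self.uid, self.name, self.operator = uid, name, operator

def parse_wells(xml_text):
    root = ET.fromstring(xml_text)
    return [
        Well(w.get("uid"), w.findtext("name"), w.findtext("operator"))
        for w in root.findall("well")
    ]

wells = parse_wells(WITSML_DOC)
```

Multiply this by the hundreds of elements in the real schemas and the appeal of generated, IntelliSense-friendly objects is clear.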

The Oil and Gas Producers association OGP is working on a new set of standards addressing the specific environmental conditions associated with cold climate operations. In partnership with Barents 2020, OGP has supported the formation of a new ISO/TC67 Subcommittee (SC8) for Arctic Operations. Topics include working environment standards for cold-climate conditions, escape, evacuation, rescue, and ice management—

‘Full-featured’ PPDM-based data management solution announced

Volant Solutions adds EnergyIQ, geoLOGIC Systems to ‘Envoy’ upstream data hub.

Following its teaming with Neuralog last month, Volant has announced a partnership with EnergyIQ to deliver a ‘full-featured,’ PPDM-based data management and integration solution for the upstream. The solution comprises a combo of EnergyIQ’s PPDM-based Trusted Data Manager (TDM) software and Volant’s ‘Envoy’ integration platform. Users will be able to transfer data between TDM and applications including IHS’ Petra, OpenWorks and Geographix.

Volant has also entered into a partnership with geoLOGIC Systems for the integration of its ‘gDC’ data repository with Envoy. Users can now import gDC data directly into their geoscience applications. Volant is now planning to extend Envoy with support for Geographix, Petrel, Paradigm, PPDM and GeoFrame.
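Integration platforms of this kind largely come down to mapping fields between the master data schema and each application’s own. A minimal Python sketch (the PPDM-style source fields and target keys below are invented for illustration, not Volant’s or EnergyIQ’s actual mappings):

```python
# Hypothetical mapping from PPDM-style well fields to the flat schema
# an interpretation application expects (names invented)
PPDM_TO_APP = {
    "WELL.UWI": "well_id",
    "WELL.WELL_NAME": "name",
    "WELL.SURFACE_LATITUDE": "lat",
    "WELL.SURFACE_LONGITUDE": "lon",
}

def translate(ppdm_record, mapping=PPDM_TO_APP):
    """Rename the fields the target application understands; drop the rest."""
    return {app: ppdm_record[src] for src, app in mapping.items() if src in ppdm_record}
```

In practice such hubs also handle unit conversion, coordinate systems and record matching, which is where most of the real work lies.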

EnergyIQ president and CEO Steve Cooper added, ‘Integration between TDM and Volant’s connectivity offerings establishes an effective data management platform for both small and large organizations.’ The announcements were made at the 2011 PPDM Data Management Symposium and Tradeshow in Calgary last month. More from

Husky Energy engages Palantir Solutions

Palantir Cash and Plan tools to embed ‘dynamic business planning solution.’

Calgary-headquartered Husky Energy has engaged Palantir Solutions as software provider and consulting partner. A three year contract, started last month, will see Palantir and Husky collaborating on an ‘integrated, dynamic business planning solution.’ Palantir’s ‘Cash’ and ‘Plan’ software tools will be implemented in Husky’s portfolio management system.

Husky’s base is Western Canada where it operates conventional oil and natural gas assets, heavy oil and a large upgrading and transportation infrastructure. In 2010, Husky reported a $CDN 3.5 billion cash flow, up 42% from 2009 and established ‘growth pillars’ in the Alberta Oil Sands, the Atlantic Region and South East Asia. Husky’s Western Canada strategy includes liquids-rich gas plays, dry gas and heavy oil production.

Palantir is claiming a ‘significant addition’ to its Canada operations at a time of ‘unprecedented growth’ in the region. Palantir Canada president Jason Ambrose said, ‘We see increasing demand for our solutions and look forward to working with Husky and consolidating our position as a leading provider of planning solutions.’ More from

FreeWave monitors Petrobras pump stations

Wireless SCADA adds live IP video to counter rising pilfering and vandalism.

Petrobras is to deploy wireless data radios from Boulder, Colorado-based FreeWave Technologies to automate pump stations at over 200 oil and gas wells in Brazil. FreeWave’s HTplus Ethernet radios automate the remote pump stations, transmitting data such as flow rate, pressure and temperature.

Many of Petrobras’ pump stations are in remote locations where theft and vandalism are rife and rising, as copper cable is now an attractive target for thieves. To mitigate such threats, Petrobras selected a solution with enough bandwidth for IP video cameras alongside the data/PLC transmission.

Petrobras engineer Celso Gomes Alves Neto, said, ‘Although we had used a competing vendor for other SCADA projects, we took a look at what FreeWave had to offer. In meetings, visits and a pilot, FreeWave demonstrated that HTplus was perfect for the job. The high bandwidth will allow us to monitor assets with IP video. We are very satisfied with the results.’

FreeWave’s radio operates in harsh environments and offers up to 867 Kbps data rates. The solution targets SCADA backhaul networks and supports UDP, TCP and serial communications and has a range of over 60 miles, more with repeater stations. More from
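As a rough illustration of the compact telemetry payloads such a SCADA link might carry over UDP, TCP or serial, the following Python sketch packs and unpacks a pump station reading in network byte order. The packet layout and field choices are assumptions for illustration, not FreeWave’s actual protocol:

```python
import struct

# Hypothetical pump station reading: station id (uint16), then
# flow rate, pressure and temperature as 32-bit floats.
# '!' selects network (big-endian) byte order.
PACKET_FMT = "!Hfff"

def encode_reading(station_id, flow, pressure, temp):
    """Serialize one reading into a 14-byte payload."""
    return struct.pack(PACKET_FMT, station_id, flow, pressure, temp)

def decode_reading(payload):
    """Recover the reading at the receiving end."""
    station_id, flow, pressure, temp = struct.unpack(PACKET_FMT, payload)
    return {"station": station_id, "flow": flow, "pressure": pressure, "temp": temp}
```

Fixed-size binary frames like this keep per-reading overhead low, which matters on sub-megabit radio links shared with video.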

IDS heads-up oil and gas branch of Malaysian software test unit

Malaysian universities join developer in ‘state of the art’ Q-Portal.

The Malaysian Software Testing Board (MSTB), Universiti Malaysia Sarawak (UNIMAS), Independent Data Services (IDS) and Swinburne University of Technology, Sarawak have signed a memorandum of understanding heralding the establishment of the Sarawak Chapter of the Malaysian software testing hub (MSTH). The MSTH was launched under the auspices of the second Malaysian economic stimulus package in 2009.

Oil and gas industry software house IDS will work with the other partners to plan and implement the MSTH programs. These include a test bed equipped with state-of-the-art testing tools and infrastructure, a training program and ‘Q-Portal,’ MSTH’s window on the cyber world, providing support, sharing knowledge and matching customers with testing organizations. More from

EarthSearch for Ghana fuel delivery monitoring

No more fiddling with fuel deliveries thanks to a location-based spigot padlock.

Ghanaian-based MultiPlant has delivered an anti-fraud solution for oil trucks leveraging technology from EarthSearch. The system monitors fuel deliveries by recording tap opening and closing times, fuel levels and the GPS location of the tanker. Certified delivery information can then be submitted for payment.

EarthSearch technology includes the LogiBoxx wireless radio and RFSeal, used to secure the tanker’s spigot (tap) while travelling. The GPS system is pre-programmed with delivery destinations and, when on location, communicates with the RFSeal lock to time deliveries. Status can be visualized in Google Maps and an alert generated if an anomaly is recognized.
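The anomaly check can be sketched as a simple geofence test: compute the great-circle distance between the tap-opening GPS fix and the pre-programmed destination and raise an alert if it exceeds some radius. A Python illustration (the 200 m radius and function names are assumptions, not EarthSearch’s implementation):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    R = 6_371_000  # mean Earth radius, metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def delivery_alert(tap_fix, destination, radius_m=200):
    """True if the tap was opened outside the pre-programmed delivery zone."""
    return haversine_m(*tap_fix, *destination) > radius_m
```

Combined with the RFSeal open/close timestamps, this is enough to certify that the fuel left the tanker where and when it was supposed to.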

Earlier this year, MultiPlant’s own ‘Nsoroma’ fleet management solution won a 2011 World Summit Award. Nsoroma automates fleet reporting and issues SMS alerts on driver tampering, route deviation and emergency situations. MultiPlant CEO Harry Baksmaty said, ‘Our services allow clients the flexibility to choose from a variety of mapping servers such as Google Maps, Bing, OpenStreet Maps and Google Earth, freeing clients from the headache of managing their own software.’ More from

Maersk seal of approval for ‘Qemscan’ robot wellsite geologist

Ruggedized version of lithology analysis engine better than the human equivalent.

The natural resources business unit of FEI Company has announced the Qemscan WellSite analysis solution. WellSite, unveiled at the SPE ATCE last month in Denver, is a robotic well site geologist that analyzes cuttings and classifies lithologies with, it is claimed, greater accuracy, sensitivity and resolution than the average mudlogger.

FEI VP Paul Scagnetti observed, ‘Mudlogging, a $1 billion annual market, is an important development for our traditional laboratory focus. WellSite makes laboratory analytics available for critical, on-site decision making.’

Maersk Oil and Qatar Petroleum have already tested the system successfully on an offshore jack-up drilling rig. Maersk Oil Qatar MD Lewis Affleck said, ‘Despite a challenging environment on an offshore rig with large temperature variations, humidity and vibrations, the system’s automated analysis and well-defined work flow delivered unbiased, quantitative characterization of drill cuttings in less than an hour from sample collection.’

The system is also undergoing field trials with oil field services provider Geolog under a joint development program. FEI acquired Qemscan back in 2009 from CSIRO spin-out Intellection. More from

Pertamina, GE Oil & Gas share compliance best practices

Companies team to cultivate ethical business practices and train professionals.

Indonesian state oil company Pertamina has signed an agreement with GE Oil and Gas to cultivate ethical business practices to enhance compliance and transparency. The agreement was signed by Hari Karyuliarto, Pertamina chief compliance officer and Frederic Gaillot, GE Oil & Gas global compliance leader. The companies plan to share best practices and training and adopt voluntary standards for compliance and anti-corruption.

Dian Mardiani, compliance manager and corporate secretary of Pertamina said, ‘We contacted GE to learn from its existing compliance program, and GE offered to formalize the relationship between the two companies. This is a wonderful opportunity for both companies to build upon an already strong and transparent relationship.’

Gaillot added, ‘We are entering this agreement to better understand each other’s business practices and goals. Our commitment to compliance and ethical business practices is part of our GE brand and a vital asset for our company. GE has consistently been ranked among the world’s most ethical companies.’ General Electric has been named one of the 110 ‘World’s Most Ethical Companies’ by the Ethisphere Institute. The same cannot be said for Indonesia, which ranked 110th of the 178 countries surveyed in the 2010 Transparency International corruption perceptions index. Some fifty Pertamina professionals graduated earlier this year from GE’s Oil & Gas University, a program that provides leadership, compliance and technology training. The scheme will be extended in 2012 to include Pertamina management. More from

Siemens quality assurance for Sinopec process monitoring

New Analyzer System Manager ensures key process meters are up and running.

Chinese refiner Sinopec has deployed Siemens’ new Analyzer System Manager (ASM) at its Shanghai production site. The new software is now controlling measurement quality from around 60 of the refiner’s analyzers, and recording data quality trends. Siemens developed, engineered and commissioned the ASM system. Sinopec product quality monitors are broadcast round-the-clock to control system and maintenance personnel. Data is analyzed to determine the optimum times for maintaining and calibrating the instruments that provide Sinopec’s production KPIs.

ASM leverages statistical analysis based on the ASTM D3764 industry standard practice. This identifies recurrent deviations of measured values from a normal distribution, monitoring and validating analyzer operations in real time. ASM is based on a Simatic platform including the S7 universal controller and the WinCC SCADA system monitor. More from
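A much-simplified version of such deviation monitoring is the classic control-chart test: flag readings that fall more than three standard deviations from the mean of a reference window. The Python sketch below is illustrative only and does not reproduce the ASTM practice or Siemens’ actual logic:

```python
import statistics

def flag_deviations(readings, window=20, k=3.0):
    """Return indices of readings more than k standard deviations from
    the mean of an initial reference window (simplified control chart;
    assumes the reference period represents normal operation)."""
    ref = readings[:window]
    mu = statistics.fmean(ref)
    sigma = statistics.stdev(ref)
    return [i for i, x in enumerate(readings[window:], start=window)
            if abs(x - mu) > k * sigma]
```

Trending such flags over time is what lets maintenance be scheduled when an analyzer starts drifting rather than on a fixed calendar.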

© 1996-2021 The Data Room SARL. All rights reserved. Web use only; no LAN/WAN/Intranet use allowed.