Back in the days when I worked in a small E&P outfit, computers (or rather ‘calculators’) were becoming just about affordable. One of the popular pastimes of us geophysicists (as the only not-too-numerically-challenged folks in the organization) was computing net present values and discounted cash flows. At the time I remember we had interesting debates on the ‘discount rate’ and its relationship, or otherwise, to interest rates—without, as far as I remember, coming to any useful conclusions. The discount rate was just a number that we plucked from thin air. It turns out that we were in good company.
At the Ryder Scott Reserves Conference last month, Richard Adkerson, former SEC* staffer and McMoRan CEO, revealed how reporting regulations for oil and gas reserves were drafted back in 1978. The SEC initially proposed a formula tying the discount rate used in reserves disclosure to interest rate changes and the diversity of a company’s reserves portfolio. Industry baulked at such complexity so, in Adkerson’s words, ‘The then chief accountant at the SEC, Clarence Sampson, told me to “pick a rate.” Without hesitating, I replied, “Prime plus one percent.” At the time, prime was 9%, so the discount rate became a “standard measure” of 10% and has remained so for the past 25 years!’
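For readers who never had to punch this through a 1970s calculator, the ‘standard measure’ is just a discounted sum of future cash flows at the 10% rate. A minimal sketch, with invented figures:

```python
# Net present value at the SEC 'standard measure' 10% discount rate.
# The cash flows below are invented for illustration.

def npv(rate, cash_flows):
    """Discount a list of year-end cash flows back to present value."""
    return sum(cf / (1 + rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

# A toy field: five years of $30 (million) net revenue, $100 capex today.
flows = [30.0] * 5
print(round(npv(0.10, flows) - 100.0, 2))  # NPV after upfront capex
```

Whatever its origins, ‘prime plus one’ at least has the virtue of being easy to compute.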
On a completely different tack, I was wondering what to make of Baker Hughes’ disposal of its Recall software unit to Petris. The history of Z&S’ Recall encapsulates many facets of the relationship between software houses and the major service contractors. The story often begins when a couple of consultants, working in a fairly narrow field, develop a software package—in this case for managing well log data. Subsequent sales to oil companies demonstrate the usefulness of the package and may then attract the attention of a major player.
Money changes hands and the software house becomes part of a larger organization. The original developers hopefully receive a cash bonanza and may be invited to stay with the company. The acquiring company can breathe a quick sigh of relief that the target has not fallen into the hands of a competitor, before stepping back and reflecting on how to manage the new situation ‘going forward’.
Baker’s acquisition of Recall parallels Halliburton’s acquisition of Landmark and Schlumberger’s acquisitions of both Geoquest and, more recently, Petrel. All of these have presented a range of problems to the acquirers and to the acquired. The first issue to decide is that of integration. Seen from the golf course, boardroom or wherever a large service company’s shots are called, there is a prima facie case for closely coupling the newly acquired software with the company’s data acquisition division.
After all, everybody keeps banging on about ‘integration,’ so why not ‘integrate’ acquisition with the software? Of course, this doesn’t work. If Recall had ever got so close to Baker’s data acquisition division that it neglected its ‘interoperability’ with Schlumberger or Halliburton data, its users would be up in arms. So one potential ‘synergy’ bites the dust and we are back to the golf course. How about rationalizing software development? Another tricky issue. The fact of the matter is that although logging and seismic companies have a lot of developers, that does not make them software houses.
Commercial software development is about more, much more, than writing code. The interface needs polishing, documentation has to be written, non-expert users kept happy and the stuff needs to be marketed. Most logging software development is tool-specific and a lot is not even commercialized. Seismic processing software may be sold—but to a user community of considerable sophistication, who will likely be less concerned about the interface. In fact, if there is development synergy to be had, it is more likely to be achieved by the software house ‘taking over’ development of the service company’s niche products. This may be hard to realize because of demarcation rivalry and the fact that the acquired company is probably under-resourced.
In desperation, our golfing partners turn to the only synergy left, marketing. Here, I feel we have a kind of inside track on what’s going on. In our monthly efforts to put the newsletter together, we visit several hundred company websites. Over the years, this activity has turned into a kind of ‘industrial archaeology’ of corporate development. We track companies from the first flush of enthusiasm, through growth and betimes, to acquisition. Sometimes you can see what’s going on simply by listing the dates of press releases. In the early years, releases are made at regular intervals. In the worst post-acquisition train wrecks, releases stop overnight and website rigor mortis sets in. Sometimes the product website is unplugged completely.
Carry the torch
It would be nice to think that the marketing torch is being carried by the acquiring group, but this is not always the case. Sometimes it happens: one thinks of Schlumberger and Petrel. But often it seems that post-acquisition marketing gets overlooked, with one marketing department leaving it up to the other. I submit that Baker and Recall may have fallen into that category. One pairing that definitely fits the bill is Halliburton and Geographix, with just three press releases in the last five years! A fact that appears to have been noticed (at last) by Halliburton’s top brass, who were busy ‘re-igniting’ Geographix at the Calgary AAPG—more of which in next month’s Oil IT Journal.
* Securities and Exchange Commission—the US financial watchdog.
Houston-based Petris Technology has acquired the Recall well log storage and analysis package from Baker Hughes’ Atlas unit for an undisclosed amount. Under the deal, Petris assumes world-wide responsibility for the Recall business including software development, sales and support.
Baker acquired Recall from its founders John Zangwill and Craig Shields back in 1998. The log database, known especially for its pioneering work with image logs, was re-launched last year (OITJ Vol. 9. N°6). Recall 5.0 introduced a port to Windows and enhanced interoperability thanks to an ODBC driver and new ‘RecallML’ data access interface.
Previous work for Anadarko led to an interface between Recall and the PetrisWINDS Enterprise (PWE) data management infrastructure for internal deployment. For hosted, application service provision deployment, Petris’ ‘Software as a Service’ (SaaS) offering allows Recall applications and data editing tools to be delivered via the internet on a rental basis.
Petris CEO Jim Pritchett said, ‘We have a great track record of developing flexible, vendor-neutral platforms that improve our customers’ ability to leverage their data. Recall is an industry-leading well log data management system and PWE adds the ability to search, view and analyze data from a range of applications. SaaS brings our innovative deployment technology to Recall users.’
Recall VP Sales, Neal Morgan, told Oil IT Journal, ‘This is great news for us. We always felt a bit uncomfortable in Baker Atlas, which didn’t really understand the software business. There is great synergy between Petris and Recall, especially the data adaptors. We will carry on investing in the product with a GIS interface and more will be revealed at the user group in October.’
Recall is the log data management system embedded in Landmark’s Petrobank. OpenSpirit is currently working on a data server for Recall under contract for Shell (see page 9 of this issue). Recall clients include ExxonMobil, Shell, BP, BG Group and Total. PWE flagship clients include Pemex and Saudi Aramco.
Petris CTO Jeff Pferd recently gave an interview to Oil IT Journal to be published next month. Pferd describes early work on XML and loosely coupled systems that led to the Petris Winds patented technology and service-oriented architectures.
Kuwait Oil Company (KOC) has augmented its upstream enterprise resource planning system with several new modules from supplier P2 Energy Solutions (P2ES). KOC is to deploy additional modules from P2ES’ Enterprise Upstream (EU) for the management of field operations, production, joint venture accounting, settlement and production planning.
EU is a web-based business management solution that integrates with existing ERP systems. The modular system covers business intelligence and reporting, exploration and production agreements, operations accounting and volume management.
EU provides improved visibility of asset economics and ensures compliance with complex international business requirements. P2ES claims that EU’s configurability minimizes customization and lowers implementation and support costs.
P2ES president Tarig Anani said, ‘EU will allow KOC to track production, revenues and costs in one integrated system. P2ES understands the region’s business requirements and will continue to help our clients improve business processes.’
OITJ—What exactly is Ocean?
D.G.—Ocean is an open proprietary development environment, designed to foster innovation and creativity. Just as Microsoft’s Visual Studio lets developers create software that runs on Windows, Ocean will let third party developers give their tools the ‘look and feel’ of SIS* mainstream software like Petrel, Peep, Osprey and Avocet. There is a wrinkle to this approach though. One development environment won’t hack it for all products. For seismic to simulation (S2S), the Petrel 3D canvas is ideal. For drilling, value and risk (DVR) the Petrel interface is not appropriate—engineering tools need a more flexible look and feel. The Ocean development environment supports both S2S and DVR in two distinct products. Common to both is the Ocean Core and a set of services for coordinate reference systems (CRS), units of measure etc. There are few things that irritate our customers more than seeing the same well in two places because of different CRS implementations.
OITJ—When will developers get their hands on Ocean?
D.G.—The first commercial release will be towards the end of 2005. At that time, developers will get access to the Petrel infrastructure, the 3D canvas and rendering engine. We will also expose the Petrel models and some seismic data types. By the end of 2006 the drilling world will be included and data access will include more seismic data types.
OITJ—Ocean has been touted as a .NET development environment. Does that mean that it will be ‘web services’ based—or just a recompile of a ‘monolithic’ Windows 32 bit MFC** application with the latest version of Visual Studio?
D.G.—Initially it will just be a recompile. When .NET was announced, Microsoft completely blew it by emphasizing its ‘web services’ aspects. In reality, .NET is a development environment like MFC. Web services are still taking shape. But by going with Visual Studio we hope to position ourselves to take advantage of web services and service-oriented architecture when they’re ready.
OITJ—So you’re abandoning Linux?
D.G.—No, we are not turning our backs on Linux which we see as having an enduring position in the cluster world—especially for compute intensive applications like Voxel Vision, Eclipse and Decide!
OITJ—And where does this leave GeoFrame?
D.G.—We see a long and prosperous future for GeoFrame, but we expect clients to transition to the new environment over time. We are not going to hustle folks. GeoFrame on Linux is very performant.
OITJ—A move from Linux (or Unix) to Windows is predicated on 64 bit computing. When will Petrel be 64 bit?
D.G.—I wouldn’t say ‘predicated,’ but Petrel will be 64 bit in the near future. We are re-engineering the Petrel interface to be ‘64 bit ready’ and of course we are waiting on Visual Studio 2005, which will be 64-bit enabled. Unfortunately, VS 2005 has been delayed, so all I can say is that Petrel and Ocean will be 64 bit ‘real soon now’!
OITJ—So the plan is to recompile Petrel on the VS 2005 64 bit when it is available?
D.G.—Not exactly. By year end 2005 there will be a new version of Petrel which will migrate the Petrel GUI to the .NET WinForms. There is a lot of MFC stuff in Petrel. The key is to get the user interface in .NET and then start testing on 64 bit.
OITJ—What is ‘open proprietary’?
D.G.—SIS will own the source code and development environment. We will publish the API with user guides.
OITJ—You said ‘anyone’ can buy it. Would you sell Ocean to Landmark?
D.G.—Yes, but I’ll be buying a lot of people drinks when that happens.
OITJ—So it will be different from the old days when the GeoFrame dev kit was not exactly ‘open’.
D.G.—Yes it will be different. I am adamant about a separate identity for Ocean as a product with attention to documentation, presentation etc.
OITJ—Where does this leave OpenSpirit?
D.G.—OpenSpirit (OS) is SIS’ provider of data access middleware. Customers will use OS to access data in legacy data stores. We are actually working with the OS folks to move their data access middleware forward. They are challenged by the rigidity of the data footprint and we are working to move data access to a metadata-driven approach, a key piece of the puzzle. OS needs to be more flexible.
OITJ—What is ‘metadata driven’?
D.G.—You’ll be able to create metadata to define mappings between standard data and data store specifics. This will move from a ‘canonical’, hard wired model to a flexible system, making it possible for OS to provide third parties with the capability of writing their own adapters. This will grow the OS footprint. Also if you want to include a legacy data source, you can use your own metadata to get at it—without having to wait on the OS footprint to ‘evolve’.
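To make the idea concrete, a metadata-driven adapter keeps the mapping between the ‘canonical’ model and a legacy store as data rather than hard-wired code. A rough Python sketch, in which every table and column name is invented for illustration (this is not OpenSpirit’s actual footprint):

```python
# Hypothetical metadata-driven adapter: a mapping document, not code,
# tells the middleware how canonical well attributes map onto one
# particular legacy store. All names here are invented.

CANONICAL_TO_LEGACY = {
    # canonical attribute -> (legacy table, legacy column)
    "well_name":   ("WELL_HDR",  "WELL_NM"),
    "spud_date":   ("WELL_HDR",  "SPUD_DT"),
    "total_depth": ("WELL_GEOM", "TD_M"),
}

def translate_query(canonical_fields):
    """Turn a canonical field list into per-table legacy column lists."""
    plan = {}
    for field in canonical_fields:
        table, column = CANONICAL_TO_LEGACY[field]
        plan.setdefault(table, []).append(column)
    return plan

print(translate_query(["well_name", "total_depth"]))
# {'WELL_HDR': ['WELL_NM'], 'WELL_GEOM': ['TD_M']}
```

Swapping in a different mapping table is then all it takes to point the same middleware at a new legacy source, which is the flexibility D.G. is describing.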
OITJ—What was in the 2004 Petrel API?
D.G.—The Petrel API exposed a subset of the Petrel data model, letting developers create Petrel plug-ins—a first step. The 2004 API allowed access to four Petrel data types and let users create processes that appear in the Petrel process manager and become part of the Petrel workflow. This API has an algorithm focus. You can build your own facies modeling algorithm, but you can’t access Petrel infrastructure or create new data types.
OITJ—Will Ocean’s support for geodetics, units of measure extend to UWI*** and other reference data? In short is this the answer to Steve Comstock’s plea for better Petrel data management?
D.G.—There is no ‘policing’ of metadata, UWI, CRS etc. built into Ocean. We see this as a separate IM challenge. Petrel IM will leverage the new common Seabed data store. We, and the users, like Petrel the way it is and we don’t want to mess it up. So we are keeping Petrel ‘clean’ and letting users store project files in the database rather than on the ‘C’ drive. In the future you’ll be able to ‘decompose’ a Petrel project file into a Seabed data store, keeping the stuff that people want to share. Ultimately such files will be decomposed completely and the Petrel Project File will be a pass-through cache to data stored in Seabed.
OITJ—Was last month’s endorsement from Microsoft’s Steve Ballmer any more than product placement?
D.G.—SIS is now one of Microsoft’s strategic partners. But that does not mean that we are going to have much influence on the 440 million Office users! We do expect to see opportunities for collaboration in certain areas, like high performance computing where Microsoft is faced with Linux/Unix dominance. Microsoft creates a load of ‘stuff,’ we want to know about it as it happens so we can help early adopters.
* Schlumberger Information Solutions.
** Microsoft Foundation Class—the Windows API.
*** Universal well identifier.
Paradigm has just released a new version of its Geolog well log and petrophysical package. Geolog 6.6 includes native Windows support, automated seismic facies classification, horizontal well support and image log interpretation.
Paradigm’s Sysdrill is now fully integrated with Geolog and ‘OpsLink,’ a branded WITSML data receiver, allows for the integration of real-time data into the well planning and geosteering process. Paradigm claims that Geolog is used by over 70% of oil companies and service providers worldwide. The Windows port ‘eases deployment while maintaining multi-vendor interoperability.’
Epos 3 SE
Paradigm has also upgraded its Epos 3 data infrastructure with improved support for structural interpretation. Epos 3 Second Edition combines volume-based and traditional interpretation methods, letting users perform line, volume and spatial interpretation concurrently. Epos 3SE offers dynamic 3D project management, with support for any combination of 2D and 3D seismic datasets. Red Hat Linux, Solaris and SGI Irix are supported.
The latest issue of OpenSpirit Corp’s newsletter reports that Apache Corp. has been using OpenSpirit to link its Petrel interpretation software to its OpenWorks database. Although Apache is a Landmark shop in Houston, Schlumberger’s Petrel has gained a foothold in Aberdeen and more ‘aggressive’ deployment is planned at the head office.
Apache project leader Claire Andrews said, ‘We are starting to leverage our technical alliance with Schlumberger to determine the most efficient way of using Petrel in our data management activities. We first thought of OpenSpirit as a linking mechanism, but it’s clear that it’s actually a robust solution in its own right and is an essential part of our Petrel deployment.’
Apache appreciates the OpenSpirit query functions, ArcView access and its ability to handle ‘aggressive’ coordinate transformations. Andrews commends OpenSpirit’s use of the original software’s APIs when talking to other databases, ‘In OpenWorks, OpenSpirit calls native utilities so we know we are getting the correct information. We also like its use of Oracle security to write back to the database.’
Apache is now working with OpenSpirit to pull cultural data from Z-MAP–connecting users to the GIS world. Andrews explained, ‘Users can leverage map-based selection procedures to highlight and collect data, rather than relying on traditional tabular selectors to move the data into Petrel.’
Calgary-based geoLogic has opened a new data center in Calgary and teamed with reservoir engineering consultants Epic Consulting Services. The geoLogic Data Center (gDC) is an online exploration information system providing clients with access to what is claimed as ‘the most current petroleum data available.’
geoLogic President David Hood said, ‘The gDC offers customers fast data delivery, a PPDM-based database and fully redundant backup and automated failover.’ Nine companies have committed to use the gDC to securely store and manage their proprietary data.
geoLogic and Calgary-based Epic have joined forces on a new integrated software release of Epic’s products ResSurveil, ResBalance and ResWorks with the GeoScout data management solution.
The French Petroleum Institute (IFP) has signed with Paradigm for the commercialization of its Stratigraphic Inversion (SI) package. The IFP’s software will be integrated with Paradigm’s Epos data infrastructure. SI extends traditional seismic inversion methods by integrating ‘soft’ constraints such as geology, ‘a priori’ knowledge and data uncertainties.
Paradigm’s Anat Canning said, ‘This represents a step-change in the development and use of seismic inversion as a natural extension of seismic imaging and AVO projects. The opportunities to exploit this advanced technology are now at the fingertips of every G&G professional.’
IFP VP Gérard Friès added, ‘SI incorporates years of research into stratigraphic inversion applied to reservoir model building and the optimization of well locations and geologic targets.’ The deal with Paradigm marks a departure from the IFP’s usual software outlet, Beicip-Franlab.
The 600 producers in OMV’s Eastern Austria region account for about 10% of Austria’s crude oil supply. OMV reconciles the need for stable production with HSE* constraints by continuously monitoring production, a task previously performed by a team of experienced technicians. OMV found this a costly solution and it proved hard to maintain a high standard of operator competency.
To improve on the manual data recording and at the same time optimize well production and reduce operating expenses, OMV contracted with Alcatel to develop a real-time monitoring and control solution for around half the wells in the region. OMV already had a fiber optic backbone, but access to individual wells was problematical in view of the hilly terrain. Alcatel designed a SCADA system and WLAN network to acquire the data which is transmitted to the backbone.
Each well is controlled by a Programmable Application Controller (PAC). Pressure and liquid level error states are signaled by digital switches and the power supply is managed by ripple control receivers, circuit breakers and phase failure relays – all processed in real time. Data from analogue sensors such as strain gauges and flow is pre-processed, logged and transmitted to a central SCADA server based on deadband** criteria. The SCADA server also enables remote control of operations such as starting and stopping individual wells, resetting operating hours counters and remote software configuration.
Under offline conditions (e.g. temporary loss of WLAN coverage) the PACs guarantee autonomous control including local buffering and delayed data transfer to the SCADA server. Each cluster of up to three wells has its own WLAN with line of sight to 7 WLAN pick-up points with access to the OMV fiber optical backbone.
The central SCADA server acts as a common framework for data logging, alarm handling, real-time monitoring and remote control. A web-based client provides user management and integration with the existing IT platforms for alarm notifications etc. Users can browse a hierarchical set of animated geographical maps, filtered alarm lists, customizable process views and preset target values. A reporting gateway enables different kinds of reports in Word or Excel format. All data exchange between the SCADA server and RTUs is fully event driven: only changes in measured values, remote control states or configuration parameters are transmitted, reducing network load and improving scalability.
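The deadband criterion that decides which analogue samples actually hit the network can be sketched in a few lines. This is an illustration of the general technique, not OMV’s or Alcatel’s implementation; the readings and threshold are invented:

```python
# Deadband filtering: an analogue reading is transmitted only when it
# moves outside a threshold band around the last value sent, keeping
# WLAN traffic down. Values below are invented for illustration.

def deadband_filter(readings, threshold):
    """Yield only readings that breach the deadband around the last sent value."""
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) >= threshold:
            last_sent = value
            yield value

pressures = [50.0, 50.2, 50.4, 51.5, 51.6, 49.0]
print(list(deadband_filter(pressures, threshold=1.0)))
# [50.0, 51.5, 49.0] -- only three of six samples are transmitted
```

The trade-off is the usual one: a wide deadband saves bandwidth but coarsens the logged trend, so thresholds are tuned per sensor type.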
The control, SCADA and visualization layer were developed and customized in National Instruments’ LabView – particularly the Datalogging and Supervisory Control functions.
The turnkey system has reduced OMV’s operating costs, enhanced data quality and allows for instant verification of maintenance operations.
* Health Safety Environment.
** Actuator thresholds which trigger data transmission.
Roxar’s software developers have been working overtime! The Norwegian geo-modeling and simulation specialist is ramping up its software offering in preparation for a fall IPO and has just released no fewer than three additions to its Irap Reservoir Modeling System (RMS). The new tools target fracture and permeability modeling, fault seal analysis and well correlation.
The new fracture modeling package, FracPerm lets geologists and reservoir engineers incorporate fracture modeling into 3D modeling and simulation. Roxar claims an ‘industry first’ for the integration of fracture modeling with the standard 3D workflow.
By incorporating fracture networks within 3D modeling, geologists and engineers can assess complex fractures using either a simple four-step, ‘straight-to-grid’ route or a discrete fracture network (DFN) model.
Roxar CEO, Sandy Esslemont, said, ‘Fracture modeling is often regarded as an esoteric, specialized workflow, despite the fact that fractures are present in major oil fields throughout the world. FracPerm brings fracture modeling into the mainstream with an intuitive, integrated package.’
Fracture and permeability models can be integrated with existing 3D reservoir models created in Roxar’s Irap RMS using RMSopen—an integration toolkit for in-house or third party applications.
The second new tool, RMSFaultseal, allows fault seal analysis to be included in the reservoir modeling workflow. Fault properties such as shale gouge, smear factor and user-defined properties are used to estimate fault transmissibility. Results can be exported to Roxar’s RMSflowsim black oil simulator or RMSstream, a single-phase streamline analysis tool. Roxar partnered with Leeds University’s Rock Deformation Research group to develop the new tool.
Finally, Roxar has broadened its interpretation footprint with the addition of a well correlation module, RMSwellstrat, which is said to allow interpreters to handle complex geology and well geometries in a truly 3D environment. The software includes log and well pick calculators and export to mapping packages.
The 2005 Houston meet of the Public Petroleum Data Model (PPDM) Association witnessed a modest return of the corporate data model with deployment reports from Nexen, Woodside and Anadarko. But it is in the service sector that the data modeling landscape is being re-defined, with PPDM being the solution of choice for pretty well all new development. PPDM CEO Trudy Curtis is building on this success by hooking up with the Pipeline Open Data Standard (PODS) organization, with plans to share data infrastructure and modeling best practices.
Petris CTO Jeff Pferd showed how the loose coupling and service-oriented architecture favored by his company can be used to avoid the dependencies and versioning issues encountered when upgrading, say from PPDM 3.6 to 3.7. Pferd called for the standards bodies to create web services interfaces to their data models, documenting recommended granularity and providing XML data question and response.
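The kind of XML ‘question and response’ Pferd has in mind might look like the following sketch. The element names, the query shape and the service itself are invented here purely to illustrate the idea of a web services front end to a versioned data model:

```python
# Hypothetical XML 'question' to a PPDM-backed web service. The client
# builds a small query document naming the model version, so server-side
# mapping absorbs upgrades (e.g. PPDM 3.6 to 3.7). All names invented.

import xml.etree.ElementTree as ET

def build_query(model_version, entity, where):
    """Serialize a query against a named data model version to XML."""
    q = ET.Element("query", model="PPDM", version=model_version)
    ET.SubElement(q, "entity").text = entity
    ET.SubElement(q, "where").text = where
    return ET.tostring(q, encoding="unicode")

request = build_query("3.7", "WELL", "SPUD_DATE > 2004-01-01")
print(request)
```

Because the client speaks to the interface rather than to the tables, a move from one model version to the next becomes a server-side concern, which is exactly the dependency problem Pferd is flagging.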
Kenneth Greer (CenterPoint Energy) described PODS as a ‘software and vendor-independent’ database for pipeline and location data. Work is in progress on a GPS data dictionary, rights of way, inline inspection, compliance and documentation. Greer believes that in the future, PODS and PPDM may share data infrastructure where there is overlap—e.g. for geodetics, units of measure and partnerships. Tracy Thorliefsen (Eagle Information Mapping) elaborated further on the PODS data model showing how pipelines are similar to wellbores to the data modeler. PODS has no spatial capability ‘out of the box’ and is therefore ‘GIS-neutral’. Spatialization options are a trade-off between loose-coupled systems and the tight coupling as described in the ESRI manual. This has unwanted side-effects and ‘creates havoc for interoperability at the enterprise level.’ Loose coupling links GIS features and classes to PODS via foreign keys. The database becomes GIS-independent albeit with a performance and maintenance penalty. The PODS model lends itself to network modeling and engineering studies using tools like PipePhase.
In a video presentation, Gwen Kelly showed how PPDM has been coupled to Woodside’s SAP ERP system. Woodside’s AFE* Project tracks expenditure from business proposal to financial settlement for items like G&G studies, seismic surveys and wells. Each project contains both financial and technical data, requiring a joint finance and production management approach. The project links permit information in PPDM with joint venture finances in SAP. The development was needed because a) ‘SAP is not very good at detailed project management,’ b) ‘It is hard to make a neat customized form in SAP’ and c) ‘There’s no word for “fluffy” in German!’ PPDM is good at managing the complexity of joint ventures and farmouts. SAP’s Business Warehouse was used to produce permit reports and an ArcMap interface shows AFE spend as a color-coded map.
Trudy Curtis reported good take-up for PPDM’s latest 3.7 version, which now has 1,200 tables and 24,000 columns. The model now runs on Oracle, SQL Server, MySQL and PostgreSQL. Work is in progress on a PPDM metamodel and on reference values. These should include data provenance and units of measure. EPSG geodetics are now embedded in PPDM. The next version will leverage PPDM’s collaboration with POSC on XML components and schema, although some of this work is ‘on hold’ pending WITSML developments. An EnCana-funded project will align the PPDM schema with GML (see below). Work is also ongoing in the field of taxonomy, with arbitration between UNSPSC and NASA code sets and a ‘confusing and incomplete’ POSC Discovery EPICAT.
Dave Burggraf (Galdos Systems) presented a backgrounder on Geography Markup Language (GML), XML for web-based geospatial information. GML has a constellation of related standards—ISO 19199 GML Web Feature Server (WFS), XMML (mining), PPDM GML, O&M (observation and measurement) etc. The basic idea is that a browser can access data in any geo-database via a web feature service. A distributed geo infrastructure will allow land, pipeline, hydrography etc. to be consolidated from multiple databases. Raster, contour and digital terrain models are handled by attaching an external binary file in a JPEG 2000-based JPX ‘package’. Burggraf described JPX as ‘GeoTiff on steroids.’
John Jacobs described how Anadarko used to manage data flows throughout the enterprise with ‘bubble gum and baling wire’ solutions written in Perl and Unix shell scripts. These have been replaced with Informatica’s extract, transform and load (ETL) tools. Metadata mapping has allowed integration of data from Anadarko’s ERP system with its in-house PPDM corporate database. Integrating production data from external sources has been hard to achieve, particularly with 3.5 million wells in its domestic database. PPDM proved a ‘clear and easy winner’ for well data. Documentum and ESRI SDE also ran—leveraging a standard taxonomy. Anadarko’s system has been developed in-house ‘to save cost’. Although Anadarko is in general a ‘buy not build’ company, business rules and enhancements ‘need to be done by people in the organization.’
Kim Thomas (ExxonMobil and PPDM Board) described the industry as ‘at a watershed,’ with an average age of 48 years—even older in the data management sector. Turning to data management, Thomas stated that in ExxonMobil, database use is being driven by reporting requirements. So the more standards bodies work on common issues, the better for everyone. In some areas though, there are plenty of standards already. Units of measure can be managed in IEEE, ANSI, POSC, API, Mathematica and NIST. ExxonMobil is ‘in transition’ regarding units of measure and is planning to select one of the above and share best practices with the industry. Thomas noted that such openness and sharing reflects a change in ExxonMobil’s outlook and ‘would not have happened a few years ago.’
* Authorization for expenditure.
POSC CEO David Archer cited a recent Gartner study which found that ‘industries on standards do better’. E&P standards are needed for regulatory reporting, asset management, portfolio review and production optimization. ‘Standardizing IT helps companies get to their core business’. POSC is also working on an XML protocol for distributed temperature survey (DTS) data and a draft ProductionML standard.
The Norwegian Integrated Information Platform (IIP) for reservoir and subsea production is the most ambitious oilfield standards project since POSC’s Epicentre and Oracle’s Synergy. POSC’s IntOPs SIG is a participant, along with Statoil, Hydro, DNV, National Oilwell and others. The project began in June 04 and is set to cost $3.8 million over 3 years. Deliverables to date include some impressive slideware of Statoil’s Tyrihans (an Asgard outlier) subsea completions. Tyrihans is to be developed as a highly instrumented field with real time data from wells and permanent ocean-bottom sensors for 4D seismic monitoring. This year Tyrihans will deploy a well data ‘stream’ from subsea equipment, with WITSML drilling and logging, production and HSE reporting. In 2006 the IIP will extend to include automation, operations, reliability, maintenance and reservoir characterization. In the final year of the project, decision support tools will be deployed for real time rule-based notification and visualization. After testing, the IIP will be submitted to ISO as a standard for Subsea Production and Operations. An IIP ‘dictionary of terms’ will be ‘delivered’ to the W3C. Of potential interest to the W3C semantic web community is OWL-based information retrieval and categorization software. Project scope is mind-boggling. IIP aims to integrate everything, from geometry (GIS, CAD, and earth model) through 4C/4D seismic, drilling, logging, production, HSE etc. ‘in one standard.’
Data Storage Solutions
Alan Doniger presented the results of the Data Storage Solutions SIG. This ‘broad church’ of a grouping includes the Global Unique Well Initiative (GUWI), previous work from the Shell Discovery project and reference data standards for lithology and coordinate reference systems. Both the lithology and fluid properties protocols are heading for inclusion in WITSML. The DSS SIG aims to ‘increase awareness’ of the EPSG coordinate reference database. WITSML 1.3 is ‘nearly aligned’ and a web-based coordinate reference service is planned for Q4 2005. This will be deployed as a GML and WITSML server. The DTSML temperature survey format is likewise WITSML-like, leveraging the WITSML namespace and units of measure, as does the new Well Path Data Transfer Standard.
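To illustrate what such a web-based coordinate reference service might return, the sketch below parses a GML-style response for an EPSG code. The response body and element names are illustrative stand-ins (only the GML namespace URI is real); the planned service’s actual format is not specified in the article.

```python
# Sketch of a client consuming a hypothetical GML coordinate reference
# response. Element names below are assumptions, not the service's schema.
import xml.etree.ElementTree as ET

GML_NS = "http://www.opengis.net/gml"  # the real GML namespace

# A canned response such as a CRS service might emit for EPSG:4230 (ED50)
response = (
    '<gml:GeographicCRS xmlns:gml="%s">'
    '<gml:srsName>ED50</gml:srsName>'
    '<gml:srsID>4230</gml:srsID>'
    '</gml:GeographicCRS>' % GML_NS
)

def crs_name(xml_text):
    """Pull the CRS name out of a GML-style response."""
    root = ET.fromstring(xml_text)
    return root.findtext("{%s}srsName" % GML_NS)

print(crs_name(response))  # -> ED50
```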
The ConocoPhillips and Shell-led global unique well identifier (GUWI) project, also known as the world wide well project, produced an initial discussion document early in 2004. Following review by vendors, a letter of intent was submitted to oil companies. Now a request for proposals has been drafted for a ‘global clearing house service’ including ‘customer facing’ services for registration, query etc. The GUWI service will be deployed progressively, starting from exploration and development ‘hotspots’. The AAPG is ‘on board’ and IHS has offered to carry on with what it does already. According to GUWI lead John Adams (ConocoPhillips), ‘There is no intent to replace the API in the USA or the Canadian numbering system.’
Somewhat belatedly, Norway’s TietoEnator presented the results of a 2003 project carried out for the Norwegian trade grouping OLF and the major Norwegian operators. TietoEnator has standardized daily operations and production reporting into an XML format. This has solved the problem of variable quality regulatory reporting and eased data aggregation and analysis.
The COPEX production and well data reporting format (originally developed by PGS for Petrobank) was used, along with the Schlumberger Oilfield Glossary, API standards, ISO 15926, NPD standards and key performance indicators supplied by Petoro. XML reports are now stored in TietoEnator’s LicenseWeb, which allows for query and extraction to ad hoc reports in Microsoft Excel. The data footprint covers operations, alarms, production allocation, gas lift and HSE. Pilot projects have leveraged the technology on BP’s Valhall and Statoil’s Asgard fields. The early XML mock-up is to be retooled as a contribution to the WITSML production standard.
During the Houston WITSML public meeting last month, a lunchtime interoperability test was conducted to see how easily newcomers to the standard could interface with established data providers.
Twelve POSC WITSML SIG members present at the exhibition were connected over a local area network (LAN). During setup, several exhibitors recognized that they could ‘see’ other exhibitors on the LAN. Within a very short period of time, and with virtually no work, exhibitors were successfully exchanging WITSML messages.
Two WITSML newcomers, Wellstorm Development and Jibe Networks, were able to interoperate with established SIG participants Sense Intellifield, Smith Bits, Halliburton/Landmark, Knowledge Systems Inc, SDC Geologix and POSC. A telling demonstration of the fact that web services technology ‘just works’!
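The sort of exchange the lunchtime test exercised can be sketched as follows: one party serializes a WITSML well object and another parses it back. The fragment below is simplified, not a schema-valid WITSML instance, and the namespace URI is illustrative.

```python
# Minimal WITSML-style round trip: serialize a wells document, parse it back.
# The document is a simplified fragment; the namespace URI is an assumption.
import xml.etree.ElementTree as ET

NS = "http://www.witsml.org/schemas/131"

wells_doc = (
    '<wells xmlns="%s" version="1.3.1.1">'
    '<well uid="W-12345"><name>Demo 1</name></well>'
    '</wells>' % NS
)

def well_names(doc):
    """Return the well names found in a WITSML-style wells document."""
    root = ET.fromstring(doc)
    return [w.findtext("{%s}name" % NS) for w in root.iter("{%s}well" % NS)]

print(well_names(wells_doc))  # -> ['Demo 1']
```

That any namespace-aware XML parser can consume such a message is much of the reason the exchange ‘just worked’ with virtually no setup.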
An informal poll of attendees and vendors showed that both were enthusiastic about the exhibition and the progress of the WITSML standard over the past two years. Both groups gave strong endorsements for similar seminars and exhibitions at future WITSML meetings. Statoil and Hydro will host the next WITSML SIG and public meetings in Norway on 15-18 November.
About 20 attended the Spring meeting of the API Petroleum Industry Data Exchange (PIDX) committee in Paris. PIDX Europe chairman John Boardman (Shell) welcomed SAP as the latest of PIDX Europe’s 25-strong membership. PIDX’ mission is to establish global business standards for the industry and its trading partners. The organization also works on business processes and documentation. PIDX has migrated its legacy EDI standards to an XML-based protocol, expanding its footprint to ‘complex products and services’ that support whole chunks of oil field activity like cement jobs or well completions.
Trade Ranger’s Randy Clark, PIDX chairman, reported that the classification task force’s revision to Segment 71 of the UNSPSC (Mining, Oil and Gas) has now been adopted as a UNSPSC code set. A new XML transaction standard includes schemas for custody ticket, petroleum products, order status request, receipt and advance shipping notice. The Business Process workgroup has completed the credit memo standard and is working on standardizing the invoice-to-pay processes. In the downstream, EDI product codes have been updated and a downstream special interest group is working on a draft XML bill of lading specification.
PIDX has signed a memorandum of understanding with the chemical industry standards bodies CIDX and RAPID for a Joint Technology Plan (JTP) to collaborate on standards development. The JTP will evaluate OASIS, UPC and CEFACT, as well as web services, core component specifications, schema design best practices and cross industry standards. There is broad industry agreement that e-business is critical for Sarbanes-Oxley compliance. As XML standards mature, adoption is broadening, benefiting operations and ‘getting the attention of people in finance.’
Mark Mack revealed that Schlumberger has been operating its own e-commerce system for internal purchasing and catalog publishing for several years. Schlumberger uses industry standards ‘end to end’ from field ticket through purchase-to-pay to request for tender. The company dislikes reverse auctions and ‘non standard’ processes which might compromise Schlumberger IPR. For Schlumberger, PIDX is ‘the standard’. Today, two thirds of its invoices are XML, the rest legacy EDI. Clients want contract compliance, spend analysis and cost efficiency. Compliance involves a three-way match of purchase order, delivery ticket and invoice. This works for simple products, but complex services generate a tiered price structure. The cost of a logging job may not be knowable beforehand—only between 0 and 20% of catalog items have a price!
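The three-way match Mack describes can be sketched as below: an invoice is approved only if purchase order, delivery ticket and invoice agree on item and quantity, and the invoiced price matches the PO price. The field names and tolerance are assumptions for illustration, not a PIDX schema.

```python
# Toy three-way match of purchase order, delivery ticket and invoice.
# Field names and the price tolerance are invented for illustration.

def three_way_match(po, ticket, invoice, price_tol=0.01):
    """Approve an invoice only when PO, ticket and invoice agree."""
    if po["item"] != ticket["item"] or po["item"] != invoice["item"]:
        return False
    if ticket["qty"] != invoice["qty"] or invoice["qty"] > po["qty"]:
        return False
    return abs(po["unit_price"] - invoice["unit_price"]) <= price_tol

po = {"item": "cement-class-G", "qty": 100, "unit_price": 12.50}
ticket = {"item": "cement-class-G", "qty": 100}
invoice = {"item": "cement-class-G", "qty": 100, "unit_price": 12.50}

print(three_way_match(po, ticket, invoice))  # -> True
```

Mack’s point is that this logic breaks down for complex services, where no single catalog price exists to match against.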
Not too granular!
For Mack, spend analysis is not really a supplier issue. Clients should not get ‘too granular,’ complaining, for instance, that ‘fuel charges are too high’. To understand your spend, you should look at how much is spent on logging, not at vehicle mileage. 95% of the e-business challenge is not invoice-related but centers on other documents and on pre-invoicing discussions. Clients need to ‘stay involved’ even if a PIDX capability is outsourced. Today, 9,000 users in 1,600 locations use the Schlumberger-developed eZView secure billing system and its online catalog of oilfield products and services.
SAP Industry Speak
According to Telma Gallo Sanchez, SAP’s ‘Industry Speak’ (IS) enterprise services architecture is a business-to-business facility, built atop SAP’s NetWeaver web services platform. Industry Speak promises connectivity with third party applications. PIDX is currently ‘outside of the SAP IS process.’ IS enables ‘closed loop’ processing from order to invoice. IS differs from traditional SAP development, allowing process automation across heterogeneous environments such as Siebel CRM and Microsoft Outlook.
Andy Ross (Business Web) introduced the AS2 routing protocol from the Internet Engineering Task Force (IETF). AS2, which is used by Wal-Mart to communicate with its suppliers, will become an alternative to RosettaNet for PIDX data. AS2 helps smaller companies avoid the high costs of middleware. PIDX business messages make extensive use of namespaces: ‘we chose to namespace everything’. This is compliant with W3C recommendations, but adds to bandwidth. Ross strongly recommends configuring middleware to support all W3C-compliant namespace standards.
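Ross’s observation that namespacing everything ‘adds to bandwidth’ is easy to verify by comparing the serialized size of the same message with and without per-element namespace qualification. The element names and namespace URI below are made up for illustration.

```python
# Compare the byte cost of qualifying every element with a namespace
# prefix. Element names and the namespace URI are illustrative.
plain = "<Invoice><LineItem><Qty>4</Qty></LineItem></Invoice>"
namespaced = (
    '<pidx:Invoice xmlns:pidx="http://www.pidx.org/schemas">'
    "<pidx:LineItem><pidx:Qty>4</pidx:Qty></pidx:LineItem>"
    "</pidx:Invoice>"
)
overhead = len(namespaced) - len(plain)
print(overhead, "extra bytes")  # qualification can roughly double small payloads
```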
Joint Technology Plan
Andy Ross outlined the CIDX, PIDX and RAPID Joint Technology Plan to ‘converge standards and bridge technologies’. This involves a joint study of the potential of web services, with a focus on standardizing schema naming and design. The carrot is CIDX/PIDX interoperability; after all, ‘an invoice is an invoice is an invoice.’ The plan is to define core components and vertical-specific standards. There are challenges regarding take-up within the PIDX vertical. Notwithstanding a degree of flux, Ross stated that ‘standards are really taking off in North America.’
Mercury Computer Systems (formerly TGS) has released a version of its visualization programming toolkit, Open Inventor, tuned for clusters. Open Inventor Cluster Edition offers developers ‘transparent scalability’ of 3D visualization applications like 3D seismics and large voxel models.
The Cluster Edition is available on 64 bit Linux systems based on either AMD’s Opteron or Intel’s Xeon EM64T. 32 and 64 bit Windows versions will be available later this year. Open Inventor is embedded in systems from Foster Findlay Associates, Landmark, Schlumberger, SMT, Jason, Paradigm and Roxar. Oil company developers include Total and BGP (China).
Calgary-based CMG generated $11.5 million from reservoir modeling software for fiscal 2004, up 22% on the year.
Statoil’s investment arm Offtech Invest is to inject $2.5 million into Calgary-based Geomodelling Technology for the further development of the SBED product line.
C&C Reservoirs has sold its Digital Analogs database to new clients Petronas Carigali, Kuwait Oil Company, Occidental and Wintershall.
Petrosys has appointed Franck Lemaire manager of its new Paris office. Lemaire was previously with Dynamic Graphics.
Ikon has appointed Nick Pillar as chief geophysicist. Pillar was previously with Enterprise and Petronas.
John Dobbs and Scott Rouze, both petroleum engineers, have been signed by consultants Ryder Scott. Dobbs was previously with ExxonMobil, Rouze with Williams Pipeline.
Geoscience Australia and CSIRO are demonstrating real-time access to ‘pre-competitive’ geoscience data in the Solid Earth and Environment Grid (SEE Grid) project, ‘next generation’ internet-based technology for geoscience and spatial data interoperability.
Hakima Ben Meradi is to head up Earth Resource Management Services’ (ERM.S) new Stavanger offices.
Jerry Dees has been named director to Geotrace’s board. Dees was previously with Vastar Resources.
Knowledge Reservoir and Net Brains are to team on the pursuit of subsurface, production, and knowledge management projects in Mexico.
Robert ‘Bo’ Ewald has been named CEO of Linux Networx. Ewald was previously with Cray Research and served on the US Presidential Information Technology Advisory Committee.
OFS Portal has appointed Alvaro Escorcia (Halliburton), Paul Krueger (Vetco Intl.) and Steve Sidney (Baker Hughes) to its board.
The Petroleum Research Center of the Libyan National Oil Corp. is to deploy Scandpower Petroleum Technology’s Olga 2000 simulator to boost its R&D and technical services capability.
A new release of Geosoft’s Oasis Montaj offers enhanced 3D visualization, 3D voxel model building, Kriging and GoCad support.
Paradigm will be hosting its 2005 processing and depth imaging user meeting in Krakow, Poland on October 26-27.
ESRI is holding a special session of its Petroleum 3D and Pipeline Data Model SIGs at its annual user conference in San Diego, 25-29 July.
Spescom Software has appointed Jonathan Reed to spearhead its expansion into the oil, gas and petrochemical market. Reed was previously with NEON Systems.
Tigress is now hosting clients’ data rooms online from its re-vamped website www.tigress.co.uk. The company has also announced three new hires to its Moscow office—Emin Mamedov, Konstantin Shishkin and Anthon Rybalko.
Yoram Shoham has been appointed to Veritas’ board of directors. Shoham was previously with Shell.
Wood Mackenzie is to receive a cash injection from EU buyout specialists Candover. The funds will be used to finance WoodMac’s growth through ‘organic investment and strategic add-on acquisitions.’
BP has successfully conducted remote well surveillance of operations on an offshore Trinidad well from its Houston offices. Logging while drilling and other real time data from the rig floor, collected in legacy WITS, was transmitted via Baker Hughes’ RigLink to a WITSML server located at Baker Hughes Inteq’s Port of Spain headquarters.
BP’s Matthew Kirkman said, ‘The WITSML standard turns cross-application data transfer into a plug-and-play routine. We appreciate the efforts of Inteq and KS as they continue to apply WITSML to new workflows of significant value to industry.’
To monitor drilling conditions in real time, BP used Knowledge Systems’ (KS) Drillworks wellbore stability package. This collected data from the Inteq server in Trinidad using KS’ ConnectML—a branded WITSML implementation. BP’s engineers were able to compute and update wellbore stability models in real time as new data flowed in from the well.
In a future phase of operations, Inteq is to transmit real-time wellbore imagery to BP for real time validation of the wellbore stability model. See page 7 of this issue for more on WITSML interoperability.
When Shell integrated Recall with its Open Spirit infrastructure, the natural course of action was to get Baker Atlas to write an OpenSpirit server for Recall. An Open Spirit development kit was duly acquired, but despite Recall’s developers’ best efforts, they found it too complicated keeping up with Open Spirit’s technology. In the end, Baker supplied Open Spirit with a copy of its own dev kit—so Recall’s Open Spirit server is also a Recall client application! A variant on the ‘your place or mine’ conundrum perhaps?
OITJ—What’s special about Saltend?
Persad—This is BP’s flagship simulator. This week alone we have had three lots of visitors to the facility. We’ve also had visits from the offshore production sector. A new simulator costs many hundreds of thousands of dollars, mostly in man hours for a good team of simulation engineers. To build a ‘high fidelity’ model you need as much information as it takes to build the plant.
OITJ—What is your role?
Persad—I maintain the simulator, keeping it in step with changes to the real plant. We have to avoid the situation where the plant ‘drifts away’ from what is configured in the simulator. I also manage project teams from AspenTech and Honeywell, who do the actual model building. BP’s role is to operate the site; we don’t have the resources to build a big model ourselves.
OITJ—What is in the simulator?
Persad—The simulator models the plant down to component parts such as valves and pumps, built from standard software building bricks. These are customized with the actual equipment specifications used in the plant. AspenTech’s HySys is our main simulator environment but we also use AspenTech’s Otiss, Fantoft’s D-Spice and ABB Simcon’s Gepurs.
OITJ—How do you compare simulator and plant? What’s the ‘reality check’?
Persad—It’s all in the testing. We put together a test team made up of people from the various engineering disciplines, who know all aspects of a real plant. All flows, temperatures and pressures have to match the real plant for acceptance testing, with a good match of both steady state and dynamic behavior. Operators are important here—they can spot differences between the simulator and a real plant. They get sucked into testing the simulator. It becomes ‘real’ and they’re sweating at the end of a session! We put them through start up, shut down, and test, test and test again! This can take many weeks to do properly. Ultimately, confidence in the final model comes from the operators.
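The acceptance criterion Persad describes—every flow, temperature and pressure matching the real plant—can be sketched as a tolerance check between plant measurements and simulator output. The tags, values and 2% tolerance below are invented for illustration; they are not Saltend figures.

```python
# Toy acceptance check: does every simulated tag match the plant value
# within a relative tolerance? Tags, values and tolerance are invented.

def matches_plant(plant, simulator, rel_tol=0.02):
    """True if every simulated tag is within rel_tol of the plant value."""
    return all(
        abs(simulator[tag] - value) <= rel_tol * abs(value)
        for tag, value in plant.items()
    )

plant = {"feed_flow_t_h": 120.0, "reactor_temp_C": 345.0, "head_press_bar": 18.4}
simulated = {"feed_flow_t_h": 121.1, "reactor_temp_C": 344.2, "head_press_bar": 18.5}

print(matches_plant(plant, simulated))  # -> True
```

A real test campaign would of course check dynamic trajectories through start up and shut down, not just steady-state snapshots.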
OITJ—Is simulation used in plant design and process optimization?
Persad—There is a lot of scope for using the simulator as a design tool. In 1989, on our first ever simulator, the simulation found several instances where the plant design was wrong. The tanks were too small, the liquid density too high. We went back to the engineering prime contractor with suggested changes that enabled the actual plant start-up to be free of these problems. But in process control, the simulator is still not used as much as it should be. I reckon we could save weeks of production downtime. Packages like AspenPlus are used in the steady state design process. But with dynamic simulation, there is greater potential to improve the process. In fact, the technology has evolved to make this possible since many plants were designed.
OITJ—How is the simulator used to optimize operations?
Persad—Typically to practice plant startup and shutdown. These plants can run steady for very long periods (2 to 3 years), so starting and stopping them is a big deal and needs to be planned and practiced. One of the most significant cost savings is the avoidance of plant ‘trips,’ when the plant is forced to shut down by the Emergency Shutdown System before a potentially unsafe condition arises. Such a forced shutdown may result in a 12 hour loss of production, with a much greater monetary value than the cost of the simulator!
OITJ—How exactly do you avoid such conditions arising?
Persad—The experienced operator sees things coming where an inexperienced operator doesn’t. By training operators with the scenarios mentioned above we help them anticipate unsafe conditions.
OITJ—Do you have a ‘graying’ industry like the upstream?
Persad—Do we ever! In fact simulators are used to maintain skill levels, especially as we restructure the site and potentially lose valuable skills and knowledge. The simulator becomes a way of capturing that knowledge. We anticipate problems as the baby boomers retire. We need to get hidden knowledge out of operators’ heads and into the simulator.
OITJ—When is a simulator for operator training and when is it for simulation and design? Aren’t these facets of the same problem?
Persad—Yes, this is what the vendors now call the ‘lifecycle model.’ It is getting more and more important.
Statoil is to deploy more AspenTech software to support the design and optimization of its upstream production facilities. A new multi-year license provides access to applications including HySys Upstream.
Statoil’s previous AspenTech deployment concerned a HySys-based solution for Front-End Engineering Design (FEED). This combines process simulation, economic evaluation and engineering data management and has enabled Statoil to optimize engineering on new projects.
The HySys Upstream option extends these capabilities with industry standard methods for handling petroleum fluids and links to third-party applications that model gathering networks. This enables steady-state and dynamic simulation of the entire production system, from wellhead to production facility.
Statoil VP Øivind Nilsen said, ‘AspenTech’s simulators are designed for the complex fluid behavior found in oil and gas systems that makes them difficult to analyze and optimize. These tools, in combination with the integrated FEED solution, will enable us to improve design and investment decisions and to improve engineering efficiency.’
AspenTech VP Blair Wheeler added, ‘Hysys is the leading production facility engineering environment for the upstream and provides the foundation for our AspenOne integrated solution for oil and gas.’
Schlumberger has certified Linux Networx’ Evolocity cluster systems to run the Eclipse numerical reservoir simulator. According to Linux Networx, this is the first Linux-based high performance computer (HPC) to receive such certification.
The Schlumberger European Service Center in Aberdeen, UK is currently running ECLIPSE on a Linux Networx Evolocity cluster system. Certification was achieved on a dual AMD-Opteron node 64-bit system using Infiniband high-speed interconnect and the SUSE Professional 9.1 Linux operating system distribution.
Separately, Linux Networx announced that it is to offer its Evolocity clusters with dual-core Opterons at a future date. Dual core technology promises increased processing capacity in the same amount of space, with no change in power consumption or heat levels.
Halliburton has opened a reservoir modeling ‘center of excellence’ in Calgary. The center will provide expertise in the areas of reservoir characterization, geostatistical and geocellular modeling, numerical simulation and production optimization.
Halliburton VP David Ackert said, ‘This service offering combines the latest in Halliburton’s real-time asset management solutions with an extensive reservoir knowledge base. We can now deliver integrated field development studies from reservoir description through to drilling and completions, production enhancement and production management services.’
The center of excellence is staffed by personnel from Halliburton’s Digital and Consulting Solutions (DCS) division and claims expertise in non-conventional plays including coalbed methane, tight gas, enhanced oil recovery and heavy oil.
Local DCS manager Brad Bechtold added, ‘There is a need for specialized expertise to address the growing challenges across the Western Canadian Sedimentary Basin. The new center emphasizes Halliburton’s commitment to the Canadian oil industry.’
OFS Portal has signed two significant deals this month. Calgary-based Trican Well Service and two Brazilian units have joined the e-commerce supply-side consortium. The Brazilian deal will govern key aspects of e-commerce between Petrobras, its e-commerce marketplace Petronect and other OFS Portal members. Petronect was created in 2002 by e-Petro, a subsidiary of Petrobras, in partnership with Accenture and SAP.
Petronect president Luis Fernando Mendonca said, ‘OFS Portal members can access bids and promote eCommerce with Petrobras. We welcome OFS Portal’s commitment to open standards, which offer OFS Portal members a common point of access to Petrobras.’
Trican provides a variety of specialized products, equipment and services to the Canadian oil industry. Trican will use OFS Portal’s standards to publish electronic catalogs, making them visible to the community of Portal users.
OFS Portal CEO William Le Sage said, ‘We are expanding our global network by providing industry-leading initiatives for secure e-business processes. API/PIDX standards are critical to reducing costs and friction in upstream e-commerce.’
In a paper presented at the 2005 EAGE convention in Madrid, Petrosys MD Volker Hirsinger highlighted the ‘persistent gap’ between the spatial-centric data management work of GIS-focused teams and the business-centric efforts of IT. Overlapping data sets with inconsistent data structures and integrity rules are one characteristic of a ‘cultural divide’ which leads to duplicated data management effort, compromising application functionality.
Because this ‘uncertain area of responsibility’ is at the heart of Petrosys’ mapping business, the company has developed extensions to the PPDM 3.7 data model to consolidate spatial and business data into a single source of information. Petrosys bridges the divide between spatial data in systems like ArcSDE and business data in standard Oracle applications.
By storing data across PPDM, Oracle Spatial and ArcSDE, structured query on spatial data becomes a reality. Queries such as ‘find wells that intersect the TA-5 sand with more than 10ft. of pay,’ can be answered.
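A query like the one above combines a spatial predicate (intersects the TA-5 outline) with a business attribute filter (pay over 10 ft). The pure-Python sketch below stands in for what the consolidated PPDM/Oracle Spatial store would answer in SQL; the well names, polygon and pay values are invented.

```python
# Combined spatial/attribute filter: wells inside the TA-5 outline with
# more than 10 ft of pay. All data are invented for illustration.

def point_in_polygon(x, y, poly):
    """Ray-casting point-in-polygon test for a simple polygon."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

ta5_outline = [(0, 0), (10, 0), (10, 10), (0, 10)]  # hypothetical TA-5 extent
wells = [
    {"name": "A-1", "x": 5, "y": 5, "pay_ft": 14},
    {"name": "A-2", "x": 5, "y": 6, "pay_ft": 8},    # inside, but thin pay
    {"name": "B-1", "x": 15, "y": 5, "pay_ft": 20},  # thick pay, but outside
]

hits = [
    w["name"] for w in wells
    if point_in_polygon(w["x"], w["y"], ta5_outline) and w["pay_ft"] > 10
]
print(hits)  # -> ['A-1']
```

In the Petrosys scheme the spatial test would be delegated to Oracle Spatial or ArcSDE rather than computed in application code.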
The American Petroleum Institute (API) is addressing industry knowledge loss and the aging workforce with its new ‘API University,’ a ‘comprehensive’ continuing education program for oil and gas professionals. The API University offers courses in a classroom setting, on-line, or by CD.
API Business Services Director Kathleen Combs said, ‘The program helps companies meet their training needs in a dynamic, creative and cost-effective way. The API University provides access to the largest pool of subject experts in the industry and programs are developed and taught by top trainers using the latest innovative methods.’
Over 300 e-learning courses provide flexible training in operational risk, asset integrity, natural hazards, drilling fundamentals, production or plant operations, security, quality and environmental auditing. If existing courses do not meet a company’s specific needs, API can help employers customize their employees’ training programs.
Instructor-led training in HSE risk management is provided by ABSG Consulting. Multi-language e-Learning courses have been developed by Technomedia to cover subjects such as plant operations, drilling and production. An Instructor’s Digital Assistant is also available for customized presentations and training materials. Other industry experts provide courses on API standards including fitness for service, pressure relief systems, damage mechanisms and aboveground storage tanks. More from www.api-u.org.
BP has reported successful deployment of Multivariable Predictive Controllers (MPC) on its Gulf of Mexico deepwater Marlin platform. BP is using Honeywell’s ‘Profit Controller’ MPC system to maximize Marlin production, taking account of a variety of changing operating and weather conditions.
MPCs and Advanced Process Control (APC) solutions are used to manage operations with complex process constraints, where there are significant interactions between variables. BP first deployed an MPC on its Norwegian North Sea Ula platform where a 2% increase in production was achieved.
A key element of the Marlin deployment was the involvement of facilities operators, who provided key information on how the facility operated best in both ideal and non-ideal conditions. This let Honeywell’s designers identify the production system’s key operating constraints. The primary compressor system was selected as the best candidate for APC control.
Honeywell’s Experion Process Knowledge System (PKS) was interfaced to third party controllers and a data historian, all running under Profit Controller. Initially, the optimizer was run very conservatively, but after testing, constraints were gradually relaxed until the physical limit of the process was reached.
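The staged commissioning described above—start the optimizer with conservative limits and relax them step by step toward the physical limit—can be sketched as below. The variable names, limits and step size are illustrative, not Profit Controller settings.

```python
# Sketch of staged constraint relaxation during APC commissioning.
# Limits and step size are invented, not Marlin or Profit Controller values.

PHYSICAL_LIMIT = 100.0  # e.g. a compressor operating limit, arbitrary units
STEP = 5.0

def relax_schedule(start, physical_limit, step):
    """Return the successive operating limits used during commissioning."""
    limit = start
    schedule = [limit]
    while limit < physical_limit:
        limit = min(limit + step, physical_limit)
        schedule.append(limit)
    return schedule

print(relax_schedule(80.0, PHYSICAL_LIMIT, STEP))
# -> [80.0, 85.0, 90.0, 95.0, 100.0]
```

Each relaxation step in practice would be preceded by the kind of testing and engineering approval the article describes next.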
Substantial engineering effort was required to research and get approval for such a change in operational conditions. The team worked with the rotating equipment manufacturers, Compressor Control Corporation, Honeywell and BP engineering and operations staff to define the optimal operating envelope. The results were dramatic. The ability to use the maximum capacity of the compressor upped production by 4%. BP estimated that the project repaid the extra investment in under three weeks.
Hunt Petroleum has selected Overland Storage to provide a ‘comprehensive solution’ for data archival, backup and recovery. Overland’s REO 4000 backup and recovery appliance will be deployed with the NEO 2000 tape library to store Hunt’s geophysical data. Hunt’s data storage requirements are rising by 30% annually, straining existing storage resources.
As Hunt Petroleum’s Darrin Edgerton says, ‘We were running out of disk space on our primary storage array, while backup and recovery operations took far too long. REO and NEO give us the flexibility to grow our Windows, Linux and Unix tier-two storage on-the-fly, while reducing our backup and recovery window by more than 60%.’
Two-day backup
Hunt’s old system was taking over two days to perform full backup to tape, encroaching on production time. The REO/NEO system performs a full disk-to-disk-to-tape (D2D2T) backup in about eight hours. Legato backup and recovery software restores the system ‘almost instantaneously.’
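The quoted ‘more than 60%’ reduction is conservative against these figures: taking the two-day window at its 48-hour minimum, an eight-hour D2D2T backup is an 83% reduction. The arithmetic uses only the numbers reported in the article.

```python
# Check the backup window reduction from the figures in the article:
# 'over two days' (taken as the 48-hour minimum) down to about 8 hours.
old_window_h = 2 * 24
new_window_h = 8
reduction = 1 - new_window_h / old_window_h
print(f"{reduction:.0%}")  # -> 83%
```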
Disk-speed data recovery leverages Hunt’s Fibre Channel-based Windows/UNIX storage area network (SAN). Edgerton plans to offload tier-two storage onto the REO to maximize FC-AL use. The system is now being tailored to support disaster recovery and ensure business continuance—particularly in respect of regulatory requirements.
REO 4000 incorporates high-capacity serial ATA disks, specialized management and virtualization software, iSCSI and FC-AL connectivity, and RAID 5. Modules support up to 26 SDLT or 30 LTO cartridge slots and up to two tape drives, for up to 15.6 TB of capacity. Overland Storage value-added reseller Dallas Digital Services facilitated the sale.