As data volumes rise, load times are going through the roof, a serious issue for companies like Apache Corp., whose total online seismic data volume has grown seven-fold to 3.5 petabytes over the last four years.
Bradley Lauritsen, manager of exploration computing with Apache, said, ‘Our users’ geological models used to take 20 minutes to open in Schlumberger’s Petrel 2009. The 64-bit address space lets them access very large files, creating data transfer bottlenecks.’
Apache has now deployed a 10 Gigabit Ethernet backbone (with 1 GbE workstation links), Windows’ Server Message Block 2 (SMB2) network file-sharing protocol and a storage system comprising NetApp FAS arrays running the Unix-derived Data ONTAP operating system.
Solid state ‘Flash Cache’ modules improve I/O throughput by reducing the need to pull data from Apache’s back-end SATA drives. Cache is proving an effective solution to seismic data transfer. Apache reports a ‘near 70%’ cache hit rate—reducing access to the slower drives.
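The effect of a high cache hit rate on average read latency is easy to quantify. The sketch below is purely illustrative: the latency figures are hypothetical round numbers, not measurements from Apache’s FAS systems; only the ‘near 70%’ hit rate comes from the article.

```python
def effective_access_time(hit_rate, cache_ms, disk_ms):
    """Average read latency for a cache in front of slower disks.

    hit_rate : fraction of reads served from cache (0.0 to 1.0)
    cache_ms : latency of a cache hit, in milliseconds (assumed)
    disk_ms  : latency of a read from back-end drives (assumed)
    """
    return hit_rate * cache_ms + (1 - hit_rate) * disk_ms

# Hypothetical numbers: 0.1 ms from flash cache, 10 ms from SATA drives.
# At a 70% hit rate the average read drops from 10 ms to about 3.1 ms.
avg = effective_access_time(0.7, 0.1, 10.0)
print(f"average read latency: {avg:.2f} ms")
```

The point is that even with a sizeable miss rate, shifting most reads onto flash dominates the average, which is consistent with the load-time improvement Apache reports.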
Apache uses multiple 512GB NetApp Flash Cache cards in five FAS6080 non-blocking storage systems in its Houston headquarters. Local sites deploy FAS3170 systems as a back end to Petrel and other interpretation systems. Lauritsen reports that what took 20 minutes to load before now takes only 5 minutes. He described the NetApp storage solution as the ‘missing link’ that allows Apache’s geoscientists to take full advantage of Petrel 2009.
Curiously, the foregoing has been reported as a Windows success story by both Microsoft¹ and Schlumberger², and used to knock Linux-based interpretation systems (read Paradigm and Landmark). But the reality is a little more complex. The advent of SMB2 in the latest 64-bit Windows operating systems means that these can now ‘talk’ to industrial-strength storage systems like NetApp’s.
We asked NetApp’s solid state product marketing manager Mark Woods if getting data over the network was really faster than a local disk. He replied, ‘No, loading a file from a local disk is typically going to be faster than downloading it from networked storage. This is likely still the case in the new environment. Apache wanted to enable geoscientists to work in parallel on the same data files. Changing from local to networked storage made this practical.’
In conclusion, this story is more about data management than performance. In this context, Apache is also leveraging NetApp’s SnapMirror and SnapVault data protection and replication technology to synchronize data between headquarters and affiliates. Last month NetApp reported total (all clients) sales of one petabyte of cache memory. More from www.oilit.com/links/1007_8.
© Oil IT Journal - all rights reserved.