CGG center showcases gigabit network

PDM visited CGG’s new Redhill processing center, where a new high-performance 100 terabyte storage area network from LSI Logic feeds data to CGG’s 1,000-processor PC-Cluster compute engine. Data flows over what is claimed to be Europe’s first 2 Gbit/s network.

PDM visited CGG’s new processing center in Redhill, UK for the roll-out of a new, high-performance storage system supplied by LSI Logic Storage Systems. LSI’s Metastor enterprise storage system, as supplied to CGG, comprises 100 terabytes of disk storage along with what is claimed to be Europe’s first 2 Gbit Fibre Channel storage area network (SAN).

Delorme

CGG’s IT Manager Laurent Delorme explained that high-performance seismic processing at Redhill increasingly relies on high-speed disk-based data storage. The majority of CGG’s storage is now from LSI, but CGG also deploys disk systems from Sun, SGI and Network Appliance. CGG appreciates LSI because ‘they know the geophysical business.’ CGG also benefits from direct access to LSI – without going through a distributor – gaining early access to, and influence on, new technology.

Never ending

Storage optimization, the ‘never ending story’, is part of an ongoing effort to enhance and optimize CGG’s productivity. Today’s seismic acquisition typically collects around 5TB of data per project. During data processing, this volume expands roughly tenfold, to around 50TB per project. Such intermediate data was previously stored on cartridges, but is increasingly moving to disk through hierarchical storage management systems. CGG may have as many as 50 projects in progress at any given time – representing around 2.5 petabytes of data. A single project is collapsed to around 0.2 TB once processed and delivered to a client for interpretation on the workstation or in the visualization center.
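The figures quoted above can be checked with a little back-of-the-envelope arithmetic (a sketch; the per-project volumes are those reported by CGG, the variable names are our own):

```python
# Per-project volumes in terabytes, as quoted in the article.
acquired_tb = 5        # raw field data from one acquisition project
in_process_tb = 50     # ~10x expansion during processing
delivered_tb = 0.2     # final deliverable sent to the client

concurrent_projects = 50

# Aggregate intermediate storage across all live projects.
total_tb = concurrent_projects * in_process_tb
total_pb = total_tb / 1000

print(f"In-process data: {total_tb} TB ({total_pb} PB)")
# 50 projects x 50 TB = 2,500 TB, i.e. the ~2.5 petabytes cited.
```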

DMFS

To improve load-balancing, reduce data spooling and improve fault tolerance, CGG’s IT architecture is decoupling storage from the compute engines. This is driving a migration to a new Distributed Migrated File System (DMFS) paradigm. Interoperability of Fibre Channel hardware remains an issue; indeed, the whole field of distributed file system controllers is still in its infancy.

Fabric

The current CGG solution relies on large-volume Fibre Channel disks and a SAN fabric built with switches from Brocade and QLogic.

NUMA

Today, CGG’s 1,000-processor PC-Clusters are used only for ‘embarrassingly parallel’ code. Tomorrow, the DMFS architecture will move more and more code from expensive NUMA machines to the clusters. CGG claims 7 teraflops of processing power worldwide.

Virtual network

CGG offered an amusing insight into its high bandwidth, transatlantic ‘virtual network’. Sending a 5TB processing dataset over the ‘pond’ is still prohibitively expensive. The solution adopted by CGG to move BP’s seismics from Houston to Aberdeen? Send a rack full of disks via DHL!

© Oil IT Journal - all rights reserved.