Oil ITJ Interview—Dean Hutchings, Linux Networx

Linux Networx president Dean Hutchings told Oil IT Journal of the high performance cluster manufacturer's origins making boards for the US government. For Hutchings, clusters are not commodities: the true cost of connecting and running thousands of machines is easily underestimated. The secret lies in cluster management—Linux Networx' forte.

Oil ITJ—How did you get started in the cluster business?

Hutchings—We sold our first cluster, already running on Linux, to the US government in 1997. Our first commercial sale was to Brookhaven National Laboratory. The company started in 1997 as a board manufacturer; then our customers asked to have the boards assembled into complete systems, and the cluster business was born. Clusters are not just a bunch of PCs—managing them requires special skills. Our ClusterWorx package provides a complete cluster management solution covering system administration, application management, job scheduling, resource management and prioritization, all of which greatly increases utilization. Cluster management has been the key to successful migration from big iron. For oil and gas this means better price-performance for clients like GX Technology and Shell's Rijswijk R&D unit.

Oil ITJ—What about companies like CGG which just use ‘commodity’ Dell boxes for their clusters?

Hutchings—Companies need to take a step back and see how much managing clusters is costing them. For most computational users, the value is created through a relationship with an independent software vendor such as Schlumberger or Landmark. We can then think about creating the best infrastructure to run clients’ apps.

Oil ITJ—So clusters are not commodity hardware?

Hutchings—You should ask those who buy clusters. Both ExxonMobil and ChevronTexaco let their system administrators run MPI and were disappointed with the results. They should have bought a managed system.

Oil ITJ—What of those companies who have built ‘skunk works’ clusters – buying boxes on eBay?

Hutchings—Yes, DaimlerChrysler made a grid out of 12 laptops. But how does that help their business? Today even early adopters should be concerned with total cost of ownership, productivity and the like. We test our systems and code in Salt Lake City before delivery, so they run out of the box. One system—the sixth fastest in the world, with 2,800 Opterons and a high-speed interconnect—was installed at Los Alamos and went from delivery to doing science in six days.

Oil ITJ—How do you count the number of processors in a cluster?

Hutchings—It’s defined by the scalability of the application. At 80-90 CPUs most systems max out, because the work left per CPU becomes too small to keep each processor busy.
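The saturation Hutchings describes is the classic behavior predicted by Amdahl's law: once a job's serial fraction dominates, adding CPUs buys little. A minimal sketch (an illustration, not from the interview — the 1% serial fraction is an assumed example figure):

```python
def amdahl_speedup(n_cpus, serial_fraction):
    """Ideal speedup on n_cpus for a job where serial_fraction
    of the work cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cpus)

# A job that is 99% parallel gains strongly at first,
# then flattens well below its theoretical ceiling of 100x.
for n in (16, 80, 800):
    print(n, "CPUs ->", round(amdahl_speedup(n, 0.01), 1), "x")
# 16 CPUs ->  13.9 x
# 80 CPUs ->  44.7 x
# 800 CPUs -> 89.0 x
```

Going from 80 to 800 CPUs here only doubles throughput, which is why CPU counts beyond an application's scaling limit waste money rather than time.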

Oil ITJ—What about the new ‘on-demand’ computing paradigm?

Hutchings—That’s what we are doing at Los Alamos. Some jobs run on 2,800 CPUs, some on 16. There is a shift in focus from CPUs per dollar to productivity per dollar. Interconnect is also crucial. Oil and gas often uses Gigabit or high-speed Ethernet, but there are much faster options: Quadrics or InfiniBand. Multicast software upgrades allow you to radically change configurations according to workload. This resource/node allocation lets you do your re-engineering at night and GigaViz interactivity during the day. The new pre-stack focus represents great potential for cluster use. Fraunhofer is also delivering cluster-based visualization with PV-4D.

Oil ITJ—What happened to the Itanium?

Hutchings—The AMD Opteron, with its great memory access, has grabbed the Itanium’s market. Intel has reacted and is now offering a 64-bit Xeon. The Itanium may take off in 12-24 months. Incidentally, a 2,200-node 64-bit Xeon cluster will be installed at the Department of Defense in September.

Oil ITJ—Where’s cluster-based computing heading now?

Hutchings—The world is moving back towards central computing and data servers, with distributed clients. The same applies to visualization, where the key is configurability, multi-case provision, scheduling and ease of upgrade.


© Oil IT Journal - all rights reserved.