In a recent lecture at Rice University, Bill Gropp of the NCSA honored the venerable message passing interface, MPI, as ‘The once and future king.’ MPI is a ubiquitous open standard for parallel programming on high-performance compute clusters. It was, and remains, ‘the dominant programming model for massively parallel scientific computing.’ MPI underpins the world’s fastest computers, including China’s 10-million-core Sunway TaihuLight cluster and the NCSA’s own Blue Waters.
Gropp looked to the future of HPC, which is increasingly a race between heterogeneous architectures (CPU/GPU and beyond) and the software and compilers playing catch-up. ‘Compilers can’t do it all,’ because ‘the number of choices that a compiler faces for generating good code can overwhelm the optimizer.’ Guidance from a human expert is therefore required. But while the programming system must not get in the way of the expert, innovation at the processor and node level makes for complex memory and execution models.
Gropp concluded that single-codebase portability is impossible: ‘Just because it is desirable doesn’t make it a reasonable goal, though it is an excellent research topic!’
MPI is 25 years old this year, an anniversary to be marked by a ‘25 Years of MPI’ symposium at the EuroMPI meeting in Chicago this fall. A complementary open standard for HPC, OpenMP, celebrates its 20th birthday this year, as reported in Intel’s Parallel Universe magazine.
© Oil IT Journal - all rights reserved.