Book Review—An Introduction to Parallel Programming

Pete Pacheco’s textbook offers exhaustive coverage of, and insight into, MPI, Pthreads and OpenMP.

Pete Pacheco’s new book, An Introduction to Parallel Programming [1] (I2PP), is a well-written, accessible introduction to the subject that is now synonymous with high-performance computing. Parallel programming is also increasingly key to performant desktop applications, as even commodity PCs now come with multiple cores that require re-tooled software to deliver their full potential. I2PP is a textbook, not a cheapo ‘how to’ guide. It is accompanied by a website [2] and each chapter includes a set of quite hard questions. It is good to see that not all IT courses are ‘dumbed down.’

I2PP should be read by just about anybody with a serious interest in computing. Those involved in marketing technology may learn what acronyms like SIMD and NUMA, and concepts like threads and processes, actually mean. Programmers get a hands-on tutorial in building parallel programs of considerable scope. Pacheco works through hardware architectures, network topologies and three approaches to parallel programming: MPI, Pthreads and OpenMP, all called from C. He does not cover Nvidia’s CUDA or ‘other options’ for making the programmer’s life easier. It would have been nice at least to hear what these were.
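To give a flavor of the book’s C-based style, here is a minimal MPI ‘hello world’ of the genre (our sketch, not a listing from I2PP):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);               /* start the MPI runtime     */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank (id) of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
    printf("Hello from process %d of %d\n", rank, size);
    MPI_Finalize();                       /* shut the runtime down     */
    return 0;
}

Compiled with mpicc and launched with, say, mpiexec -n 4, the same source runs as four cooperating processes.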

Pacheco is a professor of computing and math at the University of San Francisco and is well placed to provide insight into crafting algorithms and tuning them to hardware. His two main use cases are solving the n-body and travelling salesman problems. As he shows, there are many ways of skinning the parallel cat and, indeed, there may be no ‘best way.’

But what is perhaps most surprising in I2PP is the revelation that parallel programs display ‘non-determinism.’ In other words, depending on how the hardware happens to schedule and interleave threads, the same program can produce different results from run to run! This has to be fixed by the programmer using ‘mutex’ constructs, rather like a database lock, which adds an extra layer of complexity.
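A minimal sketch of the problem and the fix, using Pthreads in C (again our illustration, not a listing from the book):

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define NITERS   100000

long counter = 0;                         /* shared state */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Without the mutex, the read-modify-write on 'counter' can interleave
   across threads and the final total varies from run to run. */
void* add(void* arg) {
    (void)arg;                            /* unused */
    for (int i = 0; i < NITERS; i++) {
        pthread_mutex_lock(&lock);        /* comment out these two calls */
        counter++;
        pthread_mutex_unlock(&lock);      /* to observe non-determinism  */
    }
    return NULL;
}

int main(void) {
    pthread_t t[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, add, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld\n", counter);
    return 0;
}

With the lock in place the answer is always NTHREADS times NITERS, i.e. 400000; without it, each run can print a different, smaller number.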

It seems that, after years of trying to protect programmers from themselves, compiler engineering has reverted to a previous era where nothing could be taken for granted. Back in the day, memory leaks and out-of-range pointers and indices were the problem. Then languages and compilers got savvy, with built-in protection. Hardware abstraction offered more progress in code portability and longevity. Parallelism blows a lot of this away. We have to manage non-determinism and retool the algorithm for each hardware nuance. It appears that the chances of writing a bullet-proof program are receding. It would be nice to think that we are at a provisional low point in compiler development and that the future will bring more sophisticated tools to relieve the programmer of the huge burden that current technology imposes. But this, unfortunately, does not appear to be the way the world is going.

[1] ISBN 978-0-12-374260-5 and www.oilit.com/links/1104_45.

[2] www.oilit.com/links/1104_46.


© Oil IT Journal - all rights reserved.