Foraging for Resources in the Multicore Present and Future

HPC consultant Wolfgang Gentzsch has thoughtfully updated the case for multicore architectures in the HPC context. Over on LinkedIn, via one of the HPC discussion groups, I responded with:

I also enjoyed your article, Wolfgang – thank you. Notwithstanding the drive towards cluster-on-a-chip architectures, HPC customers will require workload managers (WLMs) that interface effectively and efficiently with O/S-level features/functionalities (e.g., MCOPt Multicore Manager from eXludus for Linux, to re-state your example). To me, this is a need well evidenced in the past: For example, various WLMs were tightly integrated with IRIX’s cpuset functionality (http://www.sgi.com/products/software/irix/releases/irix658.html) to allow for topology-aware scheduling in this NUMA-based offering from SGI. In present and future multicore contexts, the appetite for petascale and exascale computing will drive the need for such WLM-O/S integrations. In addition to the multicore paradigm, what makes ‘this’ future particularly interesting is that some of these multicore architectures will exist in a hybrid (CPU/GPU) cloud – a cloud that may complement in-house resources via some bursting capability (e.g., Bright’s cloud bursting, http://www.brightcomputing.com/Linux-Cluster-Cloud-Bursting.php). As you also well indicated in your article, it is incumbent upon all stakeholders to ensure that this future is as friendly as possible (e.g., for developers and users). To update a phrase originally spun by Herb Sutter (http://www.gotw.ca/publications/concurrency-ddj.htm) in the multicore context, not only is the free lunch over, it’s getting tougher to find and ingest lunches you’re willing to pay for!

We certainly live in interesting times!

Usability and Parallel Computing

According to Wikipedia:

Usability is a term used to denote the ease with which people can employ a particular tool or other human-made object in order to achieve a particular goal.

And further on, it’s stated:

In human-computer interaction and computer science, usability usually refers to the elegance and clarity with which the interaction with a computer program or a web site is designed.

Although most people focus on interface design when they hear this term, the definition allows room for more.

For example, I’m now toying with the idea of replacing “Accessible” with “Usable” in the context of the recently blogged interest in Accessible Parallel Computing (see the post below).

Parallel Computing Needs to be More Accessible

There are two truths about parallel computing.

1. Parallel computing is hard.

To quote from a March 2005 article by Herb Sutter in Dr. Dobb’s Journal:

Probably the greatest cost of concurrency is that concurrency really is hard: The programming model, meaning the model in programmers’ heads that they need to reason reliably about their program, is much harder than it is for sequential control flow.

2. Parallel computing is going mainstream.

To quote Sutter again:

… if you want your application to benefit from the continued exponential throughput advances in new processors, it will need to be a well-written, concurrent (usually multithreaded) application. And that’s easier said than done, because not all problems are inherently parallelizable and because concurrent programming is hard.

Because these two truths pull in opposite directions – ever more software needs to go concurrent, yet concurrent programming remains hard – we have an escalating situation.

So we have to do a better job of making parallel computing, well, less hard – i.e., more accessible.
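
To make “more accessible” concrete, consider how much ceremony a high-level abstraction can hide. Below is a minimal sketch in MATLAB, assuming the Distributed Computing Toolbox (the platform taken up in the next section) with a scheduler configuration already in place; mc_pi is a hypothetical M-file that estimates pi from n random samples:

    % Serial baseline: three independent estimates, one after another.
    inputs = {1e6; 1e6; 1e6};
    estimates = cell(3, 1);
    for k = 1:3
        estimates{k} = mc_pi(inputs{k});
    end

    % Parallel equivalent: dfeval creates one task per cell-array row,
    % runs the tasks through the configured scheduler, and collects the
    % outputs - no explicit job or task management required.
    estimates = dfeval(@mc_pi, inputs);

The one-liner is the point: the cluster details live in the configuration, not in the researcher’s M-file.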

Since returning to York University last April, this has become resoundingly clear to me. In fact, it has given rise to an Accessible Parallel Computing Initiative. I hope to be able to share much more about this soon. For now, you can read over my abstract for the upcoming HPCS 2007 event:

Accessible Parallel Computing via MATLAB

Parallel applications can be characterized in terms of their granularity and concurrency. Whereas granularity measures computation relative to communication, concurrency considers the degree of parallelism present. In addition to classifying parallel applications, the granularity-versus-concurrency template provides some context for the strategies used to introduce parallelism. Despite the availability of various enablers for developing and executing parallel applications, actual experience suggests that additional effort is needed to reduce the required investment and increase adoption. York University is pioneering an investment-reducing, adoption-enhancing effort based on the use of MATLAB, and in particular the MATLAB Distributed Computing Toolbox and Engine. In addition to crafting an appropriate environment for parallel computing, the researcher-centric York effort places at least as much emphasis on the development and execution of parallel codes. In terms of delivery to the York research community, MATLAB M-files will be shared in a tutorial context in an effort to build mindshare and directly engage researchers in parallel computing. Although MATLAB shows significant promise as a platform for parallel computing, some limitations have been identified. Of these, the limited support for threaded applications in a shared-memory context and for the Message Passing Interface (MPI) are of gravest concern.
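
As a taste of what those tutorial M-files might look like, here is a minimal sketch of the Toolbox’s job/task lifecycle; it also makes the granularity-versus-concurrency template concrete, in that the number of tasks sets the concurrency while the computation per task (relative to the communication each task incurs) sets the granularity. The job manager name ‘yorkjm’ and the M-file mc_pi are, again, hypothetical:

    % Locate a running job manager (hypothetical name 'yorkjm').
    jm  = findResource('scheduler', 'type', 'jobmanager', 'Name', 'yorkjm');
    job = createJob(jm);

    % Four tasks => a concurrency of four; the 1e6 samples per task set
    % the granularity (computation relative to communication).
    for k = 1:4
        createTask(job, @mc_pi, 1, {1e6});
    end

    submit(job);
    waitForState(job, 'finished');
    results = getAllOutputArguments(job);   % 4-by-1 cell of estimates
    destroy(job);
    pi_est = mean([results{:}]);            % combine the estimates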

As always, I welcome your feedback.