NUMA support for Charm++
Joint Laboratory for Petascale Computing Workshop (JLPC) 2010
Publication Type: Talk
Cache-coherent Non-Uniform Memory Access (ccNUMA) platforms based on multi-core chips are now a common resource in High Performance Computing (HPC). In such platforms, the shared memory is physically distributed over several memory banks interconnected by a network (e.g., AMD HyperTransport and Intel QuickPath Interconnect). Because of this interconnection, the cost of a memory access varies with the distance between the processing unit and the memory bank. The main challenge on a ccNUMA platform is therefore to manage data distribution efficiently across the machine's memory banks. Charm++ is a parallel programming system whose main characteristic is providing a portable programming model for both shared- and distributed-memory platforms. Since clusters built from ccNUMA nodes are becoming a trend in HPC and exascale platforms, it is important to provide NUMA support in Charm++. This talk presents the collaborative research between UIUC and INRIA on Charm++ to provide support for managing memory affinity. This support consists of three parts: a command-line option to distribute application data across memory banks, a NUMA-aware isomalloc for AMPI, and a NUMA-aware memory allocator. This NUMA support has been integrated into the Charm++ distribution.