Adaptive Message Passing Interface (AMPI)

Parallel Computational Science and Engineering (CSE) applications often exhibit irregular structure and dynamic load patterns. Many such applications have been developed using procedural languages (e.g., Fortran) with message-passing parallel programming paradigms (e.g., MPI) for distributed-memory machines. Incorporating dynamic load balancing at the application level requires significant changes to the design and structure of these applications, because traditional MPI run-time systems do not support dynamic load balancing. Object-based parallel programming languages such as Charm++ support efficient dynamic load balancing through object migration for irregular and dynamic applications, and can also respond to external factors that cause load imbalance. However, converting legacy MPI applications to such object-based paradigms is cumbersome. AMPI is an implementation of MPI that adds dynamic load balancing and multi-threading to MPI applications.

Our approach and implementation are based on the user-level migratable threads and load balancing capabilities provided by the Charm++ run-time system. Conversion to this platform is straightforward even for large legacy codes. We have converted various benchmark programs and legacy codebases, as well as the component codes ROCFLO and ROCSOLID of a rocket simulation application, to AMPI. Our experience shows that, with minimal overhead and effort, one can incorporate dynamic load balancing capabilities into legacy Fortran-MPI codes. An introductory tutorial on converting existing Fortran 90 codes to AMPI by hand can be found here.
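Because AMPI implements the standard MPI interface, an ordinary MPI program that keeps its state on the stack or heap (rather than in mutable global variables) typically needs no source changes at all; it only needs to be built with AMPI's compiler wrappers. The C sketch below illustrates that form; the wrapper name (ampicc) and the virtual-processor option mentioned afterwards are as described in the AMPI manual, so check the manual for the exact names and options in your Charm++ version.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(name, &len);

        /* Under AMPI, each MPI rank is a migratable user-level thread (a
         * "virtual processor"), so size may exceed the number of physical
         * processors the job actually runs on. */
        printf("Hello from rank %d of %d on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }

Such a program can be compiled with ampicc and launched with more MPI ranks (virtual processors) than physical processors, letting the Charm++ run-time system migrate ranks between processors to balance load. Codes that do use mutable global or static variables additionally need those variables privatized, which is the main subject of the conversion tutorial linked above.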

Also, please refer to the talk on AMPI and AMPI Tutorial.
 

Software
AMPI has been integrated into the Charm++/Converse distribution. Please download the source distribution and install AMPI by specifying the target "AMPI" to the Charm++ build command (for details, see the README file in the source distribution). Also see the Adaptive MPI Manual. [postscript] [PDF] [html]
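For reference, a typical build-compile-run sequence looks roughly like the following; the architecture string (netlrts-linux-x86_64) and the specific options are only illustrative, so consult the README and the AMPI manual for the values appropriate to your platform and Charm++ version.

    # Build Charm++/Converse with the AMPI target (architecture string is an example)
    ./build AMPI netlrts-linux-x86_64 -j8

    # Compile an MPI program with AMPI's compiler wrapper (found under the
    # built architecture's bin/ directory)
    ampicc -o hello hello.c

    # Run on 4 physical processors with 16 virtual processors (MPI ranks)
    charmrun ./hello +p4 +vp16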
People
Papers
  • 10-21    Eduardo R. Rodrigues, Philippe O. A. Navaux, Jairo Panetta, Celso L. Mendes and Laxmikant V. Kale,  Optimizing an MPI Weather Forecasting Model via Processor Virtualization,  Proceedings of the International Conference on High Performance Computing (HiPC 2010)
  • 10-14    Stas Negara, Gengbin Zheng, Kuo-Chuan Pan, Natasha Negara, Ralph E. Johnson, Laxmikant V. Kale and Paul M. Ricker,  Automatic MPI to AMPI Program Transformation using Photran,  To appear in the 3rd Workshop on Productivity and Performance (PROPER 2010) at the Euro-Par 2010 Conference
  • 10-09    Stas Negara, Kuo-Chuan Pan, Gengbin Zheng, Natasha Negara, Ralph E. Johnson, Laxmikant V. Kale and Paul M. Ricker,  Automatic MPI to AMPI Program Transformation,  8th Annual Workshop on Charm++ and its Applications
  • 06-05    Gengbin Zheng, Orion Sky Lawlor, Laxmikant V. Kale,  Multiple Flows of Control in Migratable Parallel Programs,  The 8th Workshop on High Performance Scientific and Engineering Computing (HPSEC-06)
  • 07-08    Chao Huang, Gengbin Zheng, Laxmikant V. Kale,  Supporting Adaptivity in MPI for Dynamic Parallel Applications,  PPL Tech Report 07-08
  • 05-04    Chao Huang, Gengbin Zheng, Sameer Kumar, Laxmikant V. Kale,  Performance Evaluation of Adaptive MPI,  Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming 2006
  • 03-07    Chao Huang, Orion Lawlor, L. V. Kale,  Adaptive MPI,  Proceedings of the 16th International Workshop on Languages and Compilers for Parallel Computing (LCPC 2003), LNCS 2958, pp. 306-322
  • 00-03    Milind Bhandarkar, L. V. Kale, Eric de Sturler, and Jay Hoeflinger,  Object-Based Adaptive Load Balancing for MPI Programs,  in Vassil Alexandrov et al. (Eds.), Computational Science -- ICCS 2001, Proceedings of the International Conference on Computational Science, San Francisco, CA, Lecture Notes in Computer Science, Vol. 2074, Springer Verlag, pp. 108-117, May 2001
Related Links

This page is maintained by Celso Mendes. Back to the PPL Research Page