Runtime Systems and Tools:
Interoperability
The expressiveness of different parallel languages and paradigms provides natural ways to solve different classes of problems. It is therefore an advantage to be able to link parallel modules or libraries written in different parallel languages.



Charm++ provides a transition path from the prevalent MPI (or MPI + OpenMP) based codes, in which users can take advantage of Charm++ without rewriting their entire code base. In Charm++'s interoperable mode, a user may rewrite part of a program in Charm++, e.g. a module that performs parallel state space search, and use that module as an external library from the MPI-based code. Only trivial changes are required to the MPI code and to the Charm++ libraries to make them interoperable. In particular, the MPI code must call CharmLibInit before invoking any Charm++ library, and a Charm++ library must provide a simple interface function that transfers control from MPI to Charm++. More details about enabling such interoperability can be found in the manual linked at the end of this page. So far, we provide a few interoperable libraries that users can call from their MPI codes: collision detection, sorting, 1D FFT, and state space search.
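As a rough sketch of this call sequence (see the manual linked below for the exact usage), an MPI driver initializes the Charm++ runtime and then calls the library's interface function. CharmLibInit and its counterpart CharmLibExit come from the interoperation layer; ParallelSearch is a hypothetical interface function standing in for whatever the Charm++ library exports, and duplicating the communicator is just one way to choose the ranks Charm++ runs on.

    /* Sketch only: an MPI program handing control to a Charm++ library.
       In a real build these declarations come from the Charm++
       interoperation header; ParallelSearch is a made-up library entry. */
    #include <mpi.h>

    void CharmLibInit(MPI_Comm comm, int argc, char **argv);
    void CharmLibExit();
    void ParallelSearch(int depth);   /* hypothetical library interface */

    int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);

      /* ... existing MPI computation ... */

      MPI_Comm libComm;
      MPI_Comm_dup(MPI_COMM_WORLD, &libComm);
      CharmLibInit(libComm, argc, argv);   /* start the Charm++ runtime */

      ParallelSearch(8);   /* control passes to Charm++, returns when done */

      /* ... more MPI computation ... */

      CharmLibExit();      /* shut the Charm++ runtime down */
      MPI_Finalize();
      return 0;
    }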



Using OpenMP with Charm++ is even simpler: add OpenMP pragmas to the Charm++ code at the desired places, and they take effect as usual. Alternatively, Charm++ provides its own implementation of parallel loop constructs, called CkLoop. Refer to the links below for further information.
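For illustration (the routine and names below are made up, not tied to any particular module), a compute loop invoked from a Charm++ entry method can be parallelized by adding the pragma and nothing else:

    /* Illustrative only: a routine called from a Charm++ entry method,
       parallelized by adding a single OpenMP pragma. */
    #include <omp.h>

    void scaleArray(double *data, int n, double factor) {
      #pragma omp parallel for   /* the only change: iterations now run
                                    across OpenMP threads */
      for (int i = 0; i < n; i++) {
        data[i] *= factor;
      }
    }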



Finally, if your MPI program is devoid of mutable global variables, i.e. global state has been encapsulated into local entities, it can be run as an AMPI program and thereby make use of the Charm++ RTS. All the advantages of Charm++ (message-driven execution, asynchronous messages, load balancing, and fault tolerance, to name a few) then become available without any major changes. Refer to AMPI for more details.
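As a small illustration of what encapsulating globals means in practice (the names below are invented for the example), per-rank state that used to live in a global moves into a locally owned object that is passed around explicitly:

    /* Sketch of the refactoring AMPI expects: mutable global state becomes
       per-rank local state, so many virtual ranks can share one process. */
    #include <mpi.h>
    #include <cstdio>

    /* Previously a mutable global:  int iterCount;  */
    struct RankState {
      int iterCount;
    };

    int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);
      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      RankState state = {0};   /* private to each (virtual) rank */
      for (int i = 0; i < 10; i++) {
        state.iterCount++;
      }
      std::printf("rank %d did %d iterations\n", rank, state.iterCount);

      MPI_Finalize();
      return 0;
    }

Such a program runs unchanged under plain MPI, and once built with AMPI's compiler wrappers it can be launched with more virtual ranks than physical cores, letting the RTS migrate and balance them.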



Feel free to contact us with further questions; we are always on the lookout for people to try our new features and provide feedback.



Charm++-MPI Interoperability Manual Page

OpenMP/CkLoop Manual Page

AMPI
Papers/Talks

[Paper] Achieving Computation-Communication Overlap with Overdecomposition on GPU Systems [ESPM2 2020], Jaemin Choi, David Richards, Laxmikant Kale
[Paper] Multi-level Load Balancing with an Integrated Runtime Approach [CCGrid 2018]
[Talk] Charm++ & MPI: Combining the Best of Both Worlds [IPDPS 2015]
[Paper] Charm++ & MPI: Combining the Best of Both Worlds [IPDPS 2015], Nikhil Jain, Abhinav Bhatele, Jae-Seung Yeom, Mark Adams, Francesco Miniati, Chao Mei, Laxmikant Kale
[Talk] Charm++ Interoperability [Charm++ Workshop 2013]