Center for Petascale Computing Workshop
June 24, 2008 | 4405 Siebel Center
9:00am | Duane Johnson and Laxmikant Kalé: "Introduction"
9:30am | David Padua: TBA
10:00am | Ralph Johnson: "Refactoring" A refactoring is a change to a program whose purpose is to change the design of the system (e.g. module structure, interface design) but not its behavior. Identifying a change as a refactoring is useful because there are strategies for making refactorings that are quick and reliable. Moreover, it is possible to make tools to automate refactorings. The talk will focus on the strategies, not the tools.
10:30am | Laxmikant Kalé: "Charm++ and its use in scalable CSE applications" Charm++ is a parallel programming system aimed at improving productivity and performance in parallel programming. Charm++ provides an adaptive runtime support system and automates resource management. AMPI and other newer languages built using Charm++ help raise the level of abstraction in parallel programming. I will discuss these and their use in science and engineering applications that have scaled to tens of thousands of processors.
11:00am | David Ceperley: TBA
11:30am | Lunch
12:00pm | Duane Johnson: "KKR (hybrid real- and k-space) order-N DFT code" The KKR method uses Herglotz Green's functions (G(r; E) or G(k; E) matrices that are non-Hermitian). For large systems the k-space sums become less of an issue (requiring no parallelization, or only a simple one) because the number of k-points decreases as the system size grows. So far, QMR (e.g. TF-QMR) works, but it has needed preconditioning and graph partitioning (e.g. via MeTiS) to perform adequately. Preconditioned iterative solves will be key to the solution, as will checkpointing and load balancing, which may be accomplished in Charm++. I believe we would also benefit from the simulation capability for large core/node counts that is available in Charm++.
12:30pm | Paul Ricker: "Petascale Development of FLASH" I will describe FLASH, a parallel adaptive mesh refinement (AMR) simulation code used primarily for astrophysical hydrodynamics and particle problems. I will also discuss challenges to be met in adapting FLASH to petascale platforms like Blue Waters.
1:00pm | James Phillips (for Klaus Schulten): "Towards Petascale Biomolecular Simulations with NAMD" Applying petascale resources to biomedically relevant molecular dynamics simulations of 10^6 to 10^8 atoms presents different parallelization challenges than the multi-billion atom materials science simulations of today. The current message-driven NAMD design is latency tolerant and uses measurement-based load balancing. Our goal is to run at scale, which will require distributed I/O and load balancing, and to reach sub-millisecond iteration times, which will require low-overhead communication.
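As a simplified sketch of measurement-based load balancing in this general sense (not NAMD's or Charm++'s actual strategy), the code below greedily reassigns objects to the least-loaded processors using their measured compute times.

```cpp
#include <vector>
#include <queue>
#include <algorithm>
#include <functional>
#include <utility>
#include <cstddef>

// objectTimes[i]: measured compute time of object i over recent timesteps.
// Returns assignment[i] = processor index, using greedy longest-first packing.
std::vector<int> greedyRebalance(const std::vector<double>& objectTimes, int numProcs) {
    std::vector<std::size_t> order(objectTimes.size());
    for (std::size_t i = 0; i < order.size(); ++i) order[i] = i;
    std::sort(order.begin(), order.end(),
              [&](std::size_t a, std::size_t b) { return objectTimes[a] > objectTimes[b]; });

    // Min-heap of (accumulated load, processor id).
    using Entry = std::pair<double, int>;
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> procs;
    for (int p = 0; p < numProcs; ++p) procs.push({0.0, p});

    std::vector<int> assignment(objectTimes.size(), -1);
    for (std::size_t idx : order) {
        auto [load, p] = procs.top();  // least-loaded processor so far
        procs.pop();
        assignment[idx] = p;
        procs.push({load + objectTimes[idx], p});
    }
    return assignment;
}
```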