Enforcing deterministic semantics for a parallel program can greatly improve programmer productivity: (i) There are no difficult-to-detect synchronization errors and data races, so both debugging and testing become significantly simpler; (ii) One does not have to reason about the notoriously subtle and difficult issues of a memory model, because the language guarantees sequential semantics; (iii) One can follow an incremental development strategy, starting with a largely sequential implementation and successively parallelizing portions of it, since the program behavior remains unchanged; (iv) Even with parallelized code, one is free to use sequential debugging tools, because parallel executions are not necessary for reproducing program errors.
So, can we have parallel languages in which non-deterministic behavior is always detected, either at compile time or at run time, without sacrificing (too much) expressivity and performance? In such a language, a program is guaranteed an overall deterministic outcome. Such a language would not be universally sufficient: some algorithms (e.g., chaotic relaxation or parallel branch-and-bound search) may exploit truly non-deterministic outcomes for greater parallel performance. But for the vast majority of computationally intensive programs, non-determinism is not part of the algorithm specification, and non-deterministic behavior is the result of bugs. For such programs, deterministic parallel languages would significantly enhance programmer productivity.
We propose to investigate, and likely combine, two approaches to the design of deterministic parallel shared-memory languages: (i) focus on the software implementation of caching protocols that guarantee determinacy, which requires run-time checks, and (ii) eliminate run-time checks in many (but not all) cases by imposing stronger typing constraints. A minimal sketch of the kind of run-time check involved in approach (i) is given below.
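To make approach (i) concrete, the following is a minimal sketch, not any existing protocol, of how run-time checks might detect accesses that could produce non-deterministic outcomes: every array access records which task touched each element during the current parallel phase, and a conflicting access by a different task is reported immediately. All names here (DetArray, DeterminismViolation, taskId) are hypothetical, and the use of synchronized methods serializes accesses for simplicity; a realistic implementation would use a lower-overhead scheme.

```java
import java.util.Arrays;

// Hypothetical exception signaling a detected determinism violation.
final class DeterminismViolation extends RuntimeException {
    DeterminismViolation(String msg) { super(msg); }
}

// Sketch of an array whose accesses are checked at run time for
// conflicting reads/writes by different tasks within one parallel phase.
final class DetArray {
    private final int[] data;
    private final int[] lastWriter;  // task id of last writer this phase, -1 if none
    private final int[] lastReader;  // task id of last reader this phase, -1 if none

    DetArray(int length) {
        data = new int[length];
        lastWriter = new int[length];
        lastReader = new int[length];
        endPhase();
    }

    // Reset access metadata at a synchronization point
    // (e.g., at the end of a parallel loop).
    synchronized void endPhase() {
        Arrays.fill(lastWriter, -1);
        Arrays.fill(lastReader, -1);
    }

    synchronized int read(int taskId, int i) {
        // A read conflicts only with a write by a different task in the same phase.
        if (lastWriter[i] != -1 && lastWriter[i] != taskId)
            throw new DeterminismViolation("read/write conflict on index " + i);
        // For brevity only the most recent reader is recorded; a full protocol
        // would track a set of readers per element.
        lastReader[i] = taskId;
        return data[i];
    }

    synchronized void write(int taskId, int i, int value) {
        // A write conflicts with any prior access by a different task in the same phase.
        if ((lastWriter[i] != -1 && lastWriter[i] != taskId) ||
            (lastReader[i] != -1 && lastReader[i] != taskId))
            throw new DeterminismViolation("write conflict on index " + i);
        lastWriter[i] = taskId;
        data[i] = value;
    }
}
```

Approach (ii) would aim to prove, through stronger typing constraints, that many such accesses cannot conflict, so the corresponding checks can be removed without giving up the determinism guarantee.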
Deterministic Sharing of Arrays