State-of-the-art domain decomposition algorithms for large-scale boundary value problems (with M≫1 degrees of freedom) suffer from bounded strong scalability because they involve the synchronisation and communication of workers inherent to iterative linear algebra. Here, we introduce PDDSparse, a different approach to scientific supercomputing that relies on a “Feynman-Kac formula for domain decomposition”. Concretely, the interfacial values (only) are determined by a stochastic, highly sparse linear system $G(\omega)\vec{u}=\vec{b}(\omega)$ of size O(M), whose coefficients are constructed with Monte Carlo simulations, hence in an embarrassingly parallel fashion. In addition to a wider scope for strong scalability in the deep supercomputing regime, PDDSparse has built-in fault tolerance and is ideally suited for GPUs. A proof-of-concept example with up to 1536 cores is discussed in detail.
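
The Feynman-Kac building block behind such Monte Carlo-constructed coefficients can be illustrated with a minimal sketch (not the paper's implementation): estimating the solution of a Laplace Dirichlet problem at a single point by averaging the boundary data at the exit points of simulated Brownian motions. The domain, function names, and parameters below are hypothetical and chosen only for illustration.

```python
import numpy as np

def feynman_kac_estimate(x0, boundary_value, inside, rng,
                         n_paths=2_000, dt=1e-3, max_steps=100_000):
    """Estimate u(x0) for the Laplace equation (Delta u = 0) with Dirichlet
    data `boundary_value`, by averaging the boundary data at the exit points
    of Brownian motions started at x0 (Feynman-Kac / Kakutani's formula).
    Illustrative sketch only: paths hitting max_steps add a small bias."""
    x0 = np.asarray(x0, dtype=float)
    total = 0.0
    for _ in range(n_paths):
        x = x0.copy()
        for _ in range(max_steps):
            # Euler-Maruyama step of standard Brownian motion
            x += np.sqrt(dt) * rng.standard_normal(x.shape)
            if not inside(x):
                break
        total += boundary_value(x)
    return total / n_paths

# Hypothetical example: unit disc with boundary data g(x, y) = x^2 - y^2,
# which is itself harmonic, so the exact solution at (0.3, 0.2) is 0.05.
rng = np.random.default_rng(0)
inside = lambda p: p[0]**2 + p[1]**2 < 1.0
g = lambda p: p[0]**2 - p[1]**2
print(feynman_kac_estimate([0.3, 0.2], g, inside, rng))
# Expected output: roughly 0.05, up to Monte Carlo and time-stepping error.
```

Because every path, and hence every coefficient estimated this way, is computed independently, the work distributes across workers without synchronisation, which is the embarrassingly parallel step the abstract refers to.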