Regularity-conforming neural networks for PDEs

Abstract

Neural networks (NNs) are increasingly used as discretised function spaces for solving Partial Differential Equations (PDEs). While the Universal Approximation Theorem guarantees that a sufficiently large NN can approximate Sobolev functions, the natural class of functions arising in PDEs, we demonstrate that in practice, low-regularity solutions can lead to convergence issues during training. To overcome this, we propose regularity-conforming architectures, in which a priori regularity information is built into the NN. Such architectures are also inherently explainable, allowing the definition of novel types of loss functions. As a case study, we consider a 2D transmission problem with discontinuous materials. This problem arises in several application domains, e.g., geophysics, and its regularity is well understood: solutions may admit power-like singularities and jump discontinuities in the gradient across material interfaces. In the classical L-shape problem, our proposed architecture reduces the H1 error by a factor of ten relative to a classical architecture. In the case of four distinct materials, where both jump discontinuities in the derivative and power-type singularities are present, our explainable architecture permits the definition of a PINNs-type loss based on the strong formulation of the PDE with an interface condition, yielding relative H1 errors of 0.5%. This is joint work with David Pardo (UPV/EHU and BCAM) and Judit Muñoz Matute (BCAM and University of Texas at Austin).
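To illustrate the idea of building a priori regularity information into the architecture, the following is a minimal PyTorch sketch for the L-shape problem, whose reentrant corner (interior angle 3π/2) yields the known leading singular exponent λ = 2/3. The class names (SmoothNet, RegularityConformingNet) and the specific ansatz u = u_smooth + c(x, y) · r^(2/3) · sin(2θ/3) are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a regularity-conforming ansatz for the L-shape problem.
# Assumes the reentrant corner sits at the origin; names are hypothetical.
import torch
import torch.nn as nn

class SmoothNet(nn.Module):
    """Plain fully connected network for the smooth part of the solution."""
    def __init__(self, width=30, depth=3):
        super().__init__()
        layers = [nn.Linear(2, width), nn.Tanh()]
        for _ in range(depth - 1):
            layers += [nn.Linear(width, width), nn.Tanh()]
        layers += [nn.Linear(width, 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, xy):
        return self.net(xy)

class RegularityConformingNet(nn.Module):
    """Ansatz u(x, y) = u_smooth(x, y) + c(x, y) * r^(2/3) * sin(2*theta/3),
    embedding the known power-like corner singularity into the model."""
    def __init__(self):
        super().__init__()
        self.smooth = SmoothNet()
        self.coeff = SmoothNet()  # smooth multiplier of the singular mode
        self.lam = 2.0 / 3.0      # singular exponent for the L-shape corner

    def forward(self, xy):
        x, y = xy[:, 0:1], xy[:, 1:2]
        r = torch.sqrt(x**2 + y**2 + 1e-12)  # distance to the corner
        theta = torch.atan2(y, x)            # angle; adjust the branch to the domain
        singular = r**self.lam * torch.sin(self.lam * theta)
        return self.smooth(xy) + self.coeff(xy) * singular

# Usage: evaluate the ansatz on a batch of points; train with any
# PINN- or Ritz-type loss as usual.
model = RegularityConformingNet()
u = model(torch.rand(128, 2))
```

Because the singular mode appears explicitly in the ansatz, the same decomposition also supports the explainable, PINNs-type losses mentioned above, e.g., enforcing the strong form of the PDE together with interface conditions across material boundaries.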

Date
Apr 19, 2024 1:00 PM
Event
Seminario GMNA
Location
Seminar Room, Department of Mathematics (2.2.D08)