Memory complexity and data scarcity are two pressing challenges in learning solution operators of partial differential equations (PDEs) at high resolutions. These challenges have limited prior neural operator models to low- and mid-resolution problems rather than full-scale real-world problems. Yet, physical problems possess spatially local structure that previous approaches do not exploit.
We propose a method that exploits this structure to predict solutions locally and unite them into a global solution. Specifically, we introduce a neural operator that scales to large resolutions by leveraging local and global structure through a decomposition of both the input domain and the operator's parameter space.
It consists of a multi-grid tensorized neural operator, a new data-efficient and highly parallelizable operator-learning approach with reduced memory requirements and better generalization. Our method employs a multi-grid domain decomposition to exploit the spatially local structure in the data.
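The core idea of the domain decomposition, splitting a global field into overlapping local patches, predicting on each patch, and stitching the results back together, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the patch size, halo width, and function names are all assumptions, and the local operator here is the identity.

```python
import numpy as np

def decompose(u, patch, halo):
    """Split a 2D field into overlapping patches of size `patch`,
    each padded with `halo` cells of context (zero-padded at the border)."""
    H, W = u.shape
    up = np.pad(u, halo)  # add halo cells around the global field
    patches = []
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            patches.append(up[i:i + patch + 2 * halo,
                              j:j + patch + 2 * halo])
    return patches

def recompose(patches, shape, patch, halo):
    """Stitch locally predicted patches back into a global field,
    discarding each patch's halo region."""
    H, W = shape
    out = np.zeros(shape)
    k = 0
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            p = patches[k]; k += 1
            out[i:i + patch, j:j + patch] = p[halo:halo + patch,
                                              halo:halo + patch]
    return out

u = np.random.rand(64, 64)
patches = decompose(u, patch=16, halo=4)   # 16 patches of 24x24 each
# A real model would map each padded patch to a local solution here;
# with the identity as the "local operator", recomposition recovers u.
v = recompose(patches, u.shape, patch=16, halo=4)
assert np.allclose(u, v)
```

Because each patch is processed independently, this decomposition is what makes the approach highly parallelizable and reduces the per-device memory footprint.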
Using the FNO as a backbone, we represent its parameters in a high-order latent subspace of the Fourier domain through a global tensor factorization, resulting in a drastic reduction in the number of parameters and improved generalization. In addition, the low-rank regularization this factorization imposes on the parameters enables efficient learning in low-data regimes, which is particularly relevant for solving PDEs, where obtaining ground-truth solutions is extremely costly and samples are therefore limited.
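The parameter reduction from such a global factorization can be illustrated with a Tucker decomposition of a stacked weight tensor. This is a hedged sketch only: the tensor shape, the chosen ranks, and the use of real-valued weights are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Stacked FNO-style spectral weights: one joint tensor over all layers,
# of shape (layers, in_channels, out_channels, modes, modes).
L, C, M = 4, 32, 16
full_params = L * C * C * M * M            # 1,048,576 dense parameters

# Tucker factorization: a small core tensor plus one factor matrix
# per tensor mode (the ranks below are hypothetical).
rank = (2, 8, 8, 4, 4)
core = np.random.rand(*rank)
factors = [np.random.rand(n, r)
           for n, r in zip((L, C, C, M, M), rank)]

# Reconstruct the full weight tensor from the factorized form.
W = np.einsum('abcde,ia,jb,kc,ld,me->ijklm', core, *factors)
assert W.shape == (L, C, C, M, M)

tucker_params = core.size + sum(f.size for f in factors)  # 2,696
print(full_params / tucker_params)        # ≈ 389x compression
```

Because a single factorization spans all layers, the parameters are coupled across the whole network, which is what supplies the low-rank regularization mentioned above.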
We empirically verify the efficiency of our method on the turbulent Navier-Stokes equations, where we demonstrate superior performance with 2.5x lower error, 10x compression of the model parameters, and 1.8x compression of the input domain size. Our tensorization approach alone yields up to a 400x reduction in the number of parameters without loss of accuracy. Similarly, our domain decomposition method gives a 7x reduction in domain size while improving accuracy.