In inverse problems, we often have access to data consisting of paired samples (x,y)∼pX,Y(x,y), where y are partial observations of a physical system and x represents the unknowns of the problem. Under these circumstances, we can employ supervised training to learn a solution x and its uncertainty from the observations y. We refer to this setting as the “supervised” case. However, the data y∼pY(y) collected at one point may be distributed differently from observations y′∼p′Y(y′) that are relevant for a new set of problems. In the context of Bayesian inference, we propose a two-step scheme, which makes use of normalizing flows and joint data to train a conditional generator qθ(x|y) that approximates the target posterior density pX|Y(x|y). Additionally, this preliminary phase provides a density function qθ(x|y), which can be recast as a prior for the “unsupervised” problem, e.g. when only the observations y′∼p′Y(y′), a likelihood model y′|x′, and a prior on x′ are known. We then train another invertible generator with output density q′ϕ(x|y′) specifically for y′, allowing us to sample from the posterior p′X|Y(x|y′). We present synthetic results that demonstrate a considerable training speedup when the pretrained network qθ(x|y′) is reused as a warm start, or preconditioning, for approximating p′X|Y(x|y′), rather than training from scratch. This training modality can be interpreted as an instance of transfer learning. This result is particularly relevant for large-scale inverse problems that rely on expensive numerical simulations.
Deep learning techniques have recently benefited inverse problems where the unknowns defining the state of a physical system and the related observations are jointly available as solution-data paired samples (see, for example, Adler and Öktem, 2017). Throughout the text, we refer to this setting as the “supervised” case. Supervised learning can be readily applied by training a deep network to map the observations to the respective solution, often leading to competitive alternatives to solvers that are purely based on a physical model for the data likelihood (e.g. PDEs) and prior (handcrafted) regularization. Unfortunately, for many inverse problems such as seismic or optoacoustic imaging, data is scarce due to acquisition costs, processing is computationally complex because of numerical simulation, and the physical parameters of interest cannot be directly verified. Furthermore, as in the seismic case, the vast diversity of geological scenarios is bound to impact the generalization capacity of the learned model. For this type of problem, supervised methods still have limited scope compared to more traditional “unsupervised” approaches, e.g. where observations pertaining to a single unknown are available, a data model and prior are postulated, and generalization errors do not affect the results. Note that recent work has found an application for deep networks even in the unsupervised setting, as a reparameterization of the unknowns and an implicit regularizing prior that constrains the solution to the network range (deep prior; Bora et al., 2017; Ulyanov et al., 2020; Herrmann et al., 2019; Siahkoohi et al., 2020a, 2020b, 2020c). Unless the network has been adequately pretrained, however, the deep prior approach does not offer computational advantages.
In practice, as is often the case in seismic or medical imaging, some legacy joint data might be available for supervised learning, while we may be interested in solving a problem related to new observations, which are expected to come from a moderate perturbation of the legacy (marginal) distribution. In this work, we are interested in combining the supervised and unsupervised settings described above by exploiting the supervised result as a way to accelerate the computation of the solution of the unsupervised problem. Clearly, this is all the more relevant when we wish to quantify the uncertainty of the proposed solution.
This paper exploits conditional normalizing flows (Kruse et al., 2019; Winkler et al., 2019) as a way to encapsulate the joint distribution of observations and solutions for an inverse problem, as well as the posterior distribution of the solutions given data. Recent advancements have made available invertible flows that allow analytic computation of such posterior densities. We therefore propose a general two-step scheme consisting of: (i) learning a generative model from many (data, solution) pairs; (ii) given some new observations, solving for the associated posterior distribution based on a data likelihood model and a prior density (possibly including the one obtained in step (i)).
Normalizing flow generative models are the cornerstone of our proposal, since they can be trained with likelihood-based objectives and are not subject to mode collapse. Many invertible layers and architectures are described in Dinh et al. (2014), Dinh et al. (2016), Kingma and Dhariwal (2018), and Kruse et al. (2019). A fundamental aspect for their application to large-scale imaging problems is a memory complexity that remains constant as a function of the network depth. Examples for seismic imaging can be found in Peters et al. (2020) and Rizzuti et al. (2020), and for medical imaging in Putzky and Welling (2019). In this paper, we focus on uncertainty quantification for inverse problems, and we are therefore interested in the conditional flows described in Kruse et al. (2019) as a way to capture posterior probabilities (see also Winkler et al., 2019).
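To make these building blocks concrete, below is a minimal sketch of an affine coupling layer in the spirit of Dinh et al. (2016), written in PyTorch. It is purely illustrative and not the architecture used in this work; the layer width and the scale/shift network are arbitrary choices. Because the map is exactly invertible, intermediate activations can be recomputed from the outputs during backpropagation instead of being stored, which is what yields the constant memory footprint in depth mentioned above.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Minimal affine coupling layer: invertible, with tractable log-determinant."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        # small network predicting log-scale and shift for the second half of the input
        self.net = nn.Sequential(
            nn.Linear(self.d, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.d)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        log_s, t = self.net(x1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)              # keep scales bounded for stability
        z2 = x2 * torch.exp(log_s) + t
        logdet = log_s.sum(dim=1)              # log|det J| of the transformation
        return torch.cat([x1, z2], dim=1), logdet

    def inverse(self, z):
        z1, z2 = z[:, :self.d], z[:, self.d:]
        log_s, t = self.net(z1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)
        x2 = (z2 - t) * torch.exp(-log_s)
        return torch.cat([z1, x2], dim=1)
```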
Bayesian inference cast as a variational problem is a computationally attractive alternative to sampling based on Markov chain Monte Carlo (MCMC) methods. Of particular relevance for our work, Parno and Marzouk (2018) formulate transport-based maps as non-Gaussian proposal distributions in the context of the Metropolis-Hastings algorithm. The aim is to accelerate MCMC by adaptively fine-tuning the proposals to the target density as samples are iteratively produced by the chain. The idea of preconditioning MCMC in Parno and Marzouk (2018) directly inspires the approach proposed in this work. Another relevant work involving MCMC is Peherstorfer and Marzouk (2018), where the transport maps are constructed from a low-fidelity version of the original problem, thus yielding computational advantages. The supervised step of our approach can also, in principle, be replaced by a low-fidelity problem. The method proposed in this paper, however, does not make use of MCMC.
We start this section by summarizing the uncertainty quantification method presented in Kruse et al. (2019) for the supervised scenario, where paired samples (xi,yi)∼pX,Y(x,y) (coming from the joint unknown/data distribution) are available. We assume that an underlying physical modeling operator exists, which defines the likelihood model pY|X(y|x) through y=F(x)+n, where n is a random variable representing noise. The goal is to learn a conditional normalizing flow
T:X×Y→Zx×Zy, which maps joint samples (x,y) to latent variables (zx,zy) that are approximately distributed according to a standard normal density.
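As a rough sketch of how such a flow is typically trained with a likelihood-based objective, one can push joint samples through T and penalize deviations of the latent variables from a standard normal density, corrected by the log-determinant of the Jacobian. The `CondFlow` class, the optimizer settings, and the data loader below are placeholders, not the implementation used in this work.

```python
import torch

def supervised_loss(flow, x, y):
    # maximum-likelihood objective: push (x, y) to latent space and penalize
    # deviation from N(0, I), corrected by the log-determinant of the Jacobian
    zx, zy, logdet = flow(x, y)
    gaussian_term = 0.5 * (zx.pow(2).sum(dim=1) + zy.pow(2).sum(dim=1))
    return (gaussian_term - logdet).mean()

# flow = CondFlow(...)                               # invertible network T (placeholder)
# opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
# for x, y in dataloader:                            # (x_i, y_i) ~ p_{X,Y}
#     opt.zero_grad()
#     supervised_loss(flow, x, y).backward()
#     opt.step()
```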
For the problem in Equation (7), we can initialize the network S=S0 randomly. However, if we expect the supervised problem (2) and the unsupervised counterpart (7) to be related, we can reuse the supervised result T as a warm start for S, e.g.
S0(zx)=πX∘T−1(zx,zy),
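A sketch of what this warm start could look like in code is given below: the network S is initialized as a copy of the pretrained flow T, and its inverse, restricted to the x-component, acts as the generator. The interface (an `inverse` method that also returns the log-determinant), the fixed latent code `zy` associated with the new data, the forward operator `F`, the noise level `sigma`, and the prior `log_prior` are all assumptions made for illustration; the objective shown is a generic variational (reverse Kullback-Leibler) loss and is not meant to reproduce Equation (7) verbatim.

```python
import copy

def warm_start(pretrained_T):
    # S_0 inherits the weights of the supervised flow T instead of a random initialization
    return copy.deepcopy(pretrained_T)

def unsupervised_loss(S, zx, zy, y_prime, F, sigma, log_prior):
    # S_0(z_x) = pi_X o T^{-1}(z_x, z_y): invert the (warm-started) flow and keep the x-component
    x, logdet = S.inverse(zx, zy)
    # generic variational objective: data misfit + negative log-prior - log|det J|,
    # averaged over latent samples z_x ~ N(0, I)
    misfit = 0.5 / sigma**2 * (F(x) - y_prime).pow(2).sum(dim=1)
    return (misfit - log_prior(x) - logdet).mean()
```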
In this section, we present synthetic examples aimed at verifying the speed-up anticipated from the two-step preconditioning. The first example is a low-dimensional problem where the posterior density can be calculated analytically, against which we can verify our solution. The second example constitutes a preliminary assessment for the type of inverse problem applications we are mostly interested in, e.g. seismic or optoacoustic imaging.
Here, we consider unknowns x∈RNx with Nx=12. The prior density pX=pX(x) is a normal distribution pX=N(μx,Σx) with μx=1 (a vector of ones) and Σx=diag(1,2,…,12). Observations are y∈RNy with Ny=6, and we consider the following likelihood model pY|X=pY|X(y|x):
y=Ax+ε,ε∼N(με,Σε).
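Since the prior and noise are Gaussian and the modeling operator is linear, the posterior has a closed form, which is what allows us to validate the learned density. A small NumPy sketch of the standard conjugate-Gaussian formulas is shown below; the entries of A and the noise statistics are hypothetical placeholders.

```python
import numpy as np

def gaussian_posterior(A, y, mu_x, Sigma_x, mu_eps, Sigma_eps):
    # closed-form posterior of x | y for y = A x + eps with Gaussian prior and noise:
    #   Sigma_post = (Sigma_x^{-1} + A^T Sigma_eps^{-1} A)^{-1}
    #   mu_post    = Sigma_post (Sigma_x^{-1} mu_x + A^T Sigma_eps^{-1} (y - mu_eps))
    Sx_inv = np.linalg.inv(Sigma_x)
    Se_inv = np.linalg.inv(Sigma_eps)
    Sigma_post = np.linalg.inv(Sx_inv + A.T @ Se_inv @ A)
    mu_post = Sigma_post @ (Sx_inv @ mu_x + A.T @ Se_inv @ (y - mu_eps))
    return mu_post, Sigma_post

# dimensions as in the text; A and the noise statistics are placeholders
Nx, Ny = 12, 6
mu_x, Sigma_x = np.ones(Nx), np.diag(np.arange(1.0, Nx + 1))
A = np.random.randn(Ny, Nx)
mu_eps, Sigma_eps = np.zeros(Ny), 0.1 * np.eye(Ny)
x_true = np.random.multivariate_normal(mu_x, Sigma_x)
y = A @ x_true + np.random.multivariate_normal(mu_eps, Sigma_eps)
mu_post, Sigma_post = gaussian_posterior(A, y, mu_x, Sigma_x, mu_eps, Sigma_eps)
```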
Let us now assume that new observations y′ have been collected. These observations are generated according to
y′=Ax′+ε′,ε′∼N(με′,Σε′)
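Because the perturbed problem is still linear-Gaussian, the samples produced by the trained network for y′ can be checked against the analytic posterior in the same way, for instance by comparing sample statistics. In the sketch below, the `samples` array and the primed noise statistics are placeholders, and `gaussian_posterior` refers to the helper sketched above.

```python
# samples: array of shape (n_samples, Nx) drawn from the trained generator for y'
mu_post_p, Sigma_post_p = gaussian_posterior(A, y_prime, mu_x, Sigma_x,
                                             mu_eps_p, Sigma_eps_p)
mean_gap = np.linalg.norm(samples.mean(axis=0) - mu_post_p)
std_gap = np.linalg.norm(samples.std(axis=0) - np.sqrt(np.diag(Sigma_post_p)))
```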
The uncertainty quantification results for the supervised (11) and unsupervised (12) problems are compared in Figure 1.
We study the convergence history of unsupervised training with and without warm start, as described in Equation (9). The plot in Figure 2 clearly shows the computational advantage of the warm-start approach.
Now we consider the denoising problem for 2D “seismic” images x, selected from the processed 3D seismic survey reported in Veritas (2005) and WesternGeco (2012). The dataset is obtained by selecting 2D patches from the original 3D volume, which are then subsampled to obtain 64×64-pixel images. The dataset is normalized.
Observations y are obtained by simply adding noise:
y=x+ε,ε∼N(με,Σε)
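A sketch of how such training pairs might be assembled from a 3D volume is given below; the patch size, stride, per-patch normalization, and noise statistics are assumptions made for illustration, since only the 64×64 output size, subsampling, and normalization are specified in the text.

```python
import numpy as np

def make_pairs(volume, patch=128, stride=128, out=64, noise_std=0.1, seed=0):
    # volume: 3D array of seismic amplitudes; extract 2D patches, subsample them
    # to out x out pixels, normalize, and add Gaussian noise to form (x, y) pairs
    rng = np.random.default_rng(seed)
    step = patch // out
    xs, ys = [], []
    for sl in volume:                                   # loop over 2D slices
        for i in range(0, sl.shape[0] - patch + 1, stride):
            for j in range(0, sl.shape[1] - patch + 1, stride):
                x = sl[i:i + patch:step, j:j + patch:step]
                x = (x - x.mean()) / (x.std() + 1e-8)   # per-patch normalization (assumed)
                y = x + noise_std * rng.standard_normal(x.shape)   # y = x + eps
                xs.append(x)
                ys.append(y)
    return np.stack(xs), np.stack(ys)
```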
We now turn to the unsupervised problem defined by the observation likelihood
y′=Ax′+ε′,ε′∼N(με′,Σε′)
Again, comparing the loss decay during training for two different instances of the unsupervised problem in Figure 4 makes clear that a considerable speed-up is obtained with the warm-start strategy relative to training a randomly initialized invertible network.
Moreover, despite the relatively high number of iterations run during training (∼20000), the network initialized from scratch does not produce a result comparable to the ground truth. This can be seen by comparing the ground truth in Figure 5a with the conditional mean in Figure 5e, which corresponds to the posterior distribution obtained by training a network from scratch. The comparison with the ground truth is much more favorable for the conditional mean obtained with warm-start training, shown in Figure 5c. Pointwise standard deviations for these different training modalities can be inspected in Figures 5d (warm start) and 5f (without warm start). The results discussed above correspond to the loss function depicted in Figure 4a. Analogous results for a different realization of the unsupervised problem, whose loss function is shown in Figure 4b, are presented in Figure 6.
We presented a preconditioning scheme for uncertainty quantification, particularly aimed at inverse problems characterized by computationally expensive numerical simulations based on PDEs (including, for example, seismic or optoacoustic imaging). We consider the setting where legacy supervised data is available, and we want to solve a new inverse problem given some out-of-distribution observations. The scheme takes advantage of a preliminary step where the joint distribution of solutions and related observations is learned via supervised training. This joint distribution is then employed to precondition the unsupervised inverse problem. In both the supervised and unsupervised cases, we make use of conditional normalizing flows to limit computational and memory complexity (fundamental for large 3D applications), and to encode the approximate posterior density analytically. In this way, the posterior density obtained from the supervised problem can be reused as a new prior for the unsupervised problem.
The synthetic experiments confirm that the preconditioning scheme accelerates unsupervised training considerably. The examples considered here are encouraging for seismic and optoacoustic imaging applications, but additional challenges are expected at larger scales due to the high dimensionality of the solution and observation spaces and the expensive wave-equation solvers involved.
Adler, J., and Öktem, O., 2017, Solving ill-posed inverse problems using iterative deep neural networks: Inverse Problems, 33, 124007. doi:10.1088/1361-6420/aa9581
Bora, A., Jalal, A., Price, E., and Dimakis, A. G., 2017, Compressed sensing using generative models.
Dinh, L., Krueger, D., and Bengio, Y., 2014, NICE: Non-linear independent components estimation.
Dinh, L., Sohl-Dickstein, J., and Bengio, S., 2016, Density estimation using Real NVP.
Herrmann, F. J., Siahkoohi, A., and Rizzuti, G., 2019, Learned imaging with constraints and uncertainty quantification: Neural Information Processing Systems (NeurIPS) 2019 Deep Inverse Workshop. Retrieved from https://arxiv.org/pdf/1909.06473.pdf
Kingma, D. P., and Dhariwal, P., 2018, Glow: Generative Flow with Invertible 1x1 Convolutions.
Kruse, J., Detommaso, G., Scheichl, R., and Köthe, U., 2019, HINT: Hierarchical Invertible Neural Transport for Density Estimation and Bayesian Inference.
Parno, M. D., and Marzouk, Y. M., 2018, Transport Map Accelerated Markov Chain Monte Carlo: SIAM/ASA Journal on Uncertainty Quantification, 6, 645–682. doi:10.1137/17M1134640
Peherstorfer, B., and Marzouk, Y., 2018, A transport-based multifidelity preconditioner for Markov chain Monte Carlo.
Peters, B., Haber, E., and Lensink, K., 2020, Fully reversible neural networks for large-scale surface and sub-surface characterization via remote sensing: ArXiv Preprint ArXiv:2003.07474.
Putzky, P., and Welling, M., 2019, Invert to Learn to Invert.
Rizzuti, G., Siahkoohi, A., Witte, P. A., and Herrmann, F. J., 2020, Parameterizing uncertainty by deep invertible networks, an application to reservoir characterization: SEG technical program expanded abstracts 2020. Retrieved from https://arxiv.org/pdf/2004.07871.pdf
Siahkoohi, A., Louboutin, M., and Herrmann, F. J., 2019, The importance of transfer learning in seismic modeling and imaging: Geophysics, 84, A47–A52. doi:10.1190/geo2019-0056.1
Siahkoohi, A., Rizzuti, G., and Herrmann, F. J., 2020a, A deep-learning based bayesian approach to seismic imaging and uncertainty quantification: 82nd eAGE conference and exhibition 2020. Retrieved from https://arxiv.org/pdf/2001.04567.pdf
Siahkoohi, A., Rizzuti, G., and Herrmann, F. J., 2020b, Uncertainty quantification in imaging and automatic horizon tracking—a bayesian deep-prior based approach: SEG technical program expanded abstracts 2020. Retrieved from https://arxiv.org/pdf/2004.00227.pdf
Siahkoohi, A., Rizzuti, G., and Herrmann, F. J., 2020c, Weak deep priors for seismic imaging: SEG technical program expanded abstracts 2020. Retrieved from https://arxiv.org/pdf/2004.06835.pdf
Ulyanov, D., Vedaldi, A., and Lempitsky, V., 2020, Deep image prior: International Journal of Computer Vision. doi:10.1007/s11263-020-01303-4
Veritas, 2005, Parihaka 3D Marine Seismic Survey - Acquisition and Processing Report: No. New Zealand Petroleum Report 3460. New Zealand Petroleum & Minerals, Wellington.
WesternGeco, 2012, Parihaka 3D PSTM Final Processing Report: No. New Zealand Petroleum Report 4582. New Zealand Petroleum & Minerals, Wellington.
Winkler, C., Worrall, D., Hoogeboom, E., and Welling, M., 2019, Learning likelihoods with conditional normalizing flows.
Yosinski, J., Clune, J., Bengio, Y., and Lipson, H., 2014, How transferable are features in deep neural networks? In Proceedings of the 27th international conference on neural information processing systems (pp. 3320–3328). Retrieved from http://dl.acm.org/citation.cfm?id=2969033.2969197