Self-attention summary networks for subsurface velocity-model building from common-image gathers

Title: Self-attention summary networks for subsurface velocity-model building from common-image gathers
Publication Type: Unpublished
Year of Publication: 2026
Authors: S. Zeng, Yunlin Zeng, Abhinav Prakash Gahlot, Z. Deng, Felix J. Herrmann
Month: March
Keywords: attention, Bayesian inference, CIG, deep learning, diffusion models, FWI, IMAGE, imaging, inverse problems, learned summary statistics, RTM, SEG, uncertainty calibration, uncertainty quantification, velocity model building, WISE
Abstract

Common-image gathers (CIGs) contain physically meaningful information about velocity-model errors through reflector focusing and residual moveout, but in conventional imaging workflows they are typically used only as diagnostic tools. In this work, we propose a multiscale self-attention summary network that maps high-dimensional 3D CIG volumes into compact conditioning embeddings for probabilistic subsurface velocity inversion. These learned embeddings preserve offset-dependent kinematic structure and spatial coherence while reducing variability caused by background-velocity mismatch. Conditioned on these summary embeddings, a flow-matching model learns a transport from a Gaussian source distribution to the posterior distribution of plausible velocity fields. Numerical experiments show that, compared with direct conditioning on raw CIGs, the proposed summary network improves posterior velocity inference. In particular, the multiscale attention design provides greater robustness to background-model mismatch, yielding more accurate posterior reconstructions and lower predictive uncertainty.
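To make the summary-network idea concrete, below is a minimal, illustrative NumPy sketch of the core mechanism the abstract describes: panels of a CIG volume are patchified at two scales, the multiscale tokens interact through scaled dot-product self-attention, and the result is pooled into a compact embedding suitable for conditioning a generative model. All weights here are random and untrained, and the patch sizes, dimensions, and function names are hypothetical choices for illustration, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def patchify(panel, patch):
    """Flatten non-overlapping patch-by-patch blocks of a 2D panel into tokens."""
    D, X = panel.shape
    return np.stack([panel[i:i + patch, j:j + patch].ravel()
                     for i in range(0, D, patch)
                     for j in range(0, X, patch)])

dim, embed_dim = 16, 8
# Hypothetical fixed (untrained) weights; a real summary network would learn these.
proj = {p: rng.standard_normal((p * p, dim)) / p for p in (4, 8)}  # two scales
Wq, Wk, Wv = (rng.standard_normal((dim, dim)) / 4 for _ in range(3))
Wout = rng.standard_normal((dim, embed_dim)) / 4

def summarize(cig):
    """Map a CIG volume (offsets, depth, x) to a compact embedding (embed_dim,)."""
    panel_embs = []
    for panel in cig:  # one panel per subsurface offset
        # Multiscale tokens: coarse and fine patches share one attention pass.
        tokens = np.concatenate([patchify(panel, p) @ proj[p] for p in (4, 8)])
        Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
        attended = softmax(Q @ K.T / np.sqrt(dim)) @ V  # self-attention
        panel_embs.append(attended.mean(axis=0))        # pool over tokens
    return np.mean(panel_embs, axis=0) @ Wout           # pool offsets, project

cig = rng.standard_normal((5, 32, 32))  # 5 offsets, 32x32 depth-by-x panels
emb = summarize(cig)
print(emb.shape)  # (8,)
```

In the paper's workflow, an embedding like `emb` (rather than the raw, high-dimensional CIG volume) would condition the flow-matching model that transports Gaussian samples to the posterior over velocity fields; compressing the gathers this way is what reduces sensitivity to background-velocity mismatch.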

URL: https://slim.gatech.edu/Publications/Public/Submitted/2026/zeng2026IMAGEssn/abstract.html
Citation Key: zeng2026IMAGEssn