SINBAD Consortium Meeting - 2017 Fall - Program

2017 SINBAD Consortium meeting

Download bundled slides — all talks — HERE

Tuesday October 3, Houston, DUG, 16200 Park Row Drive, Suite 100

08:00—08:30 AM Registration and coffee & pastries
08:30—09:00 AM Felix J. Herrmann [Welcome & overview of the meeting] — SLIDES.MOV
Impact of Compressive Sensing on Seismic Data Acquisition & Processing — boosting the economics & time-lapse repeatability from fewer non-replicated data (Chair: Rajiv Kumar)
09:00—09:30 AM Chuck Mosher What Happened: How we implemented CSI with help from SINBAD
09:30—10:00 AM Oscar Lopez A Guide for Successful Low-Rank Matrix Recovery in Seismic Applications — SLIDES.MOV
10:00—10:15 AM Coffee Break
10:15—10:45 AM Ali M. Alfaraj Reconstruction of S-waves from low-cost randomized acquisition — SLIDES.MOV
10:45—11:15 AM Chengbo Li Alternating Direction Method and its role in CSI technology
11:15—11:45 AM Rajiv Kumar Full-azimuth seismic data processing w/ coil acquisition — SLIDES.MOV
11:45—12:15 PM Felix J. Herrmann Highly repeatable 3D compressive full-azimuth towed-streamer time-lapse acquisition — a numerical feasibility study at scale — SLIDES.MOV
12:15—12:30 PM Discussion
12:30—01:30 PM Lunch
Extreme-scale matrix factorizations — making the impossible possible w/ randomized probing (Chair: Marie Graff)
01:30—02:15 PM Oscar Lopez Matrix Completion in Parallel Architectures: Julia Implementation — SLIDES.MOV
02:15—02:45 PM Yiming Zhang Massive seismic data compression & recovery w/ on-the-fly data extraction — SLIDES.MOV
02:45—03:15 PM Ali Siahkoohi Seismic data interpolation with Generative Adversarial Networks — SLIDES.MOV
03:15—03:30 PM Coffee Break
03:30—04:00 PM Rajiv Kumar Multi-domain target-oriented imaging using extreme-scale matrix factorization — SLIDES.MOV
04:00—04:30 PM Marie Graff Low-rank representation of omnidirectional subsurface extended image volumes — SLIDES.MOV
04:30—05:00 PM Discussion
Table 1: Program for Tuesday October 3 of the 2017 SINBAD Consortium meeting

Wednesday October 4, Houston, DUG, 16200 Park Row Drive, Suite 100

08:30—09:00 AM Registration and coffee & pastries
Wave-equation based Imaging, Inversion, and Uncertainty Quantification — tackling artifacts, noise, lack of convergence speed & parasitic minima (Chair: Philipp Witte)
09:00—09:30 AM Mengmeng Yang Imaging with multiples in shallow water — SLIDES.MOV
09:30—10:00 AM Emmanouil Daskalakis Stochastic Optimization from the perspective of dynamical systems — SLIDES.MOV
10:00—10:15 AM Coffee Break
10:15—10:45 AM Mathias Louboutin Data driven Gradient Sampling for seismic inversion — SLIDES.MOV
10:45—11:15 AM Zhilong Fang PDE-free Gauss-Newton Hessian for Wavefield Reconstruction Inversion — SLIDES.MOV
11:15—11:45 AM Shashin Sharan Tracking the spatial-temporal evolution of fractures by microseismic source collocation — SLIDES.MOV
11:45—12:15 PM Felix J. Herrmann Noise robust and time-domain formulations of Wavefield Reconstruction Inversion — SLIDES.MOV
12:15—12:30 PM Discussion
12:30—01:30 PM Lunch
Extremely performant at-scale Wave Equation-Based Inversion — managing complexity while increasing performance (Chair: Bas Peters)
01:30—02:15 PM Bas Peters Algorithms and Julia software for FWI with multiple constraints — SLIDES.MOV
02:15—03:00 PM Philipp Witte A large-scale framework in Julia for fast prototyping of seismic inversion algorithms — SLIDES.MOV
03:00—03:30 PM Mathias Louboutin Latest developments in Devito — SLIDES.MOV
03:30—03:45 PM Discussion
03:45—04:15 PM Coffee Break
04:15—05:15 PM Steering committee meeting with SINBAD (Consortium members only)
04:30—08:30 PM Dinner @ Watson’s House of Ales [Google maps], 14656 Grisby Rd
Table 2: Program for Wednesday October 4 of the 2017 SINBAD Consortium meeting

Abstracts

Impact of Compressive Sensing on Seismic Data Acquisition & Processing — boosting the economics & time-lapse repeatability from fewer non-replicated data

Over recent years we have seen a definite uptake of Compressive Sensing technology in seismic data acquisition, culminating in a special section in the current issue (August 2017) of The Leading Edge entitled “Impact of compressive sensing on seismic data acquisition and processing”. As a group, we are grateful for having had the chance to help push for this innovation, and this session is devoted to showcasing the latest developments of this exciting breakthrough. Specifically, we have two outside speakers from ConocoPhillips, who will speak about their experience applying Compressive Sensing in the field, followed by presentations on criteria that guarantee successful recovery, the reconstruction of S-waves, full-azimuth processing of coil data, and at-scale time-lapse acquisition.


What Happened: How we implemented CSI with help from SINBAD

Chuck Mosher (ConocoPhillips)

Abstract. Chuck tells the tale of how ConocoPhillips built their Compressive Seismic Imaging platform, leveraging research, technology, and personal insights from Felix and his team at SINBAD.


A Guide for Successful Low-Rank Matrix Recovery in Seismic Applications

Oscar Lopez (fourth year PhD)

Abstract. This talk presents recent results in the theory of low-rank matrix recovery as heuristics for seismic practitioners. In the theory of matrix completion, we discuss the spectral gap as a means to quantify how successful a given subsampling scheme will be for trace interpolation. Additionally, we consider previously proposed random sampling techniques and develop conditions on the sampling distribution that guarantee successful low-rank matrix recovery. The results apply to time-jittered acquisition, off-the-grid trace interpolation, and source separation for simultaneous towed-streamer marine acquisition. Taken together, the talk provides practical instruments that help design acquisition schemes in favor of rank-penalization techniques.
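
As a rough illustration of the spectral-gap heuristic, the Julia sketch below scores random subsampling masks by the gap 1 - sigma_2/sigma_1 of the 0/1 mask; the exact normalization and acceptance criterion used in the talk may differ, so treat this purely as a toy.

```julia
# Toy illustration (not the talk's exact criterion): score a sub-sampling
# mask by its spectral gap, here taken as 1 - sigma_2/sigma_1 of the 0/1 mask.
# Under this heuristic, a larger gap suggests the scheme is better suited
# for rank-penalized trace interpolation.
using LinearAlgebra, Random

# nrec-by-nsrc mask that keeps a fraction p of the traces, chosen uniformly at random
function random_mask(nrec, nsrc, p)
    mask = zeros(nrec, nsrc)
    keep = randperm(nrec * nsrc)[1:round(Int, p * nrec * nsrc)]
    mask[keep] .= 1.0
    return mask
end

spectral_gap(mask) = (s = svdvals(mask); 1 - s[2] / s[1])

Random.seed!(1)
println("75% of traces kept: gap = ", round(spectral_gap(random_mask(200, 200, 0.75)), digits=3))
println("25% of traces kept: gap = ", round(spectral_gap(random_mask(200, 200, 0.25)), digits=3))
```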


Reconstruction of S-waves from low-cost randomized acquisition

Ali M. Alfaraj (second year PhD)

Abstract. Because shear waves travel more slowly than compressional waves, they require finer spatial sampling to be recorded properly according to the Nyquist sampling criterion. To avoid higher acquisition costs and to utilize the multicomponent data to its full extent, we propose acquiring randomly undersampled ocean-bottom seismic data. We present two up- and down-going shear-wave reconstruction methods: (i) rank-minimization reconstruction followed by elastic wavefield decomposition, and (ii) sparsity-promoting joint interpolation-decomposition using all the multicomponent data in one optimization problem.

This is joint work with Rajiv Kumar.


Alternating Direction Method and its role in CSI technology

Chengbo Li (ConocoPhillips)

Abstract. Chengbo Li will talk about how to solve compressive sensing problems with the alternating direction method (ADM) and the role ADM plays in CSI technology.
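
For readers unfamiliar with ADM-type solvers, here is a minimal, self-contained Julia sketch (not ConocoPhillips' implementation) of alternating-direction splitting applied to an l1-regularized least-squares problem of the kind that arises in CS reconstruction; the operator, penalty, and parameters are illustrative only.

```julia
# Minimal ADMM / alternating-direction sketch for
#     min_x  0.5*||A*x - b||^2 + lam*||x||_1,
# meant only to illustrate the style of variable splitting used in ADM solvers.
using LinearAlgebra, Random

soft(v, t) = sign.(v) .* max.(abs.(v) .- t, 0)   # soft-thresholding (prox of l1)

function admm_l1(A, b; lam=0.1, rho=1.0, iters=200)
    n = size(A, 2)
    x = zeros(n); z = zeros(n); u = zeros(n)
    F = cholesky(Hermitian(A' * A + rho * I))    # factor once, reuse every iteration
    Atb = A' * b
    for _ in 1:iters
        x = F \ (Atb .+ rho .* (z .- u))         # quadratic subproblem
        z = soft(x .+ u, lam / rho)              # l1 subproblem (closed form)
        u = u .+ x .- z                          # dual (multiplier) update
    end
    return z
end

# Tiny synthetic test: recover a sparse vector from few noisy measurements.
Random.seed!(0)
n, m, k = 200, 80, 5
x_true = zeros(n); x_true[randperm(n)[1:k]] .= randn(k)
A = randn(m, n) / sqrt(m)
b = A * x_true + 0.01 * randn(m)
x_rec = admm_l1(A, b; lam=0.02)
println("relative error: ", norm(x_rec - x_true) / norm(x_true))
```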


Full-azimuth seismic data processing w/ coil acquisition

Rajiv Kumar (postdoc)

Abstract. In this work, we demonstrate the performance of our in-house 5D low-rank interpolation method on seismic data acquired with coil-shooting full-azimuth acquisition. We show that we can recover fully interpolated full-azimuth data from highly subsampled data, where the subsampling ratio is 4%. This is the first time we have tested our interpolation ideas on data from a real 3D marine seismic acquisition. Our findings show that we can avoid the general practice of windowing the data while performing the interpolation, especially within a rank-minimization based framework.


Highly repeatable 3D compressive full-azimuth towed-streamer time-lapse acquisition — a numerical feasibility study at scale

Felix J. Herrmann

Abstract. Most conventional 3D time-lapse (or 4D) acquisitions are ocean-bottom cable (OBC) or ocean-bottom node (OBN) surveys, since these surveys are relatively easy to replicate compared to towed-streamer surveys. To attain high degrees of repeatability, survey replication and dense periodic sampling have become the norm for 4D surveys, which renders this technology expensive. Conventional towed-streamer acquisitions suffer from limited illumination of the subsurface due to their narrow azimuth. Although acquisition techniques such as multi-azimuth, wide-azimuth, and rich-azimuth acquisition have been developed to illuminate the subsurface from all possible angles, these techniques can be prohibitively expensive for densely sampled surveys. This leads to uneven sampling, i.e., dense receiver and coarse source sampling or vice versa, in order to make these acquisitions more affordable. Motivated by the design principles of Compressive Sensing (CS), we acquire economic, randomly subsampled (or compressive) and simultaneous towed-streamer time-lapse data without the need to replicate the surveys. We recover densely sampled time-lapse data on one and the same periodic grid by using a joint-recovery model (JRM) that exploits shared information among the different time-lapse recordings, coupled with a computationally cheap and scalable rank-minimization technique. The acquisition is low cost since we work with subsampled measurements (about 70% subsampled), simulated with a simultaneous long-offset acquisition configuration of two source vessels travelling across the survey area at random azimuths. We analyze the performance of our proposed compressive acquisition and subsequent recovery strategy by conducting a synthetic, at-scale seismic experiment on a 3D time-lapse model containing geological features such as channel systems, dipping and faulted beds, unconformities, and a gas cloud. Our findings indicate that the insistence on replicability between surveys and the need for OBC/OBN 4D surveys can, perhaps, be relaxed. Moreover, this is a natural next step beyond the successful CS acquisition examples discussed during this session.
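
To make the joint-recovery model concrete, the toy Julia sketch below builds the JRM block system for two non-replicated vintages; sizes are arbitrary, the sampling operators are simple restrictions, and the rank-minimization solver used in the actual study is not shown.

```julia
# Schematic joint-recovery model (JRM) for two vintages: each survey only
# observes y_i = A_i*(z0 + z_i), with z0 the component shared between
# baseline and monitor. Toy 1D sizes; the real study works on 3D data
# volumes and inverts this system with a scalable rank-minimization solver.
using LinearAlgebra, SparseArrays, Random

Random.seed!(0)
n = 400                                   # unknowns per vintage (toy)
restriction(idx, n) = sparse(1:length(idx), idx, ones(length(idx)), length(idx), n)

# Non-replicated random subsampling: each vintage keeps different traces.
A1 = restriction(sort(randperm(n)[1:160]), n)
A2 = restriction(sort(randperm(n)[1:160]), n)

# Common component z0 plus small time-lapse differences z1, z2.
z0, z1, z2 = randn(n), 0.05 * randn(n), 0.05 * randn(n)
y = [A1 * (z0 + z1); A2 * (z0 + z2)]      # the two observed vintages

# JRM block operator: [A1 A1 0; A2 0 A2] acting on [z0; z1; z2].
Z = spzeros(size(A1, 1), n)
JRM = [A1 A1 Z; A2 Z A2]
println(norm(JRM * [z0; z1; z2] - y))     # ~0: the block system reproduces the data

# Recovering [z0; z1; z2] from y (with rank/sparsity penalties) then yields the
# baseline z0 + z1 and the monitor z0 + z2 on one and the same dense grid.
```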


Extreme-scale matrix factorizations — making the impossible possible w/ randomized probing

Aside from being instrumental for compressive seismic data acquisition, matrix factorizations and random probing also play a key role in the formation of full subsurface-offset image volumes. During this session, we will discuss the latest developments on parallel solvers based on alternating least-squares; on-the-fly creation of shot records from the “LR-factors” w/o the need to form the full data volumes; multi-domain targeted imaging w/ factored full subsurface-offset image volumes; and low-rank representations of full subsurface-offset image volumes.
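
As a minimal illustration of the "on-the-fly" idea mentioned above, the Julia fragment below reads one shot record directly off assumed low-rank factors L and R without ever forming the dense data matrix; the sizes and the rank are arbitrary toy values.

```julia
# Toy illustration of on-the-fly shot extraction from low-rank ("LR") factors:
# the ns-by-(nr*nt) data matrix is never formed; a single shot record is read
# straight off the factors.
using LinearAlgebra, Random

Random.seed!(0)
ns, nr, nt, r = 100, 100, 250, 20
L = randn(ns, r); R = randn(nr * nt, r)    # factors stored instead of the data

# Extract shot record i without forming the full ns-by-(nr*nt) matrix.
extract_shot(L, R, i, nr, nt) = reshape(R * L[i, :], nr, nt)

shot37 = extract_shot(L, R, 37, nr, nt)
println(size(shot37))                      # one nr-by-nt shot gather

# Memory footprint of the factors versus the dense data (Float64 entries).
println("factors: ", (length(L) + length(R)) * 8, " bytes vs dense: ", ns * nr * nt * 8, " bytes")
```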

Matrix Completion in Parallel Architectures: Julia Implementation

Oscar Lopez (fourth year PhD)

Abstract. Matrix completion techniques offer potential tools for frugal seismic data acquisition, where dense acquisition is replaced by optimization. This shift of focus means that efficient numerical methods are critical for the implementation of these techniques in large-scale seismic applications. To this end, this talk modifies rank-penalization methodologies to suit parallel architectures. By adopting factorization-based alternating minimization schemes, each program can be decoupled into independent sub-problems handled in parallel. We showcase a distributed parallel execution in Julia and explore the scalability of the approach.
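
A condensed Julia sketch of the factorization-based alternating-minimization idea follows; this is not the distributed production code from the talk, but it shows why the row updates decouple into independent small least-squares problems that can be handled in parallel (threads here, rather than distributed workers).

```julia
# Sketch of alternating minimization for matrix completion, D ≈ L*R'.
# Each row update is an independent regularized least-squares problem,
# which is what makes the scheme straightforward to parallelize.
using LinearAlgebra, Random

function als_complete(D, Ω; r=10, λ=1e-3, sweeps=20)
    m, n = size(D)
    L, R = randn(m, r), randn(n, r)
    for _ in 1:sweeps
        Threads.@threads for i in 1:m                 # rows of L decouple
            cols = findall(Ω[i, :])
            Ri = R[cols, :]
            L[i, :] = (Ri' * Ri + λ * I) \ (Ri' * D[i, cols])
        end
        Threads.@threads for j in 1:n                 # rows of R decouple
            rows = findall(Ω[:, j])
            Lj = L[rows, :]
            R[j, :] = (Lj' * Lj + λ * I) \ (Lj' * D[rows, j])
        end
    end
    return L, R
end

# Toy test: a rank-5 "frequency slice" with 60% of its entries missing.
Random.seed!(1)
m, n, r = 300, 300, 5
D = randn(m, r) * randn(r, n)
Ω = rand(m, n) .< 0.4
L, R = als_complete(D .* Ω, Ω; r=r)
println("recovery error: ", norm(L * R' - D) / norm(D))
```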


Massive seismic data compression & recovery w/ on-the-fly data extraction

Yiming Zhang (second year MSc)

Abstract. Industrial seismic exploration has moved towards complex geological areas, which typically require long-offset, densely sampled data in order to avoid aliasing and inaccuracies in wave-equation based inversion algorithms. These strict requirements lead to massive data volumes and prohibitive demands on computational resources. In this work, we propose to compress dense data in the hierarchical Tucker tensor format by exploiting the low-rank structure of the data in a transformed domain. We then devise on-the-fly extraction of common shot or receiver gathers directly from the highly compressed factors. In subsampled scenarios, by interpolating in this tensor format, we can also reconstruct shot or receiver gathers on a per-query basis rather than expanding the data to its fully sampled form. We demonstrate the effectiveness of our proposed technique on 3D stochastic full-waveform inversion, where the stochastic algorithm extracts shot gathers as it requires them throughout the inversion process. Finally, we show how to efficiently generate common-image gathers (CIGs) from this compressed low-rank tensor representation of the data with the help of fast simultaneous shot or receiver gather generation.

This is joint work with Rajiv Kumar and Curt Da Silva.


Seismic data interpolation with Generative Adversarial Networks

Ali Siahkoohi (second year PhD)

Abstract. In this project we implement an algorithm to predict missing traces in seismic shot gathers. The missing traces can follow either a regular or an irregular pattern. Any interpolation scheme assumes prior knowledge about the data. Here, the prior information used to interpolate the data is obtained from the interaction of two trained deep neural networks, a generator and a discriminator. The combination of these two networks is called a Generative Adversarial Network (GAN). The GAN is trained on finely sampled seismic shot gathers. By employing the trained GAN, we can project shot gathers with missing traces onto the range of the generator network. Evaluating the generator at the projected latent code then fills in the missing traces of the original gather.
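
The Julia sketch below illustrates only the interpolation step, under the assumption that a generator has already been trained: the missing traces are filled by searching for the latent code whose generator output matches the observed traces. The tiny random two-layer "generator", the sizes, and the optimizer are placeholders so the snippet runs (it requires the Zygote package for gradients); the actual networks, training, and data are not shown.

```julia
# Fill missing traces by projecting onto the range of a (placeholder) generator:
# find the latent z minimizing the misfit on observed traces, then read G(z).
using LinearAlgebra, Random, Zygote

Random.seed!(0)
nz, nt, nx = 32, 128, 64                       # latent size, time samples, traces
W1 = randn(256, nz) / sqrt(nz)                 # random stand-in for a trained GAN
W2 = randn(nt * nx, 256) / sqrt(256)
G(z) = reshape(tanh.(W2 * tanh.(W1 * z)), nt, nx)

d_full = G(randn(nz))                          # pretend this is a dense shot gather
mask = rand(nx) .< 0.5                         # half of the traces are missing
M = reshape(Float64.(mask), 1, nx)             # row mask for broadcasting
d_obs = d_full .* M                            # observed, decimated gather

loss(z) = sum(abs2, (G(z) .- d_obs) .* M)      # misfit on observed traces only

function latent_project(loss, nz; iters=1000, lr=1e-2)
    z = randn(nz)
    for _ in 1:iters
        z -= lr * gradient(loss, z)[1]         # gradient descent on the latent code
    end
    return z
end

zopt = latent_project(loss, nz)
d_interp = G(zopt)                             # generator output fills the gaps
println("misfit on observed traces: ", loss(zopt))
```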


Multi-domain target-oriented imaging using extreme-scale matrix factorization

Rajiv Kumar (postdoc)

Abstract. In this work, we present an alternative approach to redatum both sources and receivers to depth, within the framework of reflectivity-based extended images with two-way wave propagation in the background medium. We propose a randomized-SVD based probing scheme that takes advantage of the algebraic structure of the extended imaging system to overcome the computational cost and memory usage associated with the number of wave-equation solves and the explicit storage employed by conventional migration methods. Experimental results on complex geological models demonstrate the efficacy of the proposed methodology in performing multi-domain target-oriented imaging.
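
For context, here is a generic randomized-SVD probing sketch in Julia. The matrix standing in for the extended imaging system is explicit and random here; in the talk it is only ever touched through wave-equation solves, which is precisely what the probing structure allows.

```julia
# Generic randomized SVD via probing: the target matrix E is accessed only
# through products X -> E*X and X -> E'*X (matrix-free in the real application).
using LinearAlgebra, Random

function rand_svd(mulE, mulEt, n, m; k=20, p=10)
    Ω = randn(m, k + p)             # random probing vectors (p = oversampling)
    Y = mulE(Ω)                     # sketch the range of E with k+p products
    Q = Matrix(qr(Y).Q)             # orthonormal basis for the range, n x (k+p)
    B = mulEt(Q)'                   # B = Q' * E, a small (k+p) x m matrix
    F = svd(B)
    U = Q * F.U                     # lift back: E ≈ U * Diagonal(S) * V'
    return U[:, 1:k], F.S[1:k], F.V[:, 1:k]
end

# Toy test with an explicit low-rank-plus-noise matrix standing in for E.
Random.seed!(0)
n, m, r = 500, 400, 15
E = randn(n, r) * randn(r, m) + 1e-3 * randn(n, m)
U, S, V = rand_svd(X -> E * X, X -> E' * X, n, m; k=r)
println("low-rank approximation error: ", norm(U * Diagonal(S) * V' - E) / norm(E))
```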


Low-rank representation of omnidirectional subsurface extended image volumes

Marie Graff (postdoc)

Abstract. Extended image volumes are an important migration tool in seismic exploration. However, the computation and storage of omnidirectional subsurface extended image volumes are usually prohibitive, which is why earlier solutions focused on horizontal offsets only. In our work, we consider a linear-algebra approach to the low-rank representation of extended image volumes with full offsets. We never build the resulting matrix explicitly, but only compute its action on well-chosen probing vectors, based on low-rank decomposition or randomized SVD. This representation gives us access to all the energy of the extended image volume matrix while limiting both storage and computational cost.


Wave-equation based Imaging, Inversion, and Uncertainty Quantification — tackling artifacts, noise, lack of convergence speed & parasitic minima

With the advent of faster computers and object-oriented abstractions, which allow us to unleash the power of modern-day large-scale optimization, we have been able to address fundamental issues such as the mitigation of parasitic local minima; the lack of convergence of at-scale sparsity-promoting solvers for least-squares RTM; and (time-domain) formulations of Wavefield Reconstruction Inversion itself. We start our session with time-domain sparsity-promoting RTM w/ multiples and source estimation, followed by a new scheme to improve the convergence of inversions of large, inconsistent, and ill-conditioned systems of equations. Next, we propose a novel way to deal with non-uniqueness based on Gradient Sampling, followed by the derivation of a PDE-free GN Hessian for WRI, a dual formulation for our microseismic source localization, and noise-robust and time-domain formulations of WRI.


Imaging with multiples in shallow water

Mengmeng Yang

Abstract. Based on the latest developments in inversion and optimization technology, researchers have made significant progress in the implementation of least-squares reverse-time migration (LS-RTM) of primaries. In marine data, however, these applications rely on the success of a pre-imaging separation of primaries and multiples, which can be modeled as a multi-dimensional convolution between the vertical derivative of the surface-free Green’s function and the down-going receiver wavefield. Instead of imaging the primaries and multiples separately, we implement LS-RTM of the total down-going wavefield by combining areal-source injection and linearized Born modelling, where strong surface-related multiples are generated by a strong density variation at the ocean bottom. The advantage of including surface-related multiples in LS-RTM is the extra illumination we obtain from these multiples without incurring the additional computational costs of the multi-dimensional convolutions that are part of conventional multiple-prediction procedures. Even though we avert these costs, our approach still shares the large costs of LS-RTM. We reduce these costs by combining randomized source subsampling with our sparsity-promoting imaging technology, which produces artifact-free, high-resolution images with the surface-related multiples migrated properly.


Stochastic Optimization from the perspective of dynamical systems

Emmanouil Daskalakis (postdoc)

Abstract. We present improvements to a family of methods (Linearized Bregman, Kaczmarz, and Stochastic Gradient Descent) that are often used in optimization problems. We explain the link between these methods and dynamical systems and draw from it ideas for improving their performance. We use a simple idea to improve the stability and performance of this family of optimization methods, especially for ill-posed, inconsistent, large-scale problems. Finally, we present an application to a least-squares migration problem, which highlights the importance of the suggested improvements for large-scale geophysical problems.

This is joint work with Rachel Kuske.
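
As background for the family of methods discussed above, here is a bare-bones Julia sketch of the deterministic linearized Bregman iteration with a dynamic step length; the stochastic/row-block variants and the dynamical-systems-motivated modifications from the talk are not reproduced here.

```julia
# Linearized Bregman iteration (deterministic sketch) for
#     min  λ‖x‖₁ + ½‖x‖₂²   s.t.  Ax = b.
# In practice A'*(A*x - b) is replaced by per-shot (stochastic) gradients.
using LinearAlgebra, Random

soft(z, λ) = sign.(z) .* max.(abs.(z) .- λ, 0)

function linearized_bregman(A, b; λ=0.1, iters=300)
    n = size(A, 2)
    x = zeros(n); z = zeros(n)
    for _ in 1:iters
        r = A * x - b
        g = A' * r
        t = dot(r, r) / dot(g, g)          # dynamic step length
        z -= t * g                         # gradient step on the dual variable
        x = soft(z, λ)                     # thresholding gives the primal iterate
    end
    return x
end

# Toy sparse-recovery test.
Random.seed!(2)
n, m, k = 400, 150, 10
x_true = zeros(n); x_true[randperm(n)[1:k]] .= randn(k)
A = randn(m, n) / sqrt(m)
b = A * x_true
x_rec = linearized_bregman(A, b; λ=0.05)
println("relative error: ", norm(x_rec - x_true) / norm(x_true))
```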


Data driven Gradient Sampling for seismic inversion

Mathias Louboutin (5th year PhD Student)

Abstract. We present an extension of the Gradient Sampling algorithm presented at the last EAGE in Paris. We previously showed the potential of this algorithm, which uses implicit time-shifts to represent the wavefield of a slightly perturbed velocity model. We introduce an extension where the weights of the Gradient Sampling algorithm are obtained by solving a data-driven quadratic subproblem instead of being drawn at random. The resulting update direction is a more accurate representation of the true Gradient Sampling update direction.


PDE-free Gauss-Newton Hessian for Wavefield Reconstruction Inversion

Zhilong Fang (fifth year PhD)

Abstract. In this work, we present a PDE-free Gauss-Newton Hessian for Wavefield Reconstruction Inversion. With this PDE-free Gauss-Newton Hessian, we can compute matrix-vector products without additional PDE solves. We are thus able to use the second-order Gauss-Newton method at a computational cost roughly equal to that of first-order methods such as gradient descent.


Tracking the spatial-temporal evolution of fractures by microseismic source collocation

Shashin Sharan (fourth year PhD)

Abstract. Unlike conventional reservoirs, unconventional plays are not naturally viable for economical production of oil and gas. They require stimulation by injecting high-pressure fluid, which fractures the rocks. These fractures make the medium more permeable, so that the extraction of oil and gas becomes feasible. For drilling purposes and to prevent potentially hazardous situations, we need good knowledge of the location of these fractures and of how they originated in time. Hydraulic fracturing changes the stress in the rocks, which results in the emission of microseismic waves; this stress change is mainly caused by the opening of cracks due to the high-pressure fluid injection. Therefore, microseismic events are mostly localized along these fractures and have finite energy in time. To accurately track the evolution of fractures in both space and time, we need to locate closely spaced microseismic events along these fractures that activate at very small time intervals. A naive approach is to back propagate the observed data and find the point in space and time where the back-propagated energy focuses most strongly; this point corresponds to the location and origin time of a microseismic source. This approach, although simple, suffers from low resolution and requires scanning the complete 4D volume (3D in space and 1D in time). Hence, it becomes challenging when there are multiple closely spaced microseismic sources originating at different times. In this work, we propose a sparsity-promoting method that can locate closely spaced microseismic events, with spatial separation as small as half a wavelength, activating at small time intervals. We simultaneously estimate the origin times of the microseismic events by estimating their source-time functions. Our method exploits the fact that microseismic events are localized in space and have finite energy. We use an accelerated Linearized Bregman algorithm with a preconditioning operator to arrive at a computationally feasible scheme.

This is joint work with Rongrong Wang.


Noise robust and time-domain formulations of Wavefield Reconstruction Inversion

Felix J. Herrmann

Abstract. We propose a wave-equation-based subsurface inversion method that in many cases is more robust than conventional Full-Waveform Inversion. The new formulation is written in a denoising form that allows the synthetic data to match the observed data up to a small error. Compared to regular Full-Waveform Inversion, our method treats the noise arising from the data measuring/recording process and the noise from the synthetic modelling process separately. Compared to Wavefield Reconstruction Inversion, the new formulation mitigates the difficulty of choosing the penalty parameter λ. To solve the proposed optimization problem, we develop an efficient frequency-domain algorithm that alternately updates the model and the data. Numerical experiments confirm the strong stability of the proposed method through comparisons of the results of our algorithm with those from both plain FWI and a weighted formulation of FWI. We also discuss a new memory-efficient time-domain formulation for Wavefield Reconstruction Inversion based on duality.

This is joint work with Rongrong Wang, Mathias Louboutin, and Emmanouil Daskalakis.
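
For reference, a hedged sketch of the formulations being contrasted (notation assumed: A(m) the discretized wave-equation/Helmholtz operator, u the wavefield, q the source, P the restriction to the receivers, d the observed data, σ the noise level); the exact denoising formulation in the talk may differ in detail.

```latex
% Penalty-form Wavefield Reconstruction Inversion, with its trade-off parameter \lambda:
\min_{\mathbf{m},\,\mathbf{u}}\;
  \|P\mathbf{u}-\mathbf{d}\|_2^2
  \;+\;\lambda^2\,\|A(\mathbf{m})\mathbf{u}-\mathbf{q}\|_2^2

% Denoising-type reformulation alluded to in the abstract: the data are matched
% only up to the noise level \sigma, which removes the need to tune \lambda.
\min_{\mathbf{m},\,\mathbf{u}}\;
  \|A(\mathbf{m})\mathbf{u}-\mathbf{q}\|_2^2
  \quad\text{subject to}\quad
  \|P\mathbf{u}-\mathbf{d}\|_2\le\sigma
```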


Extremely performant at-scale Wave Equation-Based Inversion — managing complexity while increasing performance

Having the flexibility to develop new types of imaging and inversion algorithms while maintaining production-level performance continues to be a challenge. During this session, we demonstrate that this challenge can be overcome by combining the right levels of abstraction in Devito, where we encode the physics, and in Julia, where we expose the proper objective, gradient, and Jacobian calculations. Devito is a collaboration between SLIM and Gerard Gorman’s group at Imperial College London. We start our session by presenting a new constrained FWI framework in Julia, followed by technical discussions of our time-domain wave-equation inversion framework based on Devito and the latest developments in Devito.


Algorithms and Julia software for FWI with multiple constraints

Bas Peters (5th year PhD)

Abstract. We present a framework to add multiple convex and non-convex constraints to nonlinear inverse problems, specifically FWI. The constraints mitigate problems related to noisy data, artifacts arising from working with very few simultaneous sources, inaccurate starting models, and the use of approximate physical forward models. Compared to earlier work at SLIM, the current framework is algorithmically simpler and computationally more efficient. We show examples where the model estimate improves when we use very limited prior knowledge directly as constraints. We also present the software implementation in Julia and show how it is used together with other software that computes data-misfit values and gradients with respect to the model parameters.
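
As a taste of the building block behind such a framework, the Julia sketch below runs Dykstra's algorithm to project a model onto the intersection of two convex sets (bounds and a ball around a reference model); the constraint sets, sizes, and parameters are illustrative stand-ins, not the talk's actual constraint library.

```julia
# Dykstra's algorithm: project onto the intersection of two convex sets,
# given only the projections onto each individual set.
using LinearAlgebra

proj_bounds(m, lo, hi) = clamp.(m, lo, hi)                 # box (bound) constraint
function proj_ball(m, m_ref, τ)                            # ‖m - m_ref‖₂ ≤ τ
    d = m - m_ref
    n = norm(d)
    return n <= τ ? m : m_ref + (τ / n) * d
end

# Alternating projections with Dykstra's correction terms p and q.
function dykstra(m, projA, projB; iters=100)
    x = copy(m); p = zero(m); q = zero(m)
    for _ in 1:iters
        y = projA(x + p); p = x + p - y
        x = projB(y + q); q = y + q - x
    end
    return x
end

# Toy example: a 1D "velocity model" forced to stay within physical bounds
# and within distance τ of a reference model.
m     = 2.0 .+ 0.5 .* randn(200)
m_ref = fill(2.0, 200)
m_proj = dykstra(m, x -> proj_bounds(x, 1.5, 2.5), x -> proj_ball(x, m_ref, 1.0))
println(extrema(m_proj), "  ", norm(m_proj - m_ref))
```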


A large-scale framework in Julia for fast prototyping of seismic inversion algorithms

Philipp Witte (fourth year PhD)

Abstract. We present our progress on a large-scale seismic modeling workflow in Julia for wave-equation based inversion. The software offers a range of high-level abstractions to easily express PDE-constrained optimization problems in terms of linear-algebra expressions, while utilizing the Devito DSL to symbolically express the underlying PDEs and to generate fast, parallel code for solving them. Data containers and linear operators can be set up with little effort from input SEG-Y data and scale to large 3D applications. This talk provides an overview of the basic functionality of our software and applications to least-squares imaging and 3D FWI.
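
To convey the flavor of such linear-algebra abstractions (without reproducing the actual API; every name below is a hypothetical placeholder), here is a small Julia sketch in which a matrix-free operator is used exactly like a matrix inside a gradient-descent loop for a linearized imaging problem.

```julia
# Hypothetical sketch: a matrix-free linear operator whose forward and adjoint
# actions stand in for Born modelling and RTM. None of these names belong to
# the framework presented in the talk.
using LinearAlgebra, Random

struct ModelingOp
    forward::Function
    adjoint::Function
end
Base.:*(F::ModelingOp, x) = F.forward(x)
Base.adjoint(F::ModelingOp) = ModelingOp(F.adjoint, F.forward)

# Stand-in "Jacobian": a fixed random matrix instead of wave-equation solves.
Random.seed!(0)
Jmat = randn(120, 400) / sqrt(400)
J = ModelingOp(x -> Jmat * x, y -> Jmat' * y)
d_obs = randn(120)                         # stand-in linearized data

# With such operators, a (linearized) imaging loop reads like plain linear algebra.
function grad_descent(J, d_obs; n=400, iters=50, lr=0.5)
    dm = zeros(n)
    for _ in 1:iters
        r = J * dm - d_obs                 # predicted minus observed data
        dm -= lr * (J' * r)                # adjoint ("RTM") applied to the residual
    end
    return dm
end

dm = grad_descent(J, d_obs)
println("relative data residual: ", norm(J * dm - d_obs) / norm(d_obs))
```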


Latest developments in Devito

Mathias Louboutin (5th year PhD Student)

Abstract. We present an overview of the latest developments in Devito. We introduced Devito at the previous meeting as a prototype finite-difference DSL for seismic modelling and inversion. Here we present its latest improvements and functionalities. We will also discuss future plans as well as currently unsupported features that the audience may be interested in. The presentation will be followed by, or mixed with, a hands-on tutorial if time and resources allow.


Performance & capabilities review of Devito

Gerard Gorman (Imperial College London) Mathias Louboutin (5th year PhD Student)

Abstract. We present new performance results for Devito, showing the latest benchmarks for the acoustic and TTI kernels on conventional CPUs and on Intel Xeon Phi processors.


Discussion

This time slot is designated for informal discussion, feedback & possible demos. It is important for us to get input on the application and possible use of our work, and we therefore greatly value your input. We hope that this forum will continue to be conducive to lively discussions.