SINBAD Consortium Meeting - Fall 2013

Whistler, Canada

Date

Dec 1 (6:00 PM) - Dec 4 (12:00 PM), 2013

Venue

Fairmont Chateau Whistler
4599 Chateau Boulevard
Whistler, British Columbia

Reserve through: https://resweb.passkey.com/go/slimresearch

Or call: 1-800-606-8244 (Group Code "1113UBCU")



Transportation

SLIM Shuttle

Pickup Sun Dec 1:

YVR International Terminal: 2:00 PM
UBC Bookstore, 2125 East Mall: 3:00 PM
Downtown, Fairmont Waterfront, 900 Canada Place Way: 3:45 PM

Depart Whistler Wed Dec 4, 1:00 PM

3:00 PM: Dropoff Waterfront
3:45 PM: UBC
4:30 PM: YVR

Other Transport Options:

Bus: Pacificcoach YVR-Whistler-Skylynx
Limousine service: Star Limousine
Car rentals: Airport Rental Cars (use YVR for airport)
Driving directions: airport to Whistler



SLIM Contacts

Miranda Joyce
+1 (604) 822-5674 (office)
mjoyce@eos.ubc.ca
SLIM, EOAS Department, University of British Columbia
2020-2207 Main Mall
Vancouver, B.C., Canada V6T 1Z4

Program of the 2013 SINBAD Fall Consortium meeting

Sunday December 1st (Spirit Room)

06:00—07:30 PM Icebreaker

Monday December 2nd (Empress Ballroom)

08:00—08:30 AM Registration and coffee & pastries
08:30—09:00 AM Felix J. Herrmann Welcome & Roadmap for the meeting

FWI & WEMVA (Chair: Tristan van Leeuwen)

09:00—09:35 AM Felix J. Herrmann Frugal FWI bibtex/slides
09:35—10:15 AM Tristan van Leeuwen Relaxing the physics: A penalty method for full-waveform inversion bibtex/slides
10:15—10:30 AM Discussion
10:30—10:40 AM Coffee break
10:40—11:00 AM Bas Peters Examples from the Penalty-method bibtex/slides
11:00—11:20 AM Rajiv Kumar Extended images in action: efficient WEMVA via randomized probing bibtex/slides
11:20—11:40 AM Zhilong Fang Parallel 3D FWI with simultaneous shots bibtex/slides
11:40—12:00 PM Brendan Smithyman Phase-residual based quality-control methods and techniques for mitigating cycle skips bibtex/slides
12:00—12:15 PM Discussion
12:15—01:30 PM Lunch

FWI & case studies (Chair: Felix J. Herrmann)

01:30—01:50 PM Mike Warner Reflection FWI with a poor starting model bibtex/slides
01:50—02:05 PM Xiang Li Lessons learned from Chevron Gulf of Mexico data set bibtex/slides
02:05—02:25 PM Ning Tu Recent results on the BP Machar data set bibtex/slides
02:25—02:45 PM Xiang Li Model-space versus data-space regularized FWI with the acoustic wave equation bibtex/slides
02:45—03:00 PM Zhilong Fang Uncertainty analysis for FWI bibtex/slides
03:00—03:15 PM Discussion
03:15—03:30 PM Coffee break

Wave-equation based linearized inversion (Chair: Polina Zheglova)

03:30—03:55 PM Ning Tu Fast imaging with multiples and source estimation bibtex/slides
03:55—04:15 PM Polina Zheglova Imaging with hierarchical semi separable matrices bibtex/slides
04:15—04:35 PM Lina Miao Fast imaging via depth stepping with the two-way wave equation bibtex/slides
04:35—04:55 PM Rajiv Kumar Extended images in action: efficient AVA via probing bibtex/slides
04:55—05:15 PM Discussion

HPC/Big Data Forum (Chair: Ian Hanlon)

05:30—08:00 PM HPC/Big Data Forum (pizza & drinks)

Tuesday December 3rd (Empress Ballroom)

08:00—08:30 AM Registration and coffee & pastries

Seismic data acquisition & multiples (Chair: Tim T. Y. Lim)

08:30—08:55 AM Haneet Wason Time-jittered marine sources bibtex/slides
08:55—09:10 AM Felix Oghenekohwo Estimating 4D differences in time-lapse using randomized sampling techniques bibtex/slides
09:10—09:35 AM Tim T. Y. Lin Bootstrapping Robust EPSI with coarsely sampled data bibtex/slides
09:35—10:00 AM Navid Ghadermarzy Using prior support information in approximate message passing algorithms bibtex/slides
10:00—10:30 AM Discussion
10:30—10:45 AM Coffee break

Seismic wavefield recovery via matrix/tensor completions (Chair: Ernie Esser)

10:45—11:10 AM Rajiv Kumar Wavefield reconstruction with SVD-free low-rank matrix factorization bibtex/slides
11:10—11:35 AM Curt da Silva Structured tensor formats for missing-trace interpolation and beyond bibtex/slides
11:35—12:00 PM Okan Akalin Matrix and tensor completion for large-scale seismic interpolation: a comparative study bibtex/slides
12:00—12:15 PM Discussion
12:15—01:30 PM Lunch

Large-scale (convex) optimization (Chair: Michael Friedlander)

01:30—01:45 PM Michael Friedlander Introduction
01:45—02:00 PM Ting Kei Pong The proximal-proximal gradient algorithm bibtex/slides
02:00—02:15 PM Julie Nutini Putting the curvature back into sparse solvers bibtex/slides
02:15—02:35 PM Ives Macêdo A dual approach to PhaseLift via gauge programming and bundle methods bibtex/slides
02:35—02:50 PM Gabriel Goh Taming time through tangents bibtex/slides
02:50—03:15 PM Discussion
03:15—03:30 PM Coffee break

Compressive sensing (Chair: Ozgur Yilmaz)

03:30—03:50 PM Ozgur Yilmaz Sparse recovery methods for _under_determined and _over_determined systems
03:50—04:10 PM Brock Hargreaves The bridge from _ortho_gonal to redundant transforms and weighted \(\ell_1\) optimization bibtex/slides
04:10—04:30 PM Ernie Esser Algorithms for phase retrieval bibtex/slides
04:30—04:50 PM Rongrong Wang Robustness of the interferometric formulation for seismic data bibtex/slides
04:50—05:10 PM Enrico Au Yeung A new class of random matrices with applications to seismic data bibtex/slides
05:10—05:30 PM Discussion
05:30—06:00 PM Mini break
06:00—06:30 PM Bus leaves for dinner at The Pony in Pemberton
07:00—07:30 PM Felix J. Herrmann I3: the _I_nternational _I_nversion _I_nitiative in Brazil
11:00—11:30 PM Bus ride back to the hotel

Wednesday December 4th (Empress Ballroom)

08:00—08:30 AM Registration and coffee & pastries

Forward modeling & computational aspects (Chair: Rafael Lago)

08:30—08:50 AM Rafael Lago Krylov solvers in frequency domain FWI bibtex/slides
08:50—09:10 AM Art Petrenko Accelerating an iterative Helmholtz solver with FPGAs bibtex/slides
09:10—09:30 AM Tristan van Leeuwen Solving the data-augmented wave equation bibtex/slides
09:30—09:50 AM Zhilong Fang Swift FWI bibtex/slides
09:50—10:15 AM Discussion
10:15—10:45 AM Coffee break
10:45—12:00 PM Steering committee meeting with SINBAD (Consortium members only)
01:15 PM Bus departs for Vancouver

Discussion

These time slots are designated for informal discussion, feedback & possible demos. It is important for us to get input on the application and possible use of our work, and we therefore greatly value your feedback. We hope that this new format will be conducive to lively discussions.

Frugal FWI

Speaker: Felix J. Herrmann (Director SLIM) bibtex/slides

Abstract. Seismic waveform inversion aims at obtaining detailed estimates of subsurface medium parameters, such as soundspeed, from seismic data. A formulation in the frequency domain leads to an optimization problem constrained by a Helmholtz equation with many right-hand sides. Application of this technique in 3D precludes the use of factorization techniques to solve the Helmholtz equation, due to the large number of gridpoints and the bandwidth of the matrix. While many sophisticated preconditioned iterative techniques have been developed for the Helmholtz equation, they often include model-specific tuning parameters and are thus not very attractive for inversion, since the medium parameters change from one iteration to the next. In this work, we propose a method for 3D seismic waveform inversion that addresses both the need to efficiently solve the Helmholtz equation and the computational cost induced by the many right-hand sides. To solve the Helmholtz equation, we consider a simple generic preconditioned iterative method (CARP-CG) that is well-suited for inversion because of its robustness. We extend this method to a block-iterative method that can efficiently handle multiple right-hand sides. To reduce the computational cost of the overall optimization procedure, we use recently proposed techniques from stochastic optimization that allow us to work with approximate gradient information. These approximations are obtained by evaluating only a small portion of the right-hand sides and/or by solving the PDE approximately. We propose heuristics to adaptively determine the required accuracy of the PDE solves and the sample size, and illustrate the algorithms on synthetic benchmark models. This is joint work with Tristan van Leeuwen.
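
As a rough illustration of such adaptive heuristics (not the authors' actual algorithm; the batch sizes, tolerances, and fixed step size below are invented for the sketch), a control loop of this kind might look as follows:

```python
import numpy as np

# A minimal sketch of a "frugal" control loop, assuming a user-supplied
# misfit_and_grad(m, idx, tol) that evaluates objective and gradient on the
# right-hand sides listed in idx, with PDE-solve tolerance tol.
def frugal_fwi(misfit_and_grad, m0, n_rhs, max_outer=20, step=1e-2):
    m = m0.copy()
    batch = max(1, n_rhs // 16)   # start with a small subsample of sources
    tol = 1e-3                    # and loose (cheap) PDE solves
    f_prev = np.inf
    for _ in range(max_outer):
        idx = np.random.choice(n_rhs, size=batch, replace=False)
        f, g = misfit_and_grad(m, idx, tol)
        if f >= f_prev:           # progress stalled: buy more accuracy
            batch = min(n_rhs, 2 * batch)
            tol = max(1e-6, tol / 10)
        m -= step * g             # toy fixed-step gradient update
        f_prev = f
    return m
```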

Relaxing the physics: A penalty method for full-waveform inversion

Speaker: Tristan van Leeuwen (ex PDF, now collaborator at CWI) bibtex/slides

Abstract: We present a computationally efficient method for solving the partial differential equation (PDE)-constrained optimization problems that occur in (geophysical) inversion. The method takes measured data from a physical system as input and minimizes an objective function that depends on an unknown model for the physical parameters, the fields, and additional nuisance parameters such as the source function. It consists of a minimization procedure involving a cost functional comprising a data-misfit term and a penalty term that measures how accurately the fields satisfy the PDE. The method is composed of two alternating steps, namely the solution of a system of equations forming the discretization of the data-augmented PDE, and the solution for the physical model parameters from the PDE itself, given the field that solves the data-augmented system and an estimate for the sources. Compared to all-at-once approaches to PDE-constrained optimization, there is no need to update and store the fields for all sources, leading to significant memory savings. As in the all-at-once approach, the proposed method explores a larger search space and is therefore less sensitive to initial estimates of the physical model parameters. Contrary to the reduced formulation, the proposed method does not require the solution of an adjoint PDE, effectively halving the number of PDE solves and the memory requirement. As in the reduced formulation, fields can be computed independently and aggregated, possibly in parallel.
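
In symbols (our notation: \(A(\mathbf{m})\) the discretized wave operator, \(P\) the receiver-sampling operator, \(\mathbf{q}\) the source, \(\mathbf{d}\) the data), the cost functional described above reads

\[
\min_{\mathbf{m},\,\mathbf{u}}\; \tfrac{1}{2}\,\|P\mathbf{u}-\mathbf{d}\|_2^2 \;+\; \tfrac{\lambda^2}{2}\,\|A(\mathbf{m})\mathbf{u}-\mathbf{q}\|_2^2,
\]

where the penalty parameter \(\lambda\) controls how strictly the field \(\mathbf{u}\) must satisfy the PDE.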

Examples from the Penalty-method

Speaker: Bas Peters (second year PhD) bibtex/slides

Abstract: A novel penalty method for PDE-constrained optimization was recently proposed by van Leeuwen and Herrmann (2013). The conventional PDE-constrained optimization formulation used in seismic waveform inversion is based on calculating the gradient of the data-misfit objective functional via the adjoint-state method, at the cost of two PDE solves. The penalty method requires only the least-squares solution of a single overdetermined linear system. In this talk some numerical properties of this linear system will be examined. The penalty method for PDE-constrained optimization involves a parameter balancing the data-misfit and PDE-misfit parts of the objective functional; this talk will address how to select this very important parameter. Some examples will be shown in which the penalty method outperforms the conventional method in non-linear waveform inversion, as well as in linearized seismic imaging by migration.
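
Concretely, in the same notation as above, for fixed \(\mathbf{m}\) the field is obtained from the overdetermined system

\[
\begin{bmatrix} \lambda A(\mathbf{m}) \\ P \end{bmatrix} \bar{\mathbf{u}} \;\approx\; \begin{bmatrix} \lambda \mathbf{q} \\ \mathbf{d} \end{bmatrix},
\]

solved in the least-squares sense; \(\lambda\) is the balancing parameter whose selection the talk addresses.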

Extended images in action: efficient WEMVA via randomized probing

Speaker: Rajiv Kumar (third year PhD) bibtex/slides

Abstract: Extended images as a function of the full subsurface offset are an important tool for wave-equation based migration velocity analysis (WEMVA) in areas of complex geology. Unfortunately, computation & storage of these extended images is prohibitively expensive. In this work, we present an efficient way to compute extended images for all subsurface offsets without explicitly calculating the source and receiver wavefields for all the sources. Instead, we calculate the actions of extended image volumes on probing vectors that live in the image space. The probing vectors can either be drawn from the Dirac basis, which allows us to form the extended image at the location of a point diffractor, or they can be defined in terms of Gaussian noise; the latter corresponds to sources with random weights firing simultaneously at every grid point. We demonstrate that this probing leads to a computationally efficient implementation of WEMVA. This is joint work with Tristan van Leeuwen.
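
A schematic of the probing idea, with dense random stand-ins for the wavefields (the sizes and the factored form \(E = UV^{\top}\) are our simplification):

```python
import numpy as np

# Sketch: the full extended image volume E = U @ V.T (one column per source
# experiment) is never formed; only its action on a probing vector is needed.
rng = np.random.default_rng(0)
n, nsrc = 1000, 20                       # toy grid size and number of sources
U = rng.standard_normal((n, nsrc))       # stand-in for source wavefields
V = rng.standard_normal((n, nsrc))       # stand-in for receiver wavefields

def probe(w):
    """Compute E @ w = U @ (V.T @ w) without forming the n-by-n volume E."""
    return U @ (V.T @ w)

w_dirac = np.zeros(n); w_dirac[123] = 1.0   # Dirac probe: image at one point
w_noise = rng.standard_normal(n)            # Gaussian probe: random simultaneous sources
cip = probe(w_dirac)
```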

Parallel 3D FWI with simultaneous shots

Speaker: Zhilong Fang (second year PhD) bibtex/slides

Abstract: In this work, we build a workflow for parallel 3D full-waveform inversion. In the forward-simulation part, we generate the Helmholtz matrix in parallel and use parallel CARP-CG to solve the Helmholtz equation. In the inversion process, simultaneous shots are used to reduce the computational costs. Additionally, we propose a method to select the number of simultaneous shots and the tolerance of CARP-CG dynamically, to reach a compromise between computational cost and accuracy.
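
A toy illustration of the simultaneous-shot reduction (the Gaussian weights and sizes are our choices):

```python
import numpy as np

# Mix n_src sequential shots into k << n_src simultaneous shots; sources and
# observed data are combined with the same random weights, so misfits and
# gradients can be computed with k PDE solves instead of n_src.
rng = np.random.default_rng(1)
n_rec, n_src, k = 200, 100, 8
Q = np.eye(n_src)                          # one column per point source
D = rng.standard_normal((n_rec, n_src))    # stand-in for observed data
W = rng.standard_normal((n_src, k))        # simultaneous-shot weights

Q_sim = Q @ W                              # k simultaneous sources
D_sim = D @ W                              # consistently mixed data
```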

Phase-residual based quality-control methods and techniques for mitigating cycle skips

Speaker: Brendan Smithyman (first year PDF) bibtex/slides

Abstract: Most full-waveform inversion algorithms use local optimization methods to iteratively improve the numerical earth model. All of these make the implicit assumption that the model is “close enough” to the true earth to avoid a “cycle skip”. In practice this may not be true. We explore two questions:

  1. How do we understand and visualize the cycle skip phenomenon in order to recognize it if it exists?

  2. How do we automate this quality control step and use the rich information from multiple data to mitigate cycle skips and avoid local minima?

Reflection FWI with a poor starting model

Speaker: Mike Warner bibtex/slides

Abstract: TBA

Lessons learned from Chevron Gulf of Mexico data set

Speaker: Xiang Li (fifth year PhD) bibtex/slides

Abstract: The Chevron Gulf of Mexico data set is very challenging for FWI because of elastic phases, limited offsets, a lack of low frequencies, and salt structure. To overcome these issues, we first use ray-based tomography on hand-picked first breaks to generate an initial model for FWI, and then apply curvelet-denoising techniques to improve the poor signal-to-noise ratio of the observed data at low frequencies. Finally, curvelet-domain sparsity-promoting Gauss-Newton FWI helps suppress model-space artifacts caused by elastic phases. This is joint work with Andrew J. Calvert, Ian Hanlon, Mostafa Javanmehri, Rajiv Kumar, Tristan van Leeuwen, Brendan Smithyman, Haneet Wason and Felix J. Herrmann.

Recent results on the BP Machar data set

Speaker: Ning Tu (fifth year PhD) bibtex/slides

Abstract: Courtesy of BP, we were able to get hands-on experience imaging seismic data from the Machar field in the North Sea. We applied reverse-time migration and sparsity-promoting migration with source estimation to this dataset (or, in many cases, a cropped section of it due to computational constraints), and had some interesting findings. In this presentation, we show the conclusive results we have obtained and explain the key techniques in imaging this dataset.

Model-space versus data-space regularized FWI with the acoustic wave equation

Speaker: Xiang Li (fifth year PhD) bibtex/slides

Abstract: Inverting data with elastic phases using an acoustic wave equation can lead to erroneous results, especially when the number of iterations is too high, which may lead to overfitting the data. Several approaches have been proposed to address this issue. Most commonly, people apply “data-independent” filtering operations aimed at de-emphasizing the elastic phases in the data in favor of the acoustic phases. Examples of this approach are nested loops over offset range and Laplace parameters. In this presentation, we discuss two complementary optimization-driven methods where the minimization process decides adaptively which of the data or model components are consistent with the objective. Specifically, we compare the Student’s t misfit function as the data-space alternative and curvelet-domain sparsity promotion as the model-space alternative. Application of these two methods to a realistic synthetic leads to comparable results that we believe can be improved by combining the two.
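
For reference, one common form of the Student's t misfit on the residual \(\mathbf{r} = \text{predicted} - \text{observed}\) (up to constants, with \(\nu\) the degrees-of-freedom parameter) is

\[
\phi(\mathbf{r}) \;=\; \sum_i \log\!\left(1 + \frac{r_i^2}{\nu}\right),
\]

whose heavy tails let the minimization de-emphasize residuals, such as elastic phases, that the acoustic model cannot fit.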

Uncertainty analysis for FWI

Speaker: Zhilong Fang (second year PhD) bibtex/slides

Abstract: Uncertainty analysis is important for seismic interpretation. Within a Bayesian framework, we can analyse different statistical parameters of our FWI result. However, directly sampling the posterior probability density function (pdf) is computationally intractable. To make this problem tractable, we use a Gaussian approximation together with a low-rank approximation to construct the posterior pdf. Simultaneous shots are also used to reduce the computational costs.
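
Schematically, one common instantiation of such a Gaussian approximation (notation ours) is

\[
p(\mathbf{m}\mid \mathbf{d}) \;\approx\; \mathcal{N}\!\big(\mathbf{m}^{\star},\, (H + \Sigma_{\mathrm{prior}}^{-1})^{-1}\big),
\]

where \(\mathbf{m}^{\star}\) is the FWI estimate and \(H\) a low-rank approximation of the data-misfit Hessian, so the covariance never has to be formed at full rank.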

Fast imaging with multiples and source estimation

Speaker: Ning Tu (fifth year PhD) bibtex/slides

Abstract: During this talk, we present a computationally efficient (cost of 1-2 RTMs with all data) iterative sparsity-promoting inversion framework where surface-related multiples are jointly imaged with primaries and where the source signature is estimated on the fly. Our imaging algorithm is computationally efficient because it works during each iteration with small independent randomized subsets of data. The multiples are handled by introducing an areal source term that includes the upgoing wavefield. We update the source signature at each iteration using a variable projection method. The resulting algorithm removes imaging artifacts from surface-related multiples, estimates and removes the imprint of the source, recovers true amplitudes, is fast, and is robust to linearization errors by virtue of the statistical independence of the subsets of data we work with at each iteration.
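
The per-iteration source update has a closed form; a minimal sketch (our simplified one-scalar-per-frequency model of the source) is:

```python
import numpy as np

def estimate_source(pred, obs):
    """Variable-projection source estimate, one complex scalar per frequency:
    w[f] minimizes ||w[f] * pred[f] - obs[f]||^2 over all traces.
    pred, obs: complex arrays of shape (n_freq, n_traces)."""
    num = np.sum(np.conj(pred) * obs, axis=1)
    den = np.sum(np.abs(pred) ** 2, axis=1) + 1e-12  # guard against zero division
    return num / den
```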

Imaging with hierarchical semi separable matrices

Speaker: Polina Zheglova (first year PDF) bibtex/slides

Abstract: Hierarchically Semi-Separable (HSS) matrices are (in general) dense matrices whose low-rank off-diagonal blocks can be represented economically. Exploiting the specific structure of the HSS representation, fast algorithms have been devised for matrix-matrix multiplication, addition, and computing a matrix inverse. We are interested in developing fast algorithms for seismic imaging using ideas from this approach. This talk gives an overview of the HSS representation and some methods that use HSS representations of operators.
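
A toy illustration of the property that makes HSS representations economical (the kernel and rank below are our choices; real HSS compresses off-diagonal blocks hierarchically with nested bases):

```python
import numpy as np

# A smooth kernel matrix: dense, but its off-diagonal blocks are numerically
# low rank, so a truncated SVD stores them with far fewer numbers.
n, k = 256, 8
x = np.arange(n)
A = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))

B = A[: n // 2, n // 2 :]                 # an off-diagonal block
U, s, Vt = np.linalg.svd(B, full_matrices=False)
B_k = (U[:, :k] * s[:k]) @ Vt[:k]         # rank-k approximation of the block
rel_err = np.linalg.norm(B - B_k) / np.linalg.norm(B)
print(f"rank-{k} block error: {rel_err:.2e}")
```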

Fast imaging via depth stepping with the two-way wave equation

Speaker: Lina Miao (third year MSc) bibtex/slides

Abstract: In this presentation we propose a fast imaging algorithm based on depth stepping with the two-way wave equation. Within the framework of survey sinking, a stabilized depth-extrapolation operator is computed using a spectral projector, which efficiently splits off evanescent wave components. The computation of the spectral projector features a polynomial recursion sped up by a Hierarchically Semi-Separable (HSS) matrix representation, resulting in an acceleration from cubic to linear numerical complexity.

Extended images in action: efficient AVA via probing

Speaker: Rajiv Kumar (third year PhD) bibtex/slides

Abstract: Common image gathers (CIGs) are an important tool for AVA analysis in areas of complex geology. Unfortunately, it is prohibitively expensive to compute these CIGs for all subsurface points. In this work, we present an efficient way to compute CIGs for all subsurface offsets without explicitly calculating the source and receiver wavefields for all the sources. Because the CIGs contain all possible subsurface offsets, we compute the angle-domain image gathers by selecting the subsurface offset that is aligned with the local geologic dip. We propose a method to compute the local dip information directly from common-image-point gathers. To assess the quality of the angle-domain common-image-point gathers, we compute the angle-dependent reflectivity coefficients and compare them with the theoretical reflectivity coefficients yielded by the (linearized) Zoeppritz equations for a few synthetic models.

Time-jittered marine sources

Speaker: Haneet Wason (third year PhD) bibtex/slides

Abstract. Current efforts towards dense shot (and/or receiver) sampling and full azimuthal coverage to produce higher-resolution images have led to the deployment of multiple source vessels across the survey area. A step beyond multi-source seismic acquisition is simultaneous or blended acquisition, where different source arrays/vessels fire shots at near-simultaneous or slightly random times. Seismic data acquisition with simultaneous (or blended) sources has helped improve acquisition efficiency and mitigate acquisition-related costs. Deblending then aims to recover unblended data, as acquired during conventional acquisition, from blended data, since many processing techniques rely on full, regular sampling. We present a simultaneous/blended marine acquisition setup where shots fire at significantly jittered instances in time, resulting in jittered shot locations for a given speed of the source vessel. The conventional, unblended data is recovered from the blended, jittered/irregular data by sparsity-promoting inversion using the non-equispaced fast discrete curvelet transform. The optimization scheme aims to deblend the blended data along with regularization and interpolation to a (finer) regular grid.
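
A minimal sketch of generating time-jittered shot instances (the interval, jitter range, and vessel speed are invented toy values):

```python
import numpy as np

# Shots nominally every T seconds, each perturbed by uniform jitter of up to
# half the interval; a vessel at constant speed then fires at irregular
# positions, which is what the sparsity-promoting recovery exploits.
rng = np.random.default_rng(2)
n_shots, T, speed = 50, 10.0, 2.5          # count, nominal period [s], speed [m/s]
fire_times = np.arange(n_shots) * T + rng.uniform(-T / 2, T / 2, n_shots)
shot_positions = speed * fire_times        # jittered shot locations [m]
```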

Estimating 4D differences in time-lapse using randomized sampling techniques

Speaker: Felix Oghenekohwo (second year PhD) bibtex/slides

Abstract: Repeatability in seismic surveying and processing has been cited as the main reason why 4D seismic technology works. In the last decade, concerted efforts have been spent on making the 4D seismic process highly repeatable, without significant success. Compressed Sensing, a relatively new sampling paradigm, proposes that one can recover an estimate of a fully sampled signal from noisy, under-sampled measurements, provided the acquisition architecture satisfies certain properties. By observing different under-sampled and random measurements from each vintage, corresponding to different acquisition geometries, we show that one can still detect the 4D change using recent ideas from Compressed Sensing. Using a realistic synthetic model, we show two methods of estimating the 4D difference and compare their relative performance.

Bootstrapping Robust EPSI with coarsely sampled data

Speaker: Tim T. Y. Lin (fourth year PhD) bibtex/slides

Abstract. The EPSI method of surface multiple removal directly inverts for the free-surface operator, i.e., the multiple-free Green’s function of the subsurface seismic response. One peculiar feature of this approach is the theoretical independence of the spectrum of the free-surface operator from the spectrum of the observed data. The SRME approach requires coarsely sampled data to be sufficiently low-pass filtered to avoid aliasing in multiple-contribution gathers, which in turn limits the temporal resolution of the demultipled result. Conversely, such limitations in temporal resolution do not directly apply to the inversion solution of EPSI. This property can in turn be exploited both to significantly lower the cost of EPSI and to mitigate the effect of undersampled data in a controlled way.

Using prior support information in approximate message passing algorithms

Speaker: Navid Ghadermarzy (first year PhD, after finishing MSc) bibtex/slides

Abstract: Consider the standard compressed sensing problem. We want to recover a sparse—or compressible—signal from few linear measurements. In this talk we investigate recovery performance when we have prior information about the support, i.e., the indices of the non-zero entries, of the signal to be recovered. First we briefly review the results of “weighted \(\ell_p\) minimization algorithm with \(p=1\) and \(0< p<1\)”. Then we derive a weighted approximate message passing (AMP) algorithm which incorporates prior support information into the AMP algorithm. We empirically show that this algorithm recovers sparse signals significantly faster than weighted \(\ell_1\) minimization. We also introduce a reweighting scheme for AMP and weighted AMP which, we observe, substantially improves the recovery conditions of these algorithms. We illustrate our results with extensive numerical experiments on synthetic data and seismic data reconstruction.
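
A simplified sketch of AMP with support weights (the threshold rule and weight choice below are our simplifications, not the exact algorithm of the talk):

```python
import numpy as np

def soft(u, t):
    """Entrywise soft thresholding with a (possibly vector-valued) threshold t."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def weighted_amp(A, y, support, omega=0.3, iters=50):
    """AMP where entries in the prior support get a reduced threshold.
    A: m-by-n with roughly unit-norm columns; y: measurements; support:
    indices believed nonzero; omega < 1 encodes how much to trust them."""
    m, n = A.shape
    w = np.ones(n)
    w[support] = omega                      # smaller threshold on the prior support
    x, z = np.zeros(n), y.copy()
    for _ in range(iters):
        u = x + A.T @ z                     # pseudo-data
        theta = np.std(z)                   # crude noise-level proxy for the threshold
        x = soft(u, theta * w)
        z = y - A @ x + (np.count_nonzero(x) / m) * z   # residual + Onsager term
    return x
```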

Wavefield reconstruction with SVD-free low-rank matrix factorization

Speaker: Rajiv Kumar (third year PhD) bibtex/slides

Abstract: As shown in the past, we can leverage ideas from the field of compressed sensing to cast problems like seismic data interpolation, or recovery of sequential shot data from simultaneous data, as compressed sensing problems. In this work we show how we can borrow the same ideas and cast these problems as matrix-completion problems. Instead of sparsity, we exploit the low-rank structure of seismic data. One of the impediments in rank minimization is the computation of singular values; we also show how to solve rank-minimization problems SVD-free (a minimal sketch follows the list below). The practical application is divided into three parts:

  1. In the case of sequential seismic data acquisition, how jittered subsampling helps recover better-quality data than random subsampling.

  2. How incorporating reciprocity principles helps enhance the quality of the recovered fully sampled data.

  3. How we can recover sequential-source data from simultaneous-source data.
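
As referenced above, here is a minimal sketch of the SVD-free factorization idea (alternating gradient steps; toy sizes, step size, and regularization are our choices):

```python
import numpy as np

# Complete a matrix from observed entries via X = L @ R.T; penalizing the
# factor norms bounds the nuclear norm, so no SVD is ever computed.
rng = np.random.default_rng(3)
n1, n2, r, lam, step = 60, 50, 5, 0.1, 0.01
X_true = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
mask = rng.random((n1, n2)) < 0.5          # observed entries (acquired traces)
b = mask * X_true

L = 0.1 * rng.standard_normal((n1, r))
R = 0.1 * rng.standard_normal((n2, r))
for _ in range(1000):
    resid = mask * (L @ R.T) - b           # misfit on observed entries only
    L -= step * (resid @ R + lam * L)      # gradient step in L
    R -= step * (resid.T @ L + lam * R)    # gradient step in R
X_rec = L @ R.T                            # interpolated (completed) volume
```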

Structured tensor formats for missing-trace interpolation and beyond

Speaker: Curt da Silva (second year PhD) bibtex/slides

Abstract: High-dimensional data volumes, alternatively known as tensors, occur in a variety of seismic problems. By exploiting the fact that seismic data can be well represented as a structured tensor, we design algorithms that operate on much lower-dimensional parameters. In this talk, we will review some recent developments in interpolating seismic data volumes in the so-called Hierarchical Tucker format, as well as demonstrate the need for such formats when tackling high-dimensional problems such as Uncertainty Quantification.

Matrix and tensor completion for large-scale seismic interpolation: a comparative study

Speaker: Okan Akalin, Ben Recht, Felix J. Herrmann bibtex/slides

Abstract: Owing to their high dimensionality, interpolating 3D seismic data volumes remains a computationally daunting task. In this work, we outline a comprehensive framework for sampling and interpolating such volumes based on the well-understood theory of Matrix and Tensor completion. This interpolation theory consists of three major components — signal structure, structure-destroying sampling, and structure-restoring optimization. By viewing interpolation in the context of this theory, we are able to specify exactly when these approaches are expected to perform well. We also introduce structure-revealing transformations that promote the inherent low-rank structure in seismic data as well as a factorization approach that scales to large problem sizes. Our methods are able to handle large-scale data volumes more accurately and more quickly compared to other more ad-hoc approaches, as we will demonstrate. This is joint work with Curt Da Silva, Rajiv Kumar, Ben Recht, and Felix J. Herrmann.

The proximal-proximal gradient algorithm

Speaker: Ting Kei Pong bibtex/slides

Abstract: In many applications, one has to minimize the sum of a smooth loss function modeling misfit and a regularization term that induces structure. In this talk, we consider the case when the regularization is a composition of a convex function, whose proximal mapping is easy to compute, and a nonzero linear map. Such instances arise in system identification and realization problems. We present a new algorithm, the proximal-proximal gradient algorithm, which admits easy subproblems. Our algorithm reduces to the proximal gradient algorithm if the linear map is just the identity map, and can be viewed as a “very inexact” inexact proximal gradient algorithm. We show that the whole sequence generated by the algorithm converges to an optimal solution, and establish an upper bound on the iteration complexity.
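
In our notation (smooth loss \(f\), convex \(g\) with an easy proximal mapping, linear map \(M\)), the problem template is

\[
\min_{x}\; f(x) + g(Mx),
\]

and for \(M = I\) the method reduces to the classical proximal-gradient iteration \(x^{k+1} = \operatorname{prox}_{tg}\big(x^{k} - t\,\nabla f(x^{k})\big)\).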

Putting the curvature back into sparse solvers

Speaker: Julie Nutini bibtex/slides

Abstract: For many problems in signal and image processing, we seek a sparse solution that approximately solves the problem \(Ax \approx b\), where \(A\) is an \(m\)-by-\(n\) matrix and \(b\) is an \(m\)-vector. Many of the most widely used approaches to this problem—such as iterative soft thresholding and SPGL1—are first-order methods. As a result, these methods can sometimes be slow to converge. In this talk, we present an approach that takes advantage of easily obtainable second-order information. By exploiting this available second-order information, we are able to put the curvature back into sparse solvers and improve upon the convergence rates of existing solvers.

A dual approach to PhaseLift via gauge programming and bundle methods

Speaker: Ives Macêdo (fourth year PhD) bibtex/slides

Abstract: A feature common to many sparse optimization problems is that the number of variables may be significantly larger than the number of constraints; e.g., the matrix-lifting approach taken by PhaseLift for phase retrieval results in a problem where the number of variables is quadratic in the number of constraints. We consider a duality framework and numerical methods to leverage the relatively small number of constraints. Preliminary numerical results illustrate our approach and its flexibility.
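
For context, the PhaseLift lifting replaces the unknown signal \(x\) by the matrix \(X = xx^{*}\), so that quadratic measurements \(b_i = |\langle a_i, x\rangle|^2\) become linear in \(X\); the standard convex relaxation then reads

\[
\min_{X \succeq 0}\; \operatorname{Tr}(X) \quad \text{subject to} \quad \langle a_i a_i^{*}, X\rangle = b_i,\quad i = 1,\dots,m,
\]

which makes explicit why the number of variables is quadratic in the number of constraints: an \(n\)-dimensional signal becomes an \(n \times n\) matrix variable.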

Taming time through tangents

Speaker: Gabriel Goh (second year PhD) bibtex/slides

Abstract: Given two vectors of (possibly) different lengths, the edit distance considers all possible alignments between the two and picks the one that minimizes the number of operations needed to turn one into the other. Though the edit distance is highly non-smooth and riddled with local minima, we show a way to compute its convex envelope, which opens the door to using the approximate edit distance as a surrogate for the \(\ell_2\) distance and to comparing vectors of different lengths.

Sparse recovery methods for _under_determined and _over_determined systems

Speaker: Ozgur Yilmaz (Professor)

Abstract: We will review recent theoretical and algorithmic results on how to recover sparse signals from noisy measurements in two different settings: (i) The underdetermined case, where the number of measurements is significantly less than the ambient dimension of the sparse signal. This is what we also call the “compressed sensing scenario”. (ii) The overdetermined case, where the number of measurements is larger than the ambient dimension. In this setting, the emphasis is on how fast the algorithms converge and how stable they are with respect to various sources of noise.

The bridge from _ortho_gonal to redundant transforms and weighted \(\ell_1\) optimization

Speaker: Brock Hargreaves (third year MSc) bibtex/slides

Abstract: Traditional arguments in synthesis \(\ell_1\)-optimization require our forward operator to be orthogonal, though we use redundant transforms in practice. These traditional arguments do not translate to redundant transforms, and other arguments require impractical conditions on our effective measurement matrix. Recent theory in one-norm analysis, namely the optimal dual \(\ell_1\) analysis of Shidong et al., has provided point-wise reconstruction-error estimates for synthesis using an equivalence relationship that allows weaker assumptions. This exposes an important model assumption indicating why analysis might outperform synthesis, which requires careful consideration in seismic applications, and motivates models such as the cosparse model. In this talk we will discuss these ideas, provide evidence indicating that this theory should generalize to uniform error estimates (and thus not be signal-dependent), and discuss how redundancy, support information, and weighting play important roles.

Algorithms for phase retrieval

Speaker: Ernie Esser (first year PDF) bibtex/slides

Abstract: Phase retrieval is the non-convex optimization problem of recovering a signal from magnitudes of complex linear measurements. Solving convex semi-definite program (SDP) relaxations has been shown to be a robust approach, but it remains too expensive to apply to large problems. We will discuss methods for accelerating computations and explore applications to seismic deconvolution.

Robustness of the interferometric formulation for seismic data

Speaker: Rongrong Wang (first year PDF) bibtex/slides

Abstract: The interferometric formulation of the linear wave-based inversion problem was recently proposed by Demanet and Jugnon. Instead of directly fitting the data, they propose to fit a subset of the data’s cross-correlations. It can be verified that if the full set of cross-correlations is used, the problem is equivalent to the usual least-squares problem. The subsampling, which is usually considered a source of instability in the solution, is surprisingly useful in this setting. Numerical experiments for the inverse source problem and the inverse scattering problem both suggest that a ‘good’ sampling strategy can actually increase stability under modeling error caused by uncertainty of a kinematic nature. We will study the mathematical mechanism behind this phenomenon, and try to see whether or not there exists a universally ‘good’ sampling strategy independent of the type of forward operator used.
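
Schematically (our notation, with \(F(\mathbf{m})\) the predicted data and \(E\) the selected set of trace pairs), the interferometric objective fits cross-correlations rather than the data itself:

\[
\min_{\mathbf{m}}\; \sum_{(i,j)\in E} \Big| F_i(\mathbf{m})\,\overline{F_j(\mathbf{m})} - d_i\,\overline{d_j} \Big|^2,
\]

which, as noted above, reduces to the usual least-squares problem when \(E\) contains all pairs.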

A new class of random matrices with applications to seismic data

Speaker: Enrico Au Yeung bibtex/slides

Abstract: Compressed sensing is an emerging technique that allows us to recover an image using far fewer measurements than classical sampling techniques. Designing measurement matrices with certain properties is critical to this task. Gaussian matrices are most commonly used. We discover a new class of random matrices that can outperform Gaussian matrices when we take an outrageously small number of samples.

Krylov solvers in frequency domain FWI

Speaker: Rafael Lago (first year PDF) bibtex/slides

Abstract: We briefly discuss several aspects arising from the use of Krylov solvers for frequency-domain FWI. Although several powerful preconditioners targeting this application are in constant development by the linear algebra community, issues such as the multi-shot and multi-frequency scenarios, as well as advanced Krylov techniques in combination with these powerful preconditioners, are rarely addressed. We provide an overview of some recent research in this regard and discuss the possibility of using some of these techniques in the context of inversion.

Accelerating an iterative Helmholtz solver with FPGAs

Speaker: Art Petrenko (third year MSc) bibtex/slides

Abstract: Solution of the Helmholtz equation is the main computational burden of full-waveform inversion in the frequency domain. For this task we employ the CARP-CG algorithm (Gordon & Gordon 2010), an iterative solver that preconditions the original Helmholtz system into an equivalent symmetric positive definite system and then applies the method of conjugate gradients. Forming the matrix for the new system is not necessary, as its multiplicative action on a vector is implemented using a series of projections onto the rows of the original system. Our contribution is implementing CARP-CG in a host + accelerator (FPGA) computing environment. The computational paradigm is one of dataflow: vector and matrix elements are streamed from memory through the accelerator, which applies the row projections. The advantage of using an FPGA to process streams of data is that, unless the algorithm is memory-bandwidth limited, computation time is directly proportional to the amount of data. The complexity of the algorithm implemented on the FPGA is irrelevant, since all the operations programmed onto the FPGA happen in the same clock tick. In contrast, on a CPU, more complex algorithms require more clock ticks, as the instructions are executed sequentially or with only a small amount of parallelism. Ongoing work porting the CARP-CG algorithm to the accelerator is presented.
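
For intuition, the row-projection kernel at the heart of CARP-CG looks like a Kaczmarz sweep; here is a serial, dense-matrix sketch (CARP itself performs such sweeps on row blocks in parallel and averages the results):

```python
import numpy as np

def kaczmarz_sweep(A, b, x):
    """One forward sweep of row projections: successively project x onto the
    hyperplane {x : a_i . x = b_i} defined by each row a_i of A."""
    for i in range(A.shape[0]):
        a = A[i]
        x = x + ((b[i] - a @ x) / (a @ a)) * a
    return x
```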

Solving the data-augmented wave equation

Speaker: Tristan van Leeuwen bibtex/slides

Abstract: The recently proposed penalty method promises to mitigate some of the non-linearity inherent in full-waveform inversion by relaxing the requirement that the wave equation be solved exactly. The basic workflow of this new method is as follows: (i) solve an overdetermined wave equation (the data-augmented wave equation), where the data serve as additional constraints on the wavefields; (ii) compute the wavefield residual by substituting this wavefield into the wave equation; and (iii) correlate the wavefield with the wavefield residual to obtain a model update. As opposed to the conventional workflow, no explicit adjoint solve is needed to compute the model update. However, instead of solving a wave equation, we need to solve a data-augmented wave equation. In this talk we explore some of the challenges of solving this data-augmented wave equation and review some possible solution strategies for both time- and frequency-domain applications.
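
A dense toy sketch of the three steps (generic placeholders; in the true update the correlation in step (iii) is weighted by the derivative of the wave operator with respect to the model):

```python
import numpy as np

def penalty_step(A, P, q, d, lam):
    """(i) solve the data-augmented wave equation in the least-squares sense,
    (ii) form the wavefield residual, (iii) correlate field and residual.
    A: wave-equation matrix, P: receiver sampling, q: source, d: data."""
    aug = np.vstack([lam * A, P])                  # data-augmented system
    rhs = np.concatenate([lam * q, d])
    u, *_ = np.linalg.lstsq(aug, rhs, rcond=None)  # (i) overdetermined solve
    r = A @ u - q                                  # (ii) PDE residual
    return -np.real(np.conj(u) * r)                # (iii) schematic model update
```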

Swift FWI

Speaker: Zhilong Fang bibtex/slides

Abstract: In the 3D case, there is much more data and modeling is much more expensive. As a result, parallel computing is very important for 3D full-waveform inversion. Both domain decomposition and data decomposition need a large number of parallel computing resources. However, programs based on parallel MATLAB suffer from the limitation of licenses. In order to obtain a MATLAB-license-free solution, we use Swift, a fast and easy parallel scripting language. Once the original MATLAB code is compiled to an executable file, Swift can run the executable in parallel without using parallel MATLAB. We use Swift to compute the objective functions and gradients of different shots in parallel, and test the parallel 3D FWI code with the overthrust data. This is joint work with Thomas Lai, Harsh Juneja, Bas Peters, Zhilong Fang.