
DNOISE III — Exploration Seismology in the Petascale Age

Felix J. Herrmann
Seismic Laboratory for Imaging and Modeling (SLIM), University of British Columbia, Canada

Released to public domain under Creative Commons license type BY.
Copyright (c) 2014 SLIM group @ The University of British Columbia.

Synopsis

This proposal describes a comprehensive five-year continuation of our research project in dynamic nonlinear optimization for imaging in seismic exploration (DNOISE). DNOISE III—Exploration Seismology in the Petascale Age builds on the proven track record of our multidisciplinary research team, which conducts transformative research in the fields of seismic-data acquisition, processing, and wave-equation based inversion. The overarching goals of the DNOISE series of projects can be summarized simply as:

“How to image more deeply and with more detail?” and “How to do more with less data?”

Also, to help overcome the current substantial challenges in the oil and gas industry, we complement these with more specific follow-up questions such as:

“How can we control costs and remove acquisition-related artifacts in 3D (time-lapse) seismic data sets?” and “How can we replace conventional seismic data processing with wave-equation based inversion, control computational costs, assess uncertainties, extract reservoir information and remove sensitivity to starting models?”

To answer these questions, we have assembled an expanded cross-disciplinary research team with backgrounds in scientific computing (SC), machine learning (ML), compressive sensing (CS), hardware design, and computational and observational exploration seismology (ES). With this team, we will continue to drive innovations in ES by utilizing our unparalleled access to high-performance computing (HPC), our expertise and experience in CS and wave-equation based inversion (WEI), and our proven ability to turn our research findings into practical, scalable software implementations of our inversion solutions.

DNOISE II: Status Report

Since its inception in 2010, the DNOISE II grant, under the guidance of Felix J. Herrmann from the Department of Earth, Ocean, and Atmospheric Sciences (EOAS), Michael Friedlander from Computer Science, and Ozgur Yilmaz from Mathematics, has produced 46 peer-reviewed journal publications (plus six more in review), 95 peer-reviewed expanded abstracts for the CSEG, SEG, and EAGE, and 167 presentations at national and international conferences (not including the 145 presentations given during the Seismic Inversion by Next-generation BAsis function Decompositions (SINBAD) Consortium meetings). During the DNOISE II project, 9 graduate students (2 PhD and 7 MSc) completed their degrees (and 3 more PhD students are expected to graduate in the first half of 2015). We have had 10 post-doctoral fellows (PDFs) participate in DNOISE II, and 4 PDFs continue to be employed. This research project has also employed 6 junior scientific programmers, 11 undergraduate co-op students, one senior programmer, and a research associate. Below we describe in detail our main successes (and setbacks) and summarize the new research areas that have grown out of DNOISE II, which we would like to pursue as part of DNOISE III. The following summary mirrors the outline of the original DNOISE II CRD proposal.

Compressive acquisition and sparse recovery

Objectives: Design and implementation of new seismic-data acquisition methodologies that reduce costs by exploiting structure in seismic data.

Seismic data collection is becoming more and more challenging because of increased demands for high-quality, long-offset and wide-azimuth data. As part of the DNOISE II project, we adapted recent results from CS techniques to generate innovations in seismic acquisition that in turn reduced costs and allowed us to collect more data. Over the last 5 years, our research team has made significant contributions in various areas of this general theme. Below we summarize some of the key contributions:

Compressive sensing. One of the main aims of the DNOISE II project has been to leverage ideas from CS to establish a new methodology for ES, where the costs of acquisition and processing are no longer dominated by survey area size but by the sparsity of seismic data volumes. DNOISE II research has resulted in several contributions to the theory and practice of CS. One area of focus has been to investigate ways of incorporating prior information into recovery algorithms. In [1], Hassan Mansour developed an algorithm based on weighted \(\ell_1\)-norm minimization that comes with theoretical guarantees for improved sparse recovery when the approximate locations of significant transform-domain coefficients are known. This approach led to significant improvements in the wavefield recovery problem [2,3]. Our research also showed that prior information can be incorporated into alternative sparse recovery methods. Examples of this work include recovery by non-convex optimization, as shown by Navid Ghadermarzy in [4]; approximate message passing [5], with an immediate application to seismic interpolation [6]; low-rank matrix recovery [7]; and the solution of overdetermined systems with sparse solutions using a modified Kaczmarz algorithm [8]. Finally, we examined the synthesis-versus-analysis problem that arises in theoretical signal processing in the context of curvelet-based seismic data interpolation by sparse optimization, described in the work of Tim Lin [9] and Brock Hargreaves [10]. We proposed an inversion procedure that exhibits uniform improvement over existing curvelet-based regularization via sparse inversion (CRSI) algorithms by demanding that the procedure only return physical signals that naturally admit sparse coefficients under canonical curvelet analysis. We have also applied curvelet denoising to enhance crustal reflections [11]; this technique is now widely used by Chevron in Canada. Outcome. Fast compressive-sensing based seismic interpolation and denoising that use prior support information.
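To make the role of prior support information concrete, the following toy sketch implements weighted soft-thresholding inside a basic iterative shrinkage-thresholding (ISTA) loop. It is a minimal illustration of the weighted \(\ell_1\) idea, not the algorithm of [1]; all names and parameter values are hypothetical.

```python
import numpy as np

def weighted_ista(A, b, weights, lam=0.1, n_iter=200):
    # Minimal sketch: minimize 0.5*||Ax - b||^2 + lam * sum_i w_i*|x_i|.
    # Small weights w_i on the (approximately) known support penalize
    # those coefficients less, which is the weighted-l1 recovery idea.
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L             # gradient step on the misfit
        t = lam * weights / L                     # per-coefficient thresholds
        x = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)  # weighted soft-threshold
    return x

# Toy usage: coefficients on a trusted support get the smaller weight.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 128))
x_true = np.zeros(128)
x_true[[3, 40, 90]] = [2.0, -1.5, 1.0]
b = A @ x_true
w = np.ones(128)
w[[3, 40, 90]] = 0.3                              # prior support information
x_rec = weighted_ista(A, b, w)
```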

Randomized marine acquisition with time jittering. One of the ideas CS relies on is the use of random sampling as a means of dimension reduction. Motivated by this, we have worked on establishing connections between random time dithering and jittered sampling in space. Specifically, we can recover high-quality seismic data volumes from time-jittered marine acquisition where the average inter-shot time is significantly reduced, leading to cheaper surveys despite the resulting overlapping shots. Time-jittered acquisition, in conjunction with shot separation by curvelet-domain sparsity promotion, allows us to recover high-quality data volumes [12–15]. This line of work has been widely recognized by the oil and gas industry. For example, ConocoPhillips ran a highly successful field trial on marine acquisition with CS and obtained significant improvements compared to standard production. See Figure 1, where the costs of conventional and compressive acquisition are compared. Recently, we have embarked on adapting techniques from distributed CS to time-lapse seismic data sets. We have proposed a radically new approach to processing 4D seismic data that exploits shared information between the baseline and monitor surveys. With this new approach, Felix Oghenekohwo, in collaboration with Rajiv Kumar and Haneet Wason, was able to shed new light on the fundamental repeatability requirements in time-lapse seismic data [16–19]. We are delighted to report that our new method for randomized marine acquisition has been successfully used in the field for acquisitions with ocean-bottom nodes [20]. On another encouraging note, the industry is pursuing randomized sampling in other settings [20], including simultaneous-source coil sampling [21]. Outcome. Innovations in seismic acquisition design and recovery techniques resulting in significant interest and increasing uptake by industry, with costs cut by at least a factor of two.

Figure 1: Field trial of ConocoPhillips. (Thanks to Chuck Mosher.)
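The sketch below generates hypothetical jittered firing times of the kind described above; the function name and parameter values are placeholders, chosen only to illustrate that the average inter-shot time (and hence survey duration) is preserved while the sampling becomes randomized.

```python
import numpy as np

def jittered_times(n_shots, t_avg, jitter_frac=0.5, seed=1):
    # Minimal sketch of time-jittered acquisition: every shot fires
    # within a random window around its nominal time, preserving the
    # average inter-shot time t_avg while breaking the periodicity
    # that produces coherent subsampling artifacts.
    rng = np.random.default_rng(seed)
    nominal = np.arange(n_shots) * t_avg
    jitter = rng.uniform(-jitter_frac, jitter_frac, n_shots) * t_avg
    return nominal + jitter

times = jittered_times(n_shots=100, t_avg=10.0)   # ~10 s average inter-shot time
```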

Matrix & tensor completion. Our team also focused on leveraging low-rank structure in seismic data to solve extremely large data-recovery problems. In [7] we introduced a large-scale Singular-Value Decomposition (SVD)-free optimization framework that is robust with respect to outliers and that uses information on the “support”. We used this framework for seismic data interpolation [22] and recently extended it to include regularization so it can be used on “off-the-grid” data [23]. Our findings show that major improvements are possible compared to curvelet-based reconstruction [24]. We also used low-rank structure for source separation [25] with short dither times, where curvelet-based techniques typically fail. Aside from working on matrix-completion problems relevant to missing-trace interpolation, we also started a research program extending these ideas to tensor-completion problems. Specifically, we have developed the algorithmic components for solving optimization problems in the hierarchical Tucker format, a relatively new decomposition method for representing high-dimensional arrays [26]. As a result, we are able to solve tensor-completion problems, i.e., missing-trace recovery problems on four-dimensional frequency slices, with matrix factorizations [7,22,25] and optimization on hierarchical Tucker representations [26–28]. Other applications of our matrix-completion techniques include lifting methods, where non-convex problems are convexified by turning them into semi-definite programs [29]. Ernie Esser applied these methods to the blind deconvolution problem [30], where both the wavelet and the reflectivity are unknown, and to higher-order interferometric inversion [31], where Rongrong Wang extended recent work of Demanet [32] to the nonlinear case and to include errors on the source and receiver side [31]. Outcome: A scalable framework for matrix and tensor completion with applications to time-jittered marine acquisition and lifting.
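A minimal sketch of the SVD-free idea follows: parameterize the low-rank frequency slice as a product of two thin factors and fit only the observed entries, so that no SVD over the ambient dimension is ever formed. This toy version, with hypothetical names and step sizes, is not the robust factorization framework of [7].

```python
import numpy as np

def factorized_completion(Y, mask, rank=5, step=1e-3, n_iter=500, seed=0):
    # SVD-free matrix completion sketch: represent X = L @ R.T and take
    # gradient steps on the misfit over observed entries only (mask == 1).
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    L = 0.1 * rng.standard_normal((m, rank))
    R = 0.1 * rng.standard_normal((n, rank))
    for _ in range(n_iter):
        E = mask * (L @ R.T - Y)                  # residual on observed entries
        L, R = L - step * (E @ R), R - step * (E.T @ L)
    return L @ R.T
```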

Large-scale structure-revealing optimization. All of the above approaches depend on access to fast large-scale solvers. DNOISE has made major contributions to the development of such solvers, resulting in SPG\(\ell_1\) [33], one of the fastest software implementations [34,35] for sparsity-promoting problems. Brock ported this code into Total SA’s system, and a parallel, optimized version of it drives many of our applications. When the optimizations are carried out over higher-dimensional objects such as matrices and tensors, it is crucial that they do not rely on carrying out SVDs over the full ambient dimension. As part of DNOISE II, our group continued to spearhead developments of state-of-the-art solvers [7,26]. Under the supervision of Michael Friedlander, Ives Macedo investigated a new generation of convex optimization algorithms for low-rank matrix recovery that do not require explicit factorizations of the matrices. The first step of this research program, an understanding of the mathematical underpinnings of this special type of optimization, is given in [36], co-authored with Ting Kei Pong. Proximal-gradient methods form the algorithmic template for almost all methods used in sparse optimization. Gabriel Goh developed proximal-gradient solvers that use tree-based preconditioners, which are structured Hessian approximations built from an underlying graph representation of the relationships between the variables in the problem. At the same time, PhD student Julie Nutini researched rigorous approaches for seamlessly incorporating conjugate-gradient-like iterations, which are well understood and effective, into the proximal-gradient framework. Outcome. Versatile and structure-promoting solvers for large-scale problems in ES.
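For readers unfamiliar with the mechanism behind SPG\(\ell_1\), its root-finding strategy on the Pareto trade-off curve can be summarized as follows (our paraphrase of the approach in [33], with simplified notation). The basis-pursuit denoise problem \(\min_x \|x\|_1\) subject to \(\|Ax-b\|_2\le\sigma\) is solved through a sequence of LASSO subproblems with value

\[
\phi(\tau) \;=\; \min_{x}\;\bigl\{\,\|Ax-b\|_2 \;:\; \|x\|_1 \le \tau\,\bigr\},
\qquad
\phi'(\tau) \;=\; -\,\frac{\|A^{*}r_\tau\|_\infty}{\|r_\tau\|_2},
\]

where \(r_\tau\) is the residual of the LASSO subproblem. Newton iterations \(\tau_{k+1} = \tau_k + \bigl(\sigma - \phi(\tau_k)\bigr)/\phi'(\tau_k)\) then locate the \(\tau\) at which the misfit constraint becomes tight.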

Free-surface removal

Objectives: Wave-equation-based mitigation of the free surface by sparse inversion.

During DNOISE II, Tim Lin made significant contributions to the mitigation of free-surface multiples. The impacts of his innovative work on enhancing the Estimation of Primaries by Sparse Inversion (EPSI) technique are manifold, including a well-cited paper [37] and various refereed conference papers that include efficient multilevel formulations [38] and a method to mitigate the effects of near-offset gaps [39]. His method has been applied to 3D sail lines during an internship with Chevron, proving the scalability of his approach. With Bander Jumah, we have used randomized SVDs in combination with hierarchical matrix representations to reduce the memory use and the matrix-matrix multiplication costs of EPSI [40–42]. With our long-term visitor Joost van der Neut, we extended this wavefield inversion framework to interferometric imaging, which led to several presentations and a journal publication [43–45]. Outcome: Industry-ready, scalable solutions to surface-related multiple and interferometric redatuming problems.
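The randomized SVD mentioned above is a generic dimensionality-reduction tool; a minimal, self-contained sketch is given below. The EPSI work combines it with hierarchical matrix representations, which this toy version omits.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, seed=0):
    # Randomized SVD sketch: probe the range of A with a thin Gaussian
    # matrix, orthogonalize, and compute a small dense SVD. The cost is
    # dominated by two passes over A instead of a full decomposition.
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)                # basis for the range of A
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]
```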

Compressive modelling for imaging and inversion

Objectives: Design and implementation of efficient wavefield simulators in 2- and 3D.

Solving large-scale time-harmonic Helmholtz systems, given the constraints associated with inversion, is an extremely challenging problem. The system is indefinite, and we have to deal with many sources (right-hand sides) and many linearizations as part of the inversion. This means that we need a formulation with a small memory footprint, a small setup cost, and a preconditioner that is easy to implement and works for different types of wave physics. To control computational costs, we also need a formulation that reduces the number of sources and gives us control over the accuracy, making the simulations suitable for an inversion framework that works with subsets of data (shots) at reduced accuracy. To meet these constraints, Tristan van Leeuwen implemented a 3D simulation framework based on Kaczmarz sweeps [46]. This solver is part of our efficient 3D full-waveform inversion framework with controlled sloppiness [46,47]. As part of his MSc research, Art Petrenko implemented this algorithm on the accelerators of Maxeler [48,49]. Rafael Lago extended our approach by using minimal-residual Krylov methods to solve the acoustic time-harmonic wave equation [50]. Rafael also worked on improved discretizations of the Helmholtz system and on more sophisticated preconditioners. We also implemented proper discretizations of the wave equation along with implementations of gradients, Jacobians, and Hessians that satisfy Taylor expansions of the objectives in 2D, 3D, and for the multiparameter case. These implementations, which include forward solves, are essential components of wave-equation based inversions [51,52]. Finally, we investigated depth extrapolation with the full wave equation [53]. Outcome: A practical implementation of a scalable, object-oriented, parallelized simulation framework in 2- and 3D for time-harmonic wave equations.
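A single Kaczmarz sweep, the building block of the solver mentioned above, successively projects the iterate onto the hyperplane defined by each row of the system. The dense toy version below is ours, for illustration only; the actual 3D solver organizes such sweeps over row blocks in parallel and embeds them in a Krylov iteration.

```python
import numpy as np

def kaczmarz_sweep(A, b, x, relax=1.0):
    # One Kaczmarz sweep for A x = b (complex-valued, e.g. Helmholtz):
    # project x onto the solution hyperplane of each row in turn.
    for i in range(A.shape[0]):
        a = A[i]
        x = x + relax * (b[i] - a @ x) / np.real(a @ a.conj()) * a.conj()
    return x
```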

Compressive wave-equation based imaging and inversion

Objectives: Design and implementation of an efficient and robust wave-equation based inversion framework leveraging recent developments in ML, CS, sparse recovery, robust statistics, and optimization.

DNOISE II research has also made significant contributions to “high-frequency” reflection-based imaging that allow us to carry out inversions at greatly reduced costs while removing sensitivity to starting models. Xiang Li worked on sparsity-promoting imaging with CS [54] and approximate message passing [55] techniques that yield an inverted image at roughly the cost of a single reverse-time migration with all data. We consider this a major accomplishment. Ning Tu built on this work and developed a method to image with multiples and source estimation, once again at the cost of roughly one reverse-time migration with all data [56,57]. This effectively removes the computational costs of EPSI and is widely considered a fundamental breakthrough. Tristan and Rajiv also made significant contributions to wave-equation migration velocity analysis with full-subsurface-offset extended image volumes by calculating the action of these volumes on probing vectors [58]. Because it uses random matrix probing, this technique avoids forming these computationally prohibitive extended image volumes explicitly. The method allows us to conduct linearized amplitude-versus-offset (AVO) analysis, including geologic dip estimation [59]. Random matrix probing is also used in randomized SVDs and in preconditioning [60] of the wave-equation Hessian. Finally, Lina Miao worked on imaging with depth-stepping using randomized hierarchical semi-separable matrices [61]. This approach has advantages over reverse-time migration because it treats turning waves differently [62].
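The principle behind random matrix probing is that information about an operator available only through matrix-vector products can be estimated from a few random probes. The toy diagonal estimator below illustrates this; it is a standard Rademacher-probe construction, not the extended-image-volume machinery of [58], and `apply_E` is a hypothetical callable.

```python
import numpy as np

def probe_diagonal(apply_E, n, n_probes=50, seed=0):
    # Estimate diag(E) for an implicit operator E (think: an extended
    # image volume touched only through products E @ w) from random
    # Rademacher probes: the expectation of w * (E @ w) is diag(E).
    rng = np.random.default_rng(seed)
    acc = np.zeros(n)
    for _ in range(n_probes):
        w = rng.choice([-1.0, 1.0], size=n)
        acc += w * apply_E(w)
    return acc / n_probes
```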

Our team has also made important progress in full-waveform inversion. First, we firmly established randomized source encoding [63] as an instance of stochastic optimization [64,65]. Tristan extended this work to the marine case [66], using results on batching [67] derived by Michael Friedlander and Mark Schmidt that give conditions for the convergence of (convex) optimization problems. This research is essentially based on the “intuitive” premise that data-misfit and gradient calculations do not need to be accurate at the beginning of an iterative optimization procedure, when the model explains the data poorly. This work was implemented by WesternGeco and several others and has led to four- to five-fold speedups of full-waveform inversion, making the difference between loss and profit. In our extension to the marine case [66,68], Tristan also introduced a new method for on-the-fly source estimation [69] that formed the basis for his later research with Sasha Aravkin on nuisance-parameter estimation with variable projection [70]. Nuisance parameters include unknown source scalings, weightings, and certain statistical parameters, including the parameters of the Student’s t distribution. Our work on source estimation with variable projection has produced several publications [71] and has been integrated into Total SA’s production full-waveform inversion codes [72].
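The batching idea can be sketched in a few lines: approximate the full gradient, a sum over all shots, by a random subset and grow the subset as the inversion progresses. The code below is a schematic illustration with hypothetical names (`grad_per_shot` stands in for a per-shot adjoint-state gradient), not the implementation of [66,67].

```python
import numpy as np

def batched_gradient(grad_per_shot, n_shots, batch_size, rng):
    # Stochastic-optimization sketch for FWI: sample a random subset of
    # shots and rescale so the subset gradient is an unbiased estimate
    # of the full gradient. Early iterations use a small batch_size; the
    # batch grows as the model starts to explain the data.
    batch = rng.choice(n_shots, size=batch_size, replace=False)
    g = sum(grad_per_shot(i) for i in batch)
    return g * (n_shots / batch_size)
```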

To make our inversions more stable with respect to outliers, Sasha introduced robust misfit functions, such as those based on the Student’s t distribution. This work resulted in peer-reviewed conference papers [73–75] and in a generalization of earlier convergence results for batching techniques [76]. While successful, the proposed batching techniques were not constructive. In other words, these techniques did not provide an algorithm to select the appropriate batch sizes (the number of shots partaking in the optimization) and the accuracy of the wave simulations as the inversion progresses. To overcome this problem, Tristan proposed a practical heuristic approach that was presented at major conferences and that has been communicated to the mathematical and geophysical literature [47,77].

Aside from relying on increasing the batch size to control the errors, Xiang adapted our accelerated sparsity-promoting imaging techniques to speed up computations of Gauss-Newton updates of full-waveform inversion. This included an extension to marine data and was presented at several conferences [78–80] and as a letter to GEOPHYSICS [81]. Xiang applied this method to Chevron’s blind Gulf of Mexico dataset and presented this work [82] at several full-waveform inversion workshops.

While these approaches to full-waveform inversion addressed important problems regarding computational cost, estimation of nuisance parameters, and robustness to outliers, this inversion technique remains notoriously sensitive to cycle skipping and, as a consequence, to the accuracy of the starting models. This sensitivity severely limits the applicability of the technology, since accurate starting models are generally not available. To derive a formulation that is less prone to cycle skipping, Tristan replaced the PDE constraint of the “all-at-once” methods with a two-norm penalty, instead of eliminating this constraint as in reduced adjoint-state methods [83]. This approach leads to an extension—i.e., an increase of the degrees of freedom—where the unknowns are now the wavefields for each shot and the earth model. While keeping track of these wavefields is computationally not feasible in the “all-at-once” approach, Tristan overcame this problem with his variable projection technique. In this method, wavefields are calculated for each shot that fit both the physics (the PDE) and the observed data, followed by a gradient step that minimizes the PDE residual. This method differs from existing approaches—i.e., the “all-at-once” [83] or contrast-source method [84]—because these wavefields can be calculated for each shot independently, avoiding storage of all wavefields. There are strong indications that this approach is less sensitive to starting models while having additional advantages, including sparse Hessians and less storage since there is no adjoint wavefield. This work, which we refer to as Wavefield Reconstruction Inversion (WRI), appeared as an express letter [85], has been submitted for publication [86], and was presented at several conferences and invited talks at workshops around the world [87,88]. We also filed a patent on this work [89]. Finally, WRI has been extended by Bas Peters to the multi-parameter case [52] and by Ernie to include total-variation regularization via scaled projected gradients [90]. We consider this groundbreaking because there are clear indications that the formulation is less sensitive to cycle skips. This work will also be presented in the EAGE Distinguished Lecturer Programme 2015 & 2016. Outcome: A framework for wave-equation based inversions that is economically viable and that removes fundamental issues related to outliers, nuisance parameters, and, most importantly, sensitivity to cycle skips, thus relaxing some of the constraints on starting models.
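Schematically, and in simplified notation that condenses the formulation of [85] (with \(A(\mathbf{m})\) the discretized time-harmonic wave operator, \(P\) the detection operator, and \(\mathbf{q}_i\), \(\mathbf{d}_i\) the source and data of the \(i\)-th shot), WRI replaces the PDE constraint by a penalty:

\[
\min_{\mathbf{m},\,\mathbf{u}_i}\;\sum_i \|P\mathbf{u}_i-\mathbf{d}_i\|_2^2 \;+\; \lambda^2\,\|A(\mathbf{m})\mathbf{u}_i-\mathbf{q}_i\|_2^2 .
\]

For fixed \(\mathbf{m}\), each wavefield is eliminated shot by shot via the variable projection

\[
\bar{\mathbf{u}}_i \;=\; \arg\min_{\mathbf{u}}\;
\left\|
\begin{pmatrix} \lambda A(\mathbf{m}) \\ P \end{pmatrix}\mathbf{u}
-
\begin{pmatrix} \lambda \mathbf{q}_i \\ \mathbf{d}_i \end{pmatrix}
\right\|_2 ,
\]

after which the model update is driven by the PDE residual \(A(\mathbf{m})\bar{\mathbf{u}}_i-\mathbf{q}_i\). Because each \(\bar{\mathbf{u}}_i\) is computed independently, no wavefields need to be stored across shots.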

Parallel software environment

Objectives: Development and implementation of a scalable parallel development environment to test and disseminate practical software implementations of our algorithms in 2- and 3D to our industrial partners. Team: Henryk Modzelewski (senior programmer) and a team of co-op undergraduate students have been responsible for parallel hardware/software development and maintenance for the DNOISE II research project.

During DNOISE II, we developed an object-oriented and scalable framework that has put us well ahead of the competition. As a result, our software development times are roughly half those of our competitors, while we are able (thanks to the object-oriented code) to develop more sophisticated high-level (e.g., convex solvers and linear algebra in parallel Matlab) and low-level (e.g., row projections on Maxeler hardware) components within an overarching, integrated, and verifiable software framework [51]. Aside from enabling our students to concentrate on the real problems at hand, the framework—including parallel SPOT (a linear-operator toolbox), data containers, and map-reduce algorithms (based on SWIFT)—also allows our researchers to develop numerous parallelized software prototypes. This results in algorithms that are adaptable, easy to understand, and, above all, scalable. During the DNOISE II project, we produced 25 software releases.
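To illustrate what a SPOT-style abstraction buys, here is a minimal, hypothetical Python analogue of a matrix-free linear operator: an object that multiplies and transposes like a matrix while only storing callables. This is our illustrative sketch, not the SPOT toolbox itself (which is Matlab-based).

```python
import numpy as np

class LinOp:
    # Matrix-free linear operator: behaves like a matrix in products
    # and adjoints, but stores only forward/adjoint callables.
    def __init__(self, shape, forward, adjoint):
        self.shape, self._f, self._a = shape, forward, adjoint

    def __matmul__(self, x):              # y = Op @ x
        return self._f(x)

    @property
    def T(self):                          # adjoint operator
        m, n = self.shape
        return LinOp((n, m), self._a, self._f)

# Example: an orthonormal FFT "matrix" that is never formed explicitly.
n = 1024
F = LinOp((n, n),
          lambda x: np.fft.fft(x, norm="ortho"),
          lambda y: np.fft.ifft(y, norm="ortho"))
x = np.random.default_rng(0).standard_normal(n)
assert np.allclose(F.T @ (F @ x), x)      # adjoint = inverse here
```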

During DNOISE II, our researchers also worked extensively on blind field and synthetic data case studies provided to us by industry. We did this work with Professor Andrew Calvert and Professor Eric Takougang, as well as with Professor Mike Warner (Imperial College London) and several members of our research team.

Research dissemination

Aside from presenting our work in 50 journal papers, 100 expanded abstracts, and more than 200 presentations at international and our own conferences, we have presented our findings to research communities outside geophysics. These latter presentations include prestigious conferences in signal processing and applied mathematics such as the International Conference on Acoustics, Speech and Signal Processing (ICASSP), the IEEE Global Conference on Signal and Information Processing (GlobalSIP), IEEE Statistical Signal Processing, Signal Processing with Adaptive Sparse Structured Representations (SPARS), the International Conference on Sampling Theory and Applications (SampTA), and SPIE Wavelets and Sparsity. Many of these presentations were invited talks and garnered widespread recognition of our work across a variety of scientific disciplines.

For more details on our work on seismic data acquisition, processing, modelling, imaging, migration velocity analysis, full-waveform inversion, CS, and optimization, please refer to the Seismic Laboratory for Imaging and Modelling (SLIM) website, our mind map or the most recent progress report.

DNOISE III

Research Objectives

The field of Exploration Seismology (ES) is in the midst of a “perfect storm”. Conventional oil and gas fields are increasingly difficult to explore and produce, calling for more complex wave-equation based inversion (WEI) algorithms that require dense, long-offset samplings. These requirements result in exponential growth in data volumes and prohibitive demands on computational resources. As part of DNOISE III—Exploration Seismology in the Petascale Age—we will fundamentally rethink and reinvent the field of computational ES to meet these challenges via a truly cross-disciplinary research program that (i) builds on and further strengthens DNOISE’s proven track record in driving innovations in seismic data acquisition, processing, and wave-equation based inversions, such as reverse-time migration, migration velocity analysis, and full-waveform inversion; (ii) continues to leverage developments in mathematics (e.g., CS), large-scale optimization, and ML to optimally handle “Big data” sets; and (iii) addresses fundamental issues related to sampling, non-uniqueness of inversions, mitigation of risk, and integration with reservoir characterization.

To enhance and extend these research goals, we will significantly expand our research team and the scope of research in the DNOISE III project. Our primary aim is to define a completely new paradigm for computational seismology that is proven at industry scale, to enable 3D seismic experiments in complex geologies and unconventional plays such as shale gas, and to train a future generation of geophysicists by giving them a solid background in modern-day data science and SC. The main research objectives of DNOISE III are listed below and include elements that vary in risk and academic reward. These objectives are designed to solve both current and long-term challenges faced by industry.

I—Seismic data acquisition for wave-equation based inversion. Mainly due to the efforts of the DNOISE series of CRD grants, the oil and gas industry is at a watershed moment of accepting and implementing randomized [20,21] dense full-azimuth long-offset [91] acquisitions for both marine and land areas of exploration. While much progress has been made, fundamental questions remain, including “What is the role of fold (=redundancy) in randomized acquisition?”; “Can we control cost, optimize the sampling, and make the recovery robust with respect to calibration errors?”; and finally “How does randomized sampling affect time-lapse seismic measurements?” To answer these questions we will use a combination of CS and simulation-based acquisition design to produce optimized and physically-realizable acquisition schemes and performance metrics for 3D (time-lapse) seismic acquisition. Outcome: Cost-effective and optimized acquisition technology for WEIs that require large (>100k) channel counts on land and significant numbers (>10k) of cheap autonomous ocean-bottom nodes in marine environments.

II—Resilient wave-equation based inversion technologies. WEIs suffer from incomplete acquisition, surface multiples and source/receiver ghosts, excessive computational costs for complicated physics, sensitivity to starting models, and ill-posedness of multi-parameter inversions. Inverse problems in geophysics are often ill-posed, which leads to systems of equations that are ill-conditioned and difficult to solve. We plan to tackle these issues by investigating model-space structure and correlations among model updates as priors, by using techniques from stochastic optimization and batching, and by reformulating WEI using partial differential equation (PDE)-constrained and convex optimization approaches. Outcome: A fast, resilient, and scalable wave-equation based inversion methodology for complicated wave physics. This innovative technology will be less sensitive to incomplete data, starting models, ill-posedness, and calibration errors.

III—Technology validation on field datasets. Great progress was made in DNOISE II in the development of the next generation of seismic data acquisition, processing, and inversion technology. Validation of these technologies on realistic datasets [82] remains a crucial step towards the use of our technology by industry. As part of DNOISE III, a major effort will be devoted to the creation of versatile, automatic workflows and the validation of our technology on realistic 3D blind synthetic and field data sets. Outcome: A framework including practical workflows that validates DNOISE’s technology on industry-scale datasets.

IV—Risk mitigation. ES in industry is mostly aimed at mitigating risk. The industry is therefore risk-averse and slow to adopt innovative techniques such as the randomized acquisition, processing, and inversion algorithms that are part of this research project. This relatively slow acceptance is partly due to the lack of “error bars” on our inversion results. Quantification of risk is therefore instrumental for major industrial uptake of our technology. Outcome. CS and wave-equation based inversion methodologies that include uncertainty quantification.

V—Wave-equation based inversion meets reservoir characterization. Obtaining accurate inversions for elastic properties is only a first step towards understanding the complexities of both conventional and unconventional oil and gas reservoirs. Questions such as “Where to drill?” and “How to produce?” are typically not addressed. We plan to address these questions by combining a target-oriented inversion approach with techniques from deep learning, which models high-level abstractions in data with multiple nonlinear transformations. Outcome. A targeted wave-equation based information retrieval technology that is capable of mapping the lithology and time-lapse changes in the reservoir associated with oil and gas production.

VI—Randomized computations and data handling. WEI exhibits—by virtue of the independence of the different (monochromatic) source experiments—a “map-reduce” structure that can be exploited by randomized algorithms. While this separable structure imposes a natural way to parallelize and manage data flows, the gradient calculations themselves are expensive and exceedingly large in number. In addition, the gradient calculations must be synchronized and of similar accuracy at each iteration. By using recent developments in stochastic optimization, we will relax these requirements, allowing new approaches to parallelization and data handling. Outcome. A scalable computational inversion framework driven by gradient calculations that do not need to be synchronized, that work with small subsets of source experiments, that can vary in accuracy, and that can be decentralized.

In summary. By integrating these objectives into an overarching project, DNOISE III will ultimately provide scalable acquisition, processing, inversion, and classification methodologies that overcome the main challenges of modern-day ES. Our combination of breakthroughs in mathematics, data science and SC into a practical, computationally feasible, and scalable framework will enable our industrial partners to work with the massive seismic data volumes prevalent in this Petascale Age.

Research approach

The broadening of our research program into several key areas allows us to sustain our leading role in replacing the conventional seismic processing paradigm, where data is subjected to a series of sometimes ad hoc sequential operations, with a new inversion paradigm. By adapting breakthroughs in mathematics, theoretical computer science, and SC, acquisition and computational costs are better controlled, sensitivity to starting models is reduced, and risks are accurately assessed with our novel innovative methodology. Another essential requirement for the success of this research project will be our ability to develop and validate our technology on field data sets using our own computational resources and a $10M (17k CPU core) high-performance computing (HPC) facility in Brazil made available to us as part of the International Inversion Initiative (III). Having access to this world-class HPC facility puts our group in a unique position to make fundamental advances in ES and will allow our group to train students to be the future leaders in geophysics and data science, in industry and academia.

Research team. Our core team includes 4 senior tenured and 3 first-year tenure-track faculty members from UBC EOAS (the PI, Felix J. Herrmann), the SFU Department of Earth Sciences (Andrew Calvert), UBC Mathematics (Ozgur Yilmaz and Yaniv Plan), UBC Computer Science (Chen Greif and Mark Schmidt), the UBC Department of Electrical and Computer Engineering (Sudip Shekhar), and the Department of Mathematical and Industrial Engineering at Ecole Polytechnique de Montreal (Dominique Orban). This diverse team, with experts in theoretical and observational ES (the PI and Andrew Calvert), CS (Ozgur Yilmaz and Yaniv Plan), SC and optimization (Chen Greif and Dominique Orban), ML (Mark Schmidt), and integrated circuits (Sudip Shekhar), covers all areas essential to ES, ranging from acquisition design, including hardware for the next generation of field equipment, to modern tools from numerical linear algebra designed to handle the extremely large, expensive-to-evaluate, and data-intensive problems in computational and observational ES. The team’s areas of expertise overlap sufficiently to accommodate changes in the team or interruptions because of leaves.

Our diverse research also allows us to tackle a wide variety of problems and gives the students access to faculty with highly complementary research skills and backgrounds. Despite varied research foci, we have built a tight-knit, cross-disciplinary research team that also includes (under)graduate students, post-doctoral fellows (PDFs), scientific personnel, and local and international collaborators. These additional co-investigators (co-Is) include Professor Rachel Kuske (UBC Mathematics), Professor Ben Recht (UC Berkeley), Dr. Tristan van Leeuwen (Utrecht University), Dr. Dirk-Jan Verschuur (Delft University of Technology), Dr. Sasha Aravkin and Dr. Ewout van den Berg (IBM Watson), Dr. Hassan Mansour (Mitsubishi Research), Professor Rayan Saab (UCSD), Dr. Gerard Gorman and Professor Mike Warner (Imperial College London (ICL)), and Professor Michael Friedlander (UC Davis). As part of the International Inversion Initiative (III), we have also entered into formal collaborations with Professor Mike Warner’s group at ICL.

Managing a large, cross-disciplinary team is always a challenge. Our research collaboration espouses Alan Hastings’ suggestion in “Key to a Fruitful Biological/Mathematical Collaboration” (SIAM News, 2014):

“These collaborations succeed only when there is a common language, which means that participants must work to learn as much as possible about all fields involved.”

We have successfully overcome this challenge, and will continue to do so, by organizing a number of DNOISE III related activities, including weekly seminars, weekly field-data meetings, bi-weekly software development meetings, and regular meetings amongst the PIs once per term. The co-PIs in charge of the main research activities will meet at least three times per term with their teams. Co-advising students and fostering team-member collaborations—e.g., by teaming up PDFs with graduate students—have also proven highly effective. In our previous DNOISE collaborations, these management and training approaches have resulted in a world-class research program that is widely recognized for its scientific contributions, uptake of its innovations, a strong journal publication record (a total of 65 journal papers and 150 conference papers), extensive international collaborations, and excellent training of highly qualified personnel (HQP), exceeding 60 in total. These successes have resulted in a total of $3.2M in cash support from our (now 14) industrial partners and $378k in in-kind contributions.

Our weekly meetings, which involve all DNOISE team members, have particularly contributed to the development of a common language and generated numerous cross-fertilizations between research areas. (See our mind map for the connections.) These synergies have occurred among the PIs of the first DNOISE grants and among other faculty collaborators, as demonstrated by the expansion of our research team. These collaborations have resulted in an increasing stream of publications while engaging our students with a wide range of cross-disciplinary researchers.

Research management. Effectively leading and synthesizing a large, varied research project with many moving parts—including international collaborations, organization of industry consortium meetings attended by all the CRD participants, working with industry, coordinating a large research team, and delivering on contracts and software releases—is critical for realizing the true potential of the proposed innovations. To support the team so that they can deliver on these goals, the proposal budget includes (aside from existing part-time program support and several IT, web, and scientific-programming support staff) a project-management position and a grant-tenure position. The project-management person will assist with (i) setting up schedules and work scopes, so that DNOISE III deadlines and milestones are met; (ii) producing annual DNOISE III progress reports in tandem with project leaders; (iii) research dissemination via the SLIM website; and (iv) coordination of the SINBAD consortium meetings that bring in our industrial partners. The grant-tenure hire will assist with general student and post-doc supervision and with coordinating our industrial contacts. We are currently negotiating with the UBC administration to create this grant-tenure position, whereby UBC will contribute to financial support for the position in return for teaching and administrative services rendered to the EOAS Department.

Deliverables. In addition to disseminating our research results through refereed journal publications, refereed expanded conference presentations and papers, technical reports, and annual meetings with our industrial partners, we will prepare frequent software releases and conduct trials on 3D field data sets. These activities will involve all participants in the CRD, in varying subsets of researchers. Our software releases will be documented and distributed via GitHub, a professional code repository, or via conventional means.

Detailed Proposal

In the following, the action items and project leaders are bold-face numbered and appear on the Activity Schedule of Form 101 and in the Detailed Activity Schedule included in the Appendix of the Budget Justification, which also includes a Gantt chart with timelines for the HQP.

I—Seismic data acquisition for wave-equation based inversion

Although the DNOISE I and II projects have made significant progress in the design and implementation of randomized sampling and structure-promoting (e.g., transform-domain sparsity) recovery techniques, many fundamental and practical challenges remain, including convincing risk-averse management of the “optimality” of the survey design while being cognizant of practical limitations and the sampling requirements of 3D WEI. Key questions that need to be addressed to further support the adoption of the principles of CS [92–95] include: “How do noise and calibration errors affect randomized acquisition and wavefield recovery?”; “What is the impact of randomized acquisition on time-lapse seismic measurements?”; and “Why random sampling?”. To answer these questions, we propose to work on the following topics:

I(i) Randomized acquisition schemes. I(ia) We will build on conventional structure-promoting recovery techniques with transform-domain sparsity [3] and on the more recently developed matrix [7,22,96,97] and tensor completions [26,27,60,98,99]. These include recovery from missing traces and from short- [25] and long-time [13] jittered continuous marine acquisitions for towed arrays or ocean-bottom nodes (OBN), where single-vessel periodic firing-time recordings are replaced by (multi-vessel) continuous recordings with randomized firing times. In these approaches, coherent subsampling-related artifacts such as aliases and source crosstalk are shaped into incoherent noise, which the structure-promoting convex optimization can then remove while restoring the underlying signal. I(ib) We will investigate the effect of ambient noise as a function of the “source fold”—i.e., the number of sources partaking in simultaneous source experiments. I(ic) We will analyze the performance of our recovery algorithms when including the regularization necessary for datasets acquired on unstructured (irregular) grids [23,100]. I(id) We will continue to look for structure-revealing representations via permutations [101], midpoint-offset transformations [7,22], the Hierarchical Semi-Separable (HSS) matrix format [102], and the Hierarchical Tucker Tensor (HTT) format. We will also investigate structure-promoting optimizations, including SVD-free matrix factorizations [7,22] and manifold optimization in the HTT format [26,101]. Our approaches differ significantly from existing Cadzow [103,104] and alternating tensor optimization techniques [98] because our methods are SVD-free and work on either complete data or on multi-level subsets of data, as in the HSS matrix and HTT formats. Finally, I(ie), we plan to theoretically analyze randomized time-lapse acquisitions [16–18], extending theoretical results from distributed CS [105].

I(ii) Performance metrics. “Optimal” acquisition design hinges on several key factors that include economics, practicality, theoretical and actual recovery quality, speed, noise sensitivity, and perceived risks. As part of this research project, we want to investigate I(iia) a number of quantitative performance measures that will allow us to compare different acquisition designs and assess the recovery quality in several domains (a minimal example follows below). We will also I(iib) compare the performance of randomized acquisitions that make few assumptions to more data-adaptive sampling methods such as blue-noise (jitter, [23,106]) importance sampling [107] and experimental design [108,109].
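As one concrete, if basic, example of such a quantitative measure, recoveries are routinely scored by their signal-to-noise ratio against a ground-truth volume; the helper below is an illustrative placeholder, not a finalized DNOISE metric.

```python
import numpy as np

def recovery_snr_db(x_true, x_rec):
    # Signal-to-noise ratio of a recovery in dB; higher is better.
    # In practice this would be evaluated per domain (shot, receiver,
    # midpoint-offset) and averaged over survey realizations.
    return 20 * np.log10(np.linalg.norm(x_true) /
                         np.linalg.norm(x_true - x_rec))
```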

I(iii) Simulation-based acquisition design. While CS gives fundamental insights, measuring and optimizing its performance in realistic settings—e.g., via “brute force” running of different realizations and choosing the best performing—remains challenging. Therefore, we will supplement our efforts under I(i) (see above) by conducting acquisition simulations on realistic synthetic models. For 3D (time-lapse) seismic acquisition this is computationally expensive; however, with our access to high-performance computing infrastructure, we will be able to carry out experiments to I(iiia) demonstrate the validity of different randomized acquisitions and I(iiib) optimize acquisitions for certain geological settings, including time-lapse seismic data. These research projects require highly specialized and challenging simulations and performance metrics (under I(ii)) for different inversion modalities—e.g., full-waveform inversion (FWI) and reverse-time migration (RTM) differ in requirements and quality control.

I(iv) Survey design principles. We will combine results from I(i–iii) and derive I(iva) feasible, field-realizable 3D (time-lapse) acquisition schemes that balance cost, sampling density for marine settings [13,25], and the degree of repeatability in time-lapse seismic data sets [16–18]. These principles will include qualitative “do’s and don’ts” and quantitative constraints (e.g., “for this target recovery error, this is the recommended sampling density”). These design principles will strongly encourage our industry partners to adopt our sampling technology. In particular, we will focus our efforts on developing I(ivb) economical acquisition scenarios that leverage marine acquisition with large numbers of inexpensive OBNs or vertical arrays, and land acquisition with high-channel-count wireless systems. We will also further enhance our I(ivc) time-lapse acquisition approach by explicitly exploring commonalities among different vintages [16–18].

I(v) Calibration. Structure-promoting recovery techniques depend critically on accurate modelling of the acquisition: even small multiplicative phase or coupling errors in the sampling matrices can be detrimental to recovery. As recent successes in increasing the productivity of acquisition with CS demonstrate [20], these calibration issues can be overcome, but at an economic price. Requiring expensive, and technically difficult to achieve, calibrations during acquisition could prevent our method from reaching its true economic potential. To better understand what level of calibration is necessary and to make our acquisition schemes immune to certain acquisition errors, we will first I(va) study the effects of realistic multiplicative errors, such as timing errors in jittered marine acquisition. Next, I(vb), we will incorporate lifting techniques into our recovery algorithms [110] (sketched schematically below). These lifting techniques are equivalent to reformulating multiplicative non-convex problems, such as “blind deconvolution” [110] or “phase retrieval” [29], into convex problems that do not have local minima. However, these approaches are quadratic in the unknowns and therefore computationally very challenging. When successful, these techniques will allow us to estimate and then compensate for certain multiplicative calibration errors, such as coupling issues in geophones.
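Schematically, and in the spirit of [110] (our simplified notation), lifting trades bilinearity for dimension. With unknown short vectors \(\mathbf{h}\) (e.g., a calibration filter) and \(\mathbf{m}\) (the signal coefficients), each measurement

\[
\hat{y}_\ell \;=\; \bigl(\mathbf{b}_\ell^{*}\mathbf{h}\bigr)\bigl(\mathbf{c}_\ell^{*}\mathbf{m}\bigr)
\]

is bilinear in \((\mathbf{h},\mathbf{m})\) but linear in the entries of the rank-one matrix \(X=\mathbf{h}\mathbf{m}^{*}\). Recovering \(X\) by nuclear-norm minimization subject to these linear measurements is a convex problem free of local minima, at the price of squaring the number of unknowns, hence the computational challenge noted above.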

I(vi) Surface-related multiples. Aside from making seismic data acquisition more cost-effective and robust with respect to calibration errors and noise, we will improve the recovery from incomplete data by including the physics of the free surface. By including the multiple-prediction operator in the recovery, we will investigate using the bounce points at the surface as secondary sources (the underlying data relation is recalled below). For 3D seismic data, this will require on-the-fly I(via) interpolations of receivers with HSS matrices (as in I(id) above, [42]); I(vib) infill of missing near offsets [39]; and the implementation of computationally efficient multilevel Estimation of Primaries by Sparse Inversion [37,38] in 3D. I(vic) In addition to providing additional information, the inclusion of the physics of surface-related multiples in our mathematical formulations [94] may give us the possibility to solve the blind source estimation problem theoretically for problems that involve the free surface. Accomplishing a solution to “blind deconvolution problems with feedback” would be a major breakthrough, providing a thorough justification of the source-estimation techniques proposed by our group.
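For reference, the free-surface feedback relation behind EPSI can be written, in monochromatic detail-hiding matrix notation and up to conventions that vary between papers (our paraphrase of the model underlying [37]), as

\[
P \;=\; G\,\bigl(S + R\,P\bigr), \qquad R \approx -I,
\]

where \(P\) is the recorded upgoing wavefield, \(G\) the surface-free Green's function, \(S\) the source signature, and \(R\) the free-surface reflectivity. The surface bounce points act as secondary sources through the term \(R\,P\), and EPSI jointly inverts this relation for a sparse \(G\) and the unknown \(S\), which is why the free surface carries extra information for both recovery and source estimation.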

I(vii) Next-generation (wireless) hardware. To meet the demands of modern-day WEI technology, a new sampling paradigm is needed. While the projects summarized in I(i)–I(vi) will be important steps, a similar effort is required in the development of the next generation of OBNs and wireless land systems. I(viia) We will design hardware that exploits our knowledge of CS, quantization, and large-scale optimization to create techniques that limit communication, increase the storage capacity of autonomous devices, and improve the battery life of field equipment. I(viib) We will also continue to investigate “sign bit” seismic acquisition [111] using recent work on 1-bit matrix completion [112], which could have a major impact on how wavefields are discretized since it would massively reduce data storage and the cost of field equipment.

In summary: With a continuing track record of contributions to various aspects of CS—ranging from theory development to the design of economic and practical acquisitions and the implementation of scalable wavefield-recovery technology based on large-scale solvers—we remain world leaders in ES, improving upon current, often ad hoc solutions to data-acquisition challenges. Project leaders. Ozgur (Co-PI); Felix; Yaniv (Co-PI); Sudip (Co-PI, I(viia), I(viib)). Project team. Navid Ghadermarzy (PhD, I(ia), I(ib)); Rajiv (PhD, I(ia), I(id)); Curt da Silva (PhD, I(ia), I(id)); Haneet (PhD, I(ia), I(ib), I(iiia), I(iva), I(ivb)); Oscar Lopez (PhD, I(ia), I(ic)); Felix O. (PhD, I(ie), I(iiia), I(iiib), I(iva), I(ivc)); Philipp Witte (PhD, I(ie), I(iiia), I(iiib)); new PDF 1 (I(iia), I(iib), I(iiia), I(iiib), I(iva), I(ivb)); new PDF 2 (I(iiib)); Rong (PDF, I(va), I(vb), I(vic)); Ernie (PDF, I(ia), I(va), I(vb)); Mathias Louboutin (PhD, I(via), I(vib)); new PhD 2 (I(viib)).

II—Resilient wave-equation based inversion technologies

Wave-equation based inversions such as linear reverse-time migration (RTM), nonlinear wave-equation migration velocity analysis (WEMVA), and full-waveform inversion (FWI) have, with the advent of fast compute, become the main inversion modalities utilized by industry. While these types of inversions have important advantages, especially in 3D, major difficulties remain regarding exorbitant demands on sampling and computation, non-uniqueness, and the lack of versatile automatic workflows. Although the division is somewhat arbitrary—the boundaries between these inversion modalities are blurring—we separate our efforts into “high-frequency” reflection and “low-frequency” turning-wave inversions:

II(i) “High-frequency” reflection-driven inversions. In many ways, reflection-based inversions constitute the “old” imaging and velocity-analysis techniques, which are prone to the adverse effects of acquisition footprints, surface-related multiples, unknown sources, and sensitivity to timing errors. When given accurate velocity models and densely sampled data, surface-related multiples and primaries can be jointly imaged via randomized sparsity-promoting optimization that produces artifact-free, true-amplitude images and source estimates in 2D at the cost of roughly one RTM with all the data [57]. II(ia) We will extend this groundbreaking work to 3D; II(ib) incorporate multiples into WEMVA with randomized probings [58]; and II(ic) study how surface-related multiples improve nonlinear migration—i.e., FWI with accurate starting models. To overcome missing near offsets, we will incorporate II(id) near-offset predictions [39] and on-the-fly interpolations. In all cases, we will use the wave equation itself to carry out the otherwise prohibitively expensive dense matrix-matrix multiplies that undergird EPSI [37]. II(ie) We will use this linearized inversion methodology to explore common information among (4D) time-lapse images to improve their image quality [19]. II(if) We will remove timing errors by extending the interferometric imaging of [32] to higher order, as in recent work by Rong [31].

II(ii) Turning-wave and reflection-driven inversions. Directly inverting for the earth parameters, given adequately sampled seismic data, is in many ways the “holy grail”, since this would potentially give us access to elastic properties given sufficient computational resources and accurate starting models. Unfortunately, neither is available. II(iia) To reduce computational requirements, we will utilize a frugal FWI approach [47,82], which uses data and numerical accuracy only as needed, thereby reducing computing and data demands. During FWI we answer the question “How do we maximize progress towards the solution given a limited computational budget—i.e., limited passes through the data and limited accuracy?”. We utilize our Wavefield Reconstruction Inversion (WRI) [52,85,87,88] to avoid cycle skips due to inaccurate starting models. We avoid these local minima by extending the search space. Contrary to recently proposed extensions by Symes [113] and Biondi [114], who make velocity models subsurface-offset or time-lag dependent, and by Warner [115], who introduced a matched filter in Adaptive Waveform Inversion, we derive our extension directly from the full-space “all-at-once” approach [83]. Instead of eliminating the wave-equation constraint [83,116], WRI involves a non-convex alternating optimization via variable projections, where wavefields that fit both the wave equation and the observed data are solved for first, followed by an update on the model parameters that minimizes the “PDE misfit”, assuring adherence to the physics. Compared to the full-space “all-at-once” [83] and “contrast-source” [117] methods, no storage of and updates to all wavefields are necessary, which is a major advantage. II(iib) We will continue to develop WRI, extend it to 3D, and evaluate it against other extension methods, and II(iic) exploit its special structure—e.g., its sparse Gauss-Newton Hessian is highly beneficial to multi-parameter WRI [52] and uncertainty quantification (see Risk Mitigation below). II(iid) While there are strong indications that WRI is less prone to cycle skips, there is no guarantee that it arrives at the global minimum. Motivated by recent breakthroughs in “blind deconvolution” [110] and “phase retrieval” [29], we will explore the bilinear structure of WRI in an alternating non-convex optimization procedure that includes the computation, directly from the data, of starting models that lead to the global minimum [118]. II(iie) Following Rong [31], we will also exploit higher-order interferometric FWI, which is another instance of lifting. II(iif) Finally, we plan to extend our work on inverting time-lapse data [19] to nonlinear FWI.

II(iii) Mathematical techniques. WEI is challenging because it calls for various sophisticated techniques, including: II(iiia) implementation of both frequency- and time-domain “simulations for inversion” solvers that have low setup costs, controlled accuracy, and a limited memory footprint. This work requires preconditioning of time-harmonic Helmholtz systems with conjugate gradients [47] or minimal residuals [50], and preconditioners based on shifted Laplacians [119–121]. In addition, we will compare time-stepping methods to time-harmonic approaches, extending recent work by Knibbe [121] to FWI. To further reduce computational costs, we will also control the accuracy of forward and adjoint solves. Finally, we will evaluate recently proposed model reductions [122] and multi-frequency simulations with shifts [123], and investigate possibilities of using randomized sampling techniques in checkpointing [124] to limit the storage and re-compute costs of time-stepping. II(iiib) WRI works well in 2D, but its extension to 3D is arduous since it involves data-augmented wave solves. Aside from having a PDE block, these systems have data blocks that make them harder to solve. Possible solutions we will explore include preconditioning with incomplete lower-upper (LU) factors, which calls for efficient direct and iterative solvers [125–127]; deflation [128,129] to remove outlying eigenvalues; randomized dimensionality reduction [130]; and alternative saddle-point optimization formulations such as augmented Lagrangians [131,132] and quasi-definite systems [133]. II(iiic) Our WEI implementations are based on applying the chain rule to compute gradients that are consistent with the discretization of the wave equation (PDE), verifiable, and suitable for second-order schemes such as quasi-Newton [134]. However, these valuable properties come at the expense of certain benefits of continuous formulations. We want to know “When is it best to discretize?”. II(iiid) Inverse problems benefit from prior information. Therefore, we will incorporate regularization into our 3D WEIs via a modified Gauss-Newton method with \(\ell_1\) for conventional FWI with dense randomized Jacobians [135]; via a Gauss-Newton method based on quadratic approximations and scaled projected gradients for WRI [90]; and via projected quasi-Newton with randomization [47,67] for FWI and WRI [52]. Specifically, we will study projections onto convex sets, including box constraints, \(\ell_1\) and total-variation norms, as well as other, possibly non-convex, regularizations (a minimal sketch of the projection idea follows below). II(iiie) \(\ell_2\)-norm misfits are ill-equipped to handle modelling errors and correlations amongst different (monochromatic) source experiments. To overcome this shortcoming, we will derive formulations that capitalize on Student’s t misfits [76] and factorized nuclear-norm minimization [7]. II(iiif) Regularizations call for parameter selections, and we plan to make these selections free of user input by using properties of Pareto trade-off or “L” curves [76] and nuisance-parameter estimation [70].
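To illustrate the projection idea of II(iiid) in its simplest form, the sketch below alternates gradient steps with projections onto a box (e.g., physical bounds on slowness). It is a deliberately minimal stand-in for the projected quasi-Newton machinery cited above; `grad`, the step size, and the bounds are all placeholders.

```python
import numpy as np

def projected_gradient(grad, x0, lower, upper, step=1e-2, n_iter=100):
    # Projected gradient with box constraints: after each gradient step
    # the model is projected back onto the feasible set [lower, upper],
    # the simplest instance of projection onto a convex set.
    x = x0.copy()
    for _ in range(n_iter):
        x = np.clip(x - step * grad(x), lower, upper)
    return x
```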

In summary. During DNOISE II, we have firmly established our research team as one of the leading academic groups in computational ES with expertise in a wide range of topics in wave-equation based inversion (WEI). While perhaps ambitious, our research program includes concrete plans to control computational costs and non-uniqueness. If our approaches indeed prove successful—i.e., we can invert for the earth properties from more or less arbitrary starting models and adequate data—the question arises “Should we recover fully sampled data first, followed by wave-equation based inversion, or should we directly invert randomly subsampled data?”. We ask this question because acquisition of fully sampled data will likely remain elusive because of physical constraints. We therefore will make a concerted effort to answer this fundamental question [94] as part of this grant. Project leaders. Felix, Chen (Co-PI), Tristan (Co-I), Mark (Co-PI) and Dominique (Co-PI). Project team. Mathias (PhD,II(ia),II(ic),II(id)); Rajiv (PhD,II(ib),II(iiie)); Felix O. (PhD,II(ie),II(iif)); Rong (PDF,II(if),II(iid),II(iie)); Ernie (PDF,II(ia),II(if),II(iid),II(iiid),II(iiie)); New PDF 2 (II(iia)); Bas Peters (PhD,II(iib),II(iic),II(iiib),II(iiid)); Mengmeng Yan (PhD,II(iic)); Zhilong Fang (PhD,II(iic)); Rafael (PDF,II(iiia)); new PhD 3 (II(iiia),II(iiib)); New PDF 3 (II(iiic),II(iiie),II(iiif)).

III—Technology validation on (3D) field datasets

The success of inverting seismic data for earth parameters depends on the proper conditioning of the data prior to and/or during the inversion, proper parameter settings, and “computational steering” carried out during the inversions [136]. Given our team’s access to large computational resources both at UBC and in Brazil, as part of our partnership in the International Inversion Initiative (III), we will develop, preferably automatic, workflows on (blind) synthetic and field datasets representative of different geological settings. Specifically, our research projects in this research area include:

III(i)Simulation-based verification of wave-equation based inversion. WEI, WEMVA, and FWI in particular, are not yet mature, “production ready” technologies and suffer from parameter sensitivities, non-uniqueness, and extremely high computational costs, especially when including elasticity, realistic velocity heterogeneity, and fracture-related anisotropy. During DNOISE II, we developed a number of core technologies that were designed to reduce the computational costs [76,82], automatically estimate parameter settings [70], and regularize the inversion problem [82,90,137]. III(ia) While initial tests on mostly 2D data are encouraging, these techniques have not yet been tested or fully implemented in 3D, made anisotropic, and applied to field data. III(ib) Also, our time-harmonic FWI framework has not been compared to other implementations, such as those developed by Imperial College London [138], another group involved in the III project in Brazil. III(ic) To gain more insight into the performance of DNOISE’s technology in 3D and how it compares with other technologies, we will pursue a series of industry-scale, simulation-based FWI studies where we have control over the model complexity and where we know the answer. For example, we will evaluate how “close” the starting model must be to the final model, particularly when using new approaches that are less sensitive to cycle-skipping. III(id) With the experience obtained in these controlled experiments, we will be in a good position to tackle problems on field data.

III(ii)Automatic parameter selection. The success of WEI often critically depends on certain unknown parameters such as unknown source scalings, structure-inducing regularization parameters, and noise levels. While the latter are relatively well understood for linear problems [76], the estimation of these parameters for nonlinear problems is typically more challenging. While nonlinear WEI problems will always require a certain level of interactive “trial and error” testing of parameters, we managed to automate a number of these parameter-setting exercises. We have also developed computationally efficient WEI techniques, based on certain automatic heuristics, that have not yet been thoroughly verified in 3D [47]. III(iia) To gain better insight into nuisance-parameter estimation [70] and efficient WEI [47], we will validate our nuisance estimation for source parameters and III(iib) for other nuisance parameters on 3D FWI for carefully chosen synthetic models. III(iic) Likewise, we will test 3D imaging with multiples [57], III(iid) compare frugal quasi-Newton [47] and efficient modified Gauss-Newton [90,137] techniques, which work with randomized subsets of data, to FWI with all data, and III(iie) extend the method of variable projection to other nuisance parameters such as weightings and different types of (non-convex) misfit functions, including the Student t misfit function or (spectral) matrix norms.
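
As a deliberately minimal illustration of eliminating one such nuisance parameter, here is the closed-form projection of an unknown complex source scaling, in the spirit of [70]; the function name is ours:

```python
import numpy as np

def projected_source_misfit(predicted, observed):
    """Least-squares misfit after eliminating an unknown source scalar.

    For phi(alpha) = 0.5 * ||alpha * predicted - observed||^2, the minimizer
    is alpha* = <predicted, observed> / <predicted, predicted> (variable
    projection), so the outer inversion never sees alpha explicitly.
    """
    alpha = np.vdot(predicted, observed) / np.vdot(predicted, predicted)
    r = alpha * predicted - observed
    return 0.5 * np.real(np.vdot(r, r)), alpha
```

Extending this idea to nuisance parameters without closed-form projections (weightings, non-convex misfits) is precisely what III(iie) targets.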

III(iii)Wave-equation based inversion on field data. The success of applying WEI to field data depends on a number of key factors that include being able to handle unmodelled phases and viscoelastic amplitude effects. A number of approaches will be developed, such as III(iiia) including more wave physics, III(iiib) removing unmodelled phases, e.g., via (transform-domain) masking, III(iiic) using robust misfit functions that are insensitive to outliers [75], or III(iiid) applying appropriate regularization. To date, most DNOISE WEI work has focused on marine field datasets, and this work is directly applicable to continuing exploration and development offshore eastern Canada. WEI has the potential to identify exploration drilling objectives that cannot be interpreted from conventionally processed seismic surveys (e.g., smaller reservoirs). With quantitative estimates of seismic parameters from WEI, including time-lapse surveys, improved reservoir characterization and optimized exploitation become possible, particularly where production methods that modify elastic parameters, such as enhanced recovery or hydraulic fracturing, are employed. In some areas, for example the foothills of the western Canadian sedimentary basin, onshore exploration can be limited by reflection data quality, but WEI of transmitted phases provides a complementary approach for deriving structural information given that sufficiently long offsets and low frequencies are recorded. Our research views field data acquisition, data inversion, and interpretation as an integrated process, and one result of this project is likely to be specific suggestions for modified future field acquisition that will enable practical WEI. We will seek Canadian datasets with limited confidentiality from our industry partners, preferably from the east coast, to contribute to the development of our computationally intensive, field-oriented WEI methods. The availability of large-scale computing infrastructure will allow us to derive results from these field data that cannot be achieved by other academic researchers.

In summary. By having access to virtually unparalleled computational resources, we will be in a position to thoroughly evaluate wave-equation based inversion technology on industry-scale synthetic and field data. This evaluation will lead to a maturing of the technology and to practical workflows. Project leaders. Andrew (Co-PI) and Felix. Project team. Zhilong Fang (PhD,III(ia),III(ic),III(iia),III(iib)); Mengmeng Yan (PhD,III(ib),III(ic)); Shashin Sharan (PhD,III(ic),III(id),III(iid)); new MSc (III(iiia)); Mathias (PDF,III(iic)); Rafael (PDF,III(iie)); New PhD 10 (III(iiib),III(iiic),III(iiid)).

IV—Risk mitigation

The above research projects give us a solid foundation for establishing WEI technologies. While WEI will undoubtedly yield estimates of earth properties of unprecedented quality and resolution, these results will only be of limited value if they do not include estimates of uncertainty to help industry mitigate risks in the exploration and production (E&P) of hydrocarbons. Determining quantitative assessments of risk, also known as uncertainty quantification (UQ) [139–141], is notoriously difficult for large-dimensional problems because of the excessive computational demands and the mathematical structure of CS and wave-equation based inversion (WEI). To study these difficult problems we will work on:

IV(i)Discrete Bayesian techniques. Relatively little is known regarding the UQ of individual components of structure-promoting CS recoveries [142]. For example, questions such as “What is the error bar for this particular recovered curvelet coefficient?” are virtually unanswered for compressible signals. IV(ia) We will extend model selection techniques to compressible signals, including UQ estimates in the physical domain. IV(ib) Techniques from belief propagation [143,144] have connections to Bayesian inference and may offer new ways to estimate the UQ of CS. For more general inverse problems, including FWI, more is known about estimating uncertainties and people have proposed Markov chain Monte Carlo (MCMC) methods [139–141]. However, these techniques are prohibitively expensive for realistic problems because of slow convergence. Unfortunately, exploiting the low-rank structure of the Hessian [140] is ineffective in ES, where Hessians are “full” rank. We propose to work on two methods in this area. IV(ic) For high frequencies the Gauss-Newton (GN) Hessian can be modelled as a pseudodifferential operator, which can be well approximated by matrix probing [28,60]. IV(id) We will exploit the property that the GN Hessian of WRI is a sparse matrix.
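
For reference, the standard Gaussian (Laplace) approximation that motivates IV(ic) and IV(id) reads, in generic notation of our choosing,

\[
\rho(\mathbf{m}\mid\mathbf{d})\;\approx\;\mathcal{N}\!\left(\mathbf{m}^{\star},\,\bigl(\mathbf{J}^{\top}\Sigma_{\mathrm{noise}}^{-1}\mathbf{J}+\Sigma_{\mathrm{prior}}^{-1}\bigr)^{-1}\right),
\]

with \(\mathbf{J}\) the Jacobian of the forward map at the MAP estimate \(\mathbf{m}^{\star}\), so that the GN Hessian \(\mathbf{J}^{\top}\Sigma_{\mathrm{noise}}^{-1}\mathbf{J}\) controls the posterior covariance. Any cheap approximation of this operator, whether via matrix probing (IV(ic)) or via the sparsity of the WRI GN Hessian (IV(id)), therefore translates directly into tractable UQ.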

IV(ii)Bayesian UQ for WRI. With model and data sizes in 3D seismic problems exceeding \(10^9\) and \(10^{15}\), respectively, discrete formulations for UQ become computationally prohibitive and basic implementations of MCMC will not work. For this reason, we will take ideas from variable projection [70] and apply them IV(iia) to alternating optimization with variable projections in general and IV(iib) to WRI [52,85,87,88] specifically, by combining these results with recent work on fast MCMC for functions [145].
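
Schematically (our paraphrase of this idea), the projection yields a posterior over the model parameters alone,

\[
\pi(\mathbf{m}\mid\mathbf{d})\;\propto\;\exp\bigl(-\phi_\lambda(\mathbf{m})\bigr)\,\pi_{\mathrm{prior}}(\mathbf{m}),
\]

with \(\phi_\lambda\) the projected WRI objective sketched under II(ii). Each sample evaluation then costs one data-augmented wavefield solve instead of a move in the joint model-and-wavefield space, which is what makes combining variable projection with function-space MCMC [145] plausible at this scale.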

In summary. Having access to uncertainty determinations for large-scale geophysical inversions will greatly facilitate the uptake of DNOISE’s technology by industry. Project leaders. Felix, Yaniv (Co-PI) and Rachel Kuske (Co-I). Project team. New PhD 5 (IV(ia),IV(ib)); Curt da Silva (PhD,IV(ib),IV(iia)); Zhilong Fang (PhD,IV(ib),IV(iib)); Chia Lee (PDF,IV(iia),IV(iib)).

V—Wave-equation based inversion meets reservoir characterization

E&P decisions by the oil and gas industry are not driven by estimations of elastic properties alone. Instead, these decisions are based on estimates of reservoir properties and their associated uncertainty. While seismic ES has, in principle, the capability to map lateral variations of these properties, the acceptance by reservoir engineers, who prefer to work with a variety of borehole and production data, has been slow because of our inability to include UQ and to translate seismic properties to relevant reservoir properties. As part of DNOISE III, we will change this with projects focussed on:

V(i)Target-oriented inversion and characterization. V(ia) We will investigate possible ways to conduct target-oriented inversions at the level of the reservoir using interferometric redatuming techniques [43] and V(ib) tie wave-equation inversions to rock properties and attributes that characterize the reservoir [146], such as porosity, permeability [147] and certain lithology indicators [148], by including (empirical) rock-physics relations in targeted WEI with UQ.

V(ii)Time-lapse inversions tied in with fluid flow. This project will be aimed at seismic monitoring of (un)conventional production and is challenging because it will require integration with the physics of fluid flow and fracturing [149–152]. We will investigate how to use reservoir simulations as optimization constraints for time-lapse inversions and how to integrate these inversions with randomized time-lapse acquisitions that exploit shared information amongst the vintages [16–18].
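
Schematically, and following the joint recovery model of [16–18] in our own notation, each vintage shares a common component with vintage-specific innovations:

\[
\begin{bmatrix}\mathbf{d}_1\\ \mathbf{d}_2\end{bmatrix}=
\begin{bmatrix}\mathbf{A}_1 & \mathbf{A}_1 & 0\\ \mathbf{A}_2 & 0 & \mathbf{A}_2\end{bmatrix}
\begin{bmatrix}\mathbf{z}_0\\ \mathbf{z}_1\\ \mathbf{z}_2\end{bmatrix},
\]

where the \(\mathbf{A}_j\) are the (randomized, non-replicated) acquisition operators, \(\mathbf{z}_0\) carries the information shared amongst the vintages, and the \(\mathbf{z}_j\) the time-lapse differences; inverting this system jointly with sparsity promotion is what exploits the shared information. Extending this linear picture to nonlinear inversions constrained by reservoir simulation is the subject of this project.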

V(iii)Information retrieval with deep learning. Translating inversions into tangible information that drives decisions “Where to drill?” and “How to produce?” remains the “holy grail”. We will investigate the use of, and the connections between, our work and deep learning with “group-invariant scattering” [153], which is linked to “lifting”. Deep learning is leading to unprecedented accuracies in computer vision [154] but is extremely data- and compute-intensive, demands we can meet with the computational resources to which we will have access.

In summary. Integration of seismic inversion and reservoir characterization is essential to meet current challenges in conventional and unconventional settings, which require lithology and other characterizations from seismic data to monitor fluid flow and time-lapse changes and to mitigate risks. Deep learning also gives us a unique opportunity to shed new light on these challenging problems. Project leaders. Felix, Ozgur, and Mark. Project team. New PhD 6 (V(ia),V(ib)); new PhD 7 (V(ii)); new PDF 4 (V(iii)).

VI—Randomized computations and data handling

Without our extensive experience in sophisticated optimization, object-oriented programming and scientific-computing techniques, we would not have become the world’s leading group in computational ES. To continue building on our strengths, we will work on:

VI(i)Stochastic optimization. Like ML, ES has to contend with massive amounts of data, but unlike ML, WEI also involves exceedingly large models and computationally expensive misfit and gradient calculations. By working with subsets of data (e.g., shots) and limited accuracy of wave simulations, significant progress has been made in controlling the cost of WEI. So far, this approach [47,82] has mostly been based on heuristics motivated by convex convergence analyses [155,156]. These results have provided fundamental insights but are of limited use because WEIs can be non-convex and can typically not be run to “convergence”. To address these issues, we will work on: VI(ia) optimization schemes that maximize progress during the first iterations by adapting recent innovations [157]; VI(ib) extensions of this approach to non-convex problems [158] that work VI(ic) asynchronously, so that late-arriving gradients still partake in the optimization [159–161], VI(id) adaptively, so that subsets of data that are strongly coupled to the model are selected with priority, and VI(ie) decentralized [162], where each machine works independently and communicates only locally to reduce communication overhead. VI(if) To achieve additional performance improvements, we will also work on schemes that can work with cheap and inaccurate gradients (e.g., gradients based on simplified physics) interspersed with infrequent accurate gradient calculations [163]. Our aim is to achieve an order-of-magnitude improvement from these algorithms, as has been achieved in ML [164].
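
A minimal sketch of the batching idea underlying this project (toy code of our own; the schemes in VI(ia)–VI(if) add line searches, asynchrony, and adaptive subset selection on top of this skeleton):

```python
import numpy as np

def stochastic_fwi_step(misfit_grad, m, n_shots, batch=8, step=1e-3, rng=None):
    """One subset-based descent step: draw a fresh random subset of shots,
    compute the gradient from those shots only, and update the model.

    misfit_grad(m, shot_ids) must return the gradient restricted to shot_ids.
    """
    rng = rng or np.random.default_rng()
    shots = rng.choice(n_shots, size=batch, replace=False)  # random shot subset
    return m - step * misfit_grad(m, shots)
```

Each step touches only `batch` of the `n_shots` wavefield solves, which is where the cost control comes from; redrawing the subset every iteration keeps the gradient unbiased.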

VI(ii)Large-scale (convex) optimization algorithms and concrete (parallel) implementations remain important in accomplishing DNOISE’s research objectives. VI(iia) Aside from continued development of highly optimized \(\ell_1\)-norm solvers, we will VI(iib) further build on SVD-free nuclear-norm minimizations using matrix factorizations [7], in support of our matrix completion, interferometric [31] and lifting methods [30]. The latter two are extremely challenging due to quadratic variables but offer new possibilities for solving non-convex problems such as “blind deconvolution” [30]. However, there is no guarantee we arrive at a global minimum with our factorization approach. VI(iic) By working with Ben, we will remove this non-uniqueness via proper initializations [118,165,166] that can be extended to tensors; VI(iid) develop a method to determine the rank of the factorization; and VI(iie) further leverage and integrate nuclear and other matrix and trace-norm objectives in WEI.
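
The SVD-free trick rests on the variational characterization of the nuclear norm [7,96]:

\[
\|\mathbf{X}\|_{*}\;=\;\min_{\mathbf{L}\mathbf{R}^{\ast}=\mathbf{X}}\ \tfrac{1}{2}\left(\|\mathbf{L}\|_F^{2}+\|\mathbf{R}\|_F^{2}\right),
\]

so penalizing the Frobenius norms of the factors bounds the nuclear norm without ever forming an SVD. The price is non-convexity in \((\mathbf{L},\mathbf{R})\), which is exactly the non-uniqueness that the initializations of VI(iic) and the rank selection of VI(iid) address.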

VI(iii)Randomized algorithms. The sheer size of numerical problems in ES puts strains on even the largest HPC systems, which severely limits the benefits from WEI. DNOISE has successfully employed randomized dimensionality reductions to speed up data-intensive operations by working on small randomized subsets of random-phase-encoded [63–65,137] or randomly selected shots [66,76]. Both approaches have connections to randomized trace estimation [64,65,167] and sketching [168,169], and have been incorporated into randomized solvers and SVDs [130,170–172]. Building on our successful fast imaging [54,57], we will enhance these randomized techniques to meet the challenges of 3D ES by extending sketching to data misfits other than the \(\ell_2\)-norm, including VI(iiia) \(\ell_1\)-norms [168] and VI(iiib) matrix norms [169].
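
For concreteness, the randomized-trace idea [64,65,167] in its simplest form (a sketch; `matvec` stands in for an implicit operator such as a Hessian that is only available through its action):

```python
import numpy as np

def hutchinson_trace(matvec, n, n_probes=32, seed=0):
    """Unbiased trace estimate: tr(A) ~ mean of w^T A w over Rademacher w."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_probes):
        w = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        samples.append(w @ matvec(w))         # one unbiased sample of tr(A)
    return float(np.mean(samples))
```

The same probing principle underlies source encoding: a handful of random probes replaces a loop over all sources, at the cost of controllable stochastic error.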

VI(iv)Scientific computing. SC offers the foundation for solving WEIs, including the reduced-space adjoint-state and full-space methods [83] for PDE-constrained optimization. The amount of data, the size of the models, and the mathematical structure of WEI make it one of the most challenging problems in SC. Therefore, we will exploit these mathematical properties and develop novel scalable techniques that respect matrix sparsity. In particular, we will formulate gradient computations as regularized least-squares problems. We will develop VI(iva) preconditioned iterative solvers for large-scale, ill-conditioned, indefinite systems with incomplete factorizations [125,126,173] and block-diagonal approximations; VI(ivb) alternative formulations for large-scale saddle-point systems using augmented Lagrangian techniques [131,132] and interpretations of gradients as quasi-definite systems [133]; VI(ivc) function-space norms to measure residuals and regularization terms; and VI(ivd) randomized linear-algebra algorithms based on sub-sampling and other methodologies [174] for solving the linear systems that arise throughout the optimization.

VI(v)Object-oriented programming for Big Data. The large data size and algorithmic complexity of computational ES call for hybrid, verifiable, object-oriented code design for abstract numerical linear algebra [175,176], modelling and inversion [177], and PDE-constrained optimization [178]. During DNOISE, we built this object-oriented framework using Mathworks’ Parallel Computing Toolbox. This environment significantly shortened development times and allowed for greatly improved code integrity and productivity of our research team. We will VI(va) include parallel I/O; VI(vb) incorporate checkpointing to improve the resilience of large parallel jobs; VI(vc) include physical units and VI(vd) weighted norms and inner products; VI(ve) exploit multiple levels of parallelization; VI(vf) expose external libraries for hybrid computing; and VI(vg) explore new data-handling methods such as map-reduce [179] and graphical models [180]. We will also continue to integrate the professional seismic data processing software package Omega into our workflows.
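
The flavour of this abstraction, transcribed into a toy Python sketch (SLIM’s actual framework is MATLAB-based, as described above; the class and names here are ours):

```python
import numpy as np

class LinearOp:
    """Matrix-free operator: defined by its forward and adjoint actions."""
    def __init__(self, shape, forward, adjoint):
        self.shape, self._fwd, self._adj = shape, forward, adjoint
    def __mul__(self, x):                 # y = A * x without ever storing A
        return self._fwd(x)
    @property
    def T(self):                          # adjoint reuses the same closures
        m, n = self.shape
        return LinearOp((n, m), self._adj, self._fwd)

# Example: restriction to 200 randomly observed traces out of 1000.
idx = np.sort(np.random.default_rng(0).choice(1000, 200, replace=False))
def _adjoint(y):
    z = np.zeros(1000); z[idx] = y        # zero-filled injection
    return z
R = LinearOp((200, 1000), forward=lambda x: x[idx], adjoint=_adjoint)
x = np.random.default_rng(1).normal(size=1000)
y = R * x                                 # subsample
xt = R.T * y                              # adjoint action
```

Composing such operators is what lets inversion code read like the underlying mathematics while the data stay distributed.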

In summary. New cutting-edge technologies in Computer Science are essential to make advances in computational ES. We are confident that the combination of randomized linear algebra with state-of-the-art optimization and software development will sustain our ability to drive innovations. Project leaders. Chen (Co-PI), Felix, Dominique (Co-PI), and Mark (Co-PI). Project team. Gerard Gorman (Co-I); Ben (Co-I); Ernie (PDF,VI(ia),VI(ib),VI(id),VI(iia),VI(iid)); new PhD 8 (VI(ib),VI(ic),VI(ie),VI(if)); new PDF 5 (VI(iib),VI(iic)); Rajiv (PhD,VI(iid),VI(iie)); Curt da Silva (PhD,VI(iiib)); New PhD 11 (VI(iiia),VI(ivd)); Ben Boucher (MSc,VI(va),VI(vb),VI(vc),VI(vd),VI(ve)); New PhD 12 (VI(vf),VI(vg)).

Scientific computing infrastructure

The research of DNOISE III is critically dependent on access to powerful high-performance computing (HPC) hardware and software, for parallel code development and for technology validation on 3D field data. The exclusive access to a large $10 M (17k core) computing facility in Brazil—made available to us by BG Group as an in-kind contribution—gives DNOISE III a virtually unparalleled position to scale up to industrial-size 3D field data sets, develop practical workflows and further optimize our algorithms. Access to this resource, which is the largest machine in South America and ranks 95th on the global top 500 HPC list, will also allow us to conduct experiments and answer scientific questions that would otherwise be unattainable. While this HPC resource puts us in a truly unique position, this machine is only available for scale-up and technology validation and cannot be used for the earlier development stages of our parallelized software. Therefore, it is absolutely essential that we maintain access to local HPC for the duration of this CRD to support the early stages of our parallel code development. Without this local HPC resource at UBC, we will lose access to the facility in Brazil. Also, short job return times during code development are critical. We have made provisions to upgrade our 1120-core cluster at the midway point to ensure we meet the research goals of the CRD. The essential need for HPC access was also identified during the external review of EOAS and we are working with the UBC administration towards an integrated solution.

The Mathworks Parallel Computing Toolbox (MPCT) and MATLAB Distributed Computing Server (MDCS) have been key components in our scalable, object-oriented software framework. The International Inversion Initiative (III) has adopted our approach and has purchased a license for 4000 MPCT workers. This ranks among the largest parallel MATLAB installations in the world. We are working with BG Group, the main industrial stakeholder for III, to develop parallel I/O for field data sets. To meet the increased interest in the industrialization of our research findings, we have added extra support for our HPC resources and seismic data processing. Thanks to our partnership with III, our technical team at UBC will get support from HPC and PGS-led processing teams in Brazil. With these unparalleled resources, we will be the first academic group capable of developing WEI prototypes into industry-strength solutions.

Contribution to the training of highly-qualified personnel

For years, the demographics of Canadian industrial geoscientists have shown a continued trend of increasing median age. This trend, in combination with a shift towards more complex and quantitative ES, raises serious concerns regarding the future replenishment of essential personnel in this critical area of the Canadian economy. This research project’s 34 graduate students and 14 PDFs will be trained in a unique, genuinely cross-disciplinary environment where ideas from ML, CS, and SC are adapted for the solution of truly large-scale problems in ES. The ambitious goals, diversity, and complexity of DNOISE III’s research objectives call for a relatively large number of HQP. Working on DNOISE III’s research objectives with cutting-edge HPC equipment gives our highly-qualified personnel (HQP) the ideal conditions and work experience to develop unique skill-sets that make them highly employable in today’s job market. This is clearly demonstrated by the recent hiring of seven PDFs by top industrial research labs (IBM Watson and Mitsubishi Electric Research Laboratories), academic institutions (Tristan is assistant professor at Utrecht University), and several key oil and gas companies in Canada; our 2014 graduating MSc students Lina and Art were hired by CGG’s Canadian subsidiary office and Brock by Fotech Solutions, both in Calgary.

With our unique ongoing HPC support at UBC and access to the industry-scale HPC in Brazil, our students and scientific personnel will be trained to be leading interdisciplinary data scientists, who are very knowledgeable in solving concrete, large-scale geophysical problems that are of great strategic interest to industry and Canada. We have an active internship program, onsite visits with our industrial partners, individual projects on datasets, and Consortium meetings where students present their research findings and receive feedback from industry. Our trainees will receive a unique, broad, cross-disciplinary training, which engages the students by having them work with our industrial partners, present their research findings at conferences, and prepare regular software releases. These activities constitute an invaluable experience for our trainees and are highly valued by industry, as evidenced by recent hires.

Value of the results and industrial relevance

Academic research has traditionally been the main driver behind innovations in ES — we only have to think of Surface-Related Multiple Elimination [181] and FWI [182,183] for marine data and missing-trace interpolation for land data [184,185]. Each of these technologies, two of which were developed in Canada, has been of great benefit to E&P of oil and gas worldwide and within Canada. DNOISE has had and will continue to have a major impact on the following areas: I—Seismic data acquisition for WEI, where our work on randomized sampling and CS has led to innovations in simultaneous marine acquisition with randomized coil sampling by WesternGeco. This technology is of great value to exploration on Canada’s East Coast. ConocoPhillips achieved significant cost savings by applying our ideas to marine acquisition with OBN and to acquisitions on land. Smaller land acquisitions will also benefit, as will arctic exploration, monitoring of \(CO_2\)-sequestration, and hydraulic fracturing. For instance, DNOISE’s CS technology has led to a fourfold increase in square kilometers acquired on land per day, so that surveys in the arctic can be done in one summer rather than being spread over three years. II—Resilient WEI technologies, where reliance on multiple removal, large HPC and accurate starting models is lowered, topics of keen interest to contractors, who will develop this technology into products that benefit oil and gas companies by reducing costs and risks, e.g., in the waters off the coast of Newfoundland and Labrador or in conducting inversions in complex geological areas where conventional processing fails. The DNOISE III project also includes objectives that will yield long-term benefits by leveraging IV—Risk mitigation, where we assess the risks of inversion; V—WEI meets reservoir characterization, where we will couple WEI to fluid flow and lithology classification; and VI—Randomized computations and data handling, where we will effectively handle massive data sets. Building future capability in these areas, by training HQP and developing sophisticated technologies, will be essential to sustain Canada’s leading role in the sustainable E&P of its resources.

Benefits to Canada

DNOISE III matches substantial cash and in-kind contributions from an international consortium of oil and gas companies, contractor companies, and HPC companies. The aim of this consortium is to develop, industrialize, and deploy new innovative methodologies that raise the technical capabilities of ES worldwide. Canada, as a country with the third-largest estimated global oil and gas reserves, stands to benefit directly from this concerted effort through active E&P operations by our industrial partners in Canada and by the flow back of know-how from parent companies to their Canadian subsidiaries. The international character of this consortium is a reflection of R&D in ES being a truly global activity. Industrial research funding is typically coordinated and authorized by lead researchers from parent companies to maximize the uptake of technology to be incorporated in our partners’ subsidiary operations. This explains why the support letters and forms 183As are not signed by local representatives in Canada. Our consortium competes for funding with 105 geophysical consortia worldwide from fixed pools of money reserved by companies. Because we add real value, we are able to charge amongst the highest annual fees. We are extremely fortunate to work with leading researchers within these organizations and receive their financial support, from which Canada benefits directly through technology development and HQP in this crucial knowledge-based field; through improved exploitation of its resources; through development and ownership of intellectual property; and through improved imaging and characterization technologies needed to develop frontier resources. With the research of DNOISE, Canada has the opportunity to lead and to take responsibility for the future use of its energy resources.

Six of our industrial affiliates rank in the top 12 in R&D spending worldwide. All of our industrial partners are actively engaged in bringing innovations from academia, via multiple joint-industry partnerships (JIPs), to research and exploration projects in Canada. Our industrial partners either do active research in Canada or undertake activities in Canada that directly benefit from our research findings. CGG is involved in several surveys and conducts active research in Calgary; Chevron has E&P projects in offshore Newfoundland and Labrador, a proposed liquefied natural gas (LNG) project and shale gas acreage in British Columbia, and exploration and discovered resource interests in the Beaufort Sea region of the Northwest Territories; ConocoPhillips is active in Western Canada and the Canadian arctic, which is estimated to contain over a quarter of the world’s untapped natural gas reserves; ION/GXT has a subsidiary office in Calgary and will conduct extensive seismic surveys on the Northern Labrador Shelf in 2014–2015; PGS, in partnership with TGS, will acquire more than 30,000 km of seismic data in the Labrador Sea and Northeast Newfoundland; Statoil has subsidiary offices in Newfoundland and Calgary and is the majority stakeholder in the significant 2013 Bay du Nord discovery, for which they will shoot extensive 3D seismic through 2014–2015; and Schlumberger Geosolutions offers seismic services from their subsidiary office in Mount Pearl, Newfoundland, and has an R&D base in Calgary.

Because of this extensive involvement, we can be confident that our research findings and innovations will make it quickly to market. Companies that support the DNOISE III project also use large percentages of their local profits to maintain and grow their activities in Canada by investing in people and infrastructure. These investments not only benefit the local economy but also help Canada in the sustainable E&P of its resources. Several of our industrial partners are involved with LNG, which is a more environmentally friendly alternative to coal and oil. For instance, BGP, British Gas Group, Petronas, and Woodside are all involved in LNG on the West Coast. According to the 2012 State of the Canadian Nation [186]: “Strong global links are important for the adoption and diffusion of new ideas and technologies that can have a positive impact on innovation performance and international competitiveness.” and “International technology flows reflect … global linkages established through cross-border trade in R&D outcomes and production-ready technologies. While this includes both inter-firm and intra-firm trade, evidence points to the particular importance of technology flows between parents and affiliates.” These statements clearly underline the importance of our international JIPs as will be realized in the DNOISE III project. Through these international alliances, Canada can fully capitalize on global R&D expenditures and leverage innovations from world-class Canadian researchers. As an extremely resource-rich nation, Canada has the responsibility and the opportunity to engage and be a leader in ES.

DNOISE has an outstanding track record of producing innovations and training HQP in computational ES, CS, ML and SC. The unique mix of topics in data science and scientific computing that we address in our research is directly applicable to a wide range of problems in “Big Data”, putting Canada at the forefront of this important technology. The DNOISE III research is also very well aligned with Canada’s Science & Technology strategic priorities. There are several significant examples of the incorporation of our technologies into industry workflows. Moreover, our research has resulted in a startup company, Subsalt Solutions Inc.

International Inversion Initiative (III). We are one of only two groups outside Brazil selected to partner with the International Inversion Initiative—a collaboration between Imperial College London, the University of Natal, and UBC encompassing a $10 M, 17k-core compute cluster with 4000 MATLAB workers and extensive HPC and processing support. By fostering a collaboration between the two leading research groups in FWI—Professor Mike Warner’s FULLWAVE and Professor Felix J. Herrmann’s SINBAD Consortia—and by providing them with unprecedented computing resources, BG Group has created an environment for us to become the world leaders in providing fundamental innovations for the oil and gas industry and the opportunity to enhance Canada’s presence and reputation on this global stage.

University commitment and infrastructure

Since DNOISE I, UBC has hosted our HPC equipment, which has now been moved to the University Data Centre on the Vancouver campus. The proposed HPC equipment in this CRD replaces our old compute cluster and we plan to continue our HPC operations as we have done in the past. The new proposed hardware will require approximately the same support resources, power, system administration, etc., as its predecessor, but will yield improved processing speeds, a 10-fold increase in RAM and a 20-fold increase in disk space. DNOISE III will also benefit from strong support from the UBC administration, as expressed in the attached letters from Dr. John Hepburn, UBC’s Vice President, Research & International, and Dr. Beckie, Head of the Department.

[1] Friedlander, M, Mansour, H, Saab, R and Yilmaz, Ö (2012 ). Recovering compressively sampled signals using partial support information. Information Theory, IEEE Transactions on. 58 1122–34

[2] Mansour, H and Yilmaz, O (2012 ). Support driven reweighted \(\ell_1\) minimization. ICASSP annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/ICASSP/2012/MansourYilmazICASSPwL1/MansourYilmazICASSPwL1.pdf

[3] Mansour, H, Herrmann, F J and Yilmaz, O (2013 ). Improved wavefield reconstruction from randomized sampling via weighted one-norm minimization. Geophysics. 78 V193–206. https://www.slim.eos.ubc.ca/Publications/Public/Journals/Geophysics/2013/mansour2013GEOPiwr/mansour2013GEOPiwr.pdf

[4] Ghadermarzy, N, Mansour, H and Yilmaz, Ö (2013 ). Non-Convex Compressed Sensing Using Partial Support Information. Sampling Theory in Signal and Image Processing

[5] Ghadermarzy, N and Yilmaz, Ö (2013 ). Weighted and reweighted approximate message passing. Proc. SPIE. 8858 88580C–C–14. http://dx.doi.org/10.1117/12.2027069

[6] Ghadermarzy, N, Herrmann, F and Yilmaz, Ö (2014 ). Seismic trace interpolation with approximate message passing. UBC. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/SEG/2014/ghadermarzy2014SEGsti/ghadermarzy2014SEGsti.html

[7] Aravkin, A Y, Kumar, R, Mansour, H, Recht, B and Herrmann, F J (2013 ). Fast methods for denoising matrix completion formulations, with application to robust seismic data interpolation. http://arxiv.org/abs/1302.4886

[8] Mansour, H and Yilmaz, O (2013 ). A sparse randomized Kaczmarz algorithm. Global Conference on Signal and Information Processing (GlobalSIP), 2013 IEEE. 621

[9] Lin, T T and Herrmann, F J (2013 ). Cosparse seismic data interpolation. EAGE annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2013/lin2013EAGEcsd/lin2013EAGEcsd.pdf

[10] Hargreaves, B (2014 ). Sparse Signal Recovery: analysis and Synthesis Formulations with Prior Support Information. Master’s thesis. University of British Columbia. https://www.slim.eos.ubc.ca/Publications/Public/Thesis/2014/hargreaves2014THssr/hargreaves2014THssr.pdf

[11] Kumar, V, Oueity, J, Clowes, R and Herrmann, F J (2011 ). Enhancing crustal reflection data through curvelet denoising. Tectonophysics. 508 106–16. http://www.sciencedirect.com/science/article/pii/S0040195110003227

[12] Wason, H and Herrmann, F J (2012 ). Only dither: efficient simultaneous marine acquisition. EAGE annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2012/wason2012EAGEode/wason2012EAGEode.pdf

[13] Mansour, H, Wason, H, Lin, T T and Herrmann, F J (2012 ). Randomized marine acquisition with compressive sampling matrices. Geophysical Prospecting. University of British Columbia, Vancouver. 60 648–62. http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2478.2012.01075.x/abstract

[14] Wason, H and Herrmann, F J (2013 ). Time-jittered ocean bottom seismic acquisition. SEG technical program expanded abstracts. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/SEG/2013/wason2013SEGtjo/wason2013SEGtjo.pdf

[15] Wason, H and Herrmann, F J (2013 ). Ocean bottom seismic acquisition via jittered sampling. EAGE annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2013/wason2013EAGEobs/wason2013EAGEobs.pdf

[16] Oghenekohwo, F, Esser, E and Herrmann, F J (2014 ). Time-lapse seismic without repetition: reaping the benefits from randomized sampling and joint recovery. EAGE annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2014/oghenekohwo2014EAGEtls/oghenekohwo2014EAGEtls.pdf

[17] Oghenekohwo, F, Wason, H, Esser, E and Herrmann, F J (2014 ). Foregoing repetition in time-lapse seismic - reaping benefits of randomized sampling and joint recovery. UBC. https://www.slim.eos.ubc.ca/Publications/Private/Submitted/2014/oghenekohwo2014GEOPfrt/oghenekohwo2014GEOPfrt.html

[18] Wason, H, Oghenekohwo, F and Herrmann, F J (2014 ). Randomization and repeatability in time-lapse marine acquisition. UBC. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/SEG/2014/wason2014SEGrrt/wason2014SEGrrt.html

[19] Oghenekohwo, F, Kumar, R and Herrmann, F J (2014 ). Randomized sampling without repetition in time-lapse surveys. UBC. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/SEG/2014/oghenekohwo2014SEGrsw/oghenekohwo2014SEGrsw.html

[20] Mosher, C, Li, C, Morley, L, Ji, Y, Janiszewski, F, Olson, R and Brewer, J (2014 ). Increasing the efficiency of seismic data acquisition via compressive sensing. The Leading Edge. Society of Exploration Geophysicists. 33 386–91

[21] Moldoveanu, N, Ji, Y and Beasley, C (2012 ). Multivessel coil shooting acquisition with simultaneous sources. SEG technical program expanded abstracts 2012. 1–6. http://library.seg.org/doi/abs/10.1190/segam2012-1526.1

[22] Kumar, R, Aravkin, A Y, Esser, E, Mansour, H and Herrmann, F J (2014 ). SVD-free low-rank matrix factorization: wavefield reconstruction via jittered subsampling and reciprocity. EAGE annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2014/kumar2014EAGErank/kumar2014EAGErank.pdf

[23] Kumar, R, Lopez, O, Esser, E and Herrmann, F J (2014 ). Matrix Completion on Unstructured Grids : 2-d Seismic Data Regularization and Interpolation. UBC. https://www.slim.eos.ubc.ca/Publications/Public/TechReport/2014/kumar2014SEGmcu/kumar2014SEGmcu.html

[24] Kumar, R, Silva, C D, Akalin, O, Aravkin, A Y, Mansour, H, Recht, B and Herrmann, F J (2014 ). Efficient matrix completion for seismic data reconstruction. UBC. https://www.slim.eos.ubc.ca/Publications/Private/Submitted/2014/kumar2014GEOPemc/kumar2014GEOPemc.pdf

[25] Wason, H, Kumar, R, Aravkin, A Y and Herrmann, F J (2014 ). Source separation via SVD-free rank minimization in the hierarchical semi-separable representation. UBC. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/SEG/2014/wason2014SEGsss/wason2014SEGsss.html

[26] Silva, C D and Herrmann, F J (2014 ). Optimization on the Hierarchical Tucker Manifold - Applications to Tensor Completion. UBC. https://www.slim.eos.ubc.ca/Publications/Public/TechReport/2014/dasilva2014htuck/dasilva2014htuck.pdf

[27] Silva, C D and Herrmann, F J (2014 ). Low-rank promoting transformations and tensor interpolation - applications to seismic data denoising. EAGE annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2014/dasilva2014EAGEhtucknoisy/dasilva2014EAGEhtucknoisy.pdf

[28] Demanet, L, Létourneau, P-D, Boumal, N, Calandra, H, Chiu, J and Snelson, S (2011 ). Matrix probing: a randomized preconditioner for the wave-equation Hessian. ArXiv e-prints

[29] Candès, E J, Strohmer, T and Voroninski, V (2013 ). PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. Communications on Pure and Applied Mathematics. Wiley Subscription Services, Inc., A Wiley Company. 66 1241–74. http://dx.doi.org/10.1002/cpa.21432

[30] Esser, E and Herrmann, F J (2014 ). Application of a convex phase retrieval method to blind seismic deconvolution. EAGE annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2014/esser2014EAGEacp/esser2014EAGEacp.pdf

[31] Wang, R, Yilmaz, O and Herrmann, F J (2014 ). Full Waveform Inversion with Interferometric Measurements. UBC. https://www.slim.eos.ubc.ca/Publications/Public/TechReport/2014/wang2014SEGfwi/wang2014SEGfwi.html

[32] Demanet, L and Jugnon, V (2013 ). Convex recovery from interferometric measurements. arXiv preprint arXiv:1307.6864

[33] Berg, E van den and Friedlander, M P (2008 ). Probing the pareto frontier for basis pursuit solutions. SIAM Journal on Scientific Computing. 31 890–912. http://link.aip.org/link/?SCE/31/890

[34] Berg, E van den and Friedlander, M P (2007 ). SPGL1: A solver for large-scale sparse reconstruction

[35] Hennenfent, G, Berg, E van den, Friedlander, M P and Herrmann, F J (2008 ). New insights into one-norm solvers from the Pareto curve. Geophysics. 73 A23–6. https://www.slim.eos.ubc.ca/Publications/Public/Journals/Geophysics/2008/hennenfent08GEOnii/hennenfent08GEOnii.pdf

[36] Friedlander, M P, Macédo, I and Pong, T K (2014 ). Gauge optimization, duality, and applications

[37] Lin, T T and Herrmann, F J (2013 ). Robust estimation of primaries by sparse inversion via one-norm minimization. Geophysics. 78 R133–50. http://dx.doi.org/10.1190/geo2012-0097.1

[38] Lin, T T and Herrmann, F J (2014 ). Multilevel acceleration strategy for the robust estimation of primaries by sparse inversion. EAGE annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2014/lin2014EAGEmas/lin2014EAGEmas.pdf

[39] Lin, T T and Herrmann, F J (2014 ). Mitigating data gaps in the estimation of primaries by sparse inversion without data reconstruction. UBC. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/SEG/2014/lin2014SEGmdg/lin2014SEGmdg.html

[40] Jumah, B (2012 ). Dimensionality-Reduced Estimation of Primaries by Sparse Inversion. Master’s thesis. University of British Columbia. https://www.slim.eos.ubc.ca/Publications/Public/Thesis/2012/jumah2012THdre.pdf

[41] Jumah, B and Herrmann, F J (2011 ). Dimensionality-reduced estimation of primaries by sparse inversion. SEG technical program expanded abstracts. SEG. 30 3520–5. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/SEG/2011/Jumah11SEGdrepsi/Jumah11SEGdrepsi.pdf

[42] Jumah, B and Herrmann, F J (2014 ). Dimensionality-reduced estimation of primaries by sparse inversion. Geophysical Prospecting. 62 972–93. http://dx.doi.org/10.1111/1365-2478.12113

[43] Neut, J van der and Herrmann, F J (2013 ). Interferometric redatuming by sparse inversion. Geophysical Journal International. 192 666–70. http://gji.oxfordjournals.org/content/192/2/666

[44] Neut, J van der, Herrmann, F J and Wapenaar, K (2012 ). Interferometric redatuming with simultaneous and missing sources using sparsity promotion in the curvelet domain. SEG technical program expanded abstracts. SEG. 31 1–7. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/SEG/2012/vanderneut2012SEGirs/vanderneut2012SEGirs.pdf

[45] Neut, J van der and Herrmann, F J (2012 ). Up / down wavefield decomposition by sparse inversion. EAGE annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2012/vanderneut2012EAGEdecomp/vanderneut2012EAGEdecomp.pdf

[46] Leeuwen, T van, Gordon, D, Gordon, R and Herrmann, F J (2012 ). Preconditioning the Helmholtz equation via row-projections. EAGE annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2012/vanleeuwen2012EAGEcarpcg/vanleeuwen2012EAGEcarpcg.pdf

[47] Leeuwen, T van and Herrmann, F J (2014 ). 3D frequency-domain seismic inversion with controlled sloppiness. https://www.slim.eos.ubc.ca/Publications/Public/Journals/SIAM_Journal_on_Scientific_Computing/2014/vanLeeuwen20143Dfds/vanLeeuwen20143Dfds.pdf

[48] Petrenko, A, Herrmann, F J, Oriato, D, Tilbury, S and Leeuwen, T van (2014 ). Accelerating an iterative Helmholtz solver with FPGAs. OGHPC conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/OGHPC/petrenko2014OGHPCaih.pdf

[49] Petrenko, A (2014 ). Accelerating an Iterative Helmholtz Solver Using Reconfigurable Hardware. Master’s thesis. University of British Columbia. https://www.slim.eos.ubc.ca/Publications/Public/Thesis/2014/petrenko2014THaih/petrenko2014THaih.pdf

[50] Lago, R, Petrenko, A, Fang, Z and Herrmann, F J (2014 ). Fast solution of time-harmonic wave-equation for full-waveform inversion. EAGE annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2014/lago2014EAGEfst/lago2014EAGEfst.pdf

[51] Leeuwen, T van (2012 ). A Parallel Matrix-Free Framework for Frequency-Domain Seismic Modelling, Imaging and Inversion in Matlab. https://www.slim.eos.ubc.ca/Publications/Public/TechReport/2012/vanleeuwen2012smii/vanleeuwen2012smii.pdf

[52] Peters, B and Herrmann, F J (2014 ). A sparse reduced hessian approximation for multi-parameter wavefield reconstruction inversion. UBC. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/SEG/2014/peters2014SEGsrh/peters2014SEGsrh.html

[53] Zheglova, P and Herrmann, F J (2014 ). Application of matrix square root and its inverse to downward wavefield extrapolation. EAGE annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2014/zheglova2014EAGEams/zheglova2014EAGEams.pdf

[54] Herrmann, F J and Li, X (2012 ). Efficient least-squares imaging with sparsity promotion and compressive sensing. Geophysical Prospecting. University of British Columbia, Vancouver. 60 696–712. https://www.slim.eos.ubc.ca/Publications/Public/Journals/GeophysicalProspecting/2012/herrmann11GPelsqIm/herrmann11GPelsqIm.pdf

[55] Herrmann, F J (2012 ). Pass on the message: recent insights in large-scale sparse recovery. EAGE annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2012/herrmann2012EAGEpmr/herrmann2012EAGEpmr.pdf

[56] Tu, N, Li, X and Herrmann, F J (2013 ). Controlling linearization errors in \(\ell_1\) regularized inversion by rerandomization. SEG technical program expanded abstracts. 32 4640–4. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/SEG/2013/tu2013SEGcle/tu2013SEGcle.pdf

[57] Tu, N and Herrmann, F J (2014 ). Fast imaging with surface-related multiples by sparse inversion. UBC. https://www.slim.eos.ubc.ca/Publications/Private/Submitted/2014/tu2014fis/tu2014fis.pdf

[58] Kumar, R, Leeuwen, T van and Herrmann, F J (2014 ). Extended images in action: efficient WEMVA via randomized probing. EAGE annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2014/kumar2014EAGEeia/kumar2014EAGEeia.pdf

[59] Kumar, R, Leeuwen, T van and Herrmann, F J (2013 ). AVA analysis and geological dip estimation via two-way wave-equation based extended images. SEG technical program expanded abstracts. 32 423–7. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/SEG/2013/kumar2013SEGAVA/kumar2013SEGAVA.pdf

[60] Silva, C D and Herrmann, F J (2012 ). Matrix probing and simultaneous sources: a new approach for preconditioning the Hessian. EAGE annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2012/dasilva2012EAGEprobingprecond/dasilva2012EAGEprobingprecond.pdf

[61] Miao, L (2014 ). Efficient Seismic Imaging with Spectral Projector and Joint Sparsity. Master’s thesis. University of British Columbia. https://www.slim.eos.ubc.ca/Publications/Public/Thesis/2014/miao2014THesi/miao2014THesi.pdf

[62] Miao, L, Zheglova, P and Herrmann, F J (2014 ). Randomized HSS acceleration for full-wave-equation depth stepping migration. UBC. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/SEG/2014/miao2014SEGrhss/miao2014SEGrhss.html

[63] Krebs, J R, Anderson, J E, Hinkley, D, Neelamani, R, Lee, S, Baumstein, A and Lacasse, M-D (2009 ). Fast full-wavefield seismic inversion using encoded sources. Geophysics. 74 WCC177–C188. http://geophysics.geoscienceworld.org/content/74/6/WCC177.abstract

[64] Leeuwen, T van, Aravkin, A Y and Herrmann, F J (2011 ). Seismic waveform inversion by stochastic optimization. International Journal of Geophysics. 2011. https://www.slim.eos.ubc.ca/Publications/Public/Journals/InternationJournalOfGeophysics/2011/vanLeeuwen10IJGswi/vanLeeuwen10IJGswi.pdf

[65] Haber, E, Chung, M and Herrmann, F (2012 ). An Effective Method for Parameter Estimation with PDE Constraints with Multiple Right-Hand Sides. SIAM Journal on Optimization. 22 739–57. http://epubs.siam.org/doi/abs/10.1137/11081126X

[66] Leeuwen, T van and Herrmann, F J (2013 ). Fast waveform inversion without source encoding. Geophysical Prospecting. University of British Columbia, Vancouver. 61 10–9. http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2478.2012.01096.x/abstract

[67] Schmidt, M, Berg, E van den, Friedlander, M and Murphy, K (2009 ). Optimizing costly functions with simple constraints: A limited-memory projected quasi-Newton algorithm. JMLR. 5 456–63

[68] Aravkin, A Y, Li, X and Herrmann, F J (2012 ). Fast seismic imaging for marine data. ICASSP annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/ICASSP/2012/AravkinLiHerrmann/AravkinLiHerrmann.pdf

[69] Tu, N, Aravkin, A Y, Leeuwen, T van and Herrmann, F J (2013 ). Fast least-squares migration with multiples and source estimation. EAGE annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2013/tu2013EAGElsm/tu2013EAGElsm.pdf

[70] Aravkin, A Y and Leeuwen, T van (2012 ). Estimating nuisance parameters in inverse problems. Inverse Problems. 28 115016

[71] Leeuwen, T van, Aravkin, A Y and Herrmann, F J (2014 ). Comment on: Application of the variable projection scheme for frequency-domain full-waveform inversion (M. Li, J. Rickett, and A. Abubakar, Geophysics, 78, no. 6, R249–R257). Geophysics. 79 X11–7. https://www.slim.eos.ubc.ca/Publications/Public/Journals/Geophysics/2014/vanLeeuwen2014GEOPcav/vanLeeuwen2014GEOPcav.pdf

[72] Aravkin, A Y, Leeuwen, T van, Calandra, H and Herrmann, F J (2012 ). Source estimation for frequency-domain FWI with robust penalties. EAGE annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2012/aravkin2012EAGErobust/aravkin2012EAGErobust.pdf

[73] Aravkin, A Y, Friedlander, M P and Leeuwen, T van (2012 ). Robust inversion via semistochastic dimensionality reduction. ICASSP annual conference proceedings. 5245–8. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/ICASSP/2012/AravkinFriedlanderLeeuwen/AravkinFriedlanderLeeuwen.pdf

[74] Li, X, Tamalet, A, Leeuwen, T van and Herrmann, F J (2013 ). Optimization driven model-space versus data-space approaches to invert elastic data with the acoustic wave equation. SEG technical program expanded abstracts. 32 986–90. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/SEG/2013/li2013SEGodmvdaiedwawe/li2013SEGodmvdaiedwawe.pdf

[75] Leeuwen, T van, Aravkin, A Y, Calandra, H and Herrmann, F J (2013 ). In which domain should we measure the misfit for robust full waveform inversion? EAGE annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2013/vanleeuwen2013EAGErobustFWI/vanleeuwen2013EAGErobustFWI.pdf

[76] Aravkin, A, Friedlander, M P, Herrmann, F J and Leeuwen, T (2012 ). Robust inversion, dimensionality reduction, and randomized sampling. Mathematical Programming. 134 101–25. http://www.springerlink.com/index/10.1007/s10107-012-0571-6

[77] Herrmann, F J, Hanlon, I, Kumar, R, Leeuwen, T van, Li, X, Smithyman, B, Wason, H, Calvert, A J, Javanmehri, M and Takougang, E T (2013 ). Frugal full-waveform inversion: From theory to a practical algorithm. The Leading Edge. 32 1082–92. http://tle.geoscienceworld.org/content/32/9/1082.abstract

[78] Li, X and Herrmann, F J (2010 ). Full-waveform inversion from compressively recovered model updates. SEG technical program expanded abstracts. SEG. 29 1029–33. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/SEG/2010/li10SEGfwi/li10SEGfwi.pdf

[79] Herrmann, F J, Li, X, Aravkin, A Y and Leeuwen, T van (2011 ). A modified, sparsity promoting, Gauss-Newton algorithm for seismic waveform inversion. Proc. SPIE. 2011. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/SPIE/2011/herrmann2011SPIEmsp/herrmann2011SPIEmsp.pdf

[80] Aravkin, A Y, Herrmann, F J, Leeuwen, T van and Li, X (2011 ). Fast full-waveform inversion with compressive sensing. SEG technical program expanded abstracts

[81] Li, X, Aravkin, A Y, Leeuwen, T van and Herrmann, F J (2012 ). Fast randomized full-waveform inversion with compressive sensing. Geophysics. University of British Columbia, Vancouver. 77 A13–7. https://www.slim.eos.ubc.ca/Publications/Public/Journals/Geophysics/2012/Li11TRfrfwi/Li11TRfrfwi.pdf

[82] Herrmann, F J, Calvert, A J, Hanlon, I, Javanmehri, M, Kumar, R, Leeuwen, T van, Li, X, Smithyman, B, Takougang, E T and Wason, H (2013 ). Frugal full-waveform inversion: from theory to a practical algorithm. The Leading Edge. 32 1082–92. https://www.slim.eos.ubc.ca/Publications/Public/Journals/The_Leading_Edge/2013/herrmann2013ffwi/herrmann2013ffwi.html

[83] Haber, E and Ascher, U M (2001 ). Preconditioned all-at-once methods for large, sparse parameter estimation problems. Inverse Problems. 17 1847–64. http://stacks.iop.org/0266-5611/17/i=6/a=319?key=crossref.8ad3fa0df4ae626ba0731b2d4158cdb6

[84] Abubakar, A, Hu, W, Habashy, T M and Berg, P M van den (2009 ). Application of the finite-difference contrast-source inversion algorithm to seismic full-waveform data. Geophysics. 74 WCC47–C58. http://geophysics.geoscienceworld.org/content/74/6/WCC47.abstract

[85] Leeuwen, T van and Herrmann, F J (2013 ). Mitigating local minima in full-waveform inversion by expanding the search space. Geophysical Journal International. http://gji.oxfordjournals.org/content/early/2013/07/30/gji.ggt258.abstract

[86] Leeuwen, T van and Herrmann, F J (2013 ). A Penalty Method for PDE-Constrained Optimization. UBC. https://www.slim.eos.ubc.ca/Publications/Private/Tech\%20Report/2013/vanLeeuwen2013Penalty2/vanLeeuwen2013Penalty2.pdf

[87] Leeuwen, T van, Herrmann, F J and Peters, B (2014 ). A new take on FWI: wavefield reconstruction inversion. EAGE annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2014/leeuwen2014EAGEntf/leeuwen2014EAGEntf.pdf

[88] Peters, B, Herrmann, F J and Leeuwen, T van (2014 ). Wave-equation based inversion with the penalty method: adjoint-state versus wavefield-reconstruction inversion. EAGE annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2014/peters2014EAGEweb/peters2014EAGEweb.pdf

[89] Leeuwen, T van and Herrmann, F J (2014 ). A penalty method for PDE-constrained optimization (CONFIDENTIAL). UBC. https://www.slim.eos.ubc.ca/Publications/Private/Submitted/2014/vanLeeuwen2014pmpde/vanLeeuwen2014pmpde.pdf

[90] Esser, E, Leeuwen, T van, Aravkin, A Y and Herrmann, F J (2014 ). A Scaled Gradient Projection Method for Total Variation Regularized Full Waveform Inversion. UBC. https://www.slim.eos.ubc.ca/Publications/Public/TechReport/2014/esser2014SEGsgp/esser2014SEGsgp.html

[91] Vigh, D, Moldoveanu, N, Jiao, K, Huang, W and Kapoor, J (2013 ). Ultralong-offset data acquisition can complement full-waveform inversion and lead to improved subsalt imaging. The Leading Edge. Society of Exploration Geophysicists. 32 1116–22

[92] Candès, E J, Romberg, J and Tao, T (2006). Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math. 59 1207–23

[93] Donoho, D (2006). Compressed sensing. IEEE Trans. Inf. Theory. 52 1289–306

[94] Herrmann, F J (2010). Randomized sampling and sparsity: getting more information from fewer samples. Geophysics. SEG. 75 WB173–WB187. https://www.slim.eos.ubc.ca/Publications/Public/Journals/Geophysics/2010/herrmann2010GEOPrsg/herrmann2010GEOPrsg.pdf

[95] Herrmann, F J, Friedlander, M P and Yilmaz, O (2012). Fighting the curse of dimensionality: compressive sensing in exploration seismology. IEEE Signal Processing Magazine. 29 88–100. https://www.slim.eos.ubc.ca/Publications/Public/Journals/IEEESignalProcessingMagazine/2012/Herrmann11TRfcd/Herrmann11TRfcd.pdf

[96] Recht, B, Fazel, M and Parrilo, P (2010). Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review. 52 471–501

[97] Gandy, S, Recht, B and Yamada, I (2011). Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problems. 27 025010

[98] Kreimer, N, Stanton, A and Sacchi, M D (2013). Tensor completion based on nuclear norm minimization for 5D seismic data reconstruction. Geophysics. 78 V273–84. http://geophysics.geoscienceworld.org/content/78/6/V273.abstract

[99] Kreimer, N and Sacchi, M (2012). A tensor higher-order singular value decomposition for prestack seismic data noise reduction and interpolation. Geophysics. 77 V113–22

[100] Hennenfent, G, Fenelon, L and Herrmann, F J (2010). Nonequispaced curvelet transform for seismic data reconstruction: a sparsity-promoting approach. Geophysics. SEG. 75 WB203–WB210. https://www.slim.eos.ubc.ca/Publications/Public/Journals/Geophysics/2010/hennenfent2010GEOPnct/hennenfent2010GEOPnct.pdf

[101] Silva, C D and Herrmann, F J (2013). Structured tensor missing-trace interpolation in the Hierarchical Tucker format. SEG technical program expanded abstracts. SEG. 32 3623–7. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/SEG/2013/dasilva2013SEGhtuck/dasilva2013SEGhtuck.pdf

[102] Kumar, R, Mansour, H, Aravkin, A Y and Herrmann, F J (2013). Reconstruction of seismic wavefields via low-rank matrix factorization in the hierarchical-separable matrix representation. SEG technical program expanded abstracts. 32 3628–33. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/SEG/2013/kumar2013SEGHSS/kumar2013SEGHSS.pdf

[103] Trickett, S, Burroughs, L, Milton, A, Walton, L and Dack, R (2010). Rank-reduction-based trace interpolation. SEG technical program expanded abstracts 2010. 3829–33. http://library.seg.org/doi/abs/10.1190/1.3513645

[104] Gao, J, Sacchi, M and Chen, X (2013). A fast reduced-rank interpolation method for prestack seismic volumes that depend on four spatial dimensions. Geophysics. 78 V21–30. http://dx.doi.org/10.1190/geo2012-0038.1

[105] Baron, D, Wakin, M B, Duarte, M F, Sarvotham, S and Baraniuk, R G (2006). Distributed compressed sensing. preprint

[106] Shahidi, R, Tang, G, Ma, J and Herrmann, F J (2013). Application of randomized sampling schemes to curvelet-based sparsity-promoting seismic data recovery. Geophysical Prospecting. 61 973–97. https://www.slim.eos.ubc.ca/Publications/Public/Journals/GeophysicalProspecting/2013/shahidi2013GEOPROSars/shahidi2013GEOPROSars.pdf

[107] Ostromoukhov, V, Donohue, C and Jodoin, P-M (2004). Fast hierarchical importance sampling with blue noise properties. ACM Transactions on Graphics. 23

[108] Haber, E, Horesh, L and Tenorio, L (2008). Numerical methods for experimental design of large-scale linear ill-posed inverse problems. Inverse Problems. 24 055012. http://stacks.iop.org/0266-5611/24/i=5/a=055012

[109] Tenorio, L, Lucero, C, Ball, V and Horesh, L (2013). Experimental design in the context of Tikhonov regularized inverse problems. Statistical Modelling. 13 481–507. http://smj.sagepub.com/content/13/5-6/481.abstract

[110] Ahmed, A, Recht, B and Romberg, J (2012). Blind Deconvolution using Convex Programming. ArXiv e-prints

[111] O’Brien, J T, Kamp, W P and Hoover, G M (1982). Sign-bit amplitude recovery with applications to seismic data. Geophysics. 47 1527–39. http://geophysics.geoscienceworld.org/content/47/11/1527.abstract

[112] Davenport, M A, Plan, Y, van den Berg, E and Wootters, M (2012). 1-Bit Matrix Completion. ArXiv e-prints

[113] Symes, W W (2008). Migration velocity analysis and waveform inversion. Geophysical Prospecting. Blackwell Publishing Ltd. 56 765–90. http://dx.doi.org/10.1111/j.1365-2478.2008.00698.x

[114] Biondi, B and Almomin, A (2014). Simultaneous inversion of full data bandwidth by tomographic full-waveform inversion. Geophysics. 79 WA129–WA140. http://dx.doi.org/10.1190/geo2013-0340.1

[115] Warner, M and Guasch, L (2014). Adaptive waveform inversion - FWI without cycle skipping - theory. EAGE annual conference proceedings

[116] Plessix, R-E (2006). A review of the adjoint-state method for computing the gradient of a functional with geophysical applications. Geophysical Journal International. 167 495–503. http://dx.doi.org/10.1111/j.1365-246X.2006.02978.x

[117] Abubakar, A, Hu, W, Habashy, T and Berg, P van den (2009). Application of the finite-difference contrast-source inversion algorithm to seismic full-waveform data. Geophysics. 74 WCC47–WCC58. http://dx.doi.org/10.1190/1.3250203

[118] Netrapalli, P, Jain, P and Sanghavi, S (2013). Phase retrieval using alternating minimization. Advances in neural information processing systems. 2796–804

[119] Riyanti, C, Erlangga, Y, Plessix, R, Mulder, W, Vuik, C and Oosterlee, C (2006). A new iterative solver for the time-harmonic wave equation. Geophysics. 71 E57–63. http://dx.doi.org/10.1190/1.2231109

[120] Gijzen, M van, Erlangga, Y and Vuik, C (2007). Spectral analysis of the discrete Helmholtz operator preconditioned with a shifted Laplacian. SIAM Journal on Scientific Computing. 29 1942–58. http://dx.doi.org/10.1137/060661491

[121] Knibbe, H, Mulder, W, Oosterlee, C and Vuik, C (2014). Closing the performance gap between an iterative frequency-domain solver and an explicit time-domain scheme for 3D migration on parallel architectures. Geophysics. 79 S47–61. http://dx.doi.org/10.1190/geo2013-0214.1

[122] Druskin, V, Remis, R and Zaslavsky, M (2014). An extended Krylov subspace model-order reduction technique to simulate wave propagation in unbounded domains. Journal of Computational Physics. 272 608–18. http://www.sciencedirect.com/science/article/pii/S0021999114003271

[123] Gijzen, M B van, Sleijpen, G L G and Zemke, J-P M (2014). Flexible and multi-shift induced dimension reduction algorithms for solving large sparse linear systems. Numerical Linear Algebra with Applications. http://dx.doi.org/10.1002/nla.1935

[124] Symes, W (2007). Reverse time migration with optimal checkpointing. Geophysics. 72 SM213–SM221. http://dx.doi.org/10.1190/1.2742686

[125] Saad, Y (2003). Iterative Methods for Sparse Linear Systems, second edition. SIAM, Philadelphia, PA

[126] Greenbaum, A (1997). Iterative Methods for Solving Linear Systems. SIAM, Philadelphia, PA

[127] Greif, C, He, S and Liu, P (2014). SYM-ILDL: Numerical Software for Computing Incomplete \(LDL^T\) Factorizations of Symmetric Indefinite and Skew-Symmetric Matrices. The University of British Columbia

[128] Erlangga, Y A and Nabben, R (2008). Deflation and balancing preconditioners for Krylov subspace methods applied to nonsymmetric matrices. SIAM J. Matrix Anal. Appl. 30 684–99

[129] Tang, J M, Nabben, R, Vuik, C and Erlangga, Y A (2009). Comparison of two-level preconditioners derived from deflation, domain decomposition and multigrid methods. J. Sci. Comput. 39 340–70

[130] Avron, H, Maymounkov, P and Toledo, S (2010). Blendenpik: Supercharging LAPACK’s least-squares solver. SIAM Journal on Scientific Computing. 32 1217–36. http://dx.doi.org/10.1137/090767911

[131] Golub, G H and Greif, C (2003). On solving block-structured indefinite linear systems. SIAM Journal on Scientific Computing. 24 2076–92

[132] Golub, G H, Greif, C and Varah, J M (2005). An algebraic analysis of a block diagonal preconditioner for saddle point systems. SIAM J. Matrix Anal. Appl. 27 779–92

[133] Arioli, M and Orban, D (2013). Iterative Methods for Symmetric Quasi-Definite Linear Systems—Part I: Theory. GERAD, Montréal, Canada

[134] Byrd, R H, Lu, P, Nocedal, J and Zhu, C (1995). A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA. 16 1190–208. http://dx.doi.org/10.1137/0916069

[135] Li, X, Aravkin, A, Leeuwen, T van and Herrmann, F (2012). Fast randomized full-waveform inversion with compressive sensing. Geophysics. 77 A13–7

[136] Warner, M, Ratcliffe, A, Nangoo, T, Morgan, J, Umpleby, A, Shah, N, Vinje, V, Štekl, I, Guasch, L, Win, C, Conroy, G and Bertrand, A (2013). Anisotropic 3D full-waveform inversion. Geophysics. 78 R59–80. http://dx.doi.org/10.1190/geo2012-0338.1

[137] Li, X, Aravkin, A, Leeuwen, T van and Herrmann, F (2012). Fast randomized full-waveform inversion with compressive sensing. Geophysics. 77 A13–7

[138] Warner, M, Ratcliffe, A, Nangoo, T, Morgan, J, Umpleby, A, Shah, N, Vinje, V, Štekl, I, Guasch, L, Win, C, Conroy, G and Bertrand, A (2013). Anisotropic 3D full-waveform inversion. Geophysics. 78 R59–80. http://geophysics.geoscienceworld.org/content/78/2/R59.abstract

[139] Gunning, J, Glinsky, M E and Hedditch, J (2010). Resolution and uncertainty in 1D CSEM inversion: A Bayesian approach and open-source implementation. Geophysics. 75 F151–71. http://geophysics.geoscienceworld.org/content/75/6/F151.abstract

[140] Martin, J, Wilcox, L, Burstedde, C and Ghattas, O (2012). A stochastic Newton MCMC method for large-scale statistical inverse problems with application to seismic inversion. SIAM Journal on Scientific Computing. 34 A1460–87. http://dx.doi.org/10.1137/110845598

[141] Fang, Z, Silva, C D and Herrmann, F J (2014). Fast uncertainty quantification for 2D full-waveform inversion with randomized source subsampling. EAGE annual conference proceedings. https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2014/fang2014EAGEfuq/fang2014EAGEfuq.pdf

[142] Candès, E J and Plan, Y (2009). Near-ideal model selection by \(\ell_1\) minimization. The Annals of Statistics. The Institute of Mathematical Statistics. 37 2145–77. http://dx.doi.org/10.1214/08-AOS653

[143] Donoho, D L, Maleki, A and Montanari, A (2009). Message-passing algorithms for compressed sensing. Proceedings of the National Academy of Sciences. 106 18914–9. http://www.pnas.org/content/106/45/18914.abstract

[144] Rangan, S (2011). Generalized approximate message passing for estimation with random linear mixing. Information theory proceedings (ISIT), 2011 IEEE international symposium on. 2168–72

[145] Cotter, S L, Roberts, G O, Stuart, A M and White, D (2013). MCMC methods for functions: Modifying old algorithms to make them faster. Statistical Science. The Institute of Mathematical Statistics. 28 424–46. http://dx.doi.org/10.1214/13-STS421

[146] Pendrel, J (2001). Seismic inversion–the best tool for reservoir characterization. CSEG Recorder. 26 18–24

[147] Mukerji, T, Avseth, P, Mavko, G, Takahashi, I and González, E (2001). Statistical rock physics: Combining rock physics, information theory, and geostatistics to reduce uncertainty in seismic reservoir characterization. The Leading Edge. 20 313–9. http://dx.doi.org/10.1190/1.1438938

[148] Herrmann, F J, Lyons, W J and Stark, C P (2001). Seismic facies characterization by monoscale analysis. Geophysical Research Letters. 28 3781–4. http://dx.doi.org/10.1029/2001GL013020

[149] Landrø, M (2001). Discrimination between pressure and fluid saturation changes from time-lapse seismic data. Geophysics. 66 836–44. http://dx.doi.org/10.1190/1.1444973

[150] Dadashpour, M, Landrø, M and Kleppe, J (2008). Nonlinear inversion for estimating reservoir parameters from time-lapse seismic data. Journal of Geophysics and Engineering. 5 54. http://stacks.iop.org/1742-2140/5/i=1/a=006

[151] Chadwick, A, Williams, G, Delepine, N, Clochard, V, Labat, K, Sturton, S, Buddensiek, M, Dillen, M, Nickel, M, Lima, A, Arts, R, Neele, F and Rossi, G (2010). Quantitative analysis of time-lapse seismic monitoring data at the Sleipner CO2 storage operation. The Leading Edge. 29 170–7. http://dx.doi.org/10.1190/1.3304820

[152] Liang, L, Abubakar, A and Habashy, T (2010). Three-dimensional fluid-flow constrained crosswell electromagnetic inversion. SEG technical program expanded abstracts. 742–7. http://library.seg.org/doi/abs/10.1190/1.3513889

[153] Bruna, J, Mallat, S, Bacry, E and Muzy, J-F (2013). Intermittent Process Analysis with Scattering Moments. ArXiv e-prints

[154] Krizhevsky, A, Sutskever, I and Hinton, G E (2012). ImageNet classification with deep convolutional neural networks. Advances in neural information processing systems 25. Curran Associates, Inc. 1097–105. http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf

[155] Friedlander, M P and Schmidt, M (2012). Hybrid Deterministic-Stochastic Methods for Data Fitting. SIAM Journal on Scientific Computing. Society for Industrial and Applied Mathematics. 34 A1380–405. http://epubs.siam.org/doi/abs/10.1137/110830629

[156] Aravkin, A, Friedlander, M P, Herrmann, F J and Leeuwen, T van (2012). Robust inversion, dimensionality reduction, and randomized sampling. Mathematical Programming. 134 101–25. http://www.springerlink.com/index/10.1007/s10107-012-0571-6

[157] Roux, N L, Schmidt, M and Bach, F R (2012). A stochastic gradient method with an exponential convergence rate for finite training sets. Advances in neural information processing systems 25. Curran Associates, Inc. 2663–71. http://papers.nips.cc/paper/4633-a-stochastic-gradient-method-with-an-exponential-convergence-_rate-for-finite-training-sets.pdf

[158] Ghadimi, S, Lan, G and Zhang, H (2013). Mini-batch Stochastic Approximation Methods for Nonconvex Stochastic Composite Optimization. ArXiv e-prints

[159] Tsitsiklis, J N, Bertsekas, D P, Athans, M and others (1986). Distributed asynchronous deterministic and stochastic gradient optimization algorithms. IEEE transactions on automatic control. 31 803–12

[160] Agarwal, A and Duchi, J C (2011). Distributed Delayed Stochastic Optimization. ArXiv e-prints

[161] Recht, B, Re, C, Wright, S and Niu, F (2011). Hogwild: A lock-free approach to parallelizing stochastic gradient descent. Advances in neural information processing systems. 693–701

[162] Shi, W, Ling, Q, Wu, G and Yin, W (2014). EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization. ArXiv e-prints

[163] Johnson, R and Zhang, T (2013). Accelerating stochastic gradient descent using predictive variance reduction. Advances in neural information processing systems 26. Curran Associates, Inc. 315–23. http://papers.nips.cc/paper/4937-accelerating-stochastic-gradient-descent-using-predictive-variance-reduction.pdf

[164] Agarwal, A, Chapelle, O, Dudik, M and Langford, J (2011). A Reliable Effective Terascale Linear Learning System. ArXiv e-prints

[165] Jain, P, Netrapalli, P and Sanghavi, S (2012). Low-rank Matrix Completion using Alternating Minimization. ArXiv e-prints

[166] Jain, P and Oh, S (2014). Provable Tensor Factorization with Missing Data. ArXiv e-prints

[167] Avron, H and Toledo, S (2011). Randomized algorithms for estimating the trace of an implicit symmetric positive semi-definite matrix. J. ACM. ACM, New York, NY, USA. 58 8:1–8:34. http://doi.acm.org/10.1145/1944345.1944349

[168] Avron, H, Sindhwani, V and Woodruff, D (2013). Sketching structured matrices for faster nonlinear regression. Advances in neural information processing systems. 2994–3002

[169] Li, Y, Nguyên, H L and Woodruff, D P (2014). On sketching matrix norms and the top singular vector. SODA. SIAM. 1562–81

[170] Halko, N, Martinsson, P G and Tropp, J A (2011). Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA. 53 217–88. http://dx.doi.org/10.1137/090771806

[171] Xi, Y, Xia, J and Chan, R (2014). A fast randomized eigensolver with structured LDL factorization update. SIAM Journal on Matrix Analysis and Applications. 35 974–96. http://dx.doi.org/10.1137/130914966

[172] Needell, D and Tropp, J A (2014). Paved with good intentions: Analysis of a randomized block Kaczmarz method. Linear Algebra and its Applications. 441 199–221. http://www.sciencedirect.com/science/article/pii/S0024379513000098

[173] Orban, D (2013). Limited-Memory \(LDL^T\) Factorization of Symmetric Quasi-Definite Matrices with Application to Constrained Optimization. GERAD, Montréal, Canada

[174] Shabat, G, Shmueli, Y and Averbuch, A (2013). Randomized LU Decomposition. ArXiv e-prints

[175] Padula, A D, Scott, S D and Symes, W W (2009). A software framework for abstract expression of coordinate-free linear algebra and optimization algorithms. ACM Trans. Math. Softw. ACM, New York, NY, USA. 36 8:1–8:36. http://doi.acm.org/10.1145/1499096.1499097

[176] Bartlett, R A, Waanders, B G V B and Heroux, M A (2004). Vector reduction/transformation operators. ACM Trans. Math. Softw. ACM, New York, NY, USA. 30 62–85. http://doi.acm.org/10.1145/974781.974785

[177] Symes, W W, Sun, D and Enriquez, M (2011). From modelling to inversion: designing a well-adapted simulator. Geophysical Prospecting. Blackwell Publishing Ltd. 59 814–33. http://dx.doi.org/10.1111/j.1365-2478.2011.00977.x

[178] Funke, S W and Farrell, P E (2013). A framework for automated PDE-constrained optimisation. ArXiv e-prints

[179] Lämmel, R (2008). Google’s MapReduce programming model — revisited. Science of Computer Programming. 70 1–30. http://www.sciencedirect.com/science/article/pii/S0167642307001281

[180] Gonzalez, J E, Low, Y, Gu, H, Bickson, D and Guestrin, C (2012). PowerGraph: Distributed graph-parallel computation on natural graphs. Proceedings of the 10th USENIX conference on operating systems design and implementation. USENIX Association, Berkeley, CA, USA. 17–30. http://dl.acm.org/citation.cfm?id=2387880.2387883

[181] Verschuur, D J, Berkhout, A J and Wapenaar, C P A (1992). Adaptive surface-related multiple elimination. Geophysics. 57 1166–77

[182] Tarantola, A and Valette, B (1982). Generalized nonlinear inverse problems solved using the least squares criterion. Reviews of Geophysics and Space Physics. 20 219–32

[183] Pratt, G, Shin, C and Hicks, G (1998). Gauss-Newton and full Newton methods in frequency-space seismic waveform inversion. Geophysical Journal International. 133 341–62. http://doi.wiley.com/10.1046/j.1365-246X.1998.00498.x

[184] Sacchi, M and Ulrych, T (1996). Estimation of the discrete Fourier transform, a linear inversion approach. Geophysics. 61 1128–36. http://dx.doi.org/10.1190/1.1444033

[185] Trad, D (2009). Five-dimensional interpolation: Recovering from acquisition constraints. Geophysics. 74 V123–32. http://dx.doi.org/10.1190/1.3245216

[186] Science, Technology and Innovation Council (2013). State of the Nation 2012. http://www.stic-csti.ca/eic/site/stic-csti.nsf/eng/00065.html