Herrmann - CRDPJ 375142 - 08

Released to public domain under Creative Commons license type BY.
Copyright (c) 2015 SLIM group @ The University of British Columbia.

Collaborative Research and Development (CRD) Grants Progress Report

Due Date: (May 30, 2014)
Covers the Period: (May 1, 2013 to April 30, 2014)

Is your personal information below correct? (please enter an “x” in the appropriate box)

__ Yes
X No (please make the necessary corrections)

Dr. F.J. Herrmann
Dept. of Earth, Ocean and Atmospheric Sciences
University of British Columbia
2020-2207 Main Mall
VANCOUVER BC V6T 1Z4

Tel.: (604) 822-8628
E-mail Address: fherrmann@eos.ubc.ca

Is the project information below correct?

X Yes
__ No (please make the necessary corrections)

Project title: Dynamic Nonlinear Optimization for Imaging in Seismic Exploration (DNOISE)

File Number: CRDPJ 375142 - 08

Co-investigator(s):

M.P. Friedlander, Computer Science, British Columbia
O.Y. Yilmaz, Mathematics, British Columbia

Collaborator(s):

E.Y. Haber, British Columbia
A.M. Powell, Vanderbilt University
M.A. Saunders, Stanford University
C.C. Stolk, University of Twente
E. Verschuur, Delft University of Technology

Supporting Organization(s):

S.T. Kaplan, Chevron Canada Resources Ltd
Z. Yu, British Petroleum Oil
H. MacIntyre, BG Group (Canada)
C. Mosher, ConocoPhillips Canada Resources Corp.
D. Nichols, Schlumberger Canada Limited
T. Hertweck, CGGVeritas (CAN)
C.E. Theodoro, Petrobras
S. Jaffer, Total
S. Brandsberg-Dahl, Petroleum Geo-Services
T. Ridsdill-Smith, Woodside
J. Brittan, ION Geophysical

1. Progress Towards Objectives/Milestones

Executive summary. As we enter the second half of the DNOISE II project, we are happy to report significant progress on several fronts. Firstly, our work on seismic data acquisition with compressive sensing is becoming widely recognized. For instance, ConocoPhillips ran a highly successful field trial of marine acquisition with compressive sensing and obtained significant improvements compared to standard production (see figure below). Moreover, one of the main outcomes of this year’s EAGE workshop was that industry is ready to adopt randomized sampling as a new acquisition paradigm. Needless to say, this is a big success for what we have been trying to accomplish with DNOISE II. Finally, we have made a breakthrough in the application of randomized sampling to 4-D seismic, which is receiving a lot of interest from industry.

Secondly, our work on large-scale optimization in the context of wave-equation based inversion is also increasingly widely adopted. For instance, our batching techniques are making the difference between a loss and a profit for a large contractor company active in the area of full-waveform inversion. We also continued to make progress in exciting new directions that go beyond sparsity promotion and that allow us to exploit other types of structure within the data, such as low rank for matrices or hierarchical Tucker formats for tensors. Applications of these techniques show excellent results and, in certain cases such as source-separation problems with small dithering, show significant improvements over transform-domain methods.

Thirdly, we continued to make significant progress in wave-equation based inversion. We extended our new penalty-based formulation, now called Wavefield Reconstruction Inversion/Imaging, to include total-variation regularization and density variations. We also continued to make progress on multiples, imaging with multiples, and 3-D full-waveform inversion.

Statoil is the latest company to join, and several other companies have shown a keen interest. We also received substantial in-kind contributions, including a license to WesternGeco’s iOmega and HPC equipment discounts. After many years of support, BP unfortunately decided to no longer support SINBAD, citing financial headwinds related to the Deepwater Horizon disaster. On a more positive note, we are extremely happy to report major progress on our efforts to secure access to high-performance computing, including renewed funding from NSERC and our involvement in the International Inversion Initiative in Brazil.

Nine peer-reviewed journal publications have resulted from our work within the reporting period, with a further six submitted, and DNOISE members disseminated the results of our research in 49 major national and international conference presentations. On the HQP training side, four MSc students have recently graduated, with one obtaining a position with CGG Calgary, and we added four postdocs and three PhD students to our team in September 2013, greatly increasing our research capacity. As can be seen from the report below, we are well on schedule and, on certain topics, well beyond the milestones included in the original proposal. With the purchase of the new cluster, we expect to see a surge of activity in extending our algorithms to 3D. With this increased capacity, we continue to be in an excellent position to make fundamental contributions to the fields of seismic data acquisition, processing, and wave-equation based inversion.
In the sections below, we give a detailed overview of the research and publication activities of the different members of the group and how these relate to the objectives of the grant, to industrial uptake, and to outreach. Unless stated otherwise the students and PDFs are (co)-supervised by the PI. We refer to the publications section 4.0 for a complete list of our presentations, conference proceedings, and journal publications. We also refer to our mindmap, which clearly establishes connections between the different research topics we have embarked upon as part of the DNOISE II project.

Figure 1: Field trial by ConocoPhillips. Thanks to Chuck Mosher.

Compressive acquisition and sparse recovery

Objectives: Design and implementation of new seismic-data acquisition methodologies that reduce costs by exploiting structure in seismic data.

Seismic data collection is becoming more and more challenging because of increased demands for high-quality long-offset and wide-azimuth data. At SLIM and as part of the DNOISE II project, we adapt recent results from compressive sensing towards innovations in marine acquisition that reduce cost and allow us to collect more data.

Marine acquisition with structure promotion

Objectives: Design and implementation of new seismic-data acquisition methodologies (for ocean bottom surveys) that help mitigate costs by exploiting structure in seismic data.

Randomized marine acquisition with time jittering

Haneet Wason (PhD student) continued to work on the development of new marine-acquisition schemes that derive from compressive sensing. Connections were made between random time dithering and jittered sampling in space, showing that high-quality seismic data volumes can be recovered from time-jittered marine acquisition in which the average inter-shot time is reduced significantly. While this reduction leads to cheaper surveys at the cost of overlapping shots, the time-jittered acquisition, in conjunction with shot separation by curvelet-domain sparsity promotion, allows us to recover high-quality data volumes. This work was presented at our 2013 Spring SINBAD Consortium meeting and at the EAGE in London by Felix J. Herrmann in the talk entitled “Ocean bottom seismic acquisition via jittered sampling”. Haneet Wason also extended her work to incorporate irregular grids in the time-jittered acquisition design, followed by sparsity-promoting recovery via the non-equispaced fast discrete curvelet transform (NFDCT). This work was presented at our 2013 Fall SINBAD Consortium meeting and at the 2013 SEG in Houston by Haneet Wason in the talk entitled “Time-jittered ocean bottom seismic acquisition”, and by Tim Lin (PhD student) in the talk “Dense shot-sampling via time-jittered marine sources” at the SEG Workshop “Seismic Data Acquisition with Simultaneous Sources”.
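To make the sampling scheme concrete, the sketch below generates randomly jittered firing times for a subsampled survey. It is a minimal illustration only, with illustrative parameter choices rather than the exact design used in the cited work, and the subsequent curvelet-domain sparsity-promoting recovery is not shown.

```python
import numpy as np

def jittered_shot_times(n_shots, t_conventional, speedup=2.0, jitter=0.5, seed=0):
    """Illustrative time-jittered firing schedule for marine acquisition.

    n_shots        : number of shots to fire
    t_conventional : conventional (regular) inter-shot time in seconds
    speedup        : factor by which the average inter-shot time is reduced
    jitter         : maximum perturbation as a fraction of the reduced interval
    """
    rng = np.random.default_rng(seed)
    dt = t_conventional / speedup             # reduced average inter-shot time
    nominal = dt * np.arange(n_shots)         # regular, compressed schedule
    perturbation = rng.uniform(-jitter, jitter, n_shots) * dt
    return nominal + perturbation             # jittered firing times (shots overlap)

print(jittered_shot_times(n_shots=5, t_conventional=10.0))
```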

Figure 2: Recovery from time-jittered marine acquisition.

Source separation via rank minimization

Objectives: Develop algorithms to address the source separation problem for simultaneous towed-array acquisition surveys.

From July to October 2013, Haneet Wason worked as an intern at Petroleum Geo-Services (PGS) in Weybridge, London (UK), where she worked on the problem of source separation for a simultaneous (or blended) towed-array acquisition with two sources (at different depths) firing shots within 1 second of each other. The main aim of this project was to separate each blended shot gather into its constituent source components. Haneet Wason observed that such low variability in the firing times does not favour recovery (or separation) via sparsity promotion, as it does in the case of the time-jittered acquisition (a high-variability scenario), and hence began work on developing an algorithm to address this problem. In early 2014, Haneet Wason, along with Rajiv Kumar (PhD student), extended this work towards recovery algorithms that exploit low-rank structure. Specifically, they developed an algorithm based on SVD-free nuclear-norm minimization in the hierarchical semi-separable representation, and the research findings were submitted to the 2014 SEG in the abstract “Source separation via SVD-free rank minimization in the hierarchical semi-separable representation”. The algorithm is an extension of the work presented by Rajiv Kumar at the 2013 SEG in Houston in the talk entitled “Reconstruction of seismic wavefields via low-rank matrix factorization in the hierarchical-separable matrix representation”, wherein data are recovered from missing shots or receivers, i.e., the missing-trace interpolation problem. Haneet Wason will be testing this algorithm on the real dataset that she received from PGS.
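The essence of the SVD-free approach is to parameterize the unknown data matrix as a product of two thin factors and to penalize their Frobenius norms, which upper-bounds the nuclear norm without ever forming an SVD. The sketch below illustrates this on a generic matrix-completion problem with a plain gradient loop; the hierarchical semi-separable partitioning and the actual source-separation operator from the cited abstract are omitted, and all names and parameters are illustrative.

```python
import numpy as np

def svd_free_lowrank(D, mask, rank=10, lam=1e-2, iters=500, step=1e-3, seed=0):
    """Recover a low-rank matrix from observed entries without computing SVDs.

    Minimizes 0.5*||mask*(L @ R.T - D)||_F^2 + 0.5*lam*(||L||_F^2 + ||R||_F^2)
    by gradient descent on the thin factors L and R; the factor penalty is an
    upper bound on the nuclear norm of L @ R.T.
    """
    rng = np.random.default_rng(seed)
    m, n = D.shape
    L = 0.1 * rng.standard_normal((m, rank))
    R = 0.1 * rng.standard_normal((n, rank))
    for _ in range(iters):
        residual = mask * (L @ R.T - D)       # mismatch on observed entries only
        gL = residual @ R + lam * L
        gR = residual.T @ L + lam * R
        L -= step * gL
        R -= step * gR
    return L @ R.T
```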

Time-lapse seismic

Objectives: Develop randomized sampling and recovery algorithms that leverage recent insights from compressive sensing towards the recovery of time-lapse surveys including possible implications of randomized sampling on repeatability of surveys.

Felix Oghenekohwo (PhD student) was actively involved with research focused on randomized sampling techniques and compressive sensing extensions to time-lapse seismic. Preliminary observations of this work were presented at the CSEG GeoConvention in Calgary in May 2013, and an abstract titled “Time-lapse seismics with randomized sampling” was submitted to the SEG later in the same year. At the 2013 Fall SINBAD Consortium meeting, Felix Oghenekohwo proposed a radical new approach to 4-D seismic that explicitly exploits information shared amongst the baseline and monitor surveys. With the new approach, Felix Oghenekohwo was able to shed fundamental new light on repeatability requirements in time-lapse seismic. For instance, in his EAGE paper “Time-lapse seismic without repetition: reaping the benefits from randomized sampling and joint recovery”, Felix Oghenekohwo showed a method for computing time-lapse signals where the acquisition was not repeated, exploiting the fact that different vintages have information in common. Together with Rajiv Kumar, Felix Oghenekohwo extended his method to recover time-lapse signals in the model space, which resulted in the submission of the SEG abstract “Randomized sampling without repetition in time-lapse surveys”. With Haneet Wason, Felix Oghenekohwo tested his model for randomized acquisition on time-jittered marine acquisition and co-authored the abstract “Randomization and repeatability in time-lapse marine acquisition”. He is also preparing a software release on this new 4-D method, and he is about to submit a manuscript summarizing his research findings to a peer-reviewed journal.
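Schematically, the joint recovery model writes the baseline data as b1 = A1(z0 + z1) and the monitor data as b2 = A2(z0 + z2), where z0 is the component shared by both vintages and z1, z2 are the vintage-specific innovations; stacking these equations lets one sparsity-promoting solve recover all three components at once. The snippet below only assembles this block system with dense matrices for illustration; in practice A1 and A2 are randomized sampling operators composed with a curvelet synthesis, and the stacked system is fed to a one-norm solver.

```python
import numpy as np

def joint_recovery_system(A1, A2, b1, b2):
    """Assemble the stacked operator and data of the joint recovery model.

    b1 = A1 (z0 + z1) and b2 = A2 (z0 + z2): z0 is shared between baseline and
    monitor, z1 and z2 are the time-lapse innovations. The returned system acts
    on the concatenated vector [z0; z1; z2].
    """
    n = A1.shape[1]
    A_joint = np.block([[A1, A1, np.zeros((A1.shape[0], n))],
                        [A2, np.zeros((A2.shape[0], n)), A2]])
    b_joint = np.concatenate([b1, b2])
    return A_joint, b_joint
```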

Figure 3: Recovery of 4-D shot records from randomized time-jittered marine acquisition with four-fold subsampling (i.e., a four-fold reduction in survey time). Left: original. Middle: recovery with the independent recovery model, where the different vintages are reconstructed independently. Right: recovery with the joint recovery model.

Compressive sensing

Objectives: Develop compressive sensing theory with applications to exploration seismology.

Together with Ozgur Yilmaz (co-PI), Brock Hargreaves (MSc student, recently graduated) worked on transform-domain sparsity promotion. Specifically, he investigated ways of incorporating prior information into recovery algorithms based on promoting “analysis sparsity”. He devised an original algorithm that in certain cases outperforms existing methods. After an internship at Total in Houston, Brock Hargreaves completed his studies and obtained his MSc degree in April 2014. Brock’s thesis work—thesis title: “Sparse Signal Recovery: Analysis and Synthesis Formulations with Prior Support Information”—will be submitted for journal publication shortly.

Under the supervision of Ozgur Yilmaz, Navid Ghadermarzy obtained his MSc degree in August 2013—thesis title: “Using Prior Support Information in Compressed Sensing”—and started as a PhD student in the Mathematics Department of UBC, again under the supervision of Ozgur Yilmaz. Since then, Navid has followed up on his research on using prior support information to improve the recovery conditions of different signal recovery algorithms. Specifically, he developed an algorithm for seismic trace interpolation that exploits the fast convergence of the approximate message passing (AMP) algorithm in the Fourier domain and uses it as a pre-calculator for missing-trace interpolation in the curvelet domain. This makes the interpolation both faster and better (in terms of recovery error). The SEG abstract “Seismic trace interpolation with approximate message passing” was submitted on this work. Together with Ozgur Yilmaz and Rongrong Wang (PDF since September 2013), Navid also started working on fast iterative methods for overdetermined linear systems of equations and possible applications to sparse inversion.

In collaboration with Ozgur Yilmaz, Enrico Au-Yeung (PDF) developed a new class of random matrices that enables the recovery of signals with a sparse representation in a known basis with overwhelmingly high probability. These matrices are obtained via what we call a “randomized Bernoulli transform” of a fixed non-random matrix that satisfies two very general conditions. One benefit of our approach, compared to the standard method of using Gaussian random matrices, is that far fewer random variables are needed to generate these new types of random matrices. A second benefit of our method is that, by further randomizing the signs of the columns of the matrix, we can use this matrix to accomplish dimensionality reduction in large data sets. The matrix will satisfy the requirement of the Johnson-Lindenstrauss lemma, and hence a large number of vectors in a high dimension can be projected into a lower-dimensional space while nearly preserving the original distances among distinct pairs of vectors.
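The precise construction and the two conditions on the fixed matrix are in Enrico Au-Yeung’s work; the snippet below only conveys the flavor under illustrative assumptions, taking a subsampled DCT as the fixed non-random matrix and randomizing the signs of its columns with independent ±1 Bernoulli variables, so that only n random sign bits are needed instead of a full m-by-n Gaussian draw.

```python
import numpy as np
from scipy.fft import dct

def randomized_bernoulli_transform(n_rows, n_cols, seed=0):
    """Illustrative sensing matrix: fixed DCT rows with random +/-1 column signs."""
    rng = np.random.default_rng(seed)
    F = dct(np.eye(n_cols), norm='ortho')          # fixed, non-random matrix
    rows = rng.choice(n_cols, size=n_rows, replace=False)
    signs = rng.choice([-1.0, 1.0], size=n_cols)   # n Bernoulli sign bits
    return F[rows, :] * signs                      # randomize the column signs

A = randomized_bernoulli_transform(64, 256)
print(A.shape)
```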

Tim Lin made several major contributions spanning different areas of seismic signal processing, all of which have been, or will be, presented at major geophysical society meetings. These areas include data interpolation/regularization, separation of simultaneously acquired data, and automatic removal of surface-related multiples (see below). In a talk titled “Cosparse seismic data interpolation” presented at the 2013 EAGE annual meeting in London, Tim Lin examined the synthesis-versus-analysis problem that arises in theoretical signal processing in the context of curvelet-based seismic data interpolation by sparse optimization. He was able to propose an inversion procedure that exhibits uniform improvement over existing curvelet-based regularization via sparse inversion (CRSI) algorithms, by demanding that the procedure only return physical signals that naturally admit sparse coefficients under canonical curvelet analysis. At the annual SEG meeting in Houston that same year, Tim Lin was invited to present at the Simultaneous Sources workshop a talk titled “Dense shot-sampling via time-jittered marine sources”, where he demonstrated that (i) shot-time randomness is the key element that links land- and marine-based simultaneous acquisition schemes, and (ii) the ability to separate and to regularize simultaneously acquired data are simply two aspects of an underlying fundamental property introduced by shot-time randomness in marine data. This work is anticipated to be published in a joint journal paper with Haneet Wason.
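In generic notation (not taken verbatim from the abstract), with \(S\) the curvelet analysis operator, \(S^{*}\) its synthesis adjoint, \(R\) the restriction to the observed traces, \(\mathbf{b}\) the observed data, and \(\sigma\) a noise level, the two formulations contrast as follows:

\[
\text{synthesis:}\quad \min_{\mathbf{x}}\ \|\mathbf{x}\|_1 \quad \text{s.t.}\quad \|R S^{*}\mathbf{x} - \mathbf{b}\|_2 \le \sigma, \qquad \mathbf{f} = S^{*}\mathbf{x},
\]
\[
\text{analysis (cosparse):}\quad \min_{\mathbf{f}}\ \|S\mathbf{f}\|_1 \quad \text{s.t.}\quad \|R\mathbf{f} - \mathbf{b}\|_2 \le \sigma,
\]

so that the analysis form optimizes directly over physical signals \(\mathbf{f}\) whose curvelet coefficients \(S\mathbf{f}\) are sparse, rather than over an over-complete coefficient vector \(\mathbf{x}\).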

Matrix & tensor completion

Objectives: Find alternative structure promoting wavefield reconstruction and optimization techniques that are well-suited for large-scale (seismic) problems.

Missing trace-interpolation and regularization

Rajiv Kumar (PhD student) continued to work on missing-trace recovery algorithms that exploit low-rank structure, resulting in “SVD-free low-rank matrix factorization: wavefield reconstruction via jittered subsampling and reciprocity”, which will be presented at the 2014 EAGE. In this work, he compares the performance of different randomized subsampling techniques on missing-trace interpolation, namely jittered subsampling and uniform random subsampling. Rajiv Kumar also extended matrix-factorization based missing-trace interpolation to include regularization so it can be used on unstructured (off-the-grid) data. Jointly with Oscar Lopez (PhD student) and Ernie Esser (PDF since September 2013), Rajiv Kumar submitted this work in the conference paper “Matrix completion on unstructured grids: 2-D seismic data regularization and interpolation”. He is now testing these algorithms on realistic field data sets, and future work is to extend this algorithm to 3-D seismic. Finally, Rajiv Kumar is working with Curt Da Silva on a paper in which different matrix completion techniques are compared. He expects to submit this manuscript this summer.

Oscar Lopez started his PhD studies in September 2013. He is currently working on matrix completion techniques for data supported on unstructured grids, i.e., irregular data resulting from non-equispaced recording locations. Specifically, his work focuses on the negative effects of an unstructured grid on incomplete-data reconstruction via quadratically constrained nuclear-norm minimization, and on reformulations that overcome such impediments. This work finds applications in seismic data interpolation.

Tensor completion in the Hierarchical Tucker format

Curt Da Silva (PhD student) has developed the algorithmic components for solving optimization problems in the Hierarchical Tucker format, a relatively recent format for representing high-dimensional arrays. The culmination of this work is a Matlab toolbox equipped to solve large-scale tensor interpolation problems using both dense and sparse linear algebra routines, which support low- and high-dimensional problems, respectively. This work has been presented at several meetings (EAGE/SEG/SAMPTA) and has been very well received. Building upon previous understanding of the smooth manifold structure of the Hierarchical Tucker format, Curt Da Silva has worked out the necessary components for performing manifold optimization in this tensor format, including methods to regularize the recovery problem when there is very little data. These methods have been presented at the EAGE under “Hierarchical Tucker Tensor Optimization - Applications to 4D Seismic Data Interpolation”, at SAMPTA under “Hierarchical Tucker Tensor Optimization - Applications to Tensor Completion”, and at the SEG under “Structured tensor missing-trace interpolation in the Hierarchical Tucker format”. Curt Da Silva will also present an extension of this work, using the midpoint-offset transformation to mitigate large, sparse noise in Hierarchical Tucker interpolation, at the upcoming EAGE in a talk titled “Low-rank Promoting Transformations and Tensor Interpolation — Applications to Seismic Data Denoising”. The manuscript “Optimization on the Hierarchical Tucker manifold - applications to tensor completion” summarizing this work has recently been submitted for journal publication.

Figure 4: Recovery with the Hierarchical Tucker format.

Lifting and trace-norm minimization

This year we also embarked on exciting new developments in convex optimization referred to as lifting methods. This work resulted in the conference papers “Application of a convex phase retrieval method to blind seismic deconvolution” by Ernie Esser (PDF), offering a new approach to the classical problem of blind deconvolution, and “Full Waveform Inversion with Interferometric Measurements” by Rongrong Wang (PDF in Math since September 2013), which provides fundamental new insights into seismic interferometry. Both papers constitute first steps towards dealing with the classical problems of blind deconvolution and source/receiver statics. The work of Rongrong Wang extends recent work by Laurent Demanet to the nonlinear case and is, as far as we know, the first instance of higher-order (read source/receiver) interferometry. Ernie Esser will also attend the SIAM Imaging Conference in Hong Kong in May 2014 and give a talk, related to work at SLIM, about operator splitting techniques for certain nonconvex optimization problems and an application to 2D phase unwrapping.

Large-scale structure revealing optimization

All of the above approaches hinge on having access to fast large-scale solvers. When the optimizations are carried out over higher-dimensional objects such as matrices and tensors, it is crucial that these optimizations do not rely on carrying out SVDs over the full ambient dimension. As part of DNOISE II, we continued to spearhead the development of state-of-the-art solvers, as reported in the recently revised manuscript “Fast methods for denoising matrix completion formulations, with application to robust seismic data interpolation” by Aleksandr Aravkin (former PDF) and in the work by Curt Da Silva, who in his revised manuscript “Hierarchical Tucker Tensor Optimization - Applications to 4D Seismic Data Interpolation” introduced highly efficient manifold optimization in the Hierarchical Tucker format. Both approaches appear to be highly competitive and an order of magnitude faster than methods used by others in our field.

Under the supervision of Michael Friedlander, PhD student Ives Macedo is focused on a new generation of convex optimization algorithms for low-rank matrix recovery that do not require explicit factorizations of the matrices. This attempts to circumvent the need for SVD computations in existing rigorous algorithms, while preserving the low cost per iteration of heuristic algorithms. The first step of this research program required an understanding of the mathematical underpinnings of this special kind of optimization, which is based on the concept of gauges, generalizations of norms. This mathematical analysis was co-authored with postdoc Ting Kei Pong and submitted to the SIAM Journal on Optimization (“Gauge optimization, duality, and applications”).

Proximal-gradient methods form the algorithmic template for almost all methods used in sparse optimization. The principle behind these methods is to alternate between the optimization of the smooth and nonsmooth parts of the objective function. A fundamental bottleneck, however, is that it is only known how to use first-order information. (A notable exception is our earlier work on projected quasi-Newton methods, “Optimizing Costly Functions with Simple Constraints: A Limited-Memory Projected Quasi-Newton Algorithm”, though this approach does not use true second-order information.) In answer to this, PhD student Gabriel Goh is developing proximal-gradient solvers that use tree-based preconditioners, which are structured Hessian approximations built from an underlying graph representation of the relationships between the variables in the problem. Our recent breakthrough is the discovery that, by exploiting the tree structure of the preconditioner, each proximal iteration can have low computational complexity (i.e., the same cost as a sort operation). At the same time, PhD student Julie Nutini is exploring rigorous approaches for seamlessly incorporating conjugate-gradient-like iterations, which are well understood and effective, into the proximal-gradient framework.
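For reference, the basic proximal-gradient template referred to above, instantiated on the one-norm regularized least-squares problem (plain ISTA with a soft-thresholding proximal step). The tree-based preconditioners and the conjugate-gradient-like accelerations under development are not shown, and all names are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(A, b, lam, iters=500):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by alternating a gradient step
    on the smooth part with the proximal step on the nonsmooth part."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, with L the gradient's Lipschitz constant
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                  # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)
    return x
```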

Industry uptake and outreach

During the EAGE Workshop on Land and Ocean Bottom; Broadband Full Azimuth Seismic Surveys, it became very clear that we are at a cusp where industry is keen to adopt our ideas on randomized sampling. For instance, WesternGeco has already commercialized randomized coil sampling, while ConocoPhillips has successfully conducted a field trial using compressive-sensing acquisition and recovery technology derived from the work conducted as part of DNOISE II. Finally, Haneet Wason worked with BG Group on the design of a marine survey with random time jittering. Although everything was ready to be carried out in the field (on board the sampling vessel), the acquisition software of the contractors unfortunately did not (yet) support our random time-dithering sample design. BG is committed to carrying out this type of sampling at the earliest future opportunity. We consider these recent developments a great accomplishment and a first step towards instituting a fundamentally new sampling paradigm. We also received several requests from our industrial partners regarding our work on unstructured matrix completion and 4-D seismic. To further facilitate uptake of our technology, we plan to continue to work with our industrial partners and to use our new high-performance computing facilities towards the development and evaluation of our sampling methodologies on 3D–4D seismic data volumes. Finally, we are planning to make simulation-based acquisition design an integral part of our collaboration in Brazil (see below for more details). Having access to significant compute will not only allow us to test our acquisition and recovery techniques on realistic 3D–4D scenarios, but will also allow us to investigate possible risks. Mitigation of the risks perceived by industry will be instrumental for the widespread adoption of randomized sampling technology by industry.

Aside from extensive interactions with industry, we also continue to interact with the wider academic and practicing geophysical community where our work is becoming more and more recognized. The PI was invited as plenary speaker to the SPIE Optics and Photonics: Wavelets and Sparsity XV workshop with the talk entitled “Randomized sampling in exploration seismology”. This meeting is attended by specialists in applied and computational harmonic analysis and compressive sensing. The PI was also invited to present a CSEG Technical Luncheon in Calgary entitled “Breaking structure - why randomized sampling matters”, which was attended by approximately 500 Calgary-area geophysicists.

Progress Towards Objectives/Milestones

We are very well on track, although some challenges remain regarding the development of parallel one-norm solvers and the design of CS matrices for 3-D marine acquisition. The former we overcame by introducing matrix and tensor completion techniques that scale. With the new machine and access to HPC in Brazil, we also expect to make progress towards the design of practical CS-based marine acquisitions.

Outcomes. Development of a new paradigm for seismic data acquisition and sparsity/low-rank-promoting recovery that will allow us to acquire high-resolution wide azimuth seismic data volumes at significantly reduced costs. Our technology will be a key enabler for full-waveform inversion by pushing access to both the low and high end of the spectrum.

For more information on this topic see also the SLIM website and in particular Acquisition and Optimization.

Free-surface removal

Objectives: Wave-equation-based mitigation of the free surface by sparse inversion.

Estimation of Primaries by Sparse Inversion (EPSI)

In the winter of 2013/14, Tim Lin introduced two novel ideas that significantly improve the practicality of performing automatic surface-related multiple removal on field seismic data. These contributions involve fundamental improvements to a technique called Estimation of Primaries by Sparse Inversion (EPSI). The first of these is an acceleration method for EPSI based on the ideas of multilevel/multiscale solvers in numerical linear algebra. This scheme can lower the computational cost of EPSI by an order of magnitude, essentially by obtaining the majority of the solution components from coarse spatial grids. The key insight is that these multiscale methods only make sense in the context of a sparse inversion process like EPSI, and not in the prevailing school of prediction-subtraction methods for multiple removal. Tim Lin described this method in an extended abstract titled “Multilevel acceleration strategy for the robust estimation of primaries by sparse inversion”, which was accepted for presentation at the 2014 EAGE annual conference in Amsterdam. The second idea is a new formulation of the multiple-prediction step in EPSI that enables the process to automatically tolerate large contiguous holes in the acquisition geometry, such as the near-offset gap. Previously proposed solutions to the data-gap problem usually involve interpolation of seismic data in these areas, but the uncertainties introduced can often significantly complicate (and sometimes entirely subvert) an EPSI application in practice. Tim Lin’s reformulation instead eliminates the need for data reconstruction by using auto-convolution terms of the current primary wavefield model to account for multiple contributions that would otherwise come from within the data gap. This work is discussed in an extended abstract titled “Mitigating data gaps in the estimation of primaries by sparse inversion without data reconstruction”, which has been submitted to the technical program committee for the 2014 SEG annual meeting in Denver. Tim Lin is preparing both of these works for submission as journal articles, and is scheduled to defend his PhD thesis this Fall.

Seismic interferometry

Recently, interferometry has demonstrated its ability to produce robust solutions in imaging and inversion. It has been observed that, in the presence of certain modeling errors, fitting interferometric measurements can return a more accurate estimate than the usual least squares. Such modeling errors include errors in the background velocity model and source-receiver displacements. In the past year, in collaboration with Felix J. Herrmann and Ozgur Yilmaz, Rongrong Wang worked on extending the interferometry idea to full-waveform inversion. So far this work has resulted in the submitted conference paper “Full Waveform Inversion with Interferometric Measurements”. We found that the nature of cross-correlation makes interferometry particularly useful when the noise is assumed to lie mostly in the phase. Such noise appears, for example, in models with the above-mentioned modeling errors when the errors are relatively small. Under this assumption, we designed several new misfit functions. We introduced interferometry among four traces to deal with multiple modeling errors. We proved that the minimizers of these functions approach the true image as the noise approaches zero. We also ran simulations on a stylized seismic example with the above-mentioned modeling errors, and obtained results that confirmed the superiority of interferometric estimates.
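As a toy illustration of what fitting interferometric measurements means, the sketch below compares cross-correlations of trace pairs instead of the traces themselves, so that phase errors common to a pair partially cancel. The actual misfit functions and the four-trace construction analyzed in the cited paper are more involved; this snippet only conveys the basic idea and all names are illustrative.

```python
import numpy as np

def interferometric_misfit(pred, obs):
    """Sum of squared differences between cross-correlations of trace pairs.

    pred, obs : arrays of shape (n_traces, n_samples) with predicted and
    observed traces. The misfit is measured on pairwise cross-correlations
    rather than on the traces themselves.
    """
    n_traces = pred.shape[0]
    misfit = 0.0
    for i in range(n_traces):
        for j in range(i + 1, n_traces):
            xc_pred = np.correlate(pred[i], pred[j], mode='full')
            xc_obs = np.correlate(obs[i], obs[j], mode='full')
            misfit += np.sum((xc_pred - xc_obs) ** 2)
    return misfit
```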

Industry uptake and outreach

Industry has continued to show a keen interest in these topics and especially in REPSI. While we made good progress implementing 2-D REPSI in a quasi-3-D environment, the real challenge will be to implement this algorithm on 3-D seismic, which is extremely challenging because of the high computational demands and the “everything talks to everything” type of parallelism that is required. Therefore, we are particularly excited about the funding for our cluster and about our more or less exclusive access to a $10 M HPC facility in Brazil. This access to HPC will allow us to extend EPSI to 3-D seismic. Finally, CGG used our curvelet-domain matched filter in their multiple-removal technology.

Progress Towards Objectives/Milestones

Overall we made excellent progress on these topics. The advent of EPSI made it no longer necessary for us to continue the development of curvelet-domain matched filters. Instead, we opted to further develop EPSI by making it more computationally feasible and by mitigating the effects of (near-offset) gaps in the data.

Outcomes. A robust framework for the estimation of surface-free Green’s function and source signatures that serve as input to imaging, migration-velocity analysis, and full-waveform inversion.

For more information on this topic see also the SLIM website and in particular Processing.

Compressive modeling for imaging and inversion

Objectives: Design and implementation of efficient wavefield simulators in 2- and 3-D.

Accelerated Kaczmarz preconditioner

Art Petrenko used the current reporting period to design and implement seismic wave simulation on a heterogeneous computing platform consisting of a conventional host processor and a reconfigurable hardware accelerator. By taking advantage of the accelerator chip’s deep pipeline parallelism, the solution of the forward seismic problem was sped up by more than a factor of two compared to one core of the conventional processor working alone. Throughout the development process, Art received frequent feedback from the hardware vendor (Maxeler Technologies) and visited their London offices for a two-week period, during which significant progress was made on the code. Preliminary results from the project were presented at the 2013 Fall SINBAD Consortium meeting in December, and a poster detailing the work was shown at the Rice University Oil & Gas High Performance Computing Workshop in March 2014. This work, which is at the interface between applied mathematics and scientific computing, was published as the MSc thesis “Accelerating an iterative Helmholtz solver using reconfigurable hardware”, defended in April 2014 and now available through the UBC library. We are very excited about this work because there are strong indications that a hundred-fold speedup may be achievable in this framework. Art Petrenko’s prior research into software optimization of the same seismic wave simulation algorithm (“Software acceleration of CARP, an iterative linear solver and preconditioner”) was presented as a poster at the 2013 High Performance Computing Symposium in Ottawa. Having completed his program requirements, Art Petrenko will graduate in May 2014. Future work on this project will include further development of the linear algebra capabilities of the reconfigurable hardware accelerator, which will be presented at the 2014 EAGE Conference and written up for journal publication. This development will ensure that the computing power of the accelerator remains easily accessible to researchers in the SLIM group.

Extensions of Kaczmarz preconditioners

Rafael Lago (PDF since September 2013) has been addressing the solution of the forward (and backward) problem that arises in full-waveform inversion. One of his most notable findings while working in our group concerns the behaviour of minimal-residual Krylov methods for the solution of the acoustic time-harmonic wave equation. According to his experiments and studies, minimal-residual solvers such as CR are strong competitors to more classical and widespread methods such as CG, especially at higher frequencies. The importance of this result lies in the fact that the most computationally intensive step of any inversion strategy is the computation of (approximate) PDE solutions, and Rafael Lago’s research successfully sped up this computation by about 25%. His work was presented at the SIAM 13th Copper Mountain Conference on Iterative Solvers in a paper entitled “CRMN method for solving time-harmonic wave equation”, and is scheduled to be presented with further improvements at the 2014 EAGE in Amsterdam in the presentation entitled “Fast solution of time-harmonic wave-equation for Full-Waveform Inversion”. Rafael Lago is investigating more efficient heuristics for the stopping criterion of Krylov iterative solvers and efficient preconditioning techniques for the wave equation in more realistic scenarios, such as anisotropy and elasticity, using his expertise in high-performance computing and massively parallel programming. He is working on a full paper on Krylov methods for solving the wave equation, to be submitted later this year.
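CRMN itself combines a conjugate-residual-type iteration with the CARP row-projection preconditioner and is described in the cited papers. The snippet below is only a generic illustration, under illustrative parameters, of why minimal-residual Krylov methods are attractive for the symmetric indefinite systems that a time-harmonic (Helmholtz) discretization produces at higher frequencies, using SciPy's MINRES next to CG, which assumes positive definiteness.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 1-D Helmholtz operator -u'' - k^2 u with Dirichlet boundaries: symmetric,
# but indefinite once k^2 exceeds the smallest Laplacian eigenvalue.
n, k = 400, 40.0
h = 1.0 / (n + 1)
laplacian = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
A = (laplacian - k**2 * sp.eye(n)).tocsc()
b = np.zeros(n)
b[n // 2] = 1.0                                   # point source in the middle

x_mr, _ = spla.minres(A, b, maxiter=5000)         # minimal-residual iteration
x_cg, _ = spla.cg(A, b, maxiter=5000)             # CG may stagnate on indefinite A
print("minres residual:", np.linalg.norm(A @ x_mr - b))
print("cg residual    :", np.linalg.norm(A @ x_cg - b))
```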

Accelerated time-stepping

Objectives: Order-of-magnitude speedup of wave simulations with time stepping.

Sebastien Pacteau (former PhD student) worked on time-stepping approaches to full-waveform inversion on field-programmable gate arrays (FPGAs). While significant progress was made on this topic, Sebastien decided to return to France. At this point, it is unclear whether we will continue this line of research.

Multi-parameter discretization of PDE’s, gradients & Hessians

Objectives: Discretize the acoustic wave equation as a function of multiple parameters such that the gradient and Hessian of objective functions for PDE-constrained optimization exactly satisfy the Taylor expansion.

Bas Peters (PhD student) created a discretization for the two-parameter (compressibility and buoyancy) Helmholtz equation. The discretization is done with finite differences on a staggered grid and has PML boundaries. This work in itself is not new, but it is extremely rare to find expressions for the partial derivatives of the PDEs in the geophysical and mathematical literature. These partial derivatives occur in the KKT systems for Lagrangian and penalty formulations of PDE-constrained optimization problems and are therefore required in virtually all solvers for these problems. In this project, using the discretize-then-optimize formalism, we are able to show that the gradient and Hessian exactly satisfy the Taylor expansion of the penalty and Lagrangian based objective functionals for PDE-constrained optimization. To our knowledge, this has never been published for PDE-constrained optimization with the Helmholtz equation. This work is not published yet, but was used to carry out the research for “A sparse reduced Hessian approximation for multi-parameter Wavefield Reconstruction Inversion”.
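The check referred to above is the standard Taylor (gradient) test: if the implemented gradient is consistent with the discretized objective, the first-order Taylor remainder decays quadratically as the perturbation is halved. The sketch below applies it to a simple quadratic misfit; the function, its gradient, and all names are illustrative stand-ins for the Helmholtz-based objectives.

```python
import numpy as np

def taylor_test(f, grad, m0, dm, n_steps=6):
    """Check that |f(m0 + h dm) - f(m0) - h <grad(m0), dm>| = O(h^2).

    A consistent gradient makes the printed error drop by roughly a factor of
    four every time the step size h is halved.
    """
    f0 = f(m0)
    slope = np.dot(grad(m0), dm)
    for i in range(n_steps):
        h = 2.0 ** (-i)
        err = abs(f(m0 + h * dm) - f0 - h * slope)
        print(f"h = {h:.3e}   remainder = {err:.3e}")

# Illustrative objective: phi(m) = 0.5*||A m - d||^2 with gradient A^T (A m - d).
rng = np.random.default_rng(0)
A, d = rng.standard_normal((20, 10)), rng.standard_normal(20)
phi = lambda m: 0.5 * np.linalg.norm(A @ m - d) ** 2
dphi = lambda m: A.T @ (A @ m - d)
taylor_test(phi, dphi, rng.standard_normal(10), rng.standard_normal(10))
```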

Depth stepping

Matrix functions

Objectives: Efficient computation of matrix functions with applications to seismic modeling.

Polina Zheglova (PDF since September 2013) has been investigating algorithms for the efficient computation of matrix functions, namely the square root and the inverse, with applications to one-way wave-equation modeling. Since the Helmholtz matrix and its inverse are known to have a hierarchically semi-separable structure in one and two dimensions, possibilities arise for building fast depth-extrapolation algorithms based on the computation of matrix functions with polynomial recursions. An example of such a recursion is the matrix sign function recursion, which has been used in seismic imaging applications before. Our interest is in adapting similar recursions to the calculation of the matrix square root and the matrix inverse, which can both be used in one-way wave-equation depth extrapolation. In our applications, e.g., depth extrapolation, the matrices involved are numerically very ill-conditioned due to the removal of the evanescent wave modes. It is known that Newton-method based recursions can be unstable for ill-conditioned matrices. Investigating ways to stabilize such iterations by damping, and searching for appropriate stopping criteria and damping parameters, is part of this project. Preliminary results will be presented at the 2014 EAGE in “Application of matrix square root and its inverse to downward wavefield extrapolation”, and the main ideas have also been presented at the 2013 Fall SINBAD Consortium meeting.
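For concreteness, the snippet below shows the classical Denman-Beavers variant of the Newton iteration for the principal matrix square root, which produces the inverse square root as a by-product and is the kind of polynomial/rational recursion referred to above. The stabilized, damped variants tailored to the ill-conditioned, HSS-structured Helmholtz setting investigated in this project are not shown.

```python
import numpy as np

def denman_beavers_sqrt(A, iters=50, tol=1e-12):
    """Coupled Newton (Denman-Beavers) iteration: Y_k -> A^(1/2), Z_k -> A^(-1/2).

    Assumes A has no eigenvalues on the closed negative real axis; convergence
    degrades for ill-conditioned A, which is what motivates damping.
    """
    Y = np.array(A, dtype=float)
    Z = np.eye(A.shape[0])
    for _ in range(iters):
        # Both updates use the previous iterates (tuple assignment).
        Y, Z = 0.5 * (Y + np.linalg.inv(Z)), 0.5 * (Z + np.linalg.inv(Y))
        if np.linalg.norm(Y @ Y - A) <= tol * np.linalg.norm(A):
            break
    return Y, Z

A = np.array([[4.0, 1.0], [1.0, 3.0]])
Y, Z = denman_beavers_sqrt(A)
print(np.allclose(Y @ Y, A), np.allclose(Y @ Z, np.eye(2)))
```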

Depth extrapolation with the full wave equation

Objectives: Application of depth extrapolation with the full wave equation to modeling, imaging and inversion.

Polina Zheglova investigated the possibility of using the solution of the full wave-equation depth-extrapolation problem as a preconditioner for the full Helmholtz equation. The goal of this research is to find a solution to the Helmholtz boundary value problem (BVP) by marching it as an initial value problem (IVP) in one spatial direction, e.g., depth. The initial data for the IVP need to be determined. The IVP for the Helmholtz equation is unstable due to the evanescent modes, so these modes are filtered out by spectral projectors. The projected IVP is stable; however, its fundamental solution is singular and cannot be inverted in the usual sense. This leads to two questions that Polina is currently investigating: (i) inverting for the initial data of the IVP in a generalized sense, by formulating it as an underdetermined inversion problem with regularization, in order to recover the propagating modes in the initial data, and (ii) determining how accurate the solution obtained in this way is and whether it is useful for practical purposes, e.g., as a preconditioner.

Uptake

Large-scale modeling of wave physics is amongst the most challenging topics in our field and in scientific computing at large. Around the world, several major initiatives are in play, but few of them are well tailored to modelling for wave-equation based inversion. We find that there is increasing interest in our approach, in which we opt for a preconditioned time-harmonic framework that is relatively simple, allows for controlled accuracy, and is easily extended to more involved (read poro-elastic) wave physics. We were informed that our results were successfully replicated by Jean Virieux’s group at ISTerre (France) and extended to the elastic case. Our choice to work with time-harmonic solvers also seems to have been justified, given recent results published in the literature showing that time-harmonic solvers can be competitive with time-stepping methods.

Progress Towards Objectives/Milestones

Because of the massive computational cost of (elastic) 3D modelling, we incurred somewhat of a setback in the testing of our 3D codes on large-scale models. However, 3D acoustic codes have been implemented as part of 3D FWI, and we also have implementations of anisotropic and elastic codes in 2-D. The latter include Jacobians. With the funding of the new cluster, we expect to make major progress towards the testing and roll-out of our 3D time-harmonic modelling software. We also expect to report on 3D time stepping.

Outcomes: Concrete implementation of a scalable virtually parameter-free object-oriented parallel simulation framework in 2- and 3-D for time-harmonic wave equations including explicit control of simulation accuracy, matrix-free definition of the linearized Born scattering operator (the Jacobian) and its adjoint the reverse-time migration operator (adjoint of the Jacobian).

For more information on this topic see also the SLIM website and in particular Modelling.

Compressive wave-equation based imaging and inversion

Objectives: Design and implementation of an efficient and robust wave-equation based inversion framework leveraging recent developments in machine learning, compressive sensing, approximate message passing, sparse recovery, robust statistics, and optimization.

Linearized wave-equation based inversion

Reverse-time migration with sparsity promotion

Objectives: Implementation of a time domain least-squares migration workflow with sparsity promotion via \(\ell_1\)-norm constraint and random sampling.

Valentin Tschannen (visiting MSc student from the Ecole et Observatoire des Sciences de la Terre, France) has been working with Zhilong Fang (PhD student) on implementing our compressive approach to least-squares migration using a time-stepping code (iWave++, developed by Professor Symes at Rice University). Results of this work will be presented as part of Valentin’s Master’s degree. Mathias Louboutin (new PhD student since September 2013) will continue to work on time-domain methods to invert seismic data.

Imaging with multiples & source estimation

Objectives: Improved images from data with surface-related multiples at the cost of roughly one RTM with all data.

Ning Tu focused his efforts on making least-squares imaging techniques more robust and more applicable (i.e., imposing fewer assumptions) for real-data applications, especially for data that contain surface-related multiples. This work attracted a considerable amount of attention when it was shown at several (EAGE/SEG) conferences. Following his previous work, we are especially excited about the following progress: (i) we can achieve high-fidelity on-the-fly source estimation in fast least-squares seismic imaging by variable projection; we also demonstrated that a wrong source estimate can have a detrimental effect on least-squares imaging; (ii) by rerandomization, our fast least-squares imaging algorithm is robust to linearization errors caused by the discrepancy between the linearized modelling and real data that contain various types of non-linear wave-propagation effects with respect to the medium perturbations of the earth, and can potentially better resolve fine sub-salt structures; (iii) we clearly showed that the more popular approach of imaging data with surface-related multiples by applying the deconvolutional imaging condition can only have limited success because it does not account for receiver-side propagation effects; in contrast, our fast least-squares approach can lead to virtually artifact-free images. Ning Tu presented (as co-author) “Sparse seismic imaging using variable projection” at the ICASSP 2013 conference in Vancouver, “Fast least-squares migration with multiples and source estimation” at the EAGE conference, and “Controlling linearization errors in l1 regularized inversion by rerandomization” and “Limitations of the deconvolutional imaging condition for two-way propagators” at the SEG conference. He also submitted a journal article, “Fast imaging with surface-related multiples by sparse inversion”, to Geophysical Journal International.
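A minimal sketch of the variable-projection idea behind the on-the-fly source estimation: for the current image, the per-frequency source scaling that best fits the observed data has a closed-form least-squares solution and is projected out before every image update, so no explicit source unknown ever appears. The scalar-weight simplification and all names are illustrative and do not reproduce the exact algorithm of the cited papers.

```python
import numpy as np

def estimate_source_weight(pred, obs):
    """Closed-form least-squares source scaling w minimizing ||w*pred - obs||^2.

    pred : complex predicted (linearized) data for a unit source at one frequency
    obs  : observed data at the same frequency
    """
    return np.vdot(pred, obs) / np.vdot(pred, pred)

def projected_residual(pred, obs):
    """Data residual with the source weight eliminated (projected out)."""
    w = estimate_source_weight(pred, obs)
    return w * pred - obs
```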

Figure 5: Examples using the cropped Sigsbee 2B model. (a) The true model. (b) The background model. (c) RTM image of data with multiples. (d) Fast inversion result using the proposed method, after curvelet thresholding. (e) Fast inversion result without accounting for the multiples, after curvelet thresholding.

Imaging with depth stepping

Objectives: Derivation of alternative (to reverse-time migration) imaging formulations.

In March, Lina Miao (MSc) successfully defended her MSc thesis on migration with depth stepping. She also submitted the conference paper “Randomized HSS acceleration for full-wave-equation depth stepping migration” jointly with Polina Zheglova, who is exploring possibilities to extend this work towards full-waveform inversion and preconditioning of the Helmholtz system. This work is based on a spectral projector, which makes depth stepping with the two-way wave equation stable. Lina Miao’s contribution to this approach was the introduction of a randomized Hierarchical Semi-Separable representation for this spectral projector. (For more details, see also Depth extrapolation with the full wave equation.) Lina Miao is now employed by CGG in Calgary.

Wave-equation migration velocity analysis

Objectives: Derivation of migration-velocity and amplitude-versus-offset analyses based on the two-way wave equation.

Rajiv Kumar and Tristan van Leeuwen (former PDF, now professor in Mathematics at Utrecht University) continued to work on the computational efficiency of wave-equation based migration-velocity analysis, summarized in the conference paper “Efficient WEMVA using extended images”. They show that relatively cheap (compared to explicit multi-dimensional cross-correlations of the forward and adjoint wavefields) matrix-free actions can be used to efficiently glean information from full-subsurface-offset extended image volumes. Early results of this method were included in the poster “Efficient WEMVA using extended images”, presented at the 2013 SEG workshop, and in the conference paper “Extended images in action: efficient WEMVA via randomized probing”, which will be presented at the 2014 EAGE by Rajiv Kumar. As far as we know, this work is the first example of automatic migration-velocity analysis based on two-way full-subsurface-offset image volumes. Rajiv is about to finalize the draft “A new perspective on extended images”, to be submitted as a journal publication. Future work includes incorporating WEMVA into FWI, not only to go beyond the linearization but also to make FWI less prone to cycle skipping.

Outcomes: An efficient, concrete, and versatile linearized imaging framework accelerated by random subsampling that includes imaging with surface-related multiples, source, and velocity-model estimation.

Wave-equation based inversion

Frugal full-waveform inversion

Objectives: Reduction of prohibitive computational costs of wave-equation based inversion.

Our approach is essentially based on the “intuitive” premise that data-misfit and gradient calculations do not need to be accurate at the beginning of an iterative optimization procedure, when the model explains the data poorly. We translate this idea into a concrete and practical algorithm in which we control the errors by adaptively increasing the number of shots and the simulation accuracy. This intuition can perhaps be explained as follows: if we need to drive from A to B, it is generally not necessary to depart in exactly the right direction. Heading roughly the right way first, following general and imprecise directions, is fine as long as the directions become more precise as we approach the target destination B.
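A schematic of this batching heuristic is sketched below: start with a small random subset of shots and a loose simulation tolerance, and grow the batch while tightening the tolerance as the optimization proceeds. Here misfit_and_gradient is a hypothetical user-supplied routine, and the gradient step and growth schedule are illustrative rather than the actual algorithm in the cited papers.

```python
import numpy as np

def frugal_fwi(m0, n_shots, misfit_and_gradient, outer_its=10,
               batch0=4, tol0=1e-2, step=1e-3, seed=0):
    """Sloppy-to-accurate FWI loop: small random shot batches and loose solver
    tolerances early on, more shots and tighter tolerances as the model improves."""
    rng = np.random.default_rng(seed)
    m = m0.copy()
    batch, tol = batch0, tol0
    for _ in range(outer_its):
        shots = rng.choice(n_shots, size=min(batch, n_shots), replace=False)
        f, g = misfit_and_gradient(m, shots, tol)   # hypothetical user-supplied routine
        m -= step * g                               # simple gradient update
        batch, tol = 2 * batch, 0.5 * tol           # work harder as we get closer
    return m
```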

Tristan van Leeuwen (former PDF, now professor at Utrecht University) submitted a major revision of “3D frequency-domain seismic inversion with controlled sloppiness”, which will appear in the SIAM Journal on Scientific Computing. I presented “Recent developments in wave-equation based inversion technology” at the SEG workshop in Oman and at the full-waveform inversion workshop prior to the annual EAGE meeting in London. Jointly with Andrew J. Calvert (professor at SFU), Ian Hanlon (RA), Mostafa Javanmehri (contractor), Rajiv Kumar, Tristan van Leeuwen, Xiang Li (PhD), Brendan Smithyman (PDF since September 2013), Eric Takam Takougang (professor at the University of Western Australia), and Haneet Wason, I wrote the broad-audience paper “Frugal full-waveform inversion: from theory to a practical algorithm”, which was published in a special issue of The Leading Edge on full-waveform inversion (FWI). In this paper, I discussed techniques developed as part of DNOISE II to reduce the computational costs of FWI and included our findings on Chevron’s Gulf of Mexico data set, which were presented by Xiang Li (PhD student) at a post-convention workshop at last year’s SEG.

Robust FWI

Because “getting the wave physics right” remains out of reach in practice, FWI requires a formulation that is less sensitive to noise, outliers in the data, and scaling by the source function. While we made good progress in formulating FWI using penalty functionals that derive from the Student’s t distribution, unmodelled (e.g., elastic) phases are not spiky and are coherent with the modelled (acoustic) phases, which renders robust penalty functionals ineffective. By formulating the misfit in the Fourier domain, where elastic phases can be expected to be relatively sparse, we have been able to invert elastic data with an acoustic code. We presented this work at the EAGE in the talk “In which domain should we measure the misfit for robust full waveform inversion?”. Because data-side robust penalty formulations are not the only alternative for dealing with unmodelled phases, Xiang Li and Anais Tamalet (former long-term visitor from Total) presented the extended abstract “Optimization driven model-space versus data-space approaches to invert elastic data with the acoustic wave equation” at the SEG. In this work, they compared model-space curvelet-domain one-norm regularization with Fourier-domain Student’s t.
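For reference, a sketch of the Student’s t penalty mentioned above and its derivative with respect to the residual; the degrees-of-freedom parameter is illustrative, and the Fourier-domain variant simply evaluates the same penalty on the Fourier-transformed residual.

```python
import numpy as np

def students_t_misfit(residual, dof=2.0):
    """Student's t penalty: sum of log(1 + r^2/dof). It grows only
    logarithmically for large residuals, so spiky outliers are down-weighted."""
    return np.sum(np.log(1.0 + residual ** 2 / dof))

def students_t_gradient(residual, dof=2.0):
    """Derivative of the penalty with respect to the residual."""
    return 2.0 * residual / (dof + residual ** 2)

def students_t_misfit_fourier(pred, obs, dof=2.0):
    """Same penalty measured on the Fourier transform of the residual."""
    r = np.fft.fft(pred - obs)
    return np.sum(np.log(1.0 + np.abs(r) ** 2 / dof))
```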

Variable projection

Our work on variable projections is having a continued impact within our group (e.g., it undergirds Wavefield Reconstruction Inversion) and outside it. In response to recent work by industry, Tristan van Leeuwen wrote a comment on “Application of the variable projection scheme for frequency-domain full-waveform inversion” by M. Li, J. Rickett, and A. Abubakar (Geophysics, 78, no. 6, R249–R257), which recently appeared in Geophysics.

Wavefield Reconstruction Inversion/Imaging

Objectives: Derive a formulation for wave-equation based inversion that is less prone to cycle skipping.

To meet this objective, a completely new adjoint-free penalty formulation of FWI was developed that shares the computational advantages of the reduced unconstrained adjoint-state formulation—where the constraints are eliminated explicitly by solving the forward and adjoint wave equations—and of the “all-at-once” constrained formulation, which involves iterations on sparse KKT systems. While the reduced formulation has the advantage of limited memory requirements, it involves dense matrices (the Jacobian and the (Gauss-Newton) Hessian are both dense) and a highly nonlinear relationship between the unknown medium properties and the synthetic data. Compared to the reduced method, the “all-at-once” method is bi-linear in the source and medium properties and involves updates of the forward wavefields, adjoint wavefields, and medium properties. This increase in the degrees of freedom allows “all-at-once” methods to sidestep local minima. Unfortunately, the “all-at-once” method is impractical because it needs to store wavefield updates for all sources, which is computationally infeasible. By replacing the PDE constraint with a (least-squares) penalty, we arrived at a minimization procedure involving a cost functional composed of a data-misfit term and a penalty term that measures how accurately the wavefields satisfy the wave equation. The inversion method consists of two alternating steps, namely (i) the solution of a system of equations forming the discretization of the data-augmented PDE, and (ii) the solution for the physical model parameters from the PDE itself, given the field that solves the data-augmented system and an estimate for the source functions. Compared to the “all-at-once” approaches, there is no need to update and store the fields for all sources, leading to significant memory savings. As in the all-at-once approach, the proposed method explores a larger search space and is therefore less sensitive to initial estimates for the physical model parameters. Furthermore, this method solves the data-augmented PDE for the field in step (i), making the method less prone to cycle skips. Contrary to the reduced formulation, our method does not require the solution of the adjoint PDE, effectively halving the number of PDE solves and the memory requirements. As in the reduced formulation, wavefields are computed independently and aggregated, possibly in parallel.
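In symbols, and with generic notation rather than that of the cited papers (\(\mathbf{m}\) the medium parameters, \(\mathbf{u}_i\) the wavefield for source \(\mathbf{q}_i\) with observed data \(\mathbf{d}_i\), \(A(\mathbf{m})\) the discretized Helmholtz operator, \(P\) the sampling at the receivers, and \(\lambda\) the penalty parameter), the penalty formulation reads

\[
\min_{\mathbf{m},\,\mathbf{u}_i}\ \sum_i \tfrac{1}{2}\,\|P\mathbf{u}_i - \mathbf{d}_i\|_2^2 \;+\; \tfrac{\lambda^2}{2}\,\|A(\mathbf{m})\mathbf{u}_i - \mathbf{q}_i\|_2^2.
\]

Step (i) solves, for fixed \(\mathbf{m}\) and for each source independently, the overdetermined data-augmented system

\[
\begin{bmatrix} P \\ \lambda A(\mathbf{m}) \end{bmatrix} \bar{\mathbf{u}}_i \;\approx\; \begin{bmatrix} \mathbf{d}_i \\ \lambda \mathbf{q}_i \end{bmatrix}
\]

in the least-squares sense, and step (ii) updates \(\mathbf{m}\) from the wave-equation residuals \(A(\mathbf{m})\bar{\mathbf{u}}_i - \mathbf{q}_i\) with the reconstructed wavefields held fixed.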

Tristan van Leeuwen and Bas Peters worked on further development of the penalty method, now coined Wavefield Reconstruction Inversion (WRI). I presented this work during an invited talk at the Computational Mathematics for Geophysics workshop at last year’s SEG. This work also appeared in the express letter “Mitigating local minima in full-waveform inversion by expanding the search space” in Geophysical Journal International, and will be presented at the coming EAGE by Bas Peters (PhD) and by myself in talks entitled “A new take on FWI: Wavefield Reconstruction Inversion”. Recently, Ernie Esser included total-variation regularization in the conference paper “A scaled gradient projection method for total variation regularized full waveform inversion”, while Bas Peters extended WRI to the multi-parameter case in the conference paper “A sparse reduced Hessian approximation for multi-parameter Wavefield Reconstruction Inversion”. Finally, a full patent application, “A Penalty Method for PDE-Constrained Optimization”, was submitted for review.

Figure 6: Comparison of conventional FWI (reduced Lagrangian) and WRI (penalty method).

Formulations for PDE-constrained optimization

Objectives: Develop a strategy for the iterative solution and preconditioning of least-squares problems arising in penalty methods for PDE-constrained optimization

To make WRI competitive with common reduced Lagrangian methods, we need to solve the overdetermined data-augmented system in approximately the same time as two Helmholtz systems. Bas Peters (PhD student) worked on this challenging problem and found that the eigenvalue distributions of the data-augmented wave equations are not particularly favorable to most standard preconditioning strategies. Krylov-subspace methods for overdetermined systems effectively work with the normal equations. This implies that the upper bound on the number of iterations for the overdetermined system is at least the square of the number of iterations expected for a standard Helmholtz problem. Unfortunately, this is not acceptable, as the standard Helmholtz problem may already require tens of thousands of iterations for large 3D high-frequency problems. A simple but somewhat expensive preconditioner was found to reduce this dependence from quadratic growth to a modest constant multiple. Current research is focused on reducing this constant by combining preconditioning and deflation strategies. Finally, we commenced a collaboration with Dominique Orban (École Polytechnique de Montréal) to work on this challenging problem using augmented Lagrangian techniques.
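
As a minimal sketch of the type of linear system in question (illustrative only; the toy tridiagonal matrix below merely stands in for a discretized Helmholtz operator, and the actual codes use the preconditioning strategies described above), the data-augmented system can be set up and handed to MATLAB's lsqr, which implicitly works with the normal equations:

    % Sketch: solve the overdetermined data-augmented system [P; lam*A]*u ~ [d; lam*q]
    n    = 500;                                % toy problem size
    A    = spdiags([-ones(n,1) 2*ones(n,1)+0.1 -ones(n,1)], -1:1, n, n);  % stand-in for a Helmholtz matrix
    P    = speye(n); P = P(1:10:end, :);       % restriction to every 10th grid point ("receivers")
    u0   = randn(n,1);                         % true field used to manufacture consistent data
    d    = P*u0;  q = A*u0;                    % toy data and source term
    lam  = 1;                                  % penalty parameter (problem dependent)
    Aaug = [P; lam*A];  baug = [d; lam*q];     % tall, sparse augmented system
    [u, flag, relres, iter] = lsqr(Aaug, baug, 1e-8, 2000);
    % 'iter' illustrates the iteration counts that the preconditioning work above aims to reduce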

Multi-parameter WRI

Objectives: Develop an algorithm for seismic waveform inversion that can invert for multiple parameters reliably without user intervention using the true Hessian or sparse approximations thereof.

Bas Peters also worked on multi-parameter seismic waveform inversion, which is also a challenging problem when multiple parameters (in this case the compressibility and the buoyancy) occur in the same equation (Helmholtz) or in a coupled system of first-order differential equations. Simple examples show that 95% of the observed data can be fitted using just one of the two variables and that the gradients of the objective functional with respect to the different parameters have widely varying scales. Unfortunately, vanilla gradient-descent or quasi-Newton schemes do not lead to acceptable results in this situation. To overcome this problem, Bas Peters developed an algorithm based on a penalty formulation of the objective functional. The advantage of the penalty form in the multi-parameter setting over the commonly used Lagrangian form is that the Hessian naturally separates into a sparse and a dense part; this does not happen in the Lagrangian form. The sparse part turns out to be a useful approximation to the Hessian and can be inverted efficiently in a Newton-type optimization scheme. This leads to an algorithm that does not require user intervention and obtains decent estimates for both parameters. The first results were submitted for publication as “A sparse reduced Hessian approximation for multi-parameter Wavefield Reconstruction Inversion”.
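
A toy illustration of the scaling issue, and of why a Newton-type step with a sparse Hessian approximation helps (the numbers and the diagonal Hessian stand-in below are invented for the sketch and are not the actual WRI Hessian):

    % Toy example: two parameter classes whose gradients differ by orders of magnitude
    g_c = 1e0*randn(5,1);                      % gradient w.r.t. compressibility-type parameters
    g_b = 1e-4*randn(5,1);                     % gradient w.r.t. buoyancy-type parameters
    g   = [g_c; g_b];                          % stacked gradient
    % sparse (here simply diagonal) Hessian approximation capturing the per-parameter scales
    Hs  = spdiags([ones(5,1); 1e-8*ones(5,1)], 0, 10, 10);
    dm_gradient = -1e-2*g;                     % plain gradient step: the buoyancy block barely moves
    dm_newton   = -(Hs\g);                     % Newton-type step: both parameter classes are updated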

WRI with TV regularization & stochastic-average gradients

Objectives: To incorporate total-variation regularization into WRI and to introduce techniques from stochastic optimization to make the computations more efficient.

Ernie Esser has submitted an abstract to the SEG 2014 Annual Meeting entitled “A scaled gradient projection method for total variation regularized full waveform inversion”, which is joint work with Tristan van Leeuwen and Aleksandr Aravkin (former SLIM postdocs). Among his ongoing projects at SLIM, he is exploring new applications of semidefinite relaxation techniques that approximate nonconvex optimization problems by convex relaxations, and he is also working on extensions of the recently developed stochastic average gradient method to more general problems.
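
To make the role of the TV constraint concrete (a toy sketch only; the grid sizes and the use of MATLAB's gradient as the finite-difference stencil are choices made for this illustration):

    % Sketch: anisotropic total variation of a 2D slowness-squared model
    m        = randn(50, 60);                          % stand-in for the estimated slowness squared
    [Dx, Dz] = gradient(m);                            % finite-difference derivatives along the two axes
    tv       = sum(abs(Dx(:))) + sum(abs(Dz(:)));      % anisotropic TV value
    % in the scaled gradient projection method, each model update is followed by a projection
    % onto {m : TV(m) <= tau} (plus bound constraints on the velocities), which is the
    % nontrivial subproblem addressed in the abstract cited above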

Figure 7. FWI via WRI with total-variation regularization.

Uncertainty Quantification

Objectives: To put error bars on wave-equation based inversions.

Zhilong Fang (PhD student) worked on uncertainty quantification (UQ) for full-waveform inversion (FWI) and on 3D full-waveform inversion. His work on uncertainty quantification for FWI will be presented at EAGE during the presentation “Fast uncertainty quantification for 2D full-waveform inversion with randomized source subsampling”. In this work—and in the submitted SEG expanded abstract “A stochastic quasi-Newton McMC method for uncertainty quantification of full-waveform inversion”—Zhilong Fang made the following contributions: (i) improved convergence of Markov chain Monte Carlo (McMC) by using the quasi-Newton Hessian of the posterior probability density function; and (ii) reduced computational costs of uncertainty quantification via randomized source subsampling. Zhilong Fang is also working with Curt Da Silva, Chia Lii (PDF at Math), and professor Rachel Kuske (Math) to meet the challenges of finding new, and most importantly computationally feasible, ways to formulate UQ for FWI.
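
The randomized source subsampling that drives the cost reduction amounts to the following (an illustrative sketch; the survey size and subset size are made-up numbers):

    % Sketch: draw a new random subset of sources at each iteration of the sampler
    ns   = 350;                                % total number of sources in the survey
    nsub = 35;                                 % number of sources used at this iteration
    idx  = randperm(ns, nsub);                 % random subset of source indices
    % only the wave equations for the sources in idx are solved at this iteration,
    % reducing the per-iteration cost of the quasi-Newton McMC sampling by a factor ns/nsub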

The work on 3D full-waveform inversion is still in progress. In this work, Zhilong Fang combines simultaneous shots with randomized source subsampling to obtain a more practical and efficient sampling method for 3D full-waveform inversion, and he is about to finalize this new sampling strategy. Zhilong Fang also plans to write up his uncertainty quantification work for FWI in a journal publication.

Industry uptake and outreach

Over the last year, we have been seeing increased tangible impact and uptake of our research outcomes by our industrial partners. Here are some highlights

  • PGS invited Ning Tu for a two-week visit to explain his imaging with multiples and source estimation technology. CGG has also shown an interest and Ning Tu is planning to visit CGG in Houston over the summer;
  • WesternGeco has adopted our batching technology (the technique underpinning frugal FWI) and has informally communicated to us that this technique is leading to roughly a four-to-five-fold speedup of their FWI codes. This significant speedup makes their FWI technology profitable;
  • Our collaboration with BG Group and professor Mike Warner’s group from Imperial College London has materialized in soon-to-be semi-exclusive access to a $10M, 16k-core machine in Brazil to further test and industrialize our wave-equation based inversion technology (see more details below).

Progress Towards Objectives/Milestones

We made excellent progress in this area, well beyond the objectives of the original grant proposal. Aside from deriving new formulations for FWI, we also implemented computationally feasible pre-stack imaging with extended images, AVO inversion with extended images, and wave-equation migration velocity analysis with extended images. We made these otherwise computationally infeasible approaches feasible using randomized probing techniques.

Outcomes: Development of full-waveform inversion technology that is computationally efficient, is less sensitive to starting models, can handle multi-parameter inversions, and allows for the incorporation of prior information via total-variation regularization.

For more information on this topic see also the SLIM website and in particular Imaging and Full-waveform Inversion.

Case studies: wave-equation based inversion on industrial data

Objectives: To test the developed algorithms for seismic waveform inversion and imaging on real field data, and to develop a mostly automated workflow and regularization strategies.

Figure 8. FWI result for the Gulf of Mexico (GOM) dataset.

Over the last year, we have continued to work extensively on blind field and synthetic data case studies provided to us by industry. We did this work with professors Andrew Calvert, Eric Takam Takougang, and Mike Warner (Imperial College London), as well as several members of our research team.

  • Chevron Gulf of Mexico synthetic data set. Xiang Li continued to work on this very challenging dataset, which was updated last spring to include larger offsets. Xiang Li presented our findings at the second SEG workshop “Gulf of Mexico Imaging Challenges: What Can Full Waveform Inversion Achieve?”. This time our group was the only non-anonymous contribution and one of only two presentations; the other presentation was by Denes Vigh from WesternGeco. As in the year before, it was clear that applying FWI technology to subsalt settings is extremely challenging and remains more or less an open problem, in particular for hands-off approaches, where a single algorithm is run with relatively little input from the user, as opposed to the extremely hands-on conventional velocity-building workflows utilized by industry. The SEG released a new, simpler 2-D dataset to evaluate FWI, and we have started working on this blind case study as well.

  • North Sea field data set. Ning Tu and Tim Lin obtained exciting results working on the Machar dataset provided to us by BP. We presented these results, which included curvelet-domain regularization and interpolation to remove harmful aliased Scholte waves from this ocean-bottom node dataset, as well as migration, at the Spring and Fall SINBAD Consortium meetings. Unfortunately, we had to stop working on this data set given BP’s discontinued support.

  • North Sea synthetic data set. Our group has made significant progress working with the BG synthetic ocean-bottom node data set for both 2-D and 3-D FWI. We use this data set, which contains lots of well-constrained complexity, extensively to test our (multi-parameter) inversion and imaging algorithms. In collaboration with BG, we plan to use this data set to conduct some of our planned simulation-based acquisition research.

  • Permian basin VSP land field dataset. Brendan Smithyman and Bas Peters have been involved in implementing optimization algorithms and developing regularization strategies for this challenging dataset. So far we have implemented a 2D reduced Lagrangian based algorithm for waveform inversion. Regularization of this ill-posed inverse problem is required and is added as a quadratic penalty term. The main result so far is that our algorithms, based on this relatively standard approach, work without much user intervention. These results were submitted for publication as “Joint full-waveform inversion of on-land surface and VSP data from the Permian Basin”. Current research is aimed at extending the workflow to 3D waveform inversion. Another current topic is the replacement of penalty-based regularization by projection onto suitable convex sets. We use this in conjunction with quasi-Newton methods developed in recent years, which can maintain fast convergence rates while projecting onto the convex sets that represent the regularizers of the inverse problem.

Figure 9. Results on the BP Machar dataset: aliased image versus artifact-free image.

Challenges

Application of FWI and other algorithms to realistic synthetic and field data sets remains challenging because of a mix of factors including

  • availability of compute. Even the smallest 3D datasets and serious 2D datasets quickly become computationally infeasible on our current hardware. With NSERC’s funding of our new cluster and our involvement in the International Inversion Initiative in Brazil, we have mostly resolved this issue, and we expect to report more applications of our FWI technology to blind case studies over the final year of the DNOISE II grant;
  • availability of suitable datasets. It is challenging to obtain high-quality, scientifically interesting, and manageable data sets to test our algorithms. The BP Machar dataset was an excellent dataset because it was of high quality (ocean bottom) and came with velocity-model information. This allowed us to make progress relatively quickly, which was unfortunately halted by BP’s decision to discontinue support for the DNOISE II grant; as a result, we had to stop working with this data set. While a number of companies have kindly provided us with data, we are still on the lookout for datasets to test our acquisition, processing, and wave-equation based inversion technologies.
  • access to processing software. Field data often needs particular preprocessing before it can be input into our wave-equation based inversion and imaging algorithms. Therefore, we are particularly excited about WesternGeco’s donation of a license to their professional seismic processing package Omega. This in-kind contribution (valued commercially at $8M) allows us to build up the capability to handle field data sets. The challenge will be to integrate this package into our existing software environment (see more details below).
  • existence of practical and robust workflows for FWI. FWI is a relatively new technology for which no versatile and robust workflows exist yet. This makes it more challenging to evaluate our wave-equation based inversion technology because we also need to develop these workflows. Having access to blind synthetic case studies, a professional data processing package, and our collaborations with Andrew Calvert and Mike Warner, both experts in working with real data and in applying FWI to real data, will allow us to continue to make progress.

Outcomes: We continued to make good progress on evaluating our FWI technology on realistic industrial data sets. Our experience so far showed that our FWI technology can indeed be applied to realistic synthetic and field data sets but that challenges remain to apply this technology to complicated geologic settings. With impending access to our new cluster and to significant compute in Brazil, we are in a strong position to continue developing practical workflows with our acquisition, imaging, and inversion technology.

Parallel software environment

Objectives: Development and implementation of a scalable, parallel, interoperable development environment to test and disseminate concrete software implementations of our algorithms in 2- and 3-D to our industrial partners. Team: Henryk Modzelewski (senior programmer) and a team of co-op students have been responsible for parallel hardware/software maintenance and development for the DNOISE II grant.

Parallel SPOT (pSPOT) — a linear-operator toolbox for MATLAB.

Objectives: Design and implementation of abstract parallel linear operator and vector types, including abstractions for meta data, suitable for coordinate free optimization and coordinate dependent problem setups.

Over the last year, we continued to design, implement, and optimize SLIM’s parallel extension of SPOT called pSPOT. This extension of SPOT, which is based on parallelized Kronecker products that allow us to seamlessly work with multidimensional arrays with dimensions that are contiguous or distributed, removes current limitations regarding the computational costs and storage requirements of our transform-based algorithms that involve the solution of extremely large convex optimization problems. Because this environment exploits our parallel compute cluster, we continue to be able to rapidly prototype and test our algorithms. This environment also leads to code that is readable and scalable to large problem sizes.
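
As a small illustration of the style of code this enables (using the serial SPOT operators opDFT, opDirac, and opKron; the pSPOT versions follow the same pattern with one or more dimensions distributed over workers):

    % Sketch: apply a 1D Fourier transform along the first dimension of a vectorized 2D slice
    n1 = 128; n2 = 64;                         % illustrative dimensions of the slice
    F  = opDFT(n1);                            % 1D Fourier transform along the first dimension
    I  = opDirac(n2);                          % identity along the second dimension
    K  = opKron(I, F);                         % acts on vec(X) as the Kronecker product (I kron F)
    x  = randn(n1*n2, 1);                      % vectorized slice
    y  = K*x;                                  % forward transform, applied matrix-free
    xr = K'*y;                                 % adjoint operation, also matrix-free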

Tim Lin wrote a high-performance benchmark that was used for the evaluation of our own HPC equipment and of the equipment that is part of our collaboration with Brazil. The outcome showed that this framework, and therefore the parallelization of many of our algorithms, allows us to scale to truly large-scale industrial problems, and we are excited to try this out on our new machine and on the machine in Brazil. We are also thrilled with the latter because BG Group has agreed to buy a large number of workers for the parallel MATLAB toolbox, which will allow us to scale to unprecedented territory, i.e., we will be the largest parallel MATLAB toolbox user in the world.

To support this endeavor, we worked on several fronts including:

Performance improvements

Our efforts were mainly concentrated on memory and speed optimization including

  • major improvement in the memory utilization of SPOT operators and modelling code used in our 3D-FWI framework. We reduced memory consumption of the modelling algorithm and associated operators by half by rearranging the code and using explicit garbage collection;
  • reduction in the number of transposes needed by Kronecker products. We reduced the number of transposes (actual all-to-all communications) needed in opKron nearly threefold using smarter data manipulation via out-of-order dimension transposes and size permutations. As a result, we increased the performance of the array-dimension-swapping part of opKron by up to a factor of two, depending on the array sizes. While optimizing the transposes, we also reduced the memory footprint of opKron by half.

Seismic data container & parallel data handling

Finding the proper levels of abstraction while providing access to or hiding relevant meta data remains an important challenge when dealing with extremely large data volumes. For instance, solvers for linear problems should only care about the compatibility of implicit mat-vec products, while code that sets up Helmholtz matrices or computes penalty functions should be aware of the physical units and dimensions. To tackle these challenges, we continued to develop our:

  • seismic data container designed to be tightly integrated (i.e., transparent or detail hiding when appropriate) with our abstract linear operator framework pSPOT;
  • map-reduce framework designed to handle massive field data volumes transparently.

Seismic data container

This framework allows us to work seamlessly with very large multidimensional arrays in MATLAB that are in-core or out-of-core. Our implementation allows us to extend (again through Kronecker products) pSPOT to include out-of-core dimensions, and also allows us to keep track of meta data, e.g., sizes and header information. Because our implementation includes parallel IO, we limit the overhead that comes with working with out-of-core dimensions. This software is made available at https://github.com/slimgroup/SeisDataContainer/. We have plans to make this data container interoperable with our physics-based optimization framework. We are also looking into fundamentally new ways of handling data: (i) bringing the computations to the data, as with Hadoop, which the “big-data” machine-learning community uses to solve massive map-reduce problems, and (ii) new methods that feed dimensionality-reduced wave-equation inversions with data rather than responding to requests for data. With this work, we hope to ride the wave of new developments in truly scalable “big-data” handling. During the last year, we designed a prototype that allows SPOT operators not only to act on the data, but also to properly modify the container object so that it reflects the changes in metadata resulting from the action of the operator. This prototype was implemented for a few selected SPOT operators (e.g., opKron, opCTranspose, opWavelet) and tested with the SPGL1 solver. The benefits of such an object-oriented approach include simplification and improved readability of code, the potential for implementing custom norms via normed spaces and inner products, and features for data-integrity checking. Finally, the data container will also facilitate the integration of our algorithms with field-data workflows.
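
Purely to illustrate the idea of an out-of-core dimension (this is not the SeisDataContainer interface; the container adds metadata handling, parallel IO, and pSPOT integration on top of this kind of low-level access), a 3D volume on disk can be sliced without reading it into memory using plain MATLAB memory mapping:

    % Illustration only: out-of-core access to a 3D volume via memory mapping
    n   = [100 100 60];                        % illustrative volume dimensions
    fid = fopen('volume.bin', 'w');            % write a dummy volume to disk
    fwrite(fid, randn(prod(n), 1), 'double'); fclose(fid);
    M     = memmapfile('volume.bin', 'Format', {'double', n, 'v'});
    slice = M.Data.v(:, :, 10);                % only the pages backing this slice are read from disk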

Map-reduce framework via SWIFT

Aside from handling meta data, the development of practical workflows hinges on having access to data. Since seismic (field) data volumes generally far exceed the amount of available memory, this is challenging from the IO perspective. To overcome this challenge, we used the Swift parallel scripting language to implement a map-reduce framework for 3D FWI. During the last year, we made progress both in reducing the memory footprint of the Helmholtz solver (mentioned above) and in improving the performance and stability of the Swift part of the algorithm, and we resolved minor shortcomings of compiled MATLAB applications.

Object-oriented framework for physics-based optimization

We continued our work on an object-oriented parallel environment for optimization problems. Our activities focused on optimization of, and extensions to, our 3-D FWI code. As before, our object-oriented framework allows us to implement matrix-free, scalable, parallel PDE-constrained optimization problems with flexibility regarding the PDE, e.g., constant versus varying density or 2-D versus 3-D, and the misfit functional, e.g., two-norm versus Student’s t misfit. We are also making a start with coding up WRI—our new formulation of FWI—in this framework.
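
To give a flavour of the misfit flexibility mentioned above (a minimal sketch; the Student's t scale parameter and the variable names are choices made here, not the framework's actual interface):

    % Sketch: interchangeable misfit functionals evaluated on a data residual r = Pu - d
    r    = randn(500, 1);                      % stand-in residual vector
    % least-squares (two-norm) misfit and its gradient with respect to r
    f_ls = 0.5*norm(r)^2;          g_ls = r;
    % Student's t misfit (robust to outliers) and its gradient, with nu degrees of freedom
    nu   = 2;
    f_st = sum(log(1 + r.^2/nu));  g_st = 2*r./(nu + r.^2);
    % the optimization driver only needs the pair (f, g); swapping misfits leaves the rest untouched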

We also continued our efforts to tackle challenging numerical and mathematical problems that arise from the application of algorithms to field data in the geophysical exploration industry. An ongoing project is aimed at addressing a conflict between (i) the discrete modelling of wavefields in computers and (ii) the true behaviour of the earth. While there is a wide range of distinctions between these two settings, the appropriate handling of the model boundaries is particularly critical. In the true earth, waves are generally free to propagate outside of a fenced-in region of the subsurface, but numerical modelling works with a finite domain and therefore has edges.

The implementation of boundary conditions for seismic modelling and inversion algorithms specifies how waves behave as they encounter the edges of the computational earth model. SLIM uses absorbing boundary layers to simulate the unbounded medium of the true earth. When forming seismic images, or carrying out seismic inversion, the artificial absorbing boundaries may negatively affect the performance of the numerical algorithms. It is common practice to use ad hoc modifications of the wavefields to avoid these problems, but this may compromise the stability of the computational code. SLIM takes a different approach, and aims to optimally reconcile the mathematical ideal with the physical reality.
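
A toy 1D example of the kind of absorbing layer being discussed (illustrative only; the grid, damping profile, and sign conventions are choices made for the sketch, and SLIM's production codes use more sophisticated boundary treatments):

    % Sketch: 1D Helmholtz operator with a simple damping ("sponge") layer at both ends
    n  = 301; h = 10;                          % number of grid points and spacing [m]
    v  = 2000*ones(n, 1);                      % constant velocity [m/s]
    omega = 2*pi*10;                           % angular frequency for a 10 Hz monochromatic wave
    np = 30;                                   % thickness of the absorbing layer in grid points
    gamma = zeros(n, 1);                       % damping profile, zero in the interior
    taper = linspace(0, 1, np)'.^2;            % quadratic ramp inside the layer
    gamma(1:np) = flipud(taper); gamma(end-np+1:end) = taper;
    k2 = (omega*(1 + 1i*gamma)./v).^2;         % complex wavenumber squared inside the sponge
    e  = ones(n, 1);
    L  = spdiags([e -2*e e], -1:1, n, n)/h^2;  % second-difference operator
    A  = L + spdiags(k2, 0, n, n);             % discrete Helmholtz operator with absorbing edges
    q  = zeros(n, 1); q(round(n/2)) = 1/h;     % point source in the middle of the domain
    u  = A\q;                                  % waves are attenuated in the layer instead of reflecting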

We identified specific cases that result in conflicts between numerical stability and physical analogy. Our current research in this domain involves testing a set of different solutions to the boundary condition problem; this must be done comprehensively across all of the state-of-the-art algorithms that we maintain. This work is relevant for full-waveform inversion, wavefield-reconstruction imaging, wave-equation migration velocity analysis and least-squares migration, among others. Early results show that one or two approaches have superior stability in specific cases; our ongoing research involves designing a comprehensive testing suite to test each solution from this collection of methods, in the context of at least six high-impact algorithms.

The conventional practice in the field of seismic inversion research is to deal with idealized (i.e., continuous) models of earth physics and to implement numerical inversion solvers after working out the math; this epitomizes the so-called “optimize, then discretize” approach. SLIM approaches this problem differently, by recognizing that it is better to design stable numerical algorithms that “discretize, then optimize”. This may yield implementations that are more stable, easier to test, easier to debug and consequently, easier to prepare for industry uptake.

Future plans

Independently of our efforts, BG Group has developed a similar framework to handle parallel IO. As part of our collaboration, we will evaluate this software, which they will release publicly, and see how we can integrate both approaches.

Aside from making these approaches interoperable with our physics-based optimization framework and data containers, we continue to look into fundamentally new ways of handling data, such as bringing the computations to the data, as with Hadoop, which is used in the “big-data” machine-learning community to solve massive map-reduce problems. With this work, we hope to ride the wave of new developments in truly scalable “big-data” handling.

Software releases in Git

Objectives: Concrete implementation of scalable parallel algorithms for verification and evaluation of our algorithms by industry.

We are glad to announce that we moved the SINBAD Software Releases to GitHub. We hope that this transition to GitHub will make it easier for our industrial partners to download our latest software. GitHub will also give us more flexibility to share software updates more frequently with our industrial sponsors. Since not everybody has access to GitHub, we also continue to provide our industrial partners access to our software the old way.

  1. WRIm—Wavefield Reconstruction Imaging. Wave-equation based imaging derived from our Wavefield Reconstruction Inversion framework, where images are created via cross-correlations of solutions of the data-augmented wave equation (solutions are sought that fit both the data and the wave equation) and the corresponding wave-equation residues (see the schematic imaging condition after this list). The advantage of this method is that it does not need to solve the adjoint wave equation. For questions contact Bas Peters. [Read more] [GitHub]

  2. Seismic data regularization, interpolation, and denoising using factorization-based low-rank optimization. This application package demonstrates simultaneous seismic data interpolation and denoising on a 2D seismic line from the Gulf of Suez. For questions contact Rajiv Kumar. [Read more] [GitHub]

  3. Missing-receiver interpolation of 3D frequency slices using Hierarchical Tucker Tensor optimization. Missing-receiver interpolation of frequency slices in 3D seismic acquisition using the Hierarchical Tucker tensor format. This interpolation scheme exploits the low-rank behavior of different reshapings of seismic data into matrices, in particular at low frequencies. We demonstrate this technique on a simple frequency slice of data. For questions contact Curt Da Silva. [Read more] [GitHub]
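
For reference, the cross-correlation imaging condition referred to in the WRIm item above can be written schematically (a Helmholtz parameterization \(A(m)=\omega^{2}\operatorname{diag}(m)+L\) is assumed for this illustration, \(\bar{u}_{s,\omega}\) denotes the solution of the data-augmented system for the background model \(m_0\), the overline denotes complex conjugation, and \(\odot\) an element-wise product):
\[
\delta m \ \propto\ \sum_{s,\omega}\omega^{2}\,\mathrm{Re}\!\left(\overline{\bar{u}_{s,\omega}}\odot\bigl(A(m_0)\bar{u}_{s,\omega}-q_{s,\omega}\bigr)\right),
\]
i.e., a cross-correlation of the reconstructed wavefields with the corresponding wave-equation residues, with no adjoint solves required.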

Planned software releases

Now that the new generation of students and PDFs have gained experience, we expect to increase the number of software releases. Over the summer, we plan to release

  1. 3D Frequency Modelling by Rafael Lago—an improved version of the routines used for discretizing the Helmholtz problem that is more accurate, faster, and more flexible, allowing input to be passed in different units and allowing more realistic scenarios such as a free surface;

  2. Joint recovery method for time-lapse seismic data by Felix Oghenekohwo—joint recovery method for a time-lapse (4D) data acquired via time-jittered (blended) marine acquisition;

  3. Multiscale robust EPSI via one-norm minimization with mitigation of unknown data by Tim Lin—an update to the Robust EPSI package released in 2012. This release implements the multiscale bootstrapping scheme discussed in an upcoming talk “Multilevel acceleration strategy for REPSI” for the SINBAD 2014 Spring consortium meeting, as well as the implicit data interpolation technique described in “Implicit interpolation of trace gaps in REPSI using auto-convolution terms” during the same meeting. Other improvements include the processing of data in a marine acquisition geometry, and regularization of the solution wavefield by reciprocity;

  4. Source separation via SVD-free rank minimization in the hierarchical semi-separable representation by Haneet Wason—this package contains a MATLAB implementation of a 2-D over/under, blended marine acquisition scheme, and a deblending (or source separation) algorithm based on SVD-free rank minimization in the hierarchical semi-separable (HSS) representation;

  5. Time-jittered, blended marine acquisition on non-uniform grids by Haneet Wason—this package contains a MATLAB implementation of a 2-D time-jittered, blended marine acquisition scheme on non-uniform spatial (source) grid, and a deblending algorithm based on sparse inversion via \(\ell_1\) minimization incorporating the non-equispaced fast discrete curvelet transform;

  6. Total Variation Regularized Wavefield Reconstruction Inversion by Ernie Esser—This MATLAB code implements an extension of the Wavefield Reconstruction Inversion Algorithm with additional constraints on the estimated slowness squared. Its total variation can be constrained to be less than or equal to some positive parameter \(\tau\). Bound constraints can also be included to keep the estimated velocity within a physically realistic range;

  7. Wavefield Reconstruction Inversion by Bas Peters—time-harmonic implementation of full-waveform inversion with the penalty method;

  8. Discretization of the (multi-parameter) Helmholtz equation by Bas Peters—finite-difference implementation including partial derivatives, gradients & Hessians of the objective function that satisfy the Taylor expansion in a discretize-then-optimize setting.

Outcomes: A versatile, maintainable, and scalable development and seismic data processing & inversion environment supporting concrete industry-strength implementations and testing on field data of the algorithms developed as part of DNOISE II.

Seismic data processing

Objectives: Integration of professional seismic data processing software into our (FWI) workflows for the evaluation of our technology and for the necessary pre- and postprocessing. Team: Ian Hanlon (research associate)

Omega processing system donation

WesternGeco generously donated the full commercial Omega seismic data processing system, which we have hosted on a top-of-the-range IBM multithreaded workstation. We have installed and are currently configuring this package to work with the Chevron Challenge dataset, and we will be reporting and presenting the results over the next few months, including at the SEG Denver conference in October 2014.

Technology evaluation

Our success depends in large part on demonstrating the viability of our technology on field data sets. The main responsibility of Ian Hanlon is to organize these efforts, including the following tasks:

  • Evaluation of Omega: We will be evaluating this software tool to assess its added value for the research carried out as part of the DNOISE II grant. This activity includes

    • making an extensive inventory of the tools present in this package and how these can be used to help DNOISE II’s researchers (students and PDFs) meet their research goals;

    • communicating the capabilities of this package to the members of DNOISE II grant;

    • assembling feedback from the DNOISE II researchers on how the functionality of this processing package will impact their research and what they need in order to use this functionality.

  • Administration of field-data sets: As part of the DNOISE II grant, we work concurrently on a number of field-data case studies that call for careful coordination with industry. This activity includes

    • protection of the students’ interests, making sure that the students will be able to publish their research findings, subject to approval by the industrial partner providing the DNOISE II members with field data;

    • negotiation and maintenance of data-release agreements with industry;

    • coordination of the legal aspects of data-release agreements with industry in collaboration with the University Liaison Office;

    • administration of field data sets including monitoring of expiry dates and getting signatures from industry;

    • handling, administration, and documentation of field data sets we are working on as part of the DNOISE II grant.

  • Support for field-data studies: This activity includes

    • training of the DNOISE II students and PDFs in the use of the professional seismic-data processing software package Omega;

    • support in uploading seismic field datasets, including assistance with determining acquisition geometries;

    • help with the design of seismic processing flows, including pre/post-processing, parameter selections, visualization, and quality control;

    • adoption of best practices, including documentation and dissemination of field-data case-study findings at SINBAD Consortium meetings & international conferences, and preparation of materials for the peer-reviewed literature;

    • assistance with interpretation and understanding of the geologic settings.

  • Coordination of field-data case studies: Ian Hanlon will work with the DNOISE II team on

    • formulation of scientific questions to be answered as part of the different field-data case studies;

    • running our weekly data seminar with DNOISE II’s students and PDFs;

    • setting priorities amongst the different scientific questions;

    • assisting students with dissemination of research findings.

  • Evaluation of DNOISE II’s technology on field data: This activity includes

    • continued evaluation of DNOISE II’s technology, including seismic data regularization, multiples, migration-velocity analysis, full-waveform inversion, and wave-equation based imaging by comparing these techniques to existing approaches, including those that are part of Omega;

    • assistance with dissemination of research findings that include comparisons of SLIM’s technology to other competing technologies. These comparisons should be carried out on the basis of juxtaposing outputs of particular processing steps (e.g., SLIM’s versus Omega technology) and on the basis of an evaluation of the end results, i.e., by comparing seismic images created by processing flows that do or do not contain SLIM’s technology.

  • Integration of Omega in SLIM’s research program: This activity includes

    • establishment of workflows that incorporate DNOISE II’s technology in Omega workflows;

    • evaluation of the added value that Omega brings to DNOISE II’s research mission;

    • study of possible pathways towards the integration of Omega into DNOISE II’s research program.

With these activities, we are confident that we will make full use of WesternGeco’s generous donation. However, we are aware of the challenges associated with the adaptation of industrial seismic data processing software in an academic setting.

Research dissemination

Objectives: To come up with a new multi-usage platform that allows us to improve dissemination of our research findings by being able to render our research output to different output formats including conventional PDF and HTML 5.

In addition to the above scientific efforts, Tim has also launched a new project focusing on an important development for next-generation scientific publishing called “Scholarly Markdown”. The goal of this project is to formalize a lightweight plain-text document format (in the vein of reStructuredText, AsciiDoc, Textile, etc.) that thoroughly decouples scientific text and academic information from the eventual presentation, and is therefore concise yet powerful enough to serve scientists at all stages of information dissemination, from online communication and note-taking to eventual publication and archiving. Tim builds on the recent surge in popularity (in terms of usage amongst internet-based communities) of the Markdown format, adding syntax and conventions for mathematical expressions, floats/figures, and statements such as proofs and definitions. Because the Markdown syntax is minimal yet prescriptive in terms of semantic document structure, it naturally allows simultaneous publishing to several important formats: HTML, LaTeX, Word, InCopy, and JATS (the latter two are important in many publication and archival workflows). The outputs in these formats aim to be compatible with in-place workflows; for example, users can template the LaTeX output to automatically render documents in a specific existing style, while using native Natbib/BibTeX for bibliography formatting. Earlier this year, Tim produced a working prototype code for Scholarly Markdown that many SLIM members have successfully used to prepare their extended abstracts for the 2014 SEG annual meeting. Thanks to this workflow, most of the SLIM extended abstracts this year are available as pure HTML webpages, in addition to the LaTeX-produced PDF manuscripts used in the submission process. This means that our work is now accessible on devices that include smartphones and tablets. Scholarly Markdown is nearing its official announcement as an open-source project, pending completion of official documentation and concrete examples demonstrating manuscript preparation for several major journals. Tim plans to use Scholarly Markdown to produce his own PhD dissertation later this year.

Aside from the regular activities between students, PDFs, and their advisors, the PIs organize a weekly seminar on topics pertinent to the grant. This seminar has the primary goal of keeping the whole group—PIs, PDFs, and graduate students—up to date with cutting-edge developments in subjects that are instrumental to the success of the DNOISE projects. While the PDFs help with the week-to-week organization, the PIs have been responsible for selecting the topics and for providing our students and PDFs with context so they are able to relate the often technical topics to their research within the DNOISE project. Graduate students in the group are asked to take this as a directed-studies course. Given the highly interdisciplinary nature of the DNOISE group, these seminars often result in vibrant discussions and a fruitful exchange of ideas, which many times end up providing the core of new projects. In addition, this environment is conducive to scientific interaction among the students and PDFs in the group. This seminar is also regularly attended by students, PDFs, and faculty—in addition to those in the DNOISE group—from departments including Earth, Ocean, and Atmospheric Sciences, Computer Science, Mathematics, Physics, and Electrical and Computer Engineering. To coordinate our case-study and computing-related activities, we also have weekly data and computing meetings with group members and collaborators.

HPC & international collaborations

Objectives: Sustained access to HPC through public-private international partnerships.

The research of DNOISE II is widely considered to be highly innovative and of key interest to industry. The fact that the number of industrial partners tripled over the first three years of DNOISE II is a clear demonstration of this increasingly strong interest. With our previous HPC funding, now renewed thanks to the mid-term funding of the DNOISE—HPC grant, we are in the unique position of being able to quickly develop and test concrete parallel implementations of our algorithms. This ability not only allows us to concentrate on the geophysical and mathematical underpinnings of our approaches; access to HPC also gave our researchers the resources needed to get results on complicated synthetic and real field datasets in 2D. It is the excellence of our results that is mainly responsible for the considerable industry uptake we have enjoyed.

While DNOISE—HPC funding will for the remainder of the grant allow our team to continue our successful formula of sophisticated algorithm development, concrete parallel implementation and testing on realistic data in 3D, finding sustained funding for HPC in an academic research environment remains exceedingly challenging. To meet these challenges, we have continued to work on several fronts.

HPC—Big data forum

Objectives: Create awareness of HPC needs in our field and of possible public-private partnerships between the oil & gas industry and the HPC industry on the one hand and academia on the other.

Resources are increasingly hard to find, as the easy targets have been delineated and exploited with relatively simple processing flows which, whilst still not easy because of the large data volumes involved, are achievable on relatively vanilla cluster hardware. With the easy pickings gone and demand for oil & gas increasing, portfolios of proven reserves are more difficult to come by. We therefore find ourselves challenged by having to create models and images in increasingly complex geological settings that include sub-salt, sub-basalt, intra-carbonate, and unconventional (shale) plays.

While traditional imaging workflows—typically ray-based, but also more modern wave-equation based ones—have served us well, they tend to fail in these complex areas, and this has led to the emergence of iterative, optimization-driven, wave-equation based imaging and inversion technology that works on massive seismic data volumes. These recent developments, in which we are moving away from “one-data-pass-only” seismic data processing flows towards “multiple-pass” inversion flows, are driving a step change in the industry, where we are seeing an explosion in demand for data-intensive HPC to handle cycle-intensive algorithms on extremely large seismic data volumes.

Aside from having to move towards exascale compute to grind through petabytes of field data, the industry is challenged by

  • replacing (single-iteration) processing flows by iterative optimization schemes where data is touched multiple times. This puts strains on IO and interconnects of even the largest and most state-of-the-art HPC systems

  • developing and maintaining complex code bases that are in sync with current developments in wave-equation based imaging (modelling, FWI, WEMVA, etc.), “big data” (Hadoop, map reduce, etc.), and HPC technology (connection fabric, accelerators, etc.)

  • intake of new external technology developed in academia and governmental laboratories into their scale-up workflows

  • finding talented personnel who understand, are familiar with, and are productive in modern-day data-intensive HPC environments

Academia, on the other hand, is challenged by

  • having sustained access to HPC hardware and software capable of handling “big data” and “big models”—the latter refers to problems that involve many unknowns—which is quite different from “classical” big data and HPC. This challenge is compounded by “HPC fatigue” amongst funding agencies and by the lack of true cost-of-research models in most academic consortia to cover the cost of even modestly sized HPC

  • developing codes that maximize uptake by industry. While groups like SLIM have greatly benefited from recent developments in high-level programming languages (e.g., MATLAB and the parallel MATLAB toolbox) that allow for parallel implementations of complex (iterative) algorithms, the industry has been struggling with testing and industrializing these codes in their systems. This leads to missed opportunities and hampers the ability to innovate

  • developing, quality-controlling, and making reproducible parallel codes of increasing complexity in an academic environment where there is little to no support, training, or recognition for this sort of work. While we are seeing increased discussion of this important topic, there is no funding model in place to adequately support this type of activity

The goal of our HPC/Big Data Forum at the Fall SINBAD Consortium meeting was to start a discussion on how to address some of these challenges by

  • having an informal discussion, asking interested parties (SINBAD member representatives and “HPC” guests) to prepare short 5-minute presentations highlighting how their organizations are approaching these challenges, with emphasis on the role industry-academic partnerships can play

  • shaping industry-academic collaboration in the form of targeted initiatives, such as the International Inversion Initiative (III) in Brazil, which includes a large (0.5 petaflop) HPC component and cost-of-research recovery, and a to-be-formed HPC Consortium with the aim of creating a platform that

    • provides access for academic group such as SLIM to the latest hard/software technology

    • supports training of the next generation of computational geoscientists

    • takes a leading role in engaging in joint-venture projects encouraging involvement of HPC vendors and joint-industry projects (JIPs)

Attendees

  1. Fusion IO — Daniel St-Germain, Thomas Armstrong
  2. Scalar — John Gardner, Neil Bunn
  3. Cray — Geert Wenes, Wulf Massel
  4. SGI — Paul Beswetherick, Angela Littrell
  5. IBM — Peter Madden, Josh Axelson, David Decastro
  6. Limitpoint Systems — David M. Butler
  7. Maxeler — Jacob Bower, Richard Veitch
  8. CGG — Alan Dewar
  9. UBC — Steve Cundy
  10. BG Group — Hamish Macintyre
  11. ConocoPhillips — Larry Morley

Follow up

Ian Hanlon is organizing meetings with the stakeholders at the next EAGE.

International Inversion Initiative (III)

UBC—SLIM is involved in a major $30M initiative to advance full-waveform inversion into a transformative technology capable of addressing problems across the entire spectrum of oil & gas exploration and production. The project, which is spearheaded by our collaboration with BG Group, is designed to leverage opportunities for funding in Brazil related to the 1% R&D levy. Under this levy, oil & gas producing companies must invest 1% of their revenue in R&D. BG Group is also a main player in the LNG project in BC.

This project includes a $10M high-performance compute facility—stationed in Salvador, Bahia, and to which our group will have access this summer—and a Science without Borders Fellowship program supporting the exchange of graduate students, PDFs, and long-term visitors amongst the participating institutions (these include professor Mike Warner’s group at Imperial College London and the Universidade Federal do Rio Grande do Norte). We are very excited about this project because we believe that it will produce huge benefits for Canada & Brazil by (i) exposing our students to true industrial-scale problems, (ii) offering us access to a facility that allows us to test and further develop our algorithms and workflows on industrial-scale problems, (iii) exposing our team to the workflows and practices of Mike Warner’s group, which is one of the world leaders in applying FWI technology, and (iv) allowing us to engage in simulation-based acquisition design with the possibility to materialize these designs in the field.

BG Group is not the only company with obligations under the 1% levy, and other SINBAD members—e.g., Chevron—have shown an interest in participating in III as well. To clarify the relationship between SINBAD (and thereby DNOISE) and III, Mike Warner and I put together a SINBAD and FULLWAVE FAQ.


2. Research team

Please provide an overview of the participation in, and scientific contributions to, the project for each member of the research team (principal investigator, co-investigators, collaborators, company and government scientists, research associates, postdocs, students, etc.).

We have discussed the participation of the research team in the text of the Section 1 Progress Report. Further information specific to the research team:

In late March, we were informed that Michael Friedlander, co-PI on the DNOISE II grant, will be taking a leave of absence at the Mathematics Department of UC Davis in the USA. Michael has been a great contributor to this CRD, and several of his trainees remaining at UBC will continue to be involved in the project. Professor Chen Greif of UBC’s Computer Science Department has agreed to take over Michael Friedlander’s role as co-PI for the last year of the project. Michael’s leave will not have a major impact on the research and milestones as outlined in the original grant.


3. Training

Please list each trainee (Undergraduate Students, Master’s Students, Doctoral Students, Postdoctoral Fellows, Research Associates, Technicians …) on a separate line in the table below providing: a) the number of years they have been on the project, b) the percentage (%) of time each type of trainee spent on this project, and c) the percentage (%) of funding from this CRD grant (NSERC and industry contribution). If a trainee is fully paid from other sources, enter “0” in the “% of funding from this grant” column. Insert additional rows if necessary. (DO NOT INCLUDE FAMILY NAMES.)

Each row lists: the trainee and type of trainee (e.g., M.Sc., Ph.D.), (a) the number of calendar years on the project, (b) the percentage of research time spent on this project, and (c) the percentage of salary from this grant.
Michelle – Summer Student .4 100% 100%
Arnold – Undergraduate Co-Op .8 100% 100%
Shruti – Undergraduate Assistant 1 10% 10%
Thomas – Undergraduate Assistant 1 10% 10%
Daniel – Undergraduate Co-Op .8 100% 100%
Jane – Undergraduate Co-Op .8 100% 100%
Harsh – Programming Intern .8 100% 100%
Igor – Summer Student .25 100% 0%
Valentin – Intern (visiting MSc) .25 100% 0%
Marion – Intern (visiting MSc) .25 100% 0%
Lina – MSc 3.5 100% 100%
Art – MSc 3.5 100% 100%
Brock – MSc 3.5 100% 100%
Navid – MSc 3.5 100% 100%
Oscar – PhD 1 100% 100%
Mengmeng – PhD 1 100% 100%
Mathias – PhD 1 100% 100%
Bas – PhD 2 100% 100%
Zhilong – PhD 2 100% 100%
Julie – PhD 2 50% 0%
Gabriel – PhD 2 100% 100%
Felix – PhD 3 100% 100%
Ives – PhD 3 100% 25%
Xiang – PhD 4 100% 100%
Haneet – PhD 4 100% 100%
Curt – PhD 4 100% 80%
Ning – PhD 4 100% 100%
Tim – PhD 5 100% 100%
Rafael – Postdoctoral Fellow 1 100% 100%
Polina – Postdoctoral Fellow 1 100% 100%
Ernie – Postdoctoral Fellow 1 100% 100%
Ting Kei – Postdoctoral Fellow 1 25% 25%
Rongrong – Postdoctoral Fellow 1 100% 100%
Enrico – Postdoctoral Fellow 4 60% 60%
Henryk – Scientific Programmer-Systems Admin 4 65% 65%
Ian – Research Associate 3 100% 100%
Table 1. DNOISE II Trainees 2013-14.

4. Dissemination of Research Results and Knowledge and/or Technology Transfer

4.1

Please provide the number of publications, conference presentations, and workshops to date arising from the research project supported by the grant in the table below.

Status | Refereed Journal Articles | Conference Presentations/Posters | Other (Technical Reports, Non-Refereed Articles, etc.)
Accepted-Published | 9 | 49 | 54
Submitted | 6 | 16 | 0
Table 2. DNOISE II Publications 2013-14.

4.2

Please provide the bibliographical reference data for the above publications, conference presentations and workshops under the corresponding headings. For publications, specify whether submitted, accepted or published.

Refereed Journal Articles:

  1. Submitted. Michael P. Friedlander, Ting Kei Pong, and Ives J. Macedo, “Gauge optimization, duality, and applications”. 2014. Abstract

  2. Submitted. Michael P. Friedlander and Gabriel Goh, “Tail bounds for stochastic approximation”. 2014. Abstract

  3. Submitted. Ning Tu and Felix J. Herrmann, “Fast imaging with surface-related multiples by sparse inversion”. 2014. Abstract

  4. Submitted. Curt Da Silva and Felix J. Herrmann, “Optimization on the Hierarchical Tucker manifold - applications to tensor completion”. 2014. Abstract

  5. Submitted. Navid Ghadermarzy, Hassan Mansour, and Ozgur Yilmaz, “Non-Convex Compressed Sensing Using Partial Support Information”, Sampling Theory in Signal and Image Processing, 2014.

  6. In Press. Tristan van Leeuwen and Felix J. Herrmann, “3D frequency-domain seismic inversion with controlled sloppiness”, SIAM Journal on Scientific Computing, 2014.

  7. Alexandre Y. Aravkin, James V. Burke, and Michael P. Friedlander, “Variational properties of value functions”, SIAM Journal on Optimization, 23(3):1689-1717, 2013.

  8. Hassan Mansour, Felix J. Herrmann, and Ozgur Yilmaz, “Improved wavefield reconstruction from randomized sampling via weighted one-norm minimization”, Geophysics, 78(5), V193-V206, 2013.

  9. Reza Shahidi, Gang Tang, Jianwei Ma, and Felix J. Herrmann, “Application of randomized sampling schemes to curvelet-based sparsity-promoting seismic data recovery”, Geophysical Prospecting, vol. 61, p. 973-997, 2013.

  10. Felix J. Herrmann, Andrew J. Calvert, Ian Hanlon, Mostafa Javanmehri, Rajiv Kumar, Tristan van Leeuwen, Xiang Li, Brendan Smithyman, Eric Takam Takougang, and Haneet Wason, “Frugal full-waveform inversion: from theory to a practical algorithm”, The Leading Edge, vol. 32, p. 1082-1092, 2013.

  11. Peyman P. Moghaddam, Henk Keers, Felix J. Herrmann, and Wim A. Mulder, “A new optimization approach for source-encoding full-waveform inversion”, Geophysics, vol. 78, p. R125-R132, 2013.

  12. Tim T.Y. Lin and Felix J. Herrmann, “Robust estimation of primaries by sparse inversion via one-norm minimization”, Geophysics, vol. 78, p. R133-R150, 2013.

  13. Tristan van Leeuwen and Felix J. Herrmann, “Mitigating local minima in full-waveform inversion by expanding the search space”, Geophysical Journal International, vol. 195, p. 661-667, 2013.

  14. Tristan van Leeuwen, Aleksandr Y. Aravkin, and Felix J. Herrmann, “Comment on: “Application of the variable projection scheme for frequency-domain full-waveform inversion” (M. Li, J. Rickett, and A. Abubakar, Geophysics, 78, no. 6, R249–R257)”, Geophysics, vol. 79, p. X11-X17, 2014.

Plenary talks:

  1. Felix J. Herrmann, “Randomized sampling in exploration seismology”, in SPIE Optics and Photonics: Wavelets and Sparsity XV, 2013.

  2. Felix J. Herrmann, “Breaking structure - why randomized sampling matters”, CSEG Technical Luncheon, January 2013.

Conference Presentations/Poster:

  1. Submitted. Rongrong Wang, Ozgur Yilmaz, and Felix J. Herrmann, “Full Waveform Inversion with Interferometric Measurements”. Submitted to SEG. 2014.

  2. Submitted. Brendan Smithyman, Bas Peters, Bryan DeVault, and Felix J. Herrmann, “Joint full-waveform inversion of on-land surface and VSP data from the Permian Basin”. Submitted to SEG. 2014.

  3. Submitted. Rajiv Kumar, Oscar Lopez, Ernie Esser, and Felix J. Herrmann, “Matrix completion on unstructured grids : 2-D seismic data regularization and interpolation”. Submitted to SEG. 2014.

  4. Submitted. Tim T.Y. Lin and Felix J. Herrmann, “Mitigating data gaps in the estimation of primaries by sparse inversion without data reconstruction”. Submitted to SEG. 2014.

  5. Submitted. Haneet Wason, Felix Oghenekohwo, and Felix J. Herrmann, “Randomization and repeatability in time-lapse marine acquisition”. Submitted to SEG. 2014.

  6. Submitted. Lina Miao, Polina Zheglova, and Felix J. Herrmann, “Randomized HSS acceleration for full-wave-equation depth stepping migration”. Submitted to SEG. 2014.

  7. Submitted. Felix Oghenekohwo, Rajiv Kumar, and Felix J. Herrmann, “Randomized sampling without repetition in time-lapse surveys”. Submitted to SEG. 2014.

  8. Submitted. Ernie Esser, Tristan van Leeuwen, Aleksandr Y. Aravkin, and Felix J. Herrmann, “A scaled gradient projection method for total variation regularized full waveform inversion”. Submitted to SEG. 2014.

  9. Submitted. Navid Ghadermarzy, Ozgur Yilmaz, and Felix J. Herrmann, “Seismic trace interpolation with approximate message passing”. Submitted to SEG. 2014.

  10. Submitted. Haneet Wason, Rajiv Kumar, Aleksandr Y. Aravkin, and Felix J. Herrmann, “Source separation via SVD-free rank minimization in the hierarchical semi-separable representation”. Submitted to SEG. 2014.

  11. Submitted. Bas Peters and Felix J. Herrmann, “A sparse reduced Hessian approximation for multi-parameter Wavefield Reconstruction Inversion”. Submitted to SEG. 2014.

  12. Submitted. Zhilong Fang and Felix J. Herrmann, “A stochastic quasi-Newton McMC method for uncertainty quantification of full-waveform inversion”. Submitted to SEG. 2014.

  13. Submitted. Tristan van Leeuwen and Felix J. Herrmann, “3D frequency-domain seismic inversion with controlled sloppiness”. 2014. Submitted to SIAM Journal on Scientific Computing.

  14. Submitted. Navid Ghadermarzy, Hassan Mansour, and Ozgur Yilmaz, “Non-Convex compressed sensing using partial support information”. Submitted to SEG. 2013.

  15. Submitted. Aleksandr Y. Aravkin, Rajiv Kumar, Hassan Mansour, Ben Recht, and Felix J. Herrmann, “Fast methods for denoising matrix completion formulations, with application to robust seismic data interpolation”. 2013.

  16. Submitted. Tristan van Leeuwen and Felix J. Herrmann, “A penalty method for PDE-constrained optimization”. 2013.

  17. Art Petrenko, Tristan van Leeuwen, Diego Oriato, Simon Tilbury, and Felix J. Herrmann, “Accelerating an iterative Helmholtz solver with FPGAs”, In proceedings EAGE technical program, Amsterdam, 2014.

  18. Ernie Esser and Felix J. Herrmann, “Application of a convex phase retrieval method to blind seismic deconvolution”, In proceedings EAGE technical program, Amsterdam, 2014.

  19. Polina Zheglova and Felix J. Herrmann, “Application of matrix square root and its inverse to downward wavefield extrapolation”, In proceedings EAGE technical program, Amsterdam, 2014.

  20. Rajiv Kumar, Tristan van Leeuwen, and Felix J. Herrmann, “Extended images in action: efficient WEMVA via randomized probing”, In proceedings EAGE technical program, Amsterdam, 2014.

  21. Rafael Lago, Art Petrenko, Zhilong Fang, and Felix J. Herrmann, “Fast solution of time-harmonic wave-equation for Full-Waveform Inversion”, In proceedings EAGE technical program, Amsterdam, 2014.

  22. Zhilong Fang, Curt Da Silva, and Felix J. Herrmann, “Fast uncertainty quantification for 2D full-waveform inversion with randomized source subsampling”, In proceedings EAGE technical program, Amsterdam, 2014.

  23. Curt Da Silva and Felix J. Herrmann, “Low-rank Promoting Transformations and Tensor Interpolation - Applications to Seismic Data Denoising”, In proceedings EAGE technical program, Amsterdam, 2014.

  24. Tim T.Y. Lin and Felix J. Herrmann, “Multilevel acceleration strategy for the robust estimation of primaries by sparse inversion”, In proceedings EAGE technical program, Amsterdam, 2014.

  25. Tristan van Leeuwen, Felix J. Herrmann, and Bas Peters, “A new take on FWI: Wavefield Reconstruction Inversion”, In proceedings EAGE technical program, Amsterdam, 2014.

  26. Rajiv Kumar, Aleksandr Y. Aravkin, Ernie Esser, Hassan Mansour, and Felix J. Herrmann, “SVD-free low-rank matrix factorization : wavefield reconstruction via jittered subsampling and reciprocity”, In proceedings EAGE technical program, Amsterdam, 2014.

  27. Felix Oghenekohwo, Ernie Esser, and Felix J. Herrmann, “Time-lapse seismic without repetition: reaping the benefits from randomized sampling and joint recovery”, In proceedings EAGE technical program, Amsterdam, 2014.

  28. Hassan Mansour and Ozgur Yilmaz, “A sparse randomized Kaczmarz algorithm”. In proceedings IEEE Global Conference on Signal and Information Processing, Austin, 2013.

  29. Navid Ghadermarzy and Ozgur Yilmaz, “Weighted approximate message passing algorithms for sparse recovery”. In proceedings SPIE: Wavelets and Sparsity XV, San Diego, 2013.

  30. Bas Peters, Felix J. Herrmann, and Tristan van Leeuwen, “Wave-equation based inversion with the penalty method: adjoint-state versus wavefield-reconstruction inversion”, In proceedings EAGE technical program, Amsterdam, 2014.

  31. Tim T.Y. Lin and Felix J. Herrmann, “Cosparse seismic data interpolation”, In proceedings EAGE technical program, London, 2013.

  32. Ning Tu, Aleksandr Y. Aravkin, Tristan van Leeuwen, and Felix J. Herrmann, “Fast least-squares migration with multiples and source estimation”, In proceedings EAGE technical program, London, 2013.

  33. Curt Da Silva and Felix J. Herrmann, “Hierarchical Tucker Tensor Optimization - Applications to 4D Seismic Data Interpolation”, In proceedings EAGE technical program, London, 2013.

  34. Tristan van Leeuwen, Aleksandr Y. Aravkin, Henri Calandra, and Felix J. Herrmann, “In which domain should we measure the misfit for robust full waveform inversion?”, In proceedings EAGE technical program, London, 2013.

  35. Haneet Wason and Felix J. Herrmann, “Ocean bottom seismic acquisition via jittered sampling”, In proceedings EAGE technical program, London, 2013.

  36. Rajiv Kumar, Aleksandr Y. Aravkin, Hassan Mansour, Ben Recht, and Felix J. Herrmann, “Seismic data interpolation and denoising using SVD-free low-rank matrix factorization”, In proceedings EAGE technical program, London, 2013.

  37. Felix J. Herrmann, “FWI Hits the Target”, in EAGE Workshop - Robust FWI: From the Arcane to Mundane, London, 2013.

  38. Rajiv Kumar, Tristan van Leeuwen, and Felix J. Herrmann, “AVA analysis and geological dip estimation via two-way wave-equation based extended images”, In proceedings SEG technical program, Houston, 2013.

  39. Ning Tu, Xiang Li, and Felix J. Herrmann, “Controlling linearization errors in \(\ell_1\) regularized inversion by rerandomization”, In proceedings SEG technical program, Houston, 2013.

  40. Rajiv Kumar, Tristan van Leeuwen, and Felix J. Herrmann, “Efficient WEMVA using extended images”, In proceedings SEG technical program, Houston, 2013.

  41. Ning Tu, Tristan van Leeuwen, and Felix J. Herrmann, “Limitations of the deconvolutional imaging condition for two-way propagators”, In proceedings SEG technical program, Houston, 2013.

  42. Xiang Li, Anais Tamalet, Tristan van Leeuwen, and Felix J. Herrmann, “Optimization driven model-space versus data-space approaches to invert elastic data with the acoustic wave equation”, In proceedings SEG technical program, Houston, 2013.

  43. Rajiv Kumar, Hassan Mansour, Aleksandr Y. Aravkin, and Felix J. Herrmann, “Reconstruction of seismic wavefields via low-rank matrix factorization in the hierarchical-separable matrix representation”, In proceedings SEG technical program, Houston, 2013.

  44. Curt Da Silva and Felix J. Herrmann, “Structured tensor missing-trace interpolation in the Hierarchical Tucker format”, In proceedings SEG technical program, Houston, 2013.

  45. Haneet Wason and Felix J. Herrmann, “Time-jittered ocean bottom seismic acquisition”, In proceedings SEG technical program, Houston, 2013.

  46. Felix J. Herrmann, “Recent developments in wave-equation based inversion technology”, In SEG Workshop - Advances in Computational Mathematics for Geophysics, Houston, 2013.

  47. Felix J. Herrmann and Ning Tu, “Fast RTM with multiples and source estimation”, In proceedings EAGE technical program/SEG Forum - Turning noise into geological information: The next big step?, Lisbon, 2013.

  48. Tim T.Y. Lin, Haneet Wason, and Felix J. Herrmann, “Dense shot-sampling via time-jittered marine sources”, In proceedings SEG technical program Simultaneous Sources Workshop, Houston, 2013.

  49. Tristan van Leeuwen and Felix J. Herrmann, “A penalty method for PDE-constrained optimization with applications to wave-equation based seismic inversion”, In SEG Workshop - Computational Mathematics for Geophysics, Houston, 2013.

  50. Felix J. Herrmann, “Recent developments in wave-equation based inversion technology”, In proceedings SEG technical program FWI Workshop, Oman, 2013.

  51. Felix J. Herrmann, “Randomized sampling in exploration seismology”, in KAUST International Workshop on Multiscale Modeling, Simulation and Inversion, Jeddah, Saudi Arabia, 2013.

  52. Keynote Speaker Felix J. Herrmann, “Randomized sampling in exploration seismology”, in SPIE Optics and Photonics: Wavelets and Sparsity XV, San Diego, 2013.

  53. Navid Ghadermarzy, “Using prior support information in compressed sensing”, Applied Harmonic Analysis Conference, Calgary, 2013.

  54. Curt Da Silva and Felix J. Herrmann, “Hierarchical Tucker Tensor Optimization - Applications to Tensor Completion”, in proceedings SAMPTA, Bremen, 2013.

  55. Felix Oghenekohwo and Felix J. Herrmann, “Assessing the need for repeatability in acquisition of time-lapse data”, in CSEG, Calgary, 2013.

  56. Lina Miao and Felix J. Herrmann, “Acceleration on sparse promoting seismic applications”, in CSEG, Calgary, 2013.

  57. Art Petrenko, Tristan van Leeuwen, and Felix J. Herrmann, “Software acceleration of CARP, an iterative linear solver and preconditioner”, in HPCS, Ottawa, 2013.

  58. Art Petrenko, Felix J. Herrmann, Diego Oriato, Simon Tilbury, and Tristan van Leeuwen, “Accelerating an iterative Helmholtz solver with FPGAs”, in Rice University Oil and Gas High Performance Computing Workshop, Houston, 2014.

  59. Felix J. Herrmann, “Randomized sampling in exploration seismology”, in NIPS, Lake Tahoe, 2013.

  60. Aleksandr Y. Aravkin, Tristan van Leeuwen, and Ning Tu, “Sparse seismic imaging using variable projection”, in proceedings ICASSP, Vancouver, 2013.

  61. Rafael Lago, Art Petrenko, Zhilong Fang, and Felix J. Herrmann, “CRMN method for solving time-harmonic wave equation”, Copper Mountain Conference, 2014.

  62. Michael P. Friedlander, “The merits of keeping it smooth”, SIAM Conference on Optimization, May 2014.

  63. Michael P. Friedlander, “Gauges, bundles, and sparsity”, Simons Institute Workshop on “Succinct Data Representations and Applications”, UC Berkeley, September 2013.

  64. Michael P. Friedlander, “Gauge Optimization”, International Conference on Continuous Optimization, Portugal, July 2014.

Other (Including Technical Reports, Non-Refereed Articles, etc.):

Theses:

  1. Art Petrenko, “Accelerating an iterative Helmholtz solver using reconfigurable hardware”, 2014. MSc Dissertation, University of British Columbia.

  2. Lina Miao, “Efficient seismic imaging with spectral projector and joint sparsity”, 2014. MSc Dissertation, University of British Columbia.

  3. Brock Hargreaves, “Sparse signal recovery: analysis and synthesis formulations with prior support information”, 2014. MSc Dissertation, University of British Columbia.

  4. James Johnson, “Seismic wavefield reconstruction using reciprocity”, 2013. MSc Dissertation, University of British Columbia.

  5. Navid Ghadermarzy, “Using prior support information in compressed sensing”, 2013. MSc Dissertation, University of British Columbia.

Presentations to Industry, Academia or Other Audience

  1. Felix J. Herrmann, “Randomized Sampling in Exploration Seismology”, Earth Sciences Department, University of Auckland, Dec 2013, Auckland, New Zealand.

  2. Felix J. Herrmann, “Imaging with multiples”, “Full-waveform inversion”, “Extended images”, Total, Sept, 2013, Houston, TX.

  3. Felix J. Herrmann, “Extended Images, AVA & MVA”, Statoil, Sept, 2013, Houston, TX.

SINBAD Presentations

  1. Ning Tu, “Controlling linearization errors with rerandomization”, SINBAD Spring consortium talks. 2013.

  2. Tim T.Y. Lin, “Cosparse seismic data interpolation”, SINBAD Spring consortium talks. 2013.

  3. Haneet Wason and Felix J. Herrmann, “Time-jittered ocean bottom seismic acquisition”, SINBAD Spring consortium talks. 2013.

  4. Felix J. Herrmann, “Frugal FWI”, SINBAD Spring consortium talks. 2013.

  5. Rajiv Kumar, Hassan Mansour, Aleksandr Y. Aravkin, and Felix J. Herrmann, “Seismic data interpolation via low-rank matrix factorization in the hierarchical semi-separable representation”, SINBAD Spring consortium talks. 2013.

  6. Felix J. Herrmann, “Extended images in action”, SINBAD Spring consortium talks. 2013.

  7. Ning Tu, “Fast imaging with multiples and source estimation”, SINBAD Spring consortium talks. 2013.

  8. Curt Da Silva and Felix J. Herrmann, “Hierarchical Tucker tensor optimization - applications to 4D seismic data interpolation”, SINBAD Spring consortium talks. 2013.

  9. Ives Macedo, “A dual approach to PhaseLift via gauge programming and bundle methods”, SINBAD Fall consortium talks. 2013.

  10. Felix Oghenekohwo, “Estimating 4D differences in time-lapse using randomized sampling techniques”, SINBAD Fall consortium talks. 2013.

  11. Art Petrenko, “Accelerating an iterative Helmholtz solver with FPGAs”, SINBAD Fall consortium talks. 2013.

  12. Ernie Esser, “Applications of phase retrieval methods to blind seismic deconvolution”, SINBAD Fall consortium talks. 2013.

  13. Tim T.Y. Lin, “Bootstrapping Robust EPSI with coarsely sampled data”, SINBAD Fall consortium talks. 2013.

  14. Brock Hargreaves, “The bridge from orthogonal to redundant transforms and weighted \(\ell_1\) optimization”, SINBAD Fall consortium talks. 2013.

  15. Enrico Au-Yeung, “Compressed sensing, recovery of signals using random Turbo matrices”, SINBAD Fall consortium talks. 2013.

  16. Bas Peters, “Examples from the Penalty-method”, SINBAD Fall consortium talks. 2013.

  17. Rajiv Kumar, “Extended images in action: efficient AVA via probing”, SINBAD Fall consortium talks. 2013.

  18. Rajiv Kumar, “Extended images in action: efficient WEMVA via randomized probing”, SINBAD Fall consortium talks. 2013.

  19. Lina Miao, “Fast imaging via depth stepping with the two-way wave equation”, SINBAD Fall consortium talks. 2013.

  20. Ning Tu, “Fast imaging with multiples and source estimation”, SINBAD Fall consortium talks. 2013.

  21. Felix J. Herrmann, “Frugal FWI”, SINBAD Fall consortium talks. 2013.

  22. Polina Zheglova, “Imaging with hierarchical semi separable matrices”, SINBAD Fall consortium talks. 2013.

  23. Rafael Lago, “Krylov solvers in frequency domain FWI”, SINBAD Fall consortium talks. 2013.

  24. Felix J. Herrmann, “Latest developments on the Chevron GOM and other datasets”, SINBAD Spring consortium talks. 2013.

  25. Xiang Li, “Lessons learned from Chevron Gulf of Mexico data set”, SINBAD Fall consortium talks. 2013.

  26. Okan Akalin, “Matrix and tensor completion for large-scale seismic interpolation: a comparative study”, SINBAD Fall consortium talks. 2013.

  27. Felix J. Herrmann, “Mitigating local minima in full-waveform inversion by expanding the search space with the penalty method”, SINBAD Spring consortium talks. 2013.

  28. Xiang Li, “Model-space versus data-space FWI with the acoustic wave equation”, SINBAD Fall consortium talks. 2013.

  29. Rongrong Wang, “Noise reduction by using interferometric measurements”, SINBAD Fall consortium talks. 2013.

  30. Zhilong Fang, “Parallel 3D FWI with simultaneous shots”, SINBAD Fall consortium talks. 2013.

  31. Brendan Smithyman, “Phase-residual based quality-control methods and techniques for mitigating cycle skips”, SINBAD Fall consortium talks. 2013.

  32. Ting Kei Pong, “The proximal-proximal gradient algorithm”, SINBAD Fall consortium talks. 2013.

  33. Julie Nutini, “Putting the curvature back into sparse solvers”, SINBAD Fall consortium talks. 2013.

  34. Mike Warner, “Reflection FWI with a poor starting model”, SINBAD Fall consortium talks. 2013.

  35. Tristan van Leeuwen, “Relaxing the physics: a penalty method for full-waveform inversion”, SINBAD Fall consortium talks. 2013.

  36. Ning Tu, “SLIM’s findings on the Machar dataset”, SINBAD Fall consortium talks. 2013.

  37. Tristan van Leeuwen, “Solving the data-augmented wave equation”, SINBAD Fall consortium talks. 2013.

  38. Curt Da Silva, “Structured tensor formats for missing-trace interpolation and beyond”, SINBAD Fall consortium talks. 2013.

  39. Zhilong Fang, “Swift FWI”, SINBAD Fall consortium talks. 2013.

  40. Gabriel Goh, “Taming time through tangents”, SINBAD Fall consortium talks. 2013.

  41. Haneet Wason, “Time-jittered marine sources”, SINBAD Fall consortium talks. 2013.

  42. Zhilong Fang, “Uncertainty analysis for FWI”, SINBAD Fall consortium talks. 2013.

  43. Navid Ghadermarzy, “Using prior support information in approximate message passing algorithms”, SINBAD Fall consortium talks. 2013.

  44. Rajiv Kumar, “Wavefield reconstruction with SVD-free low-rank matrix factorization”, SINBAD Fall consortium talks. 2013.

  45. Michael Friedlander, “Optimization”, SINBAD Fall consortium talks. 2013.

Conference/workshop Organisation:

  1. M. Friedlander, Co-organizer of the cluster on “Sparse optimization and information processing”, International Conference on Continuous Optimization, Lisbon, Portugal, July 2013.
  2. F. Herrmann, Organizer, SINBAD 2013 Spring Consortium Meeting, June 2013, London, UK; SINBAD 2013 Fall Consortium Meeting, Whistler, BC; SINBAD 2014 Spring Consortium Meeting, June 2014, Amsterdam.
  3. F. Herrmann, Organizer of the International Full Waveform Inversion Workshop, April 2015, São Paulo, Brazil.
  4. O. Yilmaz, Co-organizer of BIRS 2-Day Workshop on Applied and Computational Harmonic Analysis, August 2013.
  5. O. Yilmaz, Co-organizer of BIRS 5-Day Workshop on “Sparse Representations, Numerical Linear Algebra, and Optimization”, October 2014 (scheduled).
  6. O. Yilmaz, Co-organizer, AMS 2014 Fall Eastern Section Meeting, Special Session on “Sampling Theory”.

4.3 Patent and Licenses

Please provide in the table below the number of patents (filed and issued) and licences to date arising from the research project supported by the grant. (Provide details in 4.4.)

___ Not applicable

- OR -

___ None Yet Filed/Issued

Number of Patents

| Description | CANADA | U.S. | EP | OTHER | TOTALS |
|---|---|---|---|---|---|
| # of Patent Applications Filed | | | | | 1 |
| # of Patents Issued | | | | | |

4.4. Patents

Patent Applications Filed:
“A penalty method for PDE-constrained optimization (CONFIDENTIAL)”. Tristan van Leeuwen, Felix J. Herrmann, PCT International Application, filed April 22, 2014.

Licences:
Our industrial partners receive, in return for their financial contributions, royalty-free access to the IP developed as part of the SINBAD project. This is part of the SINBAD agreement, which governs the industrial contributions that are matched by DNOISE II. The full list of participating companies appears at the top of this report. Three new companies joined DNOISE II in this reporting period, namely Statoil, Woodside and ION GXT.

4.5 Prospects for the Transfer of the Results to the User Sector

Describe how the results achieved to date are being transferred to the user sector and the prospects for their commercial/industrial exploitation.

Technology dissemination

Aside from presenting our results at professional meetings and at our twice-a-year Consortium meetings, we have been involved in the following initiatives to disseminate our research findings to our industrial partners:

  1. software releases with concrete parallel software implementations of our algorithms (see details below);
  2. development of our parallel development environment pSPOT and our data container, which gives us access to parallel I/O;
  3. internships during which our students help our industrial partners apply our technology to their problems. This internship program has been very successful and has allowed companies to evaluate our technology.

Cumulative software releases:

  1. WRIm—Wavefield Reconstruction Imaging. Wave-equation based imaging derived from our Wavefield Reconstruction Inversion framework, in which images are created by cross-correlating solutions of the data-augmented wave equation (solutions that fit both the data and the wave equation) with the corresponding wave-equation residuals; a schematic of the underlying formulation follows this list. The advantage of this method is that it does not require solving the adjoint wave equation. For questions contact Bas Peters. [Read more] [GitHub]

  2. Seismic data regularization, interpolation, and denoising using factorization-based low-rank optimization. This application package demonstrates simultaneous seismic data interpolation and denoising on a 2D seismic line from the Gulf of Suez; a minimal sketch of the factorization idea follows this list. For questions contact Rajiv Kumar. [Read more] [GitHub]

  3. Missing-receiver interpolation of 3D frequency slices using Hierarchical Tucker Tensor optimization. Missing-receiver interpolation of frequency slices in 3D seismic acquisition using the Hierarchical Tucker tensor format. This interpolation scheme exploits the low-rank behavior of different reshapings of seismic data into matrices, in particular at low frequencies. We demonstrate this technique on a single frequency slice of data. For questions contact Curt Da Silva. [Read more] [GitHub]

  4. Robust Estimation of Primaries by Sparse Inversion via one-norm minimization. An iterative prestack surface-demultiple method built upon solving an \(\ell_1\)-minimization problem for the surface-free Green’s function while simultaneously estimating the source wavelet.
  5. Migration from surface-related multiples. Fast imaging with surface-related multiples by sparse inversion [Demo]
  6. 2D constant-density acoustic frequency-domain modeling, linearized modeling and imaging. A parallel matrix-free framework for (linearized) acoustic modelling with the time-harmonic Helmholtz equation.
  7. Modified Gauss-Newton full-waveform inversion. Fast full-waveform inversion with sparse updates
  8. Fast Robust Waveform inversion. Fast full-waveform inversion with robust penalties and source estimation.
  9. 2D ocean-bottom marine acquisition via jittered sampling – a new type of marine acquisition scheme including curvelet-based recovery with one-norm minimization;
  10. Efficient least-squares imaging with sparsity promotion and compressive sensing – fast reverse-time migration with phase encoding and curvelet-domain sparsity promotion;
  11. Fast imaging with wavelet estimation by variable projection – reverse-time migration with curvelet-domain sparsity promotion and source-wavelet estimation;
  12. Seismic trace interpolation using weighted one-norm minimization – recovery of missing traces by weighted curvelet-domain sparsity promotion
  13. 3D acoustic full-waveform inversion – extension of our full-waveform inversion to 3-D;
  14. Application of simultaneous seismic data interpolation and denoising using factorization based low-rank optimization – missing-trace interpolation using low-rank matrix factorizations and nuclear norm minimization;
  15. SPOT: A linear-operator toolbox for Matlab - enhancements to the core object-oriented operator library.
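
The two sketches below expand on items 1 and 2 of this list. First, a schematic of the penalty formulation underlying Wavefield Reconstruction Inversion/Imaging. The notation (\(P\) a sampling/detection operator, \(A(m)\) the discrete Helmholtz operator for model \(m\), \(q\) the source term, \(d\) the observed data, \(\lambda\) the penalty parameter) broadly follows our penalty-method publications; the expressions are illustrative only and are not a specification of the released code:

\[
\bar{u}_{\lambda} \;=\; \arg\min_{u}\; \|Pu - d\|_2^2 \;+\; \lambda^2\,\|A(m)u - q\|_2^2 ,
\qquad
\delta m \;\propto\; \sum_{\mathrm{sources},\,\omega} \mathrm{Re}\!\left\{ \overline{\partial_m\!\left[A(m)\bar{u}_{\lambda}\right]}^{\top} \left(A(m)\bar{u}_{\lambda} - q\right) \right\} .
\]

The first problem is the data-augmented wave-equation solve, a least-squares problem in the wavefield \(u\) only; the second expression indicates why no adjoint wave-equation solve is needed, since the image is obtained by correlating the reconstructed wavefields with their wave-equation residuals.

Second, a minimal, self-contained sketch of the generic idea behind factorization-based low-rank recovery (item 2): missing entries of a matrix are filled in by fitting a low-rank factorization to the observed entries only, here with a plain alternating least-squares loop in NumPy. The function and variable names are hypothetical, and this is not the algorithm shipped in the released package, which has its own solver and seismic-specific matricizations.

```python
import numpy as np

def low_rank_completion(Y, mask, rank=10, iters=50, mu=1e-3):
    """Fill missing entries of Y (observed where mask is True) with a
    rank-`rank` factorization Y ~ L @ R.T, fitted by alternating
    regularized least squares over the observed entries only."""
    n, m = Y.shape
    rng = np.random.default_rng(0)
    L = rng.standard_normal((n, rank))
    R = rng.standard_normal((m, rank))
    for _ in range(iters):
        for i in range(n):               # update row i of L
            idx = mask[i, :]
            Ri = R[idx, :]
            L[i, :] = np.linalg.solve(Ri.T @ Ri + mu * np.eye(rank),
                                      Ri.T @ Y[i, idx])
        for j in range(m):               # update row j of R
            idx = mask[:, j]
            Lj = L[idx, :]
            R[j, :] = np.linalg.solve(Lj.T @ Lj + mu * np.eye(rank),
                                      Lj.T @ Y[idx, j])
    return L @ R.T

# Toy usage: a rank-3 matrix with roughly half of its entries removed.
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))
mask = rng.random(X.shape) < 0.5
X_rec = low_rank_completion(np.where(mask, X, 0.0), mask, rank=3)
print("relative recovery error:", np.linalg.norm(X_rec - X) / np.linalg.norm(X))
```

In the seismic setting, the role of the matrix is played by, for example, a monochromatic frequency slice organized in midpoint-offset coordinates, where fully sampled data are approximately low rank and randomized subsampling breaks that structure, which is what makes the recovery work.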

Consortium sponsor meetings:

We had avid participation from our sponsors at our 2013 Spring SINBAD Consortium Meeting, held June 14 at the Royal School of Mines, Imperial College London, in association with the EAGE conference. A record 32 representatives from our sponsors’ R&D personnel attended, as listed below. We also hosted representatives of prospective new sponsors DONG Energy, Statoil, and Saudi Aramco at this meeting; Statoil has since joined. The meeting was hosted by our collaborator Mike Warner at Imperial College London and featured a full day of presentations by four of the group’s students, Felix Herrmann, and Mike Warner.

Sponsor Participants: Raymond Abma (BP America); Aria Abubakar (Schlumberger-Doll Research); Paul Brettwood (ION Geophysical); John Brittan (ION Geophysical); Paul Childs (Schlumberger Gould Research); Nanxun Dai (BGP); Kristof DeMeersman (CGG); Carlos Eduardo Theodoro (Petrobras); Camila Franca (BG Group); David Fraser Halliday (Schlumberger Gould Research); Rob Hegge (Petroleum Geo-Services); James Hobro (Schlumberger); Jon-Fredrik Hopperstad (Schlumberger Gould Research); Rodney Johnston (BP Canada); Charles Jones (BG Group); Andreas Klaedtke (Petroleum Geo-Services); Clement Kostov (WesternGeco GeoSolutions, Schlumberger); Andrew Long (Petroleum Geo-Services); Hamish Macintyre (BG Group); Fabio Mancini (Woodside); Ian Moore (WesternGeco); Jeremy Neep (BP); Einar Otnes (Petroleum Geo-Services); Gordon Poole (CGG); James Rickett (Schlumberger Gould Research); Jonas Rohnke (CGG); Risto Siliqi (CGG); Djalma Soares (Petrobras); James Selvage (BG Group); Paulo Terenghi (Petroleum Geo-Services); John Washbourne (Chevron); Yu Zhang (CGG)

Guests: Kent Broadhead, Thierry-Laurent Tonellot (Saudi Aramco); Christian Hidalgo (DONG Energy); Andrew James Carter (Statoil)

Our 2013 Fall SINBAD Consortium Meeting, held Dec 1-4 in Whistler, also attained record participation from our sponsors: 27 industry geophysicists made the journey to BC to attend 3½ days of lectures presented by our students and postdocs.

Sponsor Participants: Sverre Brandsberg-Dahl (Petroleum Geo-Services); Adriana Citlali Ramirez (Statoil); Richard Coates (Schlumberger); Graham Conroy (CGG); Alan Dewar (CGG); Carlos Eduardo Theodoro (PETROBRAS); Fuchun Gao (Total); Rob Hegge (Petroleum Geo-Services); Charles Jones (BG Group); Stefan Kaculini (CGG); Sam Kaplan (Chevron); Andreas Klaedtke (Petroleum Geo-Services); Chengbo Li (ConocoPhillips); Hamish Macintyre (BG Group); Xiao-Gui Miao (CGG); Larry Morley (ConocoPhillips); Jaime Ramos Martinez (Petroleum Geo-Services); Tom Ridsdill-Smith (Woodside Energy); Paolo Terenghi (Petroleum Geo-Services); Daniel Trad (CGG); Madhav Vyas (BP); Chao Wang (ION Geophysical); Wei Wu (BGP); David Yingst (IonGEO Corp.); Zhou Yu (BP); Wei Zhang (BGP); Chaoguang Zhou (Petroleum Geo-Services)

Our 2014 Spring SINBAD Consortium Meeting will be held in Amsterdam on June 20, in association with the 2014 EAGE conference. Our trainees achieved a record acceptance rate at this major international conference this year, with 10 oral presentations from our group accepted to the EAGE technical program. Registration for our Consortium Meeting is strong at the time of submission of this report, and we are expecting guest attendees from INPEX and Petronas, who are considering membership in the consortium.

Sponsor Participants Registered (at time of writing): Aria Abubakar (Schlumberger); Hojjat Akhondi-Asi (Schlumberger); John Brittan (ION Geophysical); Luis D’Afonseca (CGG); Rob Hegge (PGS); Thomas Hertweck (CGG); Henning Hoeber (CGG); Charles Jones (BG Group); Ian Jones (ION Geophysical); Simon King (CGG); Thomas Mensch (CGG); Nick Moldoveanu (Schlumberger); Ian Moore (Schlumberger); Kawin Nimsaila (CGG); James Selvage (BG Group); Michel Schonewille (Schlumberger); Risto Siliqi (CGG); Gert-Jan van Groenestijn (PGS); Nuno Vieira Da Silva (CGG); Stephen Xin (CGG); Dong Zheng (CGG)

Guests: Shogo Masaya (INPEX); Kenichi Nakaoka (INPEX); Katsumi Nakasuka (INPEX); Abdul Aziz Muhamad (PETRONAS)

Outreach

To disseminate our research findings to the broader academic and industrial communities, we have written review articles for International Innovation and IEEE Signal Processing Magazine. We also participated in conferences in the fields of signal processing (ICASSP and SSP), machine learning (NIPS), and computational mathematics (SIAM) to engage these communities, and F. Herrmann presented a CSEG Working Lunch Series lecture to a 500-member audience of Calgary-area geophysics professionals. Outreach activities are also described throughout Section 1.


5. Problems Encountered

Identify the main problems encountered during this installment of the grant from the list below (select all that apply):

X Technical or scientific problems
___ Problems with direction of research or findings
___ Equipment and facilities
X Staffing issues (including students)
___ Funding problems
X Partner withdrew from project
___ Partner interaction issues
___ Other (specify)

Technical or scientific problems

Aside from the impending leave of absence of Michael Friedlander, the DNOISE II project has not suffered any major setbacks. We have been making substantial progress on many fronts, and we will soon be empowered by the new compute cluster and by access to high-performance computing in Brazil, which puts us in an outstanding position to further establish our leadership role in seismic data acquisition and computational exploration seismology.

If there is one problem worth mentioning, it is the withdrawal of BP from the program. While we certainly appreciate BP’s support, especially during the difficult times after the stock-exchange crash, I regret the collateral impact this withdrawal is having on the students who spent time, not least at BP’s encouragement, on the Machar field data set. Because of the withdrawal, we cannot continue working on this dataset, on which we had obtained very encouraging results.

This experience exposes the difficulties of working with field data in an academic setting. Not only is obtaining field data challenging, the legal framework required to make this happen is also difficult to manage. Industry understandably pushes for formal legal agreements with the university to protect its data, while the university tries to minimize the legal risks these agreements may bring, and the students are caught in the middle.

I have worked, and will continue to work, with the University Liaison Office and our industrial partners towards arrangements that guarantee

  • access to the data for the duration of the student’s research program (the duration of the degree plus an extended period afterwards to get publications out);
  • reasonable protection of the data to address concerns from industry.

With this approach, I hope to reduce the risk of a recurrence of what happened with BP. As far as I understand, BP is planning to rejoin the consortium as soon as it is out of the financial aftermath of the Deepwater Horizon disaster.

Staffing Issues

One of our PhD students withdrew prior to completion to pursue other opportunities. On the positive side, we experienced unprecedented interest from graduate-student applicants this year, and the quality of applicants was extremely high. Three new students will begin in September: Ben Bougher from Nova Scotia, Canada, Philipp Witte from Germany, and Shashin Sharan from India. Ben Bougher has extensive prior experience in geophysical software consultancy and is a previous NSERC USRA recipient. Philipp Witte is an exceptional student in our classical areas of numerical modeling and seismics from the Wave Inversion Technology Consortium at the University of Hamburg. Shashin Sharan brings a wealth of experience with the Omega platform and seismic data handling from a period of employment as a geophysicist at Schlumberger India. We expect these students to broaden our research and bring unique skill sets of their own.

6. Collaboration with Supporting Organizations

6.1 Who initiated this CRD project?

X The university researcher
__ The industry partner
__ Shared initiation (university/industry)
__ Other (specify)

6.2 In what way were the partners directly involved in the project (select all that apply)?

X Partners were available for consultation
X Partners provided facilities
X Partners participated in the training
X Partners received training from university personnel
X Partners discussed the project regularly with the university team
__ Number of meetings during the period covered by this report: 2
X Partners were involved in the research

6.3 Describe the partner’s involvement and comment on the collaboration.

The industry’s collaboration involves the following:

  • Attendance at our Consortium meetings, which we organize twice a year, once in BC and once in Europe. During these meetings, the industry partners give us extensive feedback on our projects and provide formal feedback during the annual steering committee meeting at the end of the Fall Consortium meeting held in BC;
  • Releases of datasets to test our algorithms. We worked on very sophisticated datasets provided by BG, BGP, BP, Chevron, PGS, and WesternGeco;
  • Provision of in-kind donation of Omega seismic data processing software from WesternGeco. We expect this donation to be a significant tool in our future research plans;
  • Hiring our students as interns or employees. Brock Hargreaves completed an internship with Total. Haneet Wason completed an internship with PGS. Ning Tu completed a short visit (2 weeks) with PGS. Lina Miao was hired by CGG (Calgary office) subsequent to defending her MSc;
  • Informal feedback and consultation in conversations throughout the year;
  • Meetings and discussions at geophysical industry conferences;
  • Case studies on synthetic and field data.

6.4 Was the total amount of cash committed by the partner during the period covered by this report received?

X Yes
__ No

6.5 Was any in-kind received from the partner during the period covered by this report?

X Yes
__ No

6.6

For cash and in-kind received, please enter the amounts below, along with the amount of cash and in-kind committed in the original proposal. If no in-kind was received, please enter “0”. Where in-kind was not committed enter “n/a”.

| | Committed | Amount Received to Date |
|---|---|---|
| Cash | $623,000 | $623,000 |
| In-Kind | $485,420 | $200,000 |

6.7

Describe the in-kind received and explain variations between commitment and actual cash and in-kind contribution if applicable.

WesternGeco generously donated the full commercial Omega seismic data processing system, which we host on an IBM multithreaded workstation. We have installed the package and are currently configuring it to work with the Chevron Challenge dataset; we will report and present the results over the next few months, including at the SEG Denver conference in October 2014. This software will reach full deployment with the installation of our new high-performance computing system.

We are expecting to receive in-kind contributions from the vendor ultimately selected to provide the HPC equipment. This was originally quantified as $285,420 based on quotes received from IBM and was understood to be dependent upon the vendor agreement ultimately negotiated following the equipment bid process. We are on the verge of final selection of this vendor and will be able to quantify and report this in-kind contribution by July 31.

WesternGeco have provided land and marine datasets which will be used by SLIM researchers to explore complex scenarios in data processing.

BP - provided marine data from the Machar region of the North Sea, which was used by Tim Lin to test EPSI and by Ning Tu to improve the migration image.

PGS - provided broadband marine streamer data for Haneet Wason to test acquisition parameters vital to her PhD work.

Vecta - provided land data from the Texas Permian Basin, which was used by our PDF Brendan Smithyman for FWI velocity modelling.

Chevron - provided a synthetic marine dataset created as an international benchmark for Full Waveform Inversion of acoustic data. The SLIM team working on this data includes Felix, Ian, Tim, Haneet, Xiang Li, Ning Tu, Rajiv, Brendan, Mathias and MengMeng; results are to be presented at the 2014 SEG in Denver.


7. Financial Information

Figure 10: DNOISE Financial Report 2013-14.

The purpose of this section is to provide additional project-specific detail; it cannot be substituted with a Statement of Account (Form 300).

Our Yr 4 budget is anticipated to be fully spent, with total project expenditures of $1,935,961 and an anticipated overexpenditure of $34,800. This total includes the $200,000 in-kind donation of software from WesternGeco and $285,420 in anticipated in-kind contribution from our HPC vendor; actual cash expenditures for the project in Yr 4 will total $1,450,541 ($1,450,541 + $200,000 + $285,420 = $1,935,961).

Sponsor monies in support of the project were fully received. This includes the additional monies committed from new sponsors Woodside and ION GXT in support of the new HPC equipment.

Our investment in trainees through Yr 4 of the project was substantial. We gained 4 new PhD students in Sept 2013. Our expenditure on students is therefore 20% over budget, but this is offset by underspending in the Technical/Professional assistants category due to our continued inability to recruit a second scientific programmer; we have instead taken the tack of bringing in undergraduate co-ops to assist our scientific programmer/IT support lead Henryk Modzelewski. We have brought in 8 undergraduate students for co-op terms ranging from 3 to 8 months.

Our expenditure on Technical/Professional assistants is 30% under budget due to the aforementioned inability to recruit a second scientific programmer, but, as noted above, we have moved this allocation into student salaries. Three new graduate students with significant strengths in these areas will join in Sept 2014, which should further strengthen our scientific programming environment.

Our HPC equipment award is anticipated to be fully spent in this period, as we are in the final stages of selecting the vendor and anticipate that the equipment will be ordered and the funds encumbered in July, with installation expected by September 2014. In hand with this, we expect to realize a significant in-kind contribution from the selected vendor, which will take the form of donated equipment, services, or a combination of both. The cash contributions towards this purchase from new members ION and Woodside were received and will be applied to the purchase.

Our in-kind donation of the Omega data platform was received in February 2014; as described in Section 6.7 above, it has been installed and is being configured to work with the Chevron Challenge dataset, with results to be presented over the coming months, including at the SEG Denver conference in October 2014.

Our materials and supplies expenditures were larger than anticipated. Part of this expense relates to food and beverages provided for our DNOISE seminar series, which is held weekly and brings together our collaborators in Math and CS as well as remote participants from the group we collaborate with at UFRN in Natal, Brazil.

We have incurred significant travel costs above the approved budget. This was the result of three factors:

  1. Increasing sponsor participation has resulted in a greatly increased scope of our consortium sponsor meetings; the cost of catering/venue (report + meeting) for these meetings is therefore 30% above the original budget;

  2. Travel to attend the project meetings is significantly over budget. All travel costs are incurred by DNOISE members only.

  3. Travel to attend conferences is significantly over budget. This is a result of the unprecedented success of our trainees in having papers accepted to major international conferences.