# Acquisition

## Introduction

Leveraging ideas from the field of compressed sensing, we design pragmatic blended acquisition schemes that reduce the cost of seismic data acquisition by shortening marine survey time through randomly distributed (jittered) firing times. Deblending, i.e., recovery of complete, unblended seismic volumes, is achieved via sparsity-promoting recovery strategies built on large-scale optimization techniques. Furthermore, we study the impact of randomized sampling strategies on the quality of time-lapse seismic data, wherein we challenge the current insistence on strict repetition between the baseline and monitor surveys.

## Time-jittered, blended marine acquisition

In principle, blended acquisition involves firing multiple sources at slightly dithered (or randomly jittered) times, resulting in overlaps between shot records. In contrast, conventional (sequential) acquisition has no overlaps between shot records since the source vessel fires at regular times, which translate to regular spatial locations for a given speed. Blended acquisition improves the productivity and efficiency of acquisition surveys (Beasley, 2008; Berkhout, 2008; Mosher et al., 2014). However, the challenge with such schemes is to deblend the acquired data, since many processing techniques, such as SRME, EPSI, RTM, and FWI, require full (regular) sampling.

Our approach is two-fold: i) design of time-jittered, blended acquisition scheme, and ii) deblending by curvelet-domain sparsity promotion using \(\ell_1\) constraints. What we aim to achieve with this approach is outlined in Figure 1. Wason and Herrmann (2013) successfully adapted this approach to single- and multiple-vessel, time-jittered (ocean bottom) acquisition, wherein airguns fire at randomly-jittered times which translate to randomly-jittered spatial locations for a given speed, with the receivers (ocean bottom cables/nodes) recording *continuously*. Figure 2 illustrates the conventional and jittered marine acquisition schemes.
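As a toy illustration of the jittered firing times described above, the following sketch perturbs a regular shot-time grid by a bounded random amount. The shot count, nominal interval, and jitter window are assumed values for illustration, not the parameters of the surveys shown in the figures.

```python
import numpy as np

rng = np.random.default_rng(0)

n_shots = 10        # number of shots (assumed toy value)
nominal_dt = 20.0   # nominal time between shots, in seconds (assumed)
jitter = 5.0        # maximum perturbation around each nominal time (assumed)

# Conventional (sequential) acquisition: regular firing times.
regular_times = nominal_dt * np.arange(n_shots)

# Time-jittered acquisition: perturb each nominal time by up to +/- jitter.
jittered_times = regular_times + rng.uniform(-jitter, jitter, size=n_shots)

print(regular_times)
print(jittered_times)
```

For a vessel moving at constant speed, these jittered firing times translate directly into jittered spatial shot positions.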

### Deblending by sparse inversion via one-norm minimization

We formulate the deblending problem as an optimization problem \[ \begin{equation} \label{BP} \tag{BP} \min\limits_{\vector{x} \in \mathbb{C}^P} \|\vector{x}\|_1 \quad \textrm{subject to} \quad \vector{y} = \vector{Ax}, \end{equation} \] which seeks the sparsest solution of the underdetermined linear system \(\vector{y} = \vector{Ax}\), where \(\vector{y} \in \mathbb{C}^n\) represents the randomly undersampled and blended measurements with \(n \ll N\), and \(N\) is the ambient dimension of the data. The vector \(\vector{x}\) represents a compressible representation of the seismic data in a sparsifying domain \(\vector{S}\). The measurement matrix \(\vector{A}\) combines a sampling matrix \(\vector{M}\) and the sparsifying transform, such that \(\vector{A} = \vector{MS^H}\). A seismic line with \(N_s\) sources, \(N_r\) receivers, and \(N_t\) time samples can be reshaped into an \(N\)-dimensional vector \(\vector{f}\), where \(N = N_s \times N_r \times N_t\). Given the measurements \(\vector{y} = \vector{Mf}\), the aim is to recover a sparse approximation \(\vector{\tilde{f}} = \vector{S^H}\tilde{\vector{x}}\) by solving the \(\ref{BP}\) problem using the SPG\(\ell_1\) solver (Van Den Berg and Friedlander, 2008). We use curvelets as the sparsifying transform.
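The text uses SPG\(\ell_1\) with curvelets; as a self-contained stand-in, the sketch below solves the closely related unconstrained LASSO problem with ISTA (iterative soft-thresholding) on a toy underdetermined Gaussian system, taking the sparsifying transform to be the identity so that \(\vector{A} = \vector{M}\). Problem sizes, the penalty parameter, and iteration count are all assumptions for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the one-norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.01, n_iter=2000):
    """ISTA for min_x 0.5||Ax - y||^2 + lam*||x||_1, a proxy for BP."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x

rng = np.random.default_rng(1)
n, N, k = 40, 100, 5                        # n << N, k-sparse signal (toy sizes)
A = rng.standard_normal((n, N)) / np.sqrt(n)
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                              # noise-free undersampled measurements

x_hat = ista(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In practice the measurement operator is never formed as a dense matrix; solvers such as SPG\(\ell_1\) only require matrix-free products with \(\vector{A}\) and \(\vector{A}^H\).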

The time-jittered blended acquisition is simulated on a 2-D seismic line from the Gulf of Suez, with one and two source vessels. Figures 3a and 3b show the corresponding randomly undersampled and blended measurements for the two surveys. Note that only 30.0 s of the continuously recorded data is shown. If we simply apply the adjoint of the acquisition operator to the blended data, i.e., compute \(\vector{M}^H\vector{y}\), the interferences (or source crosstalk) due to overlaps in the shot records appear as random noise, incoherent and non-sparse, in the common-receiver domain, as illustrated in Figures 4a and 5a. The overlap between shot records appears in the common-shot domain (Figures 4d and 5d).
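The crosstalk left by the adjoint can be mimicked in one dimension: two toy shot records are summed after time shifts (blending into a continuous recording), and the adjoint windows the continuous trace back at each firing time, so each recovered record carries the other record's energy wherever they overlap. Record length and shifts below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

nt = 50                      # samples per shot record (assumed toy size)
shifts = [0, 17]             # firing-time shifts in samples (assumed)
n_rec = nt + max(shifts)     # length of the continuous recording

# Two toy "shot records".
shots = [rng.standard_normal(nt), rng.standard_normal(nt)]

# Blending: sum the time-shifted shot records into one continuous trace.
y = np.zeros(n_rec)
for s, shot in zip(shifts, shots):
    y[s:s + nt] += shot

# Adjoint of blending: window the continuous trace at each firing time.
adjoint = [y[s:s + nt].copy() for s in shifts]

# Each adjoint record equals its own shot plus crosstalk over the overlap.
overlap = nt - (shifts[1] - shifts[0])
print("overlap samples:", overlap)
```

In the 2-D surveys of the text this crosstalk is incoherent in the common-receiver domain, which is what makes sparsity promotion effective at removing it.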

Our goal, however, is to recover the conventional, unblended shot records from the blended data by working with the *entire* (blended) data volume, and not on a shot-by-shot basis. For acquisition with one source vessel, the undersampling factor is \(\eta = 2\); hence, the recovery problem becomes a joint deblending and interpolation problem. For acquisition with two source vessels, the recovery problem is deblending alone. Recovery via curvelet-based sparsity promotion for both scenarios is shown in Figures 4 and 5.

### SLIM software implementations

Software for this application is provided here.

## Randomized 4-D acquisition and recovery methods

Ideally, 4-D acquisition design requires surveys to be highly repeatable (Landrø, 1999; Lumley and Behrens, 1998), with sources and receivers positioned to match spatially between the surveys. This is especially challenging in marine streamer acquisition, where streamer positions are often at the mercy of ocean currents. Therefore, there has been more success with 4-D acquisition using ocean-bottom cables or nodes (Eggenberger et al., 2014; Johnston, 2013), where a fair amount of control can be exercised over the receivers. In all cases, some degree of uncertainty in source/receiver locations remains. Here, we ask the question: is it really necessary to repeat the surveys?

We investigate randomized marine acquisition in which seismic source vessels fire at random locations onto a common receiver array during the baseline survey. For the monitor survey, the same sources fire at different random locations onto the same receiver array. This scheme can be extended to randomized source locations combined with a randomized distribution of ocean-bottom nodes.
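A minimal sketch of this non-repeated sampling design: baseline and monitor surveys draw independent random subsets of a nominal source grid, over a common receiver array. Grid size and subsampling fraction are assumed toy values.

```python
import numpy as np

rng = np.random.default_rng(3)

n_grid = 100   # points on the nominal source grid (assumed)
n_fired = 50   # sources actually fired per survey (assumed 50% subsampling)

# Baseline and monitor surveys: independent random subsets of the grid;
# the receiver array is common to both surveys.
baseline_src = np.sort(rng.choice(n_grid, n_fired, replace=False))
monitor_src = np.sort(rng.choice(n_grid, n_fired, replace=False))

# The surveys are deliberately not repeated: some locations coincide
# by chance, most do not.
shared = np.intersect1d(baseline_src, monitor_src)
print(len(shared), "of", n_fired, "source locations coincide by chance")
```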

### Independent recovery strategy

Our independent recovery approach to randomized 4-D acquisition involves solving a sparsity-promoting program independently, for the baseline and monitor data. In this formulation, the objective is to solve for a sparse representation \(\vector{x}_1\) and \(\vector{x}_2\) of the densely sampled data, match both onto a common grid, and compute the 4-D difference. Mathematically, this is formulated as \[ \begin{equation*} \begin{split} \nonumber \tilde{\vector{x}}_1 = \argmin_{\vector{x}_1}\|\vector{x}_1\|_1 \quad \textrm{subject to} \quad \vector{y}_1 = \vector{A}_1\vector{x}_1,\\ \nonumber \tilde{\vector{x}}_2 = \argmin_{\vector{x}_2}\|\vector{x}_2\|_1 \quad \textrm{subject to} \quad \vector{y}_2 = \vector{A}_2\vector{x}_2, \end{split} \end{equation*} \] where \(\vector{y}_1\) and \(\vector{y}_2\) are the observed randomly subsampled baseline and monitor data, respectively; \(\vector{A}_1\) and \(\vector{A}_2\) are the compressed sensing matrices formed from the sampling operator and the sparsifying transform (curvelet transform).

### Joint recovery method

Our joint recovery approach, adapted from Baron et al. (2005), carries out a sparsity-promoting program that exploits the information shared between the baseline and monitor surveys. With sources/receivers missing in both surveys, our approach recovers the component common to the baseline and monitor data as well as the individual vintages, and computes the 4-D difference. We let \(\vector{x}_1 = {\vector{z}_0} + \vector{z}_1\) and \(\vector{x}_2 = {\vector{z}_0} + \vector{z}_2\), where \(\vector{z}_0\) is the common component and \(\vector{z}_1\), \(\vector{z}_2\) are the innovations specific to each vintage.

Mathematically, we can formulate this method as a broader optimization problem: \[ \begin{equation*} \tilde{\vector{z}} = \argmin_{\vector{z}}\|\vector{z}\|_1 \quad \text{subject to} \quad \vector{y} = \vector{A}\vector{z}, \nonumber \end{equation*} \] where \(\vector{A} = \begin{bmatrix} {\vector{A}_1} & \vector{A}_{1} & \vector{0} \\ {\vector{A}_{2}} & \vector{0} & \vector{A}_{2} \end{bmatrix}\), \(\vector{z} = \begin{bmatrix} \vector{z}_0 \\ \vector{z}_1 \\ \vector{z}_2 \end{bmatrix}\), and \(\vector{y} = \begin{bmatrix} \vector{y}_1 \\ \vector{y}_2 \\ \end{bmatrix}\).
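The block structure of the joint system can be made concrete with small dense matrices. Sizes are toy assumptions; in practice \(\vector{A}_1\) and \(\vector{A}_2\) are applied matrix-free.

```python
import numpy as np

rng = np.random.default_rng(4)

n, N = 8, 20                       # toy sizes (assumed)
A1 = rng.standard_normal((n, N))   # baseline measurement matrix
A2 = rng.standard_normal((n, N))   # monitor measurement matrix
Z = np.zeros((n, N))

# Joint recovery operator: the first block column acts on the common
# component z0, the other two on the innovations z1 and z2.
A = np.block([[A1, A1, Z],
              [A2, Z, A2]])

# A shared component plus small vintage-specific innovations.
z0 = rng.standard_normal(N)
z1 = 0.1 * rng.standard_normal(N)
z2 = 0.1 * rng.standard_normal(N)
z = np.concatenate([z0, z1, z2])

y1 = A1 @ (z0 + z1)                # observed baseline data, y1 = A1 x1
y2 = A2 @ (z0 + z2)                # observed monitor data,  y2 = A2 x2
y = np.concatenate([y1, y2])

# The stacked system reproduces both surveys at once: y = A z.
print(np.allclose(A @ z, y))
```

Because \(\vector{z}_0\) appears in both block rows, energy shared by the vintages is explained once, which is what gives the joint formulation its advantage over two independent solves.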

Consequently, the estimates \(\tilde{\vector{x}}_1 = \tilde{\vector{z}}_0 + \tilde{\vector{z}}_1\), \(\tilde{\vector{x}}_2 = \tilde{\vector{z}}_0 + \tilde{\vector{z}}_2\), and the 4-D difference \(\tilde{\vector{x}}_1 - \tilde{\vector{x}}_2 = \tilde{\vector{z}}_1 - \tilde{\vector{z}}_2\) follow.

Figure 6 summarizes our results compared with the idealized case. Figure 6b shows the difference between the baseline and monitor data when the sources/receivers match exactly between the surveys. Since this is a synthetic example, source and receiver positions can be repeated exactly, so this 4-D difference serves as the true signal. When the sources and receivers do not match (i.e., when the acquisition is not repeated), independent and joint recovery applied to the observed data yield Figures 6c and 6d, respectively.

The joint recovery approach leads to three main conclusions: *(i)* randomized sampling is a viable option when designing 4-D surveys; *(ii)* joint recovery of 4-D data is efficient and yields a better 4-D signal than independent recovery of each dataset; *(iii)* randomized sampling without strict repetition of the surveys is feasible for time-lapse seismic.

For details, see Oghenekohwo et al. (2014).

## References

Baron, D., Duarte, M. F., Sarvotham, S., Wakin, M. B., and Baraniuk, R. G., 2005, An information-theoretic approach to distributed compressed sensing: Proc. 43rd Allerton conference on communication, control, and computing.

Beasley, C. J., 2008, A new look at marine simultaneous sources: The Leading Edge, **27**, 914–917. doi:10.1190/1.2954033

Berkhout, A. J., 2008, Changing the mindset in seismic data acquisition: The Leading Edge, **27**, 924–938. doi:10.1190/1.2954035

Eggenberger, K., Christie, P., Manen, D.-J. van, and Vassallo, M., 2014, Multisensor streamer recording and its implications for time-lapse seismic and repeatability: The Leading Edge, **33**, 150–162.

Johnston, D. H., 2013, Practical applications of time-lapse seismic data: Society of Exploration Geophysicists.

Landrø, M., 1999, Repeatability issues of 3-D VSP data: Geophysics, **64**, 1673–1679.

Lumley, D., and Behrens, R., 1998, Practical issues of 4D seismic reservoir monitoring: What an engineer needs to know: SPE Reservoir Evaluation & Engineering, **1**, 528–538.

Mosher, C., Li, C., Morley, L., Ji, Y., Janiszewski, F., Olson, R., and Brewer, J., 2014, Increasing the efficiency of seismic data acquisition via compressive sensing: The Leading Edge, **33**, 386–391. doi:10.1190/tle33040386.1

Oghenekohwo, F., Wason, H., Esser, E., and Herrmann, F. J., 2014, Foregoing repetition in time-lapse seismic — reaping benefits of randomized sampling and joint recovery: UBC. Retrieved from https://slim.gatech.edu/Publications/Private/Submitted/2014/oghenekohwo2014GEOPfrt/oghenekohwo2014GEOPfrt.html

Van Den Berg, E., and Friedlander, M. P., 2008, Probing the Pareto frontier for basis pursuit solutions: SIAM Journal on Scientific Computing, **31**, 890–912.

Wason, H., and Herrmann, F. J., 2013, Time-jittered ocean bottom seismic acquisition: SEG technical program expanded abstracts. doi:10.1190/segam2013-1391.1