Introduction

While the argument has been made that there is no real theoretical requirement for regular spatial sampling of seismic data (Bednar, 1996), most commonly used multi-trace processing algorithms, e.g., Surface-Related Multiple Elimination (SRME; Verschuur et al., 1992) and wave-equation migration (WEM; Claerbout, 1971), require dense, regular coverage of the survey area. Field datasets, however, are typically irregularly and/or coarsely sampled along one or more spatial coordinates and must be interpolated before processing.

For data regularly undersampled along one or more spatial coordinates, i.e., data spatially sampled below the Nyquist rate, there exists a wide variety of wavefield reconstruction techniques. Filter-based methods interpolate by convolution with a filter designed such that the error is white noise; the most common of these filters are prediction-error filters (PEFs), which can handle aliased events (Spitz, 1991). Wavefield-operator-based methods represent another type of interpolation approach that explicitly includes wave propagation (Stolt, 2002; Canning and Gardner, 1996; Biondi et al., 1998). Finally, transform-based methods also provide efficient algorithms for seismic data regularization (Sacchi et al., 1998; Herrmann and Hennenfent, 2007; Zwartjes and Sacchi, 2007; Trad et al., 2003). For irregularly sampled data, however, e.g., binned data with some empty bins, or data randomly undersampled along a continuous spatial coordinate, the performance of most of the aforementioned interpolation methods deteriorates.

The objective of this paper is to demonstrate that irregular/random undersampling is not a drawback for particular transform-based interpolation methods and for many other advanced processing algorithms, as already observed by other authors (Trad and Ulrych, 1999; Xu et al., 2005; Zhou and Schuster, 1995; Abma and Kabir, 2006; Zwartjes and Sacchi, 2007; Sun et al., 1997). We explain why random undersampling is an advantage and how it can be used to our benefit when designing coarse sampling schemes. To keep the discussion as clear and concise as possible, we focus on regular sampling with randomly missing data points, i.e., discrete random (under)sampling, although our conclusions extend to continuous random undersampling. Unless otherwise specified, the term random is used in the remainder of the text in the discrete sense.
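To make the notion of discrete random (under)sampling concrete, the following minimal Python sketch builds a binary mask over a nominal regular grid in which a random subset of trace positions is retained and the remaining positions are left empty. It is an illustration only, not part of the paper; the grid size, keep fraction, and variable names are assumptions chosen for the example.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    n_traces = 128        # positions on the nominal regular grid (illustrative value)
    keep_fraction = 0.5   # fraction of grid positions actually acquired (illustrative value)

    # Discrete random undersampling: keep a random subset of the regular-grid
    # positions; all other positions become empty (missing) traces.
    kept = rng.choice(n_traces, size=int(keep_fraction * n_traces), replace=False)
    mask = np.zeros(n_traces, dtype=bool)
    mask[kept] = True

    # 'mask' marks which regular-grid positions hold data; the False entries are
    # the randomly missing traces an interpolation method must recover.
    print(mask.astype(int))

Continuous random undersampling would instead place the acquired traces at arbitrary, off-grid spatial locations rather than on a subset of a regular grid.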

