Dense sampling of seismic data is traditionally understood as
evenly distributed measurements of the reflected wavefield in time
and space. Moreover, the sampling rate along each axis must be at
least twice the highest frequency/wavenumber of the continuous
signal being discretized (the Shannon/Nyquist sampling theorem).
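In symbols, writing $\Delta t$ and $\Delta x$ for the temporal and
spatial sampling intervals and $f_{\max}$ and $k_{\max}$ for the
highest temporal frequency and wavenumber present in the wavefield
(notation introduced here for illustration), this condition reads
\[
\Delta t \le \frac{1}{2 f_{\max}}
\qquad\text{and}\qquad
\Delta x \le \frac{1}{2 k_{\max}}.
\]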
In practice, however, seismic data is often randomly and/or sparsely
sampled along its spatial coordinates, which is generally considered a
challenge since it violates one or both of the conditions for dense
sampling stated above. It turns out that such acquisition geometries
are not necessarily an obstacle to the accurate reconstruction of
densely sampled data, provided the reconstruction relies on nonlinear,
sparsity-promoting optimization. This new insight, developed in the
information theory community, is referred to in the literature as
``compressed sensing'' or ``compressive sampling'' (see, e.g., Donoho,
2006; Candes et al., 2005, and references therein).
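As a rough, self-contained illustration of this principle (a toy sketch,
not the specific method of any of the cited works), the following Python
snippet recovers a signal that is sparse in the Fourier domain from one
quarter of its samples, chosen at random, by solving a sparsity-promoting
$\ell_1$ problem with iterative soft thresholding (ISTA); the signal
length, sparsity level, regularization weight, step size, and iteration
count are all assumptions chosen for the example.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 512

# Build a (complex-valued) signal that is sparse in the Fourier
# domain: only 8 active coefficients.
coeffs = np.zeros(n, dtype=complex)
support = rng.choice(n, size=8, replace=False)
coeffs[support] = rng.standard_normal(8) + 1j * rng.standard_normal(8)
dense = np.fft.ifft(coeffs, norm="ortho")   # densely sampled signal

# Random subsampling: keep one quarter of the samples, so the
# Nyquist criterion along this axis is deliberately broken.
m = n // 4
idx = np.sort(rng.choice(n, size=m, replace=False))
b = dense[idx]

# Measurement operator A = R F^{-1} (unitary inverse FFT followed by
# restriction to the acquired locations) and its adjoint A^H.
def A(x):
    return np.fft.ifft(x, norm="ortho")[idx]

def AH(y):
    z = np.zeros(n, dtype=complex)
    z[idx] = y
    return np.fft.fft(z, norm="ortho")

def soft(x, t):
    # Complex soft thresholding: shrink magnitudes toward zero by t.
    mag = np.abs(x)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * x, 0)

# ISTA for min_x ||A x - b||_2^2 + lam * ||x||_1, a basic
# sparsity-promoting solver; step <= 1/L with L = 2 ||A^H A|| = 2.
x = np.zeros(n, dtype=complex)
lam, step = 1e-3, 0.5
for _ in range(1000):
    x = soft(x - step * 2.0 * AH(A(x) - b), step * lam)

recovered = np.fft.ifft(x, norm="ortho")
print("relative error:",
      np.linalg.norm(recovered - dense) / np.linalg.norm(dense))
\end{verbatim}
The point of the toy example is that the randomness of the subsampling,
combined with the sparsity-promoting $\ell_1$ penalty, is what permits
accurate recovery well below the Nyquist rate.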