Next: The separation algorithm Up: Theory Previous: The forward model


Bayesian signal separation

Given the above formulation of the forward model (cf. Equation 3), we now derive conditional probabilities for the unknown curvelet coefficient vectors, which we initially assume to be realizations of two independent random processes with weighted Laplacian-type probability density functions. Such a choice serves as a sparsity-promoting prior (see e.g. Li et al. (2004); Zibulevsky and Pearlmutter (2001), and Taylor et al. (1979); Oldenburg et al. (1981); Ulrych and Walker (1982) in the geophysical literature), and is consistent with the high compression rates that curvelets attain on seismic data (Herrmann et al., 2007a; Candes et al., 2006; Hennenfent and Herrmann, 2006). Given the SRME predictions $ \mathbf{b}_1$ and $ \mathbf{b}_2$ , our goal is to estimate the curvelet coefficients of the two signal components, $ \mathbf{x}_1$ and $ \mathbf{x}_2$ . Probabilistically, this means that our objective is to find the vectors $ \mathbf{x}_1$ and $ \mathbf{x}_2$ that maximize the conditional probability $ P(\mathbf{x}_1,\mathbf{x}_2\vert \mathbf{b}_1, \mathbf{b}_2)$ . In other words, using Bayes' rule we need to maximize

$\displaystyle P(\mathbf{x}_1,\mathbf{x}_2 \vert \mathbf{b}_1, \mathbf{b}_2) = \frac{P(\mathbf{b}_1,\mathbf{b}_2 \vert \mathbf{x}_1,\mathbf{x}_2)P(\mathbf{x}_1,\mathbf{x}_2)}{P(\mathbf{b}_1,\mathbf{b}_2)} = \frac{P(\mathbf{x}_1,\mathbf{x}_2)P(\mathbf{n})P(\mathbf{n}_2)}{P(\mathbf{b}_1,\mathbf{b}_2)},$ (4)

where the second equality holds because, given $ \mathbf{x}_1$ and $ \mathbf{x}_2$ , the residuals of the forward model equal the independent noise realizations $ \mathbf{n}$ and $ \mathbf{n}_2$ .

Since both $ \mathbf{b}_1$ and $ \mathbf{b}_2$ are known, we try to find $ {\widetilde{\mathbf{x}}}_1$ and $ {\widetilde{\mathbf{x}}}_2$ , the curvelet coefficients for the primaries and multiples, that maximize the posterior probability in Equation 4 under the assumptions: (i) $ \mathbf{n}$ and $ \mathbf{n}_2$ are independent white Gaussian noise vectors with possibly different variances as described above, and (ii) $ \mathbf{x}_1$ and $ \mathbf{x}_2$ have weighted Laplacian prior distributions. More precisely, $ {\widetilde{\mathbf{x}}}_1$ and $ {\widetilde{\mathbf{x}}}_2$ solve the optimization problem
$\displaystyle \max_{\mathbf{x}_1,\mathbf{x}_2}P(\mathbf{x}_1,\mathbf{x}_2 \vert \mathbf{b}_1, \mathbf{b}_2)$
    $\displaystyle = \max_{\mathbf{x}_1,\mathbf{x}_2}\ P(\mathbf{x}_1,\mathbf{x}_2)P(\mathbf{n})P(\mathbf{n}_2)$
    $\displaystyle = \max_{\mathbf{x}_1,\mathbf{x}_2}\ \exp\left(-\alpha_1\Vert\mathbf{x}_1\Vert_{1,\mathbf{w}_1}\right)\exp\left(-\alpha_2\Vert\mathbf{x}_2\Vert_{1,\mathbf{w}_2}\right)\exp\left(-\frac{\Vert\tensor{A}\mathbf{x}_2-\mathbf{b}_2\Vert _2^2}{\sigma_2^2}\right)\exp\left(-\frac{\Vert\tensor{A}(\mathbf{x}_1+\mathbf{x}_2)-(\mathbf{b}_1+\mathbf{b}_2)\Vert _2^2}{\sigma^2}\right)$
    $\displaystyle = \max_{\mathbf{x}_1,\mathbf{x}_2}\ -\left(\alpha_1\Vert\mathbf{x}_1\Vert_{1,\mathbf{w}_1}+\alpha_2\Vert\mathbf{x}_2\Vert_{1,\mathbf{w}_2}+\frac{\Vert\tensor{A}\mathbf{x}_2-\mathbf{b}_2\Vert _2^2}{\sigma_2^2}+\frac{\Vert\tensor{A}(\mathbf{x}_1+\mathbf{x}_2)-(\mathbf{b}_1+\mathbf{b}_2)\Vert _2^2}{\sigma^2}\right),$ (5)
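The bracketed quantity in the last line of Equation 5 is the negative log-posterior (up to additive constants). As a minimal numerical sketch, assuming a toy synthesis matrix `A` standing in for the curvelet synthesis operator of the paper, and with all parameter values purely illustrative:

```python
import numpy as np

def neg_log_posterior(x1, x2, b1, b2, A, w1, w2,
                      alpha1=1.0, alpha2=1.0, sigma=1.0, sigma2=1.0):
    """Negative log-posterior of Equation 5, up to additive constants.

    A is a stand-in synthesis matrix (curvelet synthesis in the paper);
    w1, w2 are the weight vectors of the weighted ell-1 priors.
    """
    # Weighted ell-1 sparsity priors on the two coefficient vectors.
    sparsity = (alpha1 * np.sum(np.abs(w1 * x1))
                + alpha2 * np.sum(np.abs(w2 * x2)))
    # Gaussian misfit of the multiple prediction, variance sigma2^2.
    mult_misfit = np.sum((A @ x2 - b2) ** 2) / sigma2 ** 2
    # Gaussian misfit of the total data b1 + b2, variance sigma^2.
    data_misfit = np.sum((A @ (x1 + x2) - (b1 + b2)) ** 2) / sigma ** 2
    return sparsity + mult_misfit + data_misfit
```

Maximizing the posterior in Equation 5 is equivalent to minimizing this quantity over $ \mathbf{x}_1$ and $ \mathbf{x}_2$ .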

yielding estimates $ {\widetilde{\mathbf{s}}}_1=\tensor{A}{\widetilde{\mathbf{x}}}_1$ for the primaries, and $ {\widetilde{\mathbf{s}}}_2=\tensor{A}{\widetilde{\mathbf{x}}}_2$ for the multiples. With appropriate rescaling of the parameters, Equation 5 reduces to $ \displaystyle\max_{\mathbf{x}_1,\mathbf{x}_2}P(\mathbf{x}_1,\mathbf{x}_2 \vert \mathbf{b}_1, \mathbf{b}_2) = \min_{\mathbf{x}_1,\mathbf{x}_2} f(\mathbf{x}_1,\mathbf{x}_2)$ with, writing $ \mathbf{b}=\mathbf{b}_1+\mathbf{b}_2$ for the total data,

$\displaystyle f(\mathbf{x}_1,\mathbf{x}_2) ={\lambda_1\Vert\mathbf{x}_1\Vert}_{1,\mathbf{w}_1} +{\lambda_2\Vert\mathbf{x}_2\Vert}_{1,\mathbf{w}_2} +{\Vert\tensor{A}\mathbf{x}_2-\mathbf{b}_2\Vert _2^2} +\eta{\Vert\tensor{A}(\mathbf{x}_1+\mathbf{x}_2)-\mathbf{b}\Vert _2^2}.$ (6)

Here $ \Vert\mathbf{x}_i\Vert _{1,\mathbf{w}_i}=\sum_{\mu\in {\mathcal{M}}}\vert w_{i,\mu}x_{i,\mu}\vert$ is the weighted $ \ell^1$ -norm of the curvelet coefficients $ \mathbf{x}_i$ , with $ {\mathcal{M}}$ the index set of the curvelet coefficients. Heuristically, the parameters $ \lambda_1$ and $ \lambda_2$ control the tradeoff between the sparsity of the coefficient vectors and the misfit with respect to the SRME predictions (the total data and the multiple predictions), while $ \eta $ controls the tradeoff between the confidence in the total data and the confidence in the SRME-predicted multiples.
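Equation 6 is a weighted $ \ell^1$ -regularized least-squares problem, so it can be minimized with a standard proximal-gradient (iterative soft-thresholding) scheme. The sketch below is not the paper's separation algorithm; it is a generic minimizer under simplifying assumptions, with `A` an explicit NumPy matrix standing in for curvelet synthesis and the step size and iteration count chosen for illustration:

```python
import numpy as np

def soft(x, t):
    """Elementwise soft thresholding: the prox operator of t*|x|."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def separate_ista(b1, b2, A, w1, w2, lam1, lam2, eta,
                  n_iter=500, step=None):
    """Minimize Equation 6 by iterative soft thresholding (a sketch).

    A is a toy stand-in for the curvelet synthesis operator;
    lam1, lam2, eta and the step size are illustrative choices.
    """
    b = b1 + b2                          # total data
    x1 = np.zeros(A.shape[1])
    x2 = np.zeros(A.shape[1])
    if step is None:
        # 1 / (Lipschitz bound of the smooth part's gradient)
        L = 2.0 * (1.0 + 2.0 * eta) * np.linalg.norm(A, 2) ** 2
        step = 1.0 / L
    for _ in range(n_iter):
        r = A @ (x1 + x2) - b            # total-data residual
        r2 = A @ x2 - b2                 # multiple-prediction residual
        g1 = 2.0 * eta * (A.T @ r)       # gradient w.r.t. x1
        g2 = 2.0 * (A.T @ r2) + 2.0 * eta * (A.T @ r)  # w.r.t. x2
        # Gradient step on the quadratic terms, prox step on the
        # weighted ell-1 terms.
        x1 = soft(x1 - step * g1, step * lam1 * w1)
        x2 = soft(x2 - step * g2, step * lam2 * w2)
    return x1, x2
```

The primary and multiple estimates then follow by synthesis, $ {\widetilde{\mathbf{s}}}_1=\tensor{A}{\widetilde{\mathbf{x}}}_1$ and $ {\widetilde{\mathbf{s}}}_2=\tensor{A}{\widetilde{\mathbf{x}}}_2$ , as above.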



2008-03-13