
Recovery by sparsity-promoting inversion

Consider the following linear forward model for the recovery problem

$\displaystyle \mathbf{y} = \mathbf{R}\,\mathbf{f}_0,$ (1)

where $\mathbf{y}\in\mathbb{R}^n$ represents the acquired data, $\mathbf{f}_0\in\mathbb{R}^N$ with $N \gg n$ the unaliased signal to be recovered, i.e., the model, and $\mathbf{R}\in\mathbb{R}^{n\times N}$ the restriction operator that collects the acquired samples from the model. Assuming that $\mathbf{f}_0$ has a sparse representation $\mathbf{x}_0\in\mathbb{C}^N$ in some known transform domain $\mathbf{S}$, equation 1 can be reformulated as

$\displaystyle \mathbf{y} = \mathbf{A}\,\mathbf{x}_0 \quad\text{with}\quad \mathbf{A} \stackrel{\mathrm{def}}{=} \mathbf{R}\mathbf{S}^H,$ (2)

where the symbol $^H$ represents the conjugate transpose. As a result, the sparsity of $\mathbf{x}_0$ can be used to overcome the singular nature of $\mathbf{A}$ when estimating $\mathbf{f}_0$ from $\mathbf{y}$. After sparsity-promoting inversion, the recovered signal is given by $\tilde{\mathbf{f}} = \mathbf{S}^H\tilde{\mathbf{x}}$ with
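To make the setup concrete, the following sketch builds dense versions of these operators in Python. It assumes, purely for illustration, an orthonormal inverse DCT as the synthesis operator $\mathbf{S}^H$, a random row-restriction for $\mathbf{R}$, and real-valued coefficients; the sizes $N$, $n$, and the sparsity level $k$ are arbitrary choices, not values from the text.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)
N, n, k = 256, 64, 5                       # model size, measurements, sparsity (illustrative)

# Synthesis operator S^H as a dense matrix: here the orthonormal inverse DCT,
# standing in for whatever sparsifying transform is actually used.
SH = idct(np.eye(N), axis=0, norm="ortho")

# Restriction operator R: n rows of the N x N identity, picked at random.
R = np.eye(N)[rng.choice(N, size=n, replace=False)]

A = R @ SH                                 # A = R S^H, equation 2

x0 = np.zeros(N)                           # k-sparse coefficient vector
x0[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
f0 = SH @ x0                               # unaliased signal, f0 = S^H x0
y = R @ f0                                 # acquired data, equation 1
assert np.allclose(y, A @ x0)              # the two formulations agree
```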

$\displaystyle \tilde{\mathbf{x}} = \arg\min_{\mathbf{x}} \Vert\mathbf{x}\Vert_1 \quad\text{s.t.}\quad \mathbf{y}=\mathbf{A}\mathbf{x}.$ (3)

In these expressions, the symbol $\tilde{\ }$ represents estimated quantities and the $\ell_1$ norm is defined as $\Vert\mathbf{x}\Vert_1 \stackrel{\mathrm{def}}{=} \sum_{i=1}^{N}\left\vert\mathbf{x}[i]\right\vert$, where $\mathbf{x}[i]$ is the $i^{\mathrm{th}}$ entry of the vector $\mathbf{x}$.

Minimizing the $\ell_1$ norm in equation 3 promotes sparsity in $\mathbf{x}$, and the equality constraint ensures that the solution honors the acquired data. Among all possible solutions of the (severely) underdetermined system of linear equations ($n\ll N$) in equation 2, the optimization problem in equation 3 finds a sparse or, under certain conditions, the sparsest (Donoho and Huo, 2001) possible solution that explains the data.
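For real-valued coefficients, equation 3 can be recast as a linear program by introducing auxiliary variables $t_i$ with $-t_i \le x_i \le t_i$ and minimizing $\sum_i t_i$. Continuing the sketch above, the code below solves this LP with scipy.optimize.linprog; the helper name basis_pursuit is hypothetical, and a dense LP is only practical at small scale (dedicated solvers such as SPGL1 are typically used instead).

```python
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Hypothetical helper: solve min ||x||_1 s.t. y = A x as a linear program."""
    n, N = A.shape
    # Variables z = [x, t]; minimize sum(t) subject to |x_i| <= t_i and A x = y.
    c = np.concatenate([np.zeros(N), np.ones(N)])
    I = np.eye(N)
    A_ub = np.block([[I, -I], [-I, -I]])            #  x - t <= 0  and  -x - t <= 0
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([A, np.zeros((n, N))])         #  A x = y
    bounds = [(None, None)] * N + [(0, None)] * N   #  x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=bounds, method="highs")
    return res.x[:N]

x_tilde = basis_pursuit(A, y)                       # equation 3
f_tilde = SH @ x_tilde                              # recovered signal, f = S^H x
print(np.max(np.abs(x_tilde - x0)))                 # near zero when recovery succeeds
```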

