
## Exact Sparse Signal Recovery via Orthogonal Matching Pursuit with Prior Information

Editor: wfy  Date: November 14, 2018

Title: Exact Sparse Signal Recovery via Orthogonal Matching Pursuit with Prior Information
Abstract: Exact recovery of $K$-sparse signals $\mathbf{x}\in \mathbb{R}^{n}$ from linear measurements $\mathbf{y}=\mathbf{A}\mathbf{x}$, where $\mathbf{A}\in \mathbb{R}^{m\times n}$ is a sensing matrix, arises in many applications. The orthogonal matching pursuit (OMP) algorithm is widely used for reconstructing $\mathbf{x}$ from $\mathbf{y}$ and $\mathbf{A}$ due to its excellent recovery performance and high efficiency. A fundamental question in the performance analysis of OMP is to characterize the probability that it exactly recovers $\mathbf{x}$ for a random matrix $\mathbf{A}$, and the minimal $m$ that guarantees satisfactory recovery performance. In many practical applications, $\mathbf{x}$ has additional properties beyond sparsity (for example, the nonzero entries of $\mathbf{x}$ independently and identically follow a Gaussian distribution, or $\mathbf{x}$ has an exponentially decaying property), yet, as far as we know, no existing analysis uses these properties to answer the above question. In this talk, we first show that prior distributional information about $\mathbf{x}$ can be used to derive an upper bound on $\|\mathbf{x}\|_1^2/\|\mathbf{x}\|_2^2$. We then exploit this upper bound to develop a sharper lower bound on the probability of exact recovery with OMP in $K$ iterations. Furthermore, we develop a lower bound on $m$ that guarantees the exact recovery probability of $K$ iterations of OMP is no lower than a given probability. We further show that, if $K$ is sufficiently small compared with $n$, then as $K$ approaches infinity, $m\approx 2K\ln(n)$, $m\approx K$, and $m\approx 1.6K\ln(n)$ suffice for OMP to achieve satisfactory recovery performance when recovering any $K$-sparse $\mathbf{x}$, $K$-sparse $\mathbf{x}$ with the exponentially decaying property, and $K$-sparse $\mathbf{x}$ whose nonzero entries independently and identically follow a Gaussian distribution, respectively. This significantly improves the result of Tropp et al., which requires $m\approx 4K\ln(n/\delta)$.
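For readers unfamiliar with OMP, the following is a minimal NumPy sketch of the standard algorithm the talk analyzes: at each of $K$ iterations, greedily pick the column of $\mathbf{A}$ most correlated with the residual, then project $\mathbf{y}$ onto the span of the selected columns by least squares. The function name `omp` and the example dimensions are illustrative, not taken from the talk.

```python
import numpy as np

def omp(A, y, K):
    """Recover a K-sparse signal x from measurements y = A @ x
    via K iterations of orthogonal matching pursuit."""
    m, n = A.shape
    residual = y.copy()
    support = []
    for _ in range(K):
        # Greedy step: column of A most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Projection step: least-squares fit of y on the selected columns.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

# Illustrative example: Gaussian sensing matrix, K-sparse Gaussian signal,
# with a measurement count of the order discussed in the talk (m ~ 2K ln n).
rng = np.random.default_rng(0)
n, K = 256, 5
m = int(2 * K * np.log(n))
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
x_hat = omp(A, A @ x, K)
print("recovery error:", np.linalg.norm(x - x_hat))
```

With an identity sensing matrix the projection is exact, which makes the greedy selection trivially correct; random Gaussian matrices of the size above succeed with high probability, which is precisely the regime the talk quantifies.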