Convolutional sparse coding


The convolutional sparse coding paradigm is an extension of the global sparse coding model, in which a redundant dictionary is modeled as a concatenation of circulant matrices. While the global sparsity constraint describes signal <math display="inline">\mathbf{x}\in \mathbb{R}^{N}</math> as a linear combination of a few atoms in the redundant dictionary <math display="inline">\mathbf{D}\in\mathbb{R}^{N\times M}, M\gg N</math>, usually expressed as <math display="inline">\mathbf{x}=\mathbf{D}\mathbf{\Gamma}</math> for a sparse vector <math display="inline">\mathbf{\Gamma}\in \mathbb{R}^{M}</math>, the alternative dictionary structure adopted by the convolutional sparse coding model allows the sparsity prior to be applied locally instead of globally: independent patches of <math display="inline">\mathbf{x}</math> are generated by "local" dictionaries operating over stripes of <math display="inline">\mathbf{\Gamma}</math>.

The local sparsity constraint allows stronger uniqueness and stability conditions than the global sparsity prior, and has been shown to be a versatile tool for inverse problems in fields such as image understanding and computer vision. Also, a recently proposed multi-layer extension of the model has shown conceptual benefits for more complex signal decompositions, as well as a tight connection to the convolutional neural network (CNN) model, allowing a deeper understanding of how the latter operates.

Overview

Given a signal of interest <math display="inline">\mathbf{x}\in \mathbb{R}^{N}</math> and a redundant dictionary <math display="inline">\mathbf{D}\in\mathbb{R}^{N\times M}, M\gg N</math>, the sparse coding problem consists of retrieving a sparse vector <math display="inline">\mathbf{\Gamma}\in \mathbb{R}^{M}</math>, called the sparse representation of <math display="inline">\mathbf{x}</math>, such that <math display="inline">\mathbf{x}= \mathbf{D}\mathbf{\Gamma}</math>. Intuitively, this implies <math display="inline">\mathbf{x}</math> is expressed as a linear combination of a small number of elements in <math display="inline">\mathbf{D}</math>. The global sparsity prior has been shown to be useful in many ill-posed inverse problems such as image inpainting, super-resolution, and coding.[1][2][3] It has been of particular interest for image understanding and computer vision tasks involving natural images, allowing redundant dictionaries to be efficiently inferred.[4][5][6]

As an extension to the global sparsity constraint, recent works in the literature have revisited the model to reach a more profound understanding of its uniqueness and stability conditions.[6] Interestingly, by imposing a local sparsity prior in <math display="inline">\mathbf{\Gamma}</math>, meaning that its independent patches can be interpreted as sparse vectors themselves, the structure in <math display="inline">\mathbf{D}</math> can be understood as a “local" dictionary operating over each independent patch. This model extension is denominated convolutional sparse coding (CSC); it drastically reduces the burden of estimating signal representations and is characterized by stronger uniqueness and stability conditions. Furthermore, it allows <math display="inline">\mathbf{\Gamma}</math> to be efficiently estimated via pursuit algorithms such as orthogonal matching pursuit (OMP) and basis pursuit (BP), while operating in a local fashion.[5]

Besides its versatility in inverse problems, recent efforts have focused on the multi-layer version of the model and provided evidence of its reliability for recovering multiple underlying representations.[7] Moreover, a tight connection between such a model and the well-established convolutional neural network (CNN) model was revealed, providing a new tool for a more rigorous understanding of its theoretical conditions.

The convolutional sparse coding model provides an efficient set of tools to solve a wide range of inverse problems, including image denoising, image inpainting, and image super-resolution. By imposing local sparsity constraints, it allows the global coding problem to be tackled efficiently by iteratively estimating disjoint patches and assembling them into a global signal. Furthermore, by adopting a multi-layer sparse model, which results from imposing the sparsity constraint on the signal's inherent representations themselves, the resulting "layered" pursuit algorithm keeps the strong uniqueness and stability conditions of the single-layer model. This extension also provides insight into the relation between its sparsity prior and the forward pass of the convolutional neural network, which helps explain how the theoretical guarantees of the CSC model give a rigorous mathematical meaning to the CNN structure.

Sparse coding paradigm

Basic concepts and models are presented to explain the convolutional sparse representation framework in detail. Since the sparsity constraint has been proposed under different models, a short description of each is given to show its evolution up to the model of interest. Also included are the concepts of mutual coherence and the restricted isometry property, used to establish uniqueness and stability guarantees.

Global sparse coding model

Let signal <math display="inline">\mathbf{x}\in \mathbb{R}^N</math> be expressed as a linear combination of a small number of atoms from a given dictionary <math display="inline">\mathbf{D}\in \mathbb{R}^{N \times M}, M>N</math>. Alternatively, the signal can be expressed as <math display="inline">\mathbf{x}=\mathbf{D}\mathbf{\Gamma}</math>, where <math display="inline">\mathbf{\Gamma}\in \mathbb{R}^M</math> corresponds to the sparse representation of <math display="inline">\mathbf{x}</math>, which selects the atoms to combine and their weights. Subsequently, given <math display="inline">\mathbf{D}</math>, the task of recovering <math display="inline">\mathbf{\Gamma}</math> from either the noise-free signal itself or an observation is denominated sparse coding. Considering the noise-free scenario, the coding problem is formulated as follows: <math display="block">\begin{aligned}

     \hat{\mathbf{\Gamma}}_{\text{ideal}}&= \underset{\mathbf{\Gamma}}{\text{argmin}}\; \| \mathbf{\Gamma}\|_{0}\; \text{s.t.}\; \mathbf{D}\mathbf{\Gamma}=\mathbf{x}.\end{aligned}</math> The effect of the <math display="inline">\ell_{0}</math> norm is to favor solutions with as many zero elements as possible. Furthermore, given an observation affected by bounded-energy noise: <math display="inline">\mathbf{Y}= \mathbf{D}\mathbf{\Gamma}+ \mathbf{E},\|\mathbf{E}\|_{2}<\varepsilon</math>, the pursuit problem is reformulated as: <math display="block">\begin{aligned}
     \hat{\mathbf{\Gamma}}_{\text{noise}}&= \underset{\mathbf{\Gamma}}{\text{argmin}}\; \| \mathbf{\Gamma}\|_{0}\; \text{ s.t. } \|\mathbf{D}\mathbf{\Gamma}-\mathbf{Y}\|_{2}<\varepsilon.\end{aligned}</math>

Stability and uniqueness guarantees for the global sparse model

Let the spark of <math display="inline">\mathbf{D}</math> be defined as the smallest number of columns that are linearly dependent: <math display="block">\begin{aligned}

     \sigma(\mathbf{D})=\underset{\mathbf{\Gamma}}{\text{min}} \quad \|\mathbf{\Gamma}\|_{0} \quad \text{s.t.}\quad \mathbf{D \Gamma}=0, \quad \mathbf{\Gamma}\neq 0.\end{aligned}</math>

Then, from the triangle inequality, a solution <math display="inline">\mathbf{\Gamma}</math> satisfying <math display="inline">\|\mathbf{\Gamma}\|_{0}<\frac{\sigma(\mathbf{D})}{2}</math> is necessarily the sparsest one. Although the spark provides such a bound, it is infeasible to compute in practical scenarios. Instead, let the mutual coherence be a measure of similarity between atoms in <math display="inline">\mathbf{D}</math>. Assuming <math display="inline">\ell_{2}</math>-norm unit atoms, the mutual coherence of <math display="inline">\mathbf{D}</math> is defined as: <math display="inline">\mu(\mathbf{D})= \max_{i\neq j} |\mathbf{d}_i^{T}\mathbf{d}_j|</math>, where <math display="inline">\mathbf{d}_{i}</math> are atoms. Based on this metric, it can be proven that the true sparse representation <math display="inline">\mathbf{\Gamma}^{*}</math> is guaranteed to be recovered if <math display="inline">\|\mathbf{\Gamma}^{*}\|_0 < \frac{1}{2}\big(1+\frac{1}{\mu(\mathbf{D})} \big)</math>.
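The mutual coherence is straightforward to evaluate numerically. The following minimal sketch (Python with NumPy; the random dictionary and its dimensions are illustrative assumptions, not part of the model) computes <math display="inline">\mu(\mathbf{D})</math> and the corresponding sparsity bound <math display="inline">\frac{1}{2}\big(1+\frac{1}{\mu(\mathbf{D})}\big)</math>.

<syntaxhighlight lang="python">
import numpy as np

def mutual_coherence(D):
    """Mutual coherence of a dictionary with l2-normalized columns (atoms)."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)  # normalize atoms
    G = np.abs(Dn.T @ Dn)        # absolute inner products between all atom pairs
    np.fill_diagonal(G, 0.0)     # exclude the trivial i == j case
    return G.max()

# Illustrative random dictionary (N = 64, M = 256).
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
mu = mutual_coherence(D)
# Sparsity level guaranteeing recovery: ||Gamma||_0 < (1 + 1/mu) / 2.
print(mu, 0.5 * (1 + 1 / mu))
</syntaxhighlight>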

Similarly, in the presence of noise, an upper bound for the distance between the true sparse representation <math display="inline">\mathbf{\Gamma^{*}}</math> and its estimate <math display="inline">\hat{\mathbf{\Gamma}}</math> can be established via the restricted isometry property (RIP). A matrix <math display="inline">\mathbf{D}</math> satisfies the k-RIP with constant <math display="inline">\delta_{k}</math> if: <math display="inline">(1-\delta_k)\|\mathbf{\Gamma}\|_2^2 \leq \|\mathbf{D\Gamma}\|_2^2 \leq (1+\delta_k)\|\mathbf{\Gamma}\|_2^2</math>, where <math display="inline">\delta_k</math> is the smallest number that satisfies the inequality for every <math display="inline">\mathbf{\Gamma}</math> with <math display="inline">\|\mathbf{\Gamma}\|_{0}=k</math>. Then, assuming <math display="inline">\|\mathbf{\Gamma}\|_0<\frac{1}{2}\big(1+\frac{1}{\mu(\mathbf{D})} \big)</math>, it is guaranteed that <math display="inline">\|\mathbf{\hat{\Gamma}-\Gamma^{*}}\|_{2}^{2}\leq \frac{4\varepsilon^2}{1-\mu(\mathbf{D})(2\|\mathbf{\Gamma}\|_0-1)}</math>.

Solving such a general pursuit problem is a hard task if no structure is imposed on dictionary <math display="inline">\mathbf{D}</math>. This implies learning large, highly overcomplete representations, which is extremely expensive. Assuming such a burden has been met and a representative dictionary has been obtained for a given signal <math display="inline">\mathbf{x}</math>, typically based on prior information, <math display="inline">\mathbf{\Gamma}^{*}</math> can be estimated via several pursuit algorithms.

Pursuit algorithms for the global sparse model

Two basic methods for solving the global sparse coding problem are orthogonal matching pursuit (OMP) and basis pursuit (BP). OMP is a greedy algorithm that iteratively selects the atom best correlated with the residual between <math display="inline">\mathbf{x}</math> and its current estimate, followed by a projection onto the subset of pre-selected atoms. On the other hand, basis pursuit is a more sophisticated approach that replaces the original coding problem by a linear programming problem. Based on these algorithms, the global sparse coding model provides considerably loose bounds for the uniqueness and stability of <math display="inline">\hat{\mathbf{\Gamma}}</math>. To overcome this, additional priors are imposed over <math display="inline">\mathbf{D}</math> to guarantee tighter bounds and uniqueness conditions. The reader is referred to [5] (Section 2) for details regarding these properties.
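As an illustration, the following hedged sketch of OMP (Python with NumPy; the dictionary, sparsity level, and test signal are invented for the example) follows the greedy scheme described above: correlate, select, project, and repeat.

<syntaxhighlight lang="python">
import numpy as np

def omp(D, x, k):
    """Greedy OMP sketch: select at most k atoms of D to explain x.

    Assumes D has l2-normalized columns.
    """
    M = D.shape[1]
    residual = x.copy()
    support = []
    gamma = np.zeros(M)
    for _ in range(k):
        # Atom most correlated with the current residual.
        i = int(np.argmax(np.abs(D.T @ residual)))
        if i not in support:
            support.append(i)
        # Orthogonal projection of x onto the span of the selected atoms.
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    gamma[support] = coeffs
    return gamma

# Usage: recover a 3-sparse vector from a noiseless observation x = D @ gamma_true.
rng = np.random.default_rng(1)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
gamma_true = np.zeros(256)
gamma_true[[3, 70, 200]] = [1.0, -2.0, 0.5]
x = D @ gamma_true
gamma_hat = omp(D, x, k=3)
</syntaxhighlight>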

Convolutional sparse coding model

A local prior is adopted such that each overlapping section of <math display="inline">\mathbf{\Gamma}</math> is sparse. Let <math display="inline">\mathbf{D}\in \mathbb{R}^{N \times Nm}</math> be constructed from shifted versions of a local dictionary <math display="inline">\mathbf{D_{L}}\in\mathbb{R}^{n \times m}, n\ll N</math>. Then, <math display="inline">\mathbf{x}</math> is formed by products between <math display="inline">\mathbf{D_{L}}</math> and local patches of <math display="inline">\mathbf{\Gamma}\in\mathbb{R}^{mN}</math>.

File:Structure of the Convolutional Sparse Coding Paradigm.svg
The global dictionary is expressed in terms of a stride convolutional matrix, so signals can be generated from stripes of the sparse representation multiplied by a shift-invariant local dictionary.

From the latter, <math display="inline">\mathbf{\Gamma}</math> can be re-expressed in terms of <math display="inline">N</math> disjoint sparse vectors <math display="inline">\alpha_{i}\in \mathbb{R}^{m}</math>: <math display="inline">\mathbf{\Gamma}= [\alpha_{1}^{T},\alpha_{2}^{T},\dots, \alpha_{N}^{T}]^{T}</math>. Similarly, let <math display="inline">\gamma</math> be a set of <math display="inline">(2n-1)</math> consecutive vectors <math display="inline">\alpha_{i}</math>. Then, each disjoint segment in <math display="inline">\mathbf{x}</math> is expressed as: <math display="inline">\mathbf{x}_{i}=\mathbf{R}_{i}\mathbf{D}\mathbf{\Gamma}</math>, where operator <math display="inline">\mathbf{R}_{i}\in \mathbb{R}^{n\times N}</math> extracts overlapping patches of size <math display="inline">n</math> starting at index <math display="inline">i</math>. Thus, <math display="inline">\mathbf{R}_{i}\mathbf{D}</math> contains only <math display="inline">(2n-1)m</math> nonzero columns. Hence, by introducing the operator <math display="inline">\mathbf{S}_{i}\in \mathbb{R}^{(2n-1)m \times Nm}</math> which exclusively preserves them: <math display="block">\begin{aligned}

     \mathbf{x}_{i}&= \underset{\Omega}{\underbrace{\mathbf{R}_{i}\mathbf{D}\mathbf{S}_{i}^{T}}}\underset{\gamma_{i}}{\underbrace{(\mathbf{S}_{i}\mathbf{\Gamma})}},\end{aligned}</math> where <math display="inline">\Omega</math> is known as the stripe dictionary, which is independent of <math display="inline">i</math>, and <math display="inline">\gamma_{i}</math> is denominated the i-th stripe. So, <math display="inline">\mathbf{x}</math> admits a patch-aggregation or convolutional interpretation: <math display="block">\begin{aligned}
     \mathbf{x}&= \sum_{i=1}^{N}\mathbf{R}_{i}^{T}\mathbf{D}_{L}\alpha_{i}= \sum_{i=1}^{m}\mathbf{d}_{i}\ast \mathbf{z_{i}},\end{aligned}</math> where <math display="inline">\mathbf{d}_{i}</math> corresponds to the i-th atom from the local dictionary <math display="inline">\mathbf{D}_{L}</math> and <math display="inline">\mathbf{z_{i}}</math> is constructed from elements of the patches <math display="inline">\alpha</math>: <math display="inline">\mathbf{z_{i}}\triangleq (\alpha_{1,i}, \alpha_{2,i},\dots, \alpha_{N,i})^{T}</math>. Given the new dictionary structure, let the <math display="inline">\ell_{0,\infty}</math> pseudo-norm be defined as: <math display="inline">\|\mathbf{\Gamma}\|_{0,\infty}\triangleq \underset{i}{\text{ max}}\; \|\gamma_{i}\|_{0}</math>. Then, for the noise-free and noise-corrupted scenarios, the problem can be respectively reformulated as: <math display="block">\begin{aligned}
     \hat{\mathbf{\Gamma}}_{\text{ideal}}&= \underset{\mathbf{\Gamma}}{\text{argmin}}\; \| \mathbf{\Gamma}\|_{0,\infty}\; \text{s.t.}\; \mathbf{D}\mathbf{\Gamma}=\mathbf{x},\\
     \hat{\mathbf{\Gamma}}_{\text{noise}}&= \underset{\mathbf{\Gamma}}{\text{argmin}}\; \| \mathbf{\Gamma}\|_{0,\infty}\; \text{s.t.}\; \|\mathbf{Y}-\mathbf{D}\mathbf{\Gamma}\|_{2}<\varepsilon.\end{aligned}</math>
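The convolutional interpretation <math display="inline">\mathbf{x}= \sum_{i}\mathbf{d}_{i}\ast \mathbf{z}_{i}</math> and the <math display="inline">\ell_{0,\infty}</math> pseudo-norm can be illustrated with a short sketch (Python with NumPy; periodic boundary conditions and the exact stripe indexing are simplifying assumptions made for the example).

<syntaxhighlight lang="python">
import numpy as np

def csc_synthesize(filters, maps):
    """Synthesize x = sum_i d_i * z_i with circular (periodic) convolutions.

    filters: (m, n) array of m local atoms of length n.
    maps:    (m, N) array of m sparse coefficient maps of length N.
    """
    _, N = maps.shape
    x = np.zeros(N)
    for d, z in zip(filters, maps):
        # Circular convolution via the FFT (filter zero-padded to length N).
        x += np.real(np.fft.ifft(np.fft.fft(z) * np.fft.fft(d, N)))
    return x

def l0_inf(maps, n):
    """l_{0,inf} pseudo-norm: maximum number of nonzeros over all stripes.

    A stripe gathers the coefficients of all m maps over 2n-1 consecutive
    positions (periodically extended), matching gamma_i above up to a shift.
    """
    m, N = maps.shape
    padded = np.concatenate([maps, maps[:, :2 * n - 2]], axis=1)  # wrap around
    return max(np.count_nonzero(padded[:, i:i + 2 * n - 1]) for i in range(N))
</syntaxhighlight>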

Stability and uniqueness guarantees for the convolutional sparse model

For the local approach, the mutual coherence of <math display="inline">\mathbf{D}</math> satisfies: <math display="inline">\mu(\mathbf{D})\geq \big(\frac{m-1}{m(2n-1)-1}\big)^{1/2}.</math> So, if a solution obeys <math display="inline">\|\mathbf{\Gamma}\|_{0,\infty}< \frac{1}{2}\big(1+\frac{1}{\mu(\mathbf{D})}\big)</math>, then it is the sparsest solution to the <math display="inline">\ell_{0,\infty}</math> problem. Thus, under the local formulation, the same number of non-zeros is permitted in each stripe rather than in the entire vector.

Similar to the global model, the CSC problem is solved via OMP and BP methods, the latter contemplating the use of the iterative shrinkage-thresholding algorithm (ISTA)[8] for splitting the pursuit into smaller problems. Based on the <math display="inline">\ell_{0,\infty}</math> pseudo-norm, if a solution <math display="inline">\mathbf{\Gamma}</math> exists satisfying <math display="inline">\|\mathbf{\Gamma}\|_{0,\infty}<\frac{1}{2}\big(1+\frac{1}{\mu(\mathbf{D})} \big)</math>, then both methods are guaranteed to recover it. Moreover, the local model guarantees recovery independently of the signal dimension, as opposed to the <math display="inline">\ell_{0}</math> prior. Stability conditions for OMP and BP are also guaranteed if the exact recovery condition (ERC) is met for a support <math display="inline">\mathcal{T}</math> with a constant <math display="inline">\theta</math>. The ERC is defined as: <math display="inline">\theta= 1-\underset{i\notin \mathcal{T}}{\text{max}} \|\mathbf{D}_{\mathcal{T}}^{\dagger}\mathbf{d}_{i}\|_{1}>0</math>, where <math display="inline">\dagger</math> denotes the pseudo-inverse. Algorithm 1 shows the global pursuit method based on ISTA.

Algorithm 1: 1D CSC via local iterative soft-thresholding.

Input:

<math display="inline">\mathbf{D}_{L}</math>: Local Dictionary,

<math display="inline">\mathbf{y}</math>: observation,

<math display="inline">\lambda</math>: Regularization parameter,

<math display="inline">c</math>: step size for ISTA,

tol: tolerance factor,

maxiters: maximum number of iterations.

<math display="inline">\{\boldsymbol{\alpha}_{i}\}^{(0)}\gets \{\mathbf{0}_{m\times 1}\}</math> (Initialize disjoint patch codes.)
<math display="inline">\{\mathbf{r}_{i}\}^{(0)}\gets \{\mathbf{R}_{i}\mathbf{y}\}</math> (Initialize residual patches.)
<math display="inline">k\gets 0</math>

Repeat

<math display="inline">\{\boldsymbol{\alpha}_i\}^{(k)}\gets \mathcal{S}_{\frac{\lambda}{c}}\big( \{\boldsymbol{\alpha}_i\}^{(k-1)}+\frac{1}{c}\{\mathbf{D}_{L}^{T}\mathbf{r}_i\}^{(k-1)} \big)</math> (Coding along disjoint patches)
<math display="inline">\hat{\mathbf{x}}^{(k)}\gets \sum_{i}\mathbf{R}_{i}^{T}\mathbf{D}_{L}\boldsymbol{\alpha}_{i}^{(k)}</math> (Patch aggregation)
<math display="inline">\{\mathbf{r}_{i}\}^{(k)}\gets \mathbf{R}_{i}\big( \mathbf{y}-\hat{\mathbf{x}}^{(k)} \big)</math> (Update residuals)
<math display="inline">k \gets k+ 1</math>

Until <math display="inline">\|\hat{\mathbf{x}}^{(k)}- \hat{\mathbf{x}}^{(k-1)}\|_{2}<</math> tol or <math display="inline">k></math> maxiters.
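A minimal sketch of Algorithm 1 (Python with NumPy) is given below; it assumes the observation length is a multiple of the patch size so that the operators <math display="inline">\mathbf{R}_{i}</math> extract disjoint patches, and all dimensions are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def soft(z, t):
    """Soft-thresholding operator S_t(z)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def local_ista_csc(D_L, y, lam, c, tol=1e-6, maxiters=200):
    """Sketch of Algorithm 1: 1D CSC via local iterative soft-thresholding.

    D_L: (n, m) local dictionary; y: observation of length N, assumed to be a
    multiple of n so that the operators R_i extract N // n disjoint patches.
    """
    n, m = D_L.shape
    N = y.size
    P = N // n                                # number of disjoint patches
    alphas = np.zeros((P, m))                 # local sparse codes alpha_i
    residuals = y.reshape(P, n).copy()        # r_i = R_i y
    x_prev = np.zeros(N)
    for _ in range(maxiters):
        # Coding step along disjoint patches (local ISTA update).
        alphas = soft(alphas + (residuals @ D_L) / c, lam / c)
        # Patch aggregation: x_hat = sum_i R_i^T D_L alpha_i.
        x_hat = (alphas @ D_L.T).reshape(N)
        # Residual update: r_i = R_i (y - x_hat).
        residuals = (y - x_hat).reshape(P, n)
        if np.linalg.norm(x_hat - x_prev) < tol:
            break
        x_prev = x_hat
    return alphas, x_hat
</syntaxhighlight>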

Multi-layered convolutional sparse coding model

By imposing the sparsity prior in the inherent structure of <math display="inline">\mathbf{x}</math>, strong conditions for a unique representation and feasible methods for estimating it are granted. Similarly, such a constraint can be applied to its representation itself, generating a cascade of sparse representations: Each code is defined by a few atoms of a given set of convolutional dictionaries.

Based on these criteria, yet another extension, denominated multi-layer convolutional sparse coding (ML-CSC), is proposed. A set of analytical dictionaries <math display="inline">\{\mathbf{D}_{k}\}_{k=1}^{K}</math> can be efficiently designed, where sparse representations at each layer <math display="inline">\{\mathbf{\Gamma}_{k}\}_{k=1}^{K}</math> are guaranteed by imposing the sparsity prior over the dictionaries themselves.[7] In other words, by considering the dictionaries to be stride convolutional matrices, i.e., atoms of the local dictionaries shift by <math display="inline">m</math> elements instead of a single one, where <math display="inline">m</math> corresponds to the number of channels in the previous layer, it is guaranteed that the <math display="inline">\ell_{0,\infty}</math> norm of the representations along the layers is bounded.

For example, given the dictionaries <math display="inline">\mathbf{D}_{1} \in \mathbb{R}^{N\times Nm_{1}}, \mathbf{D}_{2} \in \mathbb{R}^{Nm_{1}\times Nm_{2}}</math>, the signal is modeled as <math display="inline">\mathbf{x}=\mathbf{D}_{1}\mathbf{\Gamma}_{1}= \mathbf{D}_{1}(\mathbf{D}_{2}\mathbf{\Gamma}_{2})</math>, where <math display="inline">\mathbf{\Gamma}_{1}</math> is its sparse code, and <math display="inline">\mathbf{\Gamma}_{2}</math> is the sparse code of <math display="inline">\mathbf{\Gamma}_{1}</math>. Then, the estimation of each representation is formulated as an optimization problem for the noise-free and noise-corrupted scenarios, respectively. Assuming <math display="inline">\mathbf{\Gamma}_{0}=\mathbf{x}</math>: <math display="block">\begin{aligned}

     \text{Find}\; \{\mathbf{\Gamma}_{i}\}_{i=1}^{K}\;\text{s.t.}&\; \mathbf{\Gamma}_{i-1}=\mathbf{D}_{i}\mathbf{\Gamma}_{i},\; \|\mathbf{\Gamma}_{i}\|_{0,\infty}\leq \lambda_{i}\\
     \text{Find}\; \{\mathbf{\Gamma}_{i}\}_{i=1}^{K}\; \text{s.t.} &\;\|\mathbf{\Gamma}_{i-1}-\mathbf{D}_{i}\mathbf{\Gamma}_{i}\|_{2}\leq \varepsilon_{i},\; \|\mathbf{\Gamma}_{i}\|_{0,\infty}\leq \lambda_{i}\end{aligned}</math>
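A two-layer toy instance of this generative model can be written as follows (Python with NumPy; the dense random matrices stand in for the convolutional dictionaries and are purely illustrative, and no attempt is made here to enforce the sparsity of the intermediate code, which the ML-CSC model additionally requires).

<syntaxhighlight lang="python">
import numpy as np

# Illustrative two-layer instance of the ML-CSC generative model. Dense random
# matrices stand in for the convolutional dictionaries D_1 and D_2, and the
# sparsity of the intermediate code Gamma_1 (required by the model) is not
# enforced here.
rng = np.random.default_rng(2)
N, m1, m2 = 64, 128, 256
D1 = rng.standard_normal((N, m1))      # plays the role of D_1
D2 = rng.standard_normal((m1, m2))     # plays the role of D_2

Gamma2 = np.zeros(m2)                  # deepest representation, chosen sparse
Gamma2[rng.choice(m2, size=4, replace=False)] = rng.standard_normal(4)

Gamma1 = D2 @ Gamma2                   # Gamma_1: the code of x (sparse in the model)
x = D1 @ Gamma1                        # x = D_1 Gamma_1 = D_1 (D_2 Gamma_2)
</syntaxhighlight>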

In what follows, theoretical guarantees for the uniqueness and stability of this extended model are described.

Theorem 1: (Uniqueness of sparse representations) Suppose a signal <math display="inline">\mathbf{x}</math> satisfies the ML-CSC model for a set of convolutional dictionaries <math display="inline">\{\mathbf{D}_{i}\}_{i=1}^{K}</math> with mutual coherence <math display="inline">\{\mu(\mathbf{D}_{i})\}_{i=1}^{K}</math>. If the true sparse representations satisfy <math display="inline">\|\mathbf{\Gamma}_{i}\|_{0,\infty}<\frac{1}{2}\big(1+\frac{1}{\mu(\mathbf{D}_{i})}\big)</math>, then they are the unique solution <math display="inline">\{\hat{\mathbf{\Gamma}}_{i}\}_{i=1}^{K}</math> to the problem, provided the thresholds are chosen to satisfy: <math display="inline">\lambda_{i}<\frac{1}{2}\big(1+\frac{1}{\mu(\mathbf{D}_{i})} \big)</math>.

Theorem 2: (Global stability in the noise-corrupted scenario) Suppose a signal <math display="inline">\mathbf{x}</math> satisfying the ML-CSC model for a set of convolutional dictionaries <math display="inline">\{\mathbf{D}_{i}\}_{i=1}^{K}</math> is contaminated with noise <math display="inline">\mathbf{E}</math>, where <math display="inline">\|\mathbf{E}\|_{2}\leq \varepsilon_{0}</math>, resulting in <math display="inline">\mathbf{Y=X+E}</math>. If <math display="inline">\lambda_{i}<\frac{1}{2}\big(1+\frac{1}{\mu(\mathbf{D}_{i})}\big)</math> and <math display="inline">\varepsilon_{i}^{2}=\frac{4\varepsilon_{i-1}^{2}}{1-(2\|\mathbf{\Gamma}_{i}\|_{0,\infty}-1)\mu(\mathbf{D}_{i})}</math>, then the estimated representations <math display="inline">\{\hat{\mathbf{\Gamma}}_{i}\}_{i=1}^{K}</math> satisfy: <math display="inline">\|\mathbf{\Gamma}_{i}-\hat{\mathbf{\Gamma}}_{i}\|_{2}^{2}\leq \varepsilon_{i}^{2}</math>.

Projection-based algorithms

A simple approach for solving the ML-CSC problem, via either the <math display="inline">\ell_{0}</math> or the <math display="inline">\ell_{1}</math> norm, is to compute inner products between <math display="inline">\mathbf{x}</math> and the dictionary atoms to identify the most representative ones. Such a projection is described as: <math display="block">\begin{aligned} \hat{\mathbf{\Gamma}}_{\ell_p}&= \underset{\mathbf{\Gamma}}{\operatorname{argmin}} \frac{1}{2}\|\mathbf{\Gamma}-\mathbf{D}^{T}\mathbf{x}\|_2^2 +\beta\|\mathbf{\Gamma}\|_p & p\in\{0,1\},\end{aligned}</math>

which has closed-form solutions via the hard-thresholding operator <math display="inline">\mathcal{H}_{\beta}(\mathbf{D}^{T}\mathbf{x})</math> and the soft-thresholding operator <math display="inline">\mathcal{S}_{\beta}(\mathbf{D}^{T}\mathbf{x})</math>, respectively. If a nonnegativity constraint is also contemplated, the problem can be expressed via the <math display="inline">\ell_{1}</math> norm as: <math display="block">\begin{aligned} \hat{\mathbf{\Gamma}}&= \underset{\mathbf{\Gamma}}{\text{argmin}}\; \frac{1}{2}\|\mathbf{\Gamma}-\mathbf{D}^T\mathbf{x}\|_2^2+\beta\|\mathbf{\Gamma}\|_1,\; \text{ s.t. } \mathbf{\Gamma}\geq 0,\end{aligned}</math> whose closed-form solution corresponds to the nonnegative soft-thresholding operator <math display="inline">\mathcal{S}_{\beta}^{+}(\mathbf{D}^{T}\mathbf{x})</math>, where <math display="inline">\mathcal{S}_{\beta}^{+}(z)\triangleq \max(z-\beta,0)</math>. These operators are illustrated in the sketch below; guarantees for the layered soft-thresholding approach then follow.
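A minimal sketch of the three operators (Python with NumPy; the dictionary <math display="inline">\mathbf{D}</math>, signal <math display="inline">\mathbf{x}</math> and threshold <math display="inline">\beta</math> below are illustrative placeholders, not part of the model):

<syntaxhighlight lang="python">
import numpy as np

def hard_threshold(z, beta):
    """H_beta(z): keep only the entries whose magnitude exceeds beta (l0 case)."""
    return np.where(np.abs(z) > beta, z, 0.0)

def soft_threshold(z, beta):
    """S_beta(z): shrink every entry toward zero by beta (l1 case)."""
    return np.sign(z) * np.maximum(np.abs(z) - beta, 0.0)

def soft_threshold_nonneg(z, beta):
    """S^+_beta(z) = max(z - beta, 0): nonnegative soft thresholding."""
    return np.maximum(z - beta, 0.0)

# One-layer projection step, Gamma_hat = threshold(D^T x), with dummy data.
rng = np.random.default_rng(4)
D = rng.standard_normal((64, 128))
x = rng.standard_normal(64)
gamma_hat = soft_threshold_nonneg(D.T @ x, beta=0.5)
</syntaxhighlight>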

Theorem 3: (Stable recovery of the multi-layered soft-thresholding algorithm) Suppose a signal <math display="inline">\mathbf{x}</math> satisfying the ML-CSC model for a set of convolutional dictionaries <math display="inline">\{\mathbf{D}_i\}_{i=1}^K</math> with mutual coherence <math display="inline">\{\mu(\mathbf{D}_i)\}_{i=1}^K</math> is contaminated with noise <math display="inline">\mathbf{E}</math>, where <math display="inline">\|\mathbf{E}\|_2\leq \varepsilon_0</math>, resulting in <math display="inline">\mathbf{Y=X+E}</math>. Denote by <math display="inline">|\mathbf{\Gamma}_i^{\min}|</math> and <math display="inline">|\mathbf{\Gamma}_i^{\max}|</math> the lowest and highest entries in absolute value of <math display="inline">\mathbf{\Gamma}_i</math>. Let <math display="inline">\{\hat{\mathbf{\Gamma}}_i\}_{i=1}^K</math> be the estimated sparse representations obtained with thresholds <math display="inline">\{\beta_i\}_{i=1}^K</math>. If <math display="inline">\|\mathbf{\Gamma}_i\|_{0,\infty}<\frac{1}{2}\big(1+\frac{1}{\mu(\mathbf{D}_{i})}\frac{|\mathbf{\Gamma}_i^{\min}|}{|\mathbf{\Gamma}_i^{\max}|}\big)-\frac{1}{\mu(\mathbf{D}_{i})} \frac{\varepsilon_{i-1}}{|\mathbf{\Gamma}_i^{\max}|}</math> and <math display="inline">\beta_i</math> is chosen consistently with this bound, then <math display="inline">\hat{\mathbf{\Gamma}}_{i}</math> has the same support as <math display="inline">\mathbf{\Gamma}_{i}</math>, and <math display="inline">\|\mathbf{\Gamma}_{i}-\hat{\mathbf{\Gamma}}_i\|_{2,\infty}\leq \varepsilon_i</math>, for <math display="inline">\varepsilon_i=\sqrt{\|\mathbf{\Gamma}_i\|_{0,\infty}}\,\big(\varepsilon_{i-1}+\mu(\mathbf{D}_i)(\|\mathbf{\Gamma}_i\|_{0,\infty}-1)|\mathbf{\Gamma}_i^{\max}|+\beta_{i}\big)</math>.

Connections to convolutional neural networks

Recall the forward pass of the convolutional neural network model, used in both the training and inference steps. Let <math display="inline">\mathbf{x}\in \mathbb{R}^{Mm_{1}}</math> be its input and <math display="inline">\mathbf{W}_{k}\in\mathbb{R}^{N\times m_{1}}</math> the filters at layer <math display="inline">k</math>, which are followed by the rectified linear unit (ReLU), <math display="inline">\text{ReLU}(x)= \max(0, x)</math>, and a bias vector <math display="inline">\mathbf{b}\in \mathbb{R}^{Mm_{1}}</math>. Based on this elementary block, taking <math display="inline">K=2</math> as an example, the CNN output can be expressed as: <math display="block">\begin{aligned}

     \mathbf{Z}_{2}&= \text{ReLU}\big(\mathbf{W}_{2}^{T}\, \text{ReLU}(\mathbf{W}_{1}^{T}\mathbf{x}+\mathbf{b}_{1})+\mathbf{b}_{2}\big).\end{aligned}</math> Finally, comparing the CNN forward pass and the layered thresholding approach under the nonnegativity constraint, it is straightforward to show that both are equivalent when <math display="inline">\mathbf{W}_{i}=\mathbf{D}_{i}</math> and <math display="inline">\mathbf{b}_{i}=-\beta_{i}</math>: <math display="block">\begin{aligned}
     \hat{\mathbf{\Gamma}}&= \mathcal{S}^{+}_{\beta_{2}}\big(\mathbf{D}_{2}^{T}\mathcal{S}^{+}_{\beta_{1}}(\mathbf{D}_{1}^{T}\mathbf{x}) \big)\\
     &= \text{ReLU}\big(\mathbf{W}_{2}^{T} \text{ReLU}(\mathbf{W}_{1}^{T}\mathbf{x}-\beta_{1})-\beta_{2}\big).\end{aligned}</math>
File:CNN Convolutional Layers.svg
Convolutional layers from the forward pass algorithm
File:ReLU and Nonnegative Soft Thresholding Functions.svg
Contrast between the rectified linear unit and the nonnegative soft-thresholding pointwise nonlinearities
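The equivalence can be checked numerically: since <math display="inline">\mathcal{S}_{\beta}^{+}(z)=\text{ReLU}(z-\beta)</math>, the layered nonnegative soft-thresholding estimate coincides with a forward pass whose filters equal the dictionaries and whose biases equal <math display="inline">-\beta_{i}</math>. The sketch below (Python with NumPy; random dimensions chosen only for illustration) verifies this for two layers.

<syntaxhighlight lang="python">
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def soft_nonneg(z, beta):
    return np.maximum(z - beta, 0.0)   # S^+_beta(z) = ReLU(z - beta)

# Two-layer check that layered nonnegative soft thresholding matches a CNN
# forward pass with W_i = D_i and biases b_i = -beta_i (random dimensions
# are for illustration only).
rng = np.random.default_rng(3)
x = rng.standard_normal(64)
D1 = rng.standard_normal((64, 128))
D2 = rng.standard_normal((128, 256))
beta1, beta2 = 0.5, 0.3

gamma_hat = soft_nonneg(D2.T @ soft_nonneg(D1.T @ x, beta1), beta2)
z2 = relu(D2.T @ relu(D1.T @ x - beta1) - beta2)   # forward pass with b_i = -beta_i
assert np.allclose(gamma_hat, z2)
</syntaxhighlight>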

As explained in what follows, this naive approach to solving the coding problem is a particular case of a more stable projected gradient descent algorithm for the ML-CSC model. Equipped with the stability conditions of both approaches, a clearer understanding can be gained of the class of signals a CNN can recover, the noise conditions under which an estimation can be accurately attained, and how the network structure can be modified to improve its theoretical guarantees. The reader is referred to [7] (Section 5) for details regarding their connection.

Pursuit algorithms for the multi-layer CSC model

A crucial limitation of the forward pass is that it is not guaranteed to recover the unique solution of the deep coding problem (DCP), whose existence has been demonstrated. So, instead of using a thresholding approach at each layer, a full pursuit method is adopted, denominated layered basis pursuit (LBP). Considering the projection onto the <math display="inline">\ell_{1}</math> ball, the following problem is proposed: <math display="block">\begin{aligned}

     \hat{\mathbf{\Gamma}}_i & =\underset{\mathbf{\Gamma}_{i}}{\text{argmin}}\; \frac{1}{2}\|\mathbf{D}_{i}\mathbf{\Gamma}_{i}-\hat{\mathbf{\Gamma}}_{i-1}\|_{2}^{2}+\; \xi_{i}\|\mathbf{\Gamma}_{i}\|_{1},\end{aligned}</math> where each layer is solved as an independent CSC problem, and <math display="inline">\xi_{i}</math> is proportional to the noise level at each layer. Among the methods for solving the layered coding problem, ISTA is an efficient decoupling alternative; a sketch is given below, followed by a short summary of the guarantees for the LBP.
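The following hedged sketch (Python with NumPy; the dense matrices, step sizes, and iteration counts are illustrative assumptions) runs one plain ISTA solve per layer, feeding the estimate of layer <math display="inline">i-1</math> as the signal to be coded at layer <math display="inline">i</math>, with <math display="inline">\hat{\mathbf{\Gamma}}_{0}=\mathbf{y}</math>.

<syntaxhighlight lang="python">
import numpy as np

def ista_l1(D, b, xi, iters=500):
    """ISTA for min_G 0.5*||D G - b||_2^2 + xi*||G||_1 (one layer of the LBP)."""
    L = np.linalg.norm(D, 2) ** 2               # Lipschitz constant of the gradient
    G = np.zeros(D.shape[1])
    for _ in range(iters):
        G = G - (D.T @ (D @ G - b)) / L                          # gradient step
        G = np.sign(G) * np.maximum(np.abs(G) - xi / L, 0.0)     # soft threshold
    return G

def layered_basis_pursuit(dictionaries, y, xis, iters=500):
    """LBP sketch: one l1 pursuit per layer, the estimate of layer i-1 being
    the signal coded by layer i (Gamma_hat_0 = y)."""
    gamma_hat = y
    estimates = []
    for D_i, xi_i in zip(dictionaries, xis):
        gamma_hat = ista_l1(D_i, gamma_hat, xi_i, iters)
        estimates.append(gamma_hat)
    return estimates
</syntaxhighlight>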

Theorem 4: (Recovery guarantee) Consider a signal <math display="inline">\mathbf{x}</math> characterized by a set of sparse vectors <math display="inline">\{\mathbf{\Gamma}_{i}\}_{i=1}^{K}</math>, convolutional dictionaries <math display="inline">\{\mathbf{D}_{i}\}_{i=1}^{K}</math> and their corresponding mutual coherences <math display="inline">\{\mu\big(\mathbf{D}_{i}\big)\}_{i=1}^{K}</math>. If <math display="inline">\|\mathbf{\Gamma}_{i}\|_{0,\infty}<\frac{1}{2}\big(1+\frac{1}{\mu(\mathbf{D}_{i})}\big)</math>, then the LBP algorithm is guaranteed to recover the sparse representations.

Theorem 5: (Stability in the presence of noise) Consider the contaminated signal <math display="inline">\mathbf{Y}=\mathbf{X+E}</math>, where <math display="inline">\|\mathbf{E}\|_{2,\infty}\leq \varepsilon_{0}</math> and <math display="inline">\mathbf{x}</math> is characterized by a set of sparse vectors <math display="inline">\{\mathbf{\Gamma}_{i}\}_{i=1}^{K}</math> and convolutional dictionaries <math display="inline">\{\mathbf{D}_{i}\}_{i=1}^{K}</math>. Let <math display="inline">\{\hat{\mathbf{\Gamma}}_{i}\}_{i=1}^{K}</math> be the solutions obtained via the LBP algorithm with parameters <math display="inline">\{\xi_{i}\}_{i=1}^{K}</math>. If <math display="inline">\|\mathbf{\Gamma}_{i}\|_{0,\infty}<\frac{1}{3}\big(1+\frac{1}{\mu(\mathbf{D}_{i})}\big)</math> and <math display="inline">\xi_{i}=4\varepsilon_{i-1}</math>, then: (i) the support of the solution <math display="inline">\hat{\mathbf{\Gamma}}_i</math> is contained in that of <math display="inline">\mathbf{\Gamma}_{i}</math>, (ii) <math display="inline">\|\mathbf{\Gamma}_{i}-\hat{\mathbf{\Gamma}}_i\|_{2,\infty}\leq \varepsilon_{i}</math>, and (iii) any entry greater in absolute value than <math display="inline">\frac{\varepsilon_{i}}{\sqrt{\|\mathbf{\Gamma}_{i}\|_{0,\infty}}}</math> is guaranteed to be recovered.

Applications of the convolutional sparse coding model: image inpainting

As a practical example, an efficient image inpainting method for color images via the CSC model is shown.[6] Consider a three-channel dictionary <math display="inline">\mathbf{D} \in \mathbb{R}^{N \times M \times 3}</math>, where <math display="inline">\mathbf{d}_{c,m}</math> denotes the <math display="inline">m</math>-th atom at channel <math display="inline">c</math>, which represents signal <math display="inline">\mathbf{x}</math> by a single cross-channel sparse representation <math display="inline">\mathbf{\Gamma}</math> with coefficient maps denoted as <math display="inline">\mathbf{z}_{i}</math>. Given an observation <math display="inline">\mathbf{y}=\{\mathbf{y}_{r}, \mathbf{y}_{g}, \mathbf{y}_{b}\}</math>, where randomly chosen channels at unknown pixel locations are set to zero, in a similar way to impulse noise, the problem is formulated as: <math display="block">\begin{aligned}

     \{\mathbf{\hat{z}}_{i}\}&=\underset{\{\mathbf{z}_{i}\}}{\text{argmin}}\frac{1}{2}\sum_{c}\bigg\|\sum_{i}\mathbf{d}_{c,i}\ast \mathbf{z}_{i} -\mathbf{y}_{c}\bigg\|_{2}^{2}+\lambda \sum_{i}\|\mathbf{z}_{i}\|_{1}.\end{aligned}</math> By means of the alternating direction method of multipliers (ADMM),[9] the cost function is decoupled into simpler sub-problems, allowing an efficient estimation of <math display="inline">\mathbf{\Gamma}</math>. Algorithm 2 describes the procedure, where <math display="inline">\hat{\mathbf{D}}_{c,m}</math> is the DFT representation of <math display="inline">\mathbf{D}_{c,m}</math>, the convolutional matrix for the term <math display="inline">\mathbf{d}_{c,m}\ast \mathbf{z}_{m}</math>. Likewise, <math display="inline">\hat{\mathbf{x}}_{m}</math> and <math display="inline">\hat{\mathbf{z}}_{m}</math> correspond to the DFT representations of <math display="inline">\mathbf{x}_{m}</math> and <math display="inline">\mathbf{z}_{m}</math>, respectively, <math display="inline">\mathcal{S}_{\beta}(\cdot)</math> corresponds to the soft-thresholding operator with threshold <math display="inline">\beta</math>, and the <math display="inline">\ell_{1,2}</math> norm is defined as the <math display="inline">\ell_{2}</math> norm along the channel dimension <math display="inline">c</math> followed by the <math display="inline">\ell_{1}</math> norm along the spatial dimension <math display="inline">m</math>. The reader is referred to [6] (Section II) for details on the ADMM implementation and the dictionary learning procedure.
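The reconstruction term <math display="inline">\sum_{m}\hat{\mathbf{D}}_{c,m}\hat{\mathbf{z}}_{m}</math> used inside the ADMM iterations amounts to pointwise products in the DFT domain. The following short sketch (Python with NumPy; periodic boundary conditions and the function name are assumptions made for illustration, not part of the referenced implementation) reconstructs one color channel from its filters and coefficient maps.

<syntaxhighlight lang="python">
import numpy as np

def reconstruct_channel(filters_c, maps, shape):
    """Reconstruct one color channel as sum_m d_{c,m} * z_m via the 2-D DFT.

    filters_c: list of small 2-D filters for channel c (e.g. 8x8 atoms).
    maps:      list of 2-D coefficient maps z_m of size `shape`, one per filter.
    Periodic boundary conditions are assumed, as in DFT-domain ADMM solvers.
    """
    acc = np.zeros(shape, dtype=complex)
    for d, z in zip(filters_c, maps):
        # hat{D}_{c,m} hat{z}_m: pointwise product of the zero-padded filter's
        # DFT with the DFT of the coefficient map.
        acc += np.fft.fft2(d, shape) * np.fft.fft2(z)
    return np.real(np.fft.ifft2(acc))
</syntaxhighlight>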

Algorithm 2: Color image inpainting via the convolutional sparse coding model.

Input:

<math display="inline">\hat{\mathbf{D}}_{c,m}</math>: DFT of convolutional matrices <math display="inline">\mathbf{D}_{c,m}</math>,

<math display="inline">\mathbf{y}=\{\mathbf{y}_{r},\mathbf{y}_{g},\mathbf{y}_{b}\}</math>: Color observation,

<math display="inline">\lambda</math>: Regularization parameter,

<math display="inline">\{\mu, \rho\}</math>: step sizes for ADMM,

tol: tolerance factor,

maxiters: maximum number of iterations.

<math display="inline">k\gets 0</math> (Initialize iteration counter; the auxiliary variables <math display="inline">\{\mathbf{y}_{m}\}^{(0)}</math> and <math display="inline">\{\mathbf{u}_{m}\}^{(0)}</math> are initialized to zero.)

Repeat

<math display="inline">\{\hat{\mathbf{z}}_{m}\}^{(k+1)}\gets\underset{\{\hat{\mathbf{z}}_{m}\}}{\text{argmin}}\;\frac{1}{2}\sum_{c}\big\|\sum_{m}\hat{\mathbf{D}}_{c,m} \hat{\mathbf{z}}_{m}-\hat{\mathbf{y}}_{c} \big\|_{2}^{2}+\frac{\rho}{2}\sum_{m}\|\hat{\mathbf{z}}_{m}- (\hat{\mathbf{y}}_{m}+\hat{\mathbf{u}}_{m}^{(k)})\|_{2}^{2}.</math> (Coefficient update in the DFT domain)
<math display="inline">\{\mathbf{y}_{c,m}\}^{(k+1)}\gets \underset{\{\mathbf{y}_{c,m}\}}{\text{argmin}}\;\lambda \sum_{c}\sum_{m}\|\mathbf{y}_{c,m}\|_{1}+\mu\|\{\mathbf{x}_{c,m}^{(k+1)}\}\|_{2,1}+\frac{\rho}{2}\sum_{m}\|\mathbf{z}_{m}^{(k+1)}- (\mathbf{y}_{m}+\mathbf{u}_{m}^{(k)})\|_{2}^{2}.</math> (Auxiliary variable update)
<math display="inline">\mathbf{y}_{m}^{(k+1)}=\mathcal{S}_{\lambda/\rho}\big( \mathbf{z}_{m}^{(k+1)}+\mathbf{u}_{m}^{(k)} \big).</math> (Soft-thresholding)
<math display="inline">\mathbf{u}_{m}^{(k+1)}\gets \mathbf{u}_{m}^{(k)}+\mathbf{z}_{m}^{(k+1)}-\mathbf{y}_{m}^{(k+1)}.</math> (Dual variable update)
<math display="inline">k \gets k+1</math>

Until <math display="inline">\|\{\mathbf{z}_{m}\}^{(k+1)}-\{\mathbf{z}_{m}\}^{(k)}\|_{2}< </math>tol or <math display="inline">k></math> maxiters.

References

Template:Reflist
