Information projection

In information theory, the information projection or I-projection of a probability distribution q onto a set of distributions P is

<math>p^* = \underset{p \in P}{\arg\min} \operatorname{D}_{\mathrm{KL}}(p||q)</math>.

where <math>D_{\mathrm{KL}}</math> is the Kullback–Leibler divergence from q to p. Viewing the Kullback–Leibler divergence as a measure of distance, the I-projection <math>p^*</math> is the "closest" distribution to q of all the distributions in P.
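
As a concrete illustration, the I-projection onto a convex set can be computed numerically. The following is a minimal sketch assuming NumPy and SciPy are available; the four-point alphabet, the choice of q, and the mean constraint defining P are illustrative assumptions, not part of the definition above.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

# Illustrative setup: q lives on the alphabet {0, 1, 2, 3}, and P is the
# convex set of distributions whose mean is at least 2.0.
x = np.arange(4)
q = np.array([0.4, 0.3, 0.2, 0.1])

def kl_to_q(p):
    # D_KL(p || q), using the convention 0 * log 0 = 0.
    mask = p > 1e-12
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

result = minimize(
    kl_to_q,
    x0=np.full(4, 0.25),                                  # start from uniform
    method="SLSQP",
    bounds=[(0.0, 1.0)] * 4,
    constraints=[
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},   # p sums to one
        {"type": "ineq", "fun": lambda p: p @ x - 2.0},   # E_p[X] >= 2.0
    ],
)
p_star = result.x  # numerical I-projection of q onto P
print(np.round(p_star, 4), "mean:", round(float(p_star @ x), 4))
</syntaxhighlight>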

The I-projection is useful in setting up information geometry, notably because of the following inequality, which holds for every <math>p \in P</math> when <math>P</math> is convex:[1]

<math>\operatorname{D}_{\mathrm{KL}}(p||q) \geq \operatorname{D}_{\mathrm{KL}}(p||p^*) + \operatorname{D}_{\mathrm{KL}}(p^*||q)</math>.

This inequality can be interpreted as an information-geometric analogue of the Pythagorean theorem, with the KL divergence playing the role of squared distance in a Euclidean space.
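
For a linear family such as <math>P = \{p : \mathbb{E}_p[X] = a\}</math> (which is convex), the I-projection is an exponential tilting of q, and the Pythagorean relation above holds with equality. The sketch below checks this numerically; it assumes SciPy, and the alphabet, q, and the target mean are illustrative choices.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import brentq

x = np.arange(4)                      # support {0, 1, 2, 3}
q = np.array([0.4, 0.3, 0.2, 0.1])
target = 1.8                          # P = {p : E_p[X] = 1.8}, a convex set

def tilt(lam):
    # Exponential tilting of q; the I-projection onto a linear family has this form.
    w = q * np.exp(lam * x)
    return w / w.sum()

# Choose the tilting parameter so that the mean constraint is met.
lam_star = brentq(lambda lam: tilt(lam) @ x - target, -20.0, 20.0)
p_star = tilt(lam_star)

def kl(a, b):
    return float(np.sum(a * np.log(a / b)))

# A few members of P (each has mean exactly 1.8). The gap
# D(p||q) - D(p||p*) - D(p*||q) should vanish up to numerical error.
for p in [np.array([0.10, 0.20, 0.50, 0.20]),
          np.array([0.15, 0.20, 0.35, 0.30]),
          np.array([0.05, 0.35, 0.35, 0.25])]:
    gap = kl(p, q) - kl(p, p_star) - kl(p_star, q)
    print(f"D(p||q) = {kl(p, q):.6f},  gap = {gap:.2e}")
</syntaxhighlight>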

Note that since <math> \operatorname{D}_{\mathrm{KL}}(p||q) </math> is non-negative and continuous in <math>p</math>, if <math>P</math> is closed and non-empty, then at least one minimizer of the optimization problem above exists. Furthermore, if <math>P</math> is convex, this minimizer is unique.

The reverse I-projection, also known as the moment projection or M-projection, is

<math>p^* = \underset{p \in P}{\arg\min} \operatorname{D}_{\mathrm{KL}}(q||p)</math>.

Since the KL divergence is not symmetric in its arguments, the I-projection and the M-projection exhibit different behavior. For the I-projection, <math> p(x) </math> typically under-estimates the support of <math> q(x) </math> and locks onto one of its modes, because <math>\operatorname{D}_{\mathrm{KL}}(p||q)</math> remains finite only if <math> p(x)=0 </math> whenever <math> q(x)=0 </math>. For the M-projection, <math> p(x) </math> typically over-estimates the support of <math> q(x) </math>, because <math>\operatorname{D}_{\mathrm{KL}}(q||p)</math> remains finite only if <math> p(x) > 0 </math> whenever <math> q(x) > 0 </math>, as the sketch below illustrates.
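
This difference is easy to see when projecting a bimodal distribution onto the family of Gaussians. Below is a small illustrative sketch assuming NumPy and SciPy; the mixture target and the discretization grid are arbitrary choices. The M-projection onto Gaussians matches the mean and variance of q and therefore spreads over both modes, while a numerically computed I-projection locks onto a single mode.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Bimodal target q: an equal mixture of N(-3, 1) and N(3, 1), on a grid.
xs = np.linspace(-10.0, 10.0, 4001)
dx = xs[1] - xs[0]
q = 0.5 * norm.pdf(xs, -3.0, 1.0) + 0.5 * norm.pdf(xs, 3.0, 1.0)

def kl(a, b):
    # Discretized D_KL(a || b); a small floor on b avoids log(0).
    a = a / (a.sum() * dx)
    b = b / (b.sum() * dx)
    mask = a > 1e-12
    return float(np.sum(a[mask] * np.log(a[mask] / np.maximum(b[mask], 1e-300))) * dx)

# M-projection onto Gaussians: minimizing D(q||p) matches q's mean and variance,
# so the fitted Gaussian covers (over-estimates) the support of q.
mean_q = float(np.sum(xs * q) * dx)
var_q = float(np.sum((xs - mean_q) ** 2 * q) * dx)
print(f"M-projection: mean = {mean_q:.2f}, sd = {np.sqrt(var_q):.2f}")

# I-projection onto Gaussians: minimize D(p||q) numerically over (mu, log sigma);
# the optimizer settles on one of the two modes.
def i_objective(theta):
    p = norm.pdf(xs, theta[0], np.exp(theta[1]))
    return kl(p, q)

res = minimize(i_objective, x0=np.array([1.0, 0.0]), method="Nelder-Mead")
mu_i, sd_i = res.x[0], np.exp(res.x[1])
print(f"I-projection: mean = {mu_i:.2f}, sd = {sd_i:.2f}")
</syntaxhighlight>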

The reverse I-projection plays a fundamental role in the construction of optimal e-variables.


The concept of information projection can be extended to arbitrary f-divergences and other divergences.[2]

References

  • Murphy, K. (2012). Machine Learning: A Probabilistic Perspective. MIT Press.

