Brascamp–Lieb inequality

In mathematics, the Brascamp–Lieb inequality is either of two inequalities. The first is a result in geometry concerning integrable functions on n-dimensional Euclidean space <math>\mathbb{R}^{n}</math>. It generalizes the Loomis–Whitney inequality and Hölder's inequality. The second is a result of probability theory which gives a concentration inequality for log-concave probability distributions. Both are named after Herm Jan Brascamp and Elliott H. Lieb.

The geometric inequality

Fix natural numbers m and n. For 1 ≤ i ≤ m, let ni ∈ N and let ci > 0 so that

<math>\sum_{i = 1}^m c_i n_i = n.</math>

Choose non-negative, integrable functions

<math>f_i \in L^1 \left( \mathbb{R}^{n_i} ; [0, + \infty] \right)</math>

and surjective linear maps

<math>B_i : \mathbb{R}^n \to \mathbb{R}^{n_i}.</math>

Then the following inequality holds:

<math>\int_{\mathbb{R}^n} \prod_{i = 1}^m f_i \left( B_i x \right)^{c_i} \, \mathrm{d} x \leq D^{- 1/2} \prod_{i = 1}^m \left( \int_{\mathbb{R}^{n_i}} f_i (y) \, \mathrm{d} y \right)^{c_i},</math>

where D is given by

<math>D = \inf \left\{ \left. \frac{\det \left( \sum_{i = 1}^m c_i B_i^{*} A_i B_i \right)}{\prod_{i = 1}^m ( \det A_i )^{c_i}} \right| A_i \text{ is a positive-definite } n_i \times n_i \text{ matrix} \right\}.</math>

Another way to state this is that the constant D is what one would obtain by restricting attention to the case in which each <math>f_{i}</math> is a centered Gaussian function, namely <math>f_{i}(y) = \exp \{-(y,\, A_{i}\, y)\}</math>.[1]
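The constant can be probed numerically. The sketch below is an illustrative check, not from the source, assuming the Hölder-type datum in which every <math>B_i</math> is the identity on <math>\mathbb{R}^{n}</math> and <math>c_i = 1/m</math>: it samples random positive-definite matrices and confirms that the determinant ratio defining <math>D</math> is at least 1, with equality when all <math>A_i</math> coincide.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    # Random symmetric positive-definite matrix.
    m = rng.standard_normal((n, n))
    return m @ m.T + np.eye(n)

def det_ratio(mats, c):
    # det(sum_i c_i A_i) / prod_i det(A_i)^{c_i}: the quantity whose
    # infimum over positive-definite A_i gives D when every B_i = id.
    num = np.linalg.det(sum(ci * A for ci, A in zip(c, mats)))
    den = np.prod([np.linalg.det(A) ** ci for ci, A in zip(c, mats)])
    return num / den

n, m = 3, 4
c = [1.0 / m] * m                      # c_i = 1/m, so sum_i c_i n_i = n
mats = [random_spd(n) for _ in range(m)]
r_random = det_ratio(mats, c)          # >= 1 by log-concavity of det
r_equal = det_ratio([mats[0]] * m, c)  # = 1 when all A_i coincide
print(r_random, r_equal)
```

The first ratio exceeds 1 for generic inputs; the second equals 1, consistent with the infimum <math>D = 1</math> being attained at identical Gaussians.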

Alternative forms

Consider a probability density function <math>p(x)=\exp(-\phi(x))</math>. This probability density function <math>p(x)</math> is said to be a log-concave measure if the function <math> \phi(x) </math> is convex. Such probability density functions have exponentially decaying tails, so most of the probability mass is concentrated in a small region around the mode of <math> p(x) </math>. The Brascamp–Lieb inequality gives another characterization of the concentration of <math> p(x) </math> by bounding the variance of any statistic <math> S(x)</math>.

Formally, let <math> S(x) </math> be any differentiable function. The Brascamp–Lieb inequality reads:

<math> \operatorname{var}_p (S(x)) \leq E_p (\nabla^T S(x) [H \phi(x)]^{-1} \nabla S(x)) </math>

where <math>H \phi(x)</math> is the Hessian of <math>\phi</math> and <math>\nabla</math> is the gradient operator.[2]
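The bound can be verified by Monte Carlo in a case where everything is explicit. For a Gaussian density <math>p = N(0, \Sigma)</math> one has <math>\phi(x) = \tfrac{1}{2} x^T \Sigma^{-1} x</math> up to a constant, so <math>[H \phi(x)]^{-1} = \Sigma</math>. The statistics in the sketch below are illustrative choices, not from the source.

```python
import numpy as np

rng = np.random.default_rng(1)

# p = N(0, Sigma): phi(x) = x^T Sigma^{-1} x / 2 (+ const), so the
# inverse Hessian [H phi]^{-1} is the constant matrix Sigma and the
# bound reads var(S) <= E[grad S^T Sigma grad S].
sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
x = rng.multivariate_normal(np.zeros(2), sigma, size=200_000)

def bound_sides(S_vals, grad_vals):
    lhs = np.var(S_vals)
    rhs = np.einsum('ni,ij,nj->n', grad_vals, sigma, grad_vals).mean()
    return lhs, rhs

# Linear statistic S(x) = a . x: the bound holds with equality (both = a^T Sigma a).
a = np.array([1.0, -2.0])
lin_lhs, lin_rhs = bound_sides(x @ a, np.broadcast_to(a, x.shape))

# Nonlinear statistic S(x) = x_1^2: strict inequality
# (exactly 2*Sigma_11^2 = 8 versus 4*Sigma_11^2 = 16).
nl_lhs, nl_rhs = bound_sides(x[:, 0]**2,
                             np.stack([2*x[:, 0], np.zeros(len(x))], axis=1))
print(lin_lhs, lin_rhs)  # both ≈ 4
print(nl_lhs, nl_rhs)    # ≈ 8, 16
```

Linear statistics saturate the inequality for Gaussians, reflecting that Gaussians are extremal.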

BCCT inequality

The inequality was generalized in 2008[3] by Bennett, Carbery, Christ, and Tao to account for both continuous and discrete cases and for all linear maps, with precise estimates on the constant.

Definition: the Brascamp–Lieb datum (BL datum)

  • <math>d, n\geq 1</math>.
  • <math>d_1, ..., d_n \in \{1, 2, ..., d\}</math>.
  • <math>p_1, ..., p_n \in [0, \infty)</math>.
  • <math>B_i: \R^d \to \R^{d_i}</math> are linear surjections with zero common kernel: <math>\bigcap_i \ker(B_i) = \{0\}</math>.
  • Call <math>(B, p) = (B_1, ..., B_n, p_1, ..., p_n)</math> a Brascamp–Lieb datum (BL datum).

For any <math>f_i \in L^1(\R^{d_i})</math> with <math>f_i \geq 0</math>, define<math display="block">BL(B, p, f) := \frac{\int_{\R^d} \prod_{j=1}^n\left(f_j \circ B_j\right)^{p_j}}{\prod_{j=1}^n\left(\int_{\R^{d_j}} f_j\right)^{p_j}}</math>


Now define the Brascamp–Lieb constant for the BL datum, where the supremum runs over nonnegative <math>f_j</math> with <math>0 < \int f_j < \infty</math>:<math display="block">BL(B, p) = \sup_{f} BL(B, p, f)</math>

Theorem (BCCT, 2008). The constant <math>BL(B, p)</math> is finite if and only if <math>d = \sum_i p_i d_i</math> and, for every subspace <math>V</math> of <math>\R^d</math>, <math>\dim(V) \leq \sum_i p_i \dim(B_i(V))</math>. Moreover, when it is finite, the supremum is unchanged if one restricts to centered Gaussian inputs <math>f_j</math>; when it is not, there is a sequence of inputs along which <math display="block">\frac{\int_{\R^d} \prod_{j=1}^n\left(f_j \circ B_j\right)^{p_j}}{\prod_{j=1}^n\left(\int_{\R^{d_j}} f_j\right)^{p_j}} \to \infty.</math>

Discrete case

Setup:

  • BL datum defined as <math>(G, G_1, ..., G_n, \phi_1, ..., \phi_n)</math>, where <math>G, G_1, ..., G_n</math> are finitely generated abelian groups and <math>\phi_i : G \to G_i</math> are group homomorphisms.
  • <math>T(G)</math> is the torsion subgroup, that is, the subgroup of finite-order elements.

With this setup, we have (Theorem 2.4,[4] Theorem 3.12 [5])

Theorem. If the exponents <math>p_j</math> lie in the BL polytope of the datum (defined via ranks of subgroups, in analogy with the Euclidean case below), then for all nonnegative <math>f_j</math>,<math display="block">\sum_{x \in G} \prod_{j=1}^n f_j(\phi_j(x))^{p_j} \leq |T(G)| \prod_{j=1}^n \left(\sum_{y \in G_j} f_j(y)\right)^{p_j}.</math>

Note that the constant <math>|T(G)|</math> is not always tight.

BL polytope

Given a BL datum <math>(B, p)</math>, the conditions for <math>BL(B, p) < \infty</math> are

  • <math>d = \sum_i p_i d_i</math>, and
  • for every subspace <math>V</math> of <math>\R^d</math>,<math display="block">\dim(V) \leq \sum_i p_i \dim(B_i(V)) </math>

Thus, the subset of <math>p\in [0, \infty)^n</math> that satisfies the above two conditions is a closed convex polytope defined by linear inequalities. This is the BL polytope.

Note that while there are infinitely many possible choices of subspace <math>V</math> of <math>\R^d</math>, there are only finitely many possible inequalities of the form <math>\dim(V) \leq \sum_i p_i \dim(B_i(V))</math>, since <math>\dim(V)</math> and <math>\dim(B_i(V))</math> take only finitely many integer values; hence the subset is a closed convex polytope.
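As a concrete illustration (a setup chosen for exposition, not from the source), one can test membership in the BL polytope for the Loomis–Whitney datum in <math>\R^3</math>, where each <math>B_i</math> projects onto the coordinate plane orthogonal to <math>e_i</math>. The sketch checks the dimension condition on coordinate subspaces only, a simplifying assumption that suffices for this particular coordinate-projection datum.

```python
from itertools import chain, combinations

# Loomis–Whitney datum in R^3: B_i drops coordinate i, so d = 3 and d_i = 2.
d = 3

def dim_BiV(i, S):
    # dim B_i(span{e_j : j in S}) = |S \ {i}| for a coordinate projection.
    return len(S - {i})

def in_bl_polytope(p):
    # Scaling condition: d = sum_i p_i d_i.
    if abs(d - sum(2 * pi for pi in p)) > 1e-12:
        return False
    # Dimension condition, tested on coordinate subspaces only
    # (a simplification that suffices for this particular datum).
    subsets = chain.from_iterable(combinations(range(d), r) for r in range(1, d + 1))
    return all(len(S) <= sum(pi * dim_BiV(i, S) for i, pi in enumerate(p)) + 1e-12
               for S in map(set, subsets))

print(in_bl_polytope([0.5, 0.5, 0.5]))    # True: classical Loomis–Whitney exponents
print(in_bl_polytope([1.0, 0.25, 0.25]))  # False: fails at V = span{e_1}
```

The second point satisfies the scaling condition but violates the dimension condition at a one-dimensional subspace, so it lies outside the polytope.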

Similarly we can define the BL polytope for the discrete case.

Relationships to other inequalities

The geometric Brascamp–Lieb inequality

The geometric Brascamp–Lieb inequality, first derived in 1976,[6] is a special case of the general inequality. It was used by Keith Ball, in 1989, to provide upper bounds for volumes of central sections of cubes.[7]

For i = 1, ..., m, let ci > 0 and let ui ∈ Sn−1 be a unit vector; suppose that ci and ui satisfy

<math>x = \sum_{i = 1}^m c_i (x \cdot u_i) u_i</math>

for all x in Rn. Let fi ∈ L1(R; [0, +∞]) for each i = 1, ..., m. Then

<math>\int_{\mathbb{R}^n} \prod_{i = 1}^m f_i (x \cdot u_i)^{c_i} \, \mathrm{d} x \leq \prod_{i = 1}^m \left( \int_{\mathbb{R}} f_i (y) \, \mathrm{d} y \right)^{c_i}.</math>

The geometric Brascamp–Lieb inequality follows from the Brascamp–Lieb inequality as stated above by taking ni = 1 and Bi(x) = x · ui. Then, for zi ∈ R,

<math>B_i^{*} (z_i) = z_i u_i.</math>

It follows that D = 1 in this case.
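A small numerical check (an illustrative configuration, not from the source): three unit vectors at 120° in <math>\R^2</math> with <math>c_i = 2/3</math> satisfy the identity-decomposition hypothesis, and with Gaussian inputs the inequality is saturated, both sides equaling <math>\pi</math>.

```python
import numpy as np

# Three unit vectors at 120 degrees with c_i = 2/3 resolve the identity:
# sum_i c_i u_i u_i^T = I, the hypothesis of the geometric inequality.
angles = [0.0, 2*np.pi/3, 4*np.pi/3]
u = [np.array([np.cos(t), np.sin(t)]) for t in angles]
c = 2.0 / 3.0
frame = sum(c * np.outer(ui, ui) for ui in u)
frame_ok = np.allclose(frame, np.eye(2))

# With f_i(t) = exp(-t^2) the integrand collapses to exp(-|x|^2), so the
# left side is pi; the right side is (sqrt(pi))^(3 * 2/3) = pi as well.
xs = np.linspace(-8.0, 8.0, 1601)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)
integrand = np.ones_like(X)
for ui in u:
    integrand *= np.exp(-(X*ui[0] + Y*ui[1])**2) ** c
lhs = integrand.sum() * h * h                    # Riemann sum over [-8, 8]^2
rhs = np.prod([np.sqrt(np.pi) ** c for _ in u])  # prod_i (integral of e^{-t^2})^{c_i}
print(frame_ok, lhs, rhs)  # True, ≈ pi, ≈ pi
```

Equality here reflects the general fact that centered Gaussians are extremal for the geometric inequality.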

Hölder's inequality

Take ni = n, Bi = id, the identity map on <math>\mathbb{R}^{n}</math>, replacing fi by <math>f_i^{p_i}</math>, and let ci = 1 / pi for 1 ≤ i ≤ m. Then

<math>\sum_{i = 1}^m \frac{1}{p_i} = 1</math>

and the log-concavity of the determinant of a positive definite matrix implies that D = 1. This yields Hölder's inequality in <math>\mathbb{R}^{n}</math>:

<math>\int_{\mathbb{R}^n} \prod_{i = 1}^m f_{i} (x) \, \mathrm{d} x \leq \prod_{i = 1}^{m} \| f_i \|_{p_i}.</math>
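A quick numerical sanity check of this generalized Hölder inequality on a discretized interval (an illustrative setup, not from the source), with three exponents satisfying <math>1/3 + 1/3 + 1/3 = 1</math>:

```python
import numpy as np

rng = np.random.default_rng(2)

# Generalized Hölder with m = 3 factors and p_i = 3, so sum_i 1/p_i = 1.
# Discretizing [0, 1] turns the integrals into weighted sums, for which
# Hölder's inequality holds verbatim.
xs = np.linspace(0.0, 1.0, 1000)
h = xs[1] - xs[0]
p = [3, 3, 3]
fs = [rng.random(xs.size) for _ in p]  # arbitrary nonnegative samples

lhs = np.sum(fs[0] * fs[1] * fs[2]) * h
rhs = np.prod([(np.sum(f ** pi) * h) ** (1.0 / pi) for f, pi in zip(fs, p)])
print(lhs <= rhs)  # True
```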

Poincaré inequality

The Brascamp–Lieb inequality is an extension of the Poincaré inequality, which concerns only Gaussian probability distributions.[8]

Cramér–Rao bound

The Brascamp–Lieb inequality is also related to the Cramér–Rao bound.[8] While Brascamp–Lieb is an upper bound, the Cramér–Rao bound lower-bounds the variance <math>\operatorname{var}_p (S(x))</math>. The Cramér–Rao bound states

<math> \operatorname{var}_p (S(x)) \geq E_p (\nabla^T S(x) ) [ E_p( H \phi(x) )]^{-1} E_p( \nabla S(x) )</math>,

which is very similar to the Brascamp–Lieb inequality in the alternative form shown above.
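For a Gaussian density the Hessian <math>H \phi</math> is constant, so the two inequalities sandwich the variance from both sides. The Monte Carlo sketch below (an illustrative statistic, not from the source) exhibits the sandwich numerically.

```python
import numpy as np

rng = np.random.default_rng(3)

# For p = N(0, Sigma), H phi = Sigma^{-1} is constant, so
#   E[grad S]^T Sigma E[grad S]  <=  var(S)  <=  E[grad S^T Sigma grad S].
sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
x = rng.multivariate_normal(np.zeros(2), sigma, size=200_000)

S = x[:, 0] + x[:, 0]**2                                 # nonlinear statistic
g = np.stack([1 + 2*x[:, 0], np.zeros(len(x))], axis=1)  # its gradient samples

cr_lower = g.mean(axis=0) @ sigma @ g.mean(axis=0)       # Cramér–Rao side ≈ 2
variance = np.var(S)                                     # ≈ 10 for Sigma_11 = 2
bl_upper = np.einsum('ni,ij,nj->n', g, sigma, g).mean()  # Brascamp–Lieb side ≈ 18
print(cr_lower, variance, bl_upper)
```

The gap on each side closes for linear statistics, where both bounds reduce to the same quadratic form.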

References
