Bussgang theorem

In mathematics, the Bussgang theorem is a theorem of stochastic analysis. The theorem states that the cross-correlation of a Gaussian signal before and after it has passed through a nonlinear operation is equal, up to a constant, to the signal's autocorrelation. It was first published by Julian J. Bussgang in 1952 while he was at the Massachusetts Institute of Technology.[1]

Statement

Let <math> \left\{X(t)\right\} </math> be a zero-mean stationary Gaussian random process and <math> \left \{ Y(t) \right\} = g(X(t)) </math> where <math> g(\cdot) </math> is a nonlinear amplitude distortion.

If <math> R_X(\tau) </math> is the autocorrelation function of <math> \left\{ X(t) \right\}</math>, then the cross-correlation function of <math> \left\{ X(t) \right\}</math> and <math> \left\{ Y(t) \right\}</math> is

<math> R_{XY}(\tau) = CR_X(\tau), </math>

where <math>C</math> is a constant that depends only on <math> g(\cdot) </math>.

It can be further shown that

<math> C = \frac{1}{\sigma^3\sqrt{2\pi}}\int_{-\infty}^\infty ug(u)e^{-\frac{u^2}{2\sigma^2}} \, du. </math>
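The relation can be illustrated numerically. The following is a minimal Monte Carlo sketch (an assumed example using NumPy/SciPy, not from Bussgang's report): it simulates a stationary Gaussian AR(1) process, applies the illustrative nonlinearity <math>g(u) = \tanh(u)</math>, evaluates <math>C</math> by quadrature from the formula above, and compares <math>R_{XY}(\tau)</math> with <math>C R_X(\tau)</math>.

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad

# Monte Carlo sketch of Bussgang's theorem for an assumed nonlinearity
# g(u) = tanh(u) applied to a stationary Gaussian AR(1) process.
rng = np.random.default_rng(0)
sigma, a, n = 1.5, 0.9, 200_000       # illustrative parameters

x = np.empty(n)
x[0] = sigma * rng.standard_normal()  # start in the stationary distribution
for t in range(1, n):
    x[t] = a * x[t - 1] + sigma * np.sqrt(1 - a**2) * rng.standard_normal()

y = np.tanh(x)                        # nonlinear amplitude distortion

# C from the integral formula above, evaluated by numerical quadrature.
integrand = lambda u: u * np.tanh(u) * np.exp(-u**2 / (2 * sigma**2))
C = quad(integrand, -np.inf, np.inf)[0] / (sigma**3 * np.sqrt(2 * np.pi))

for tau in (0, 1, 5, 10):
    end = -tau if tau else None
    r_x = np.mean(x[:end] * x[tau:])   # sample autocorrelation R_X(tau)
    r_xy = np.mean(x[:end] * y[tau:])  # sample cross-correlation R_XY(tau)
    print(tau, r_xy, C * r_x)          # last two columns agree up to noise
</syntaxhighlight>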

Derivation for One-bit Quantization

It is a property of the two-dimensional normal distribution that the joint density of two zero-mean, unit-variance Gaussian random variables <math> y_1 </math> and <math>y_2</math> depends only on their correlation and is given explicitly by the expression

<math> p(y_1,y_2) = \frac{1}{2 \pi \sqrt{1-\rho^2}} e^{-\frac{y_1^2 + y_2^2 - 2 \rho y_1 y_2}{2(1-\rho^2)}} </math>

where <math> y_1 </math> and <math> y_2 </math> are standard Gaussian random variables with correlation <math> \phi_{y_1y_2}=\rho </math>.

Assume that <math> r_2 = Q(y_2) </math>. The correlation between <math> y_1 </math> and <math> r_2 </math> is then

<math> \phi_{y_1r_2} = \frac{1}{2 \pi \sqrt{1-\rho^2}} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} y_1 Q(y_2) e^{-\frac{y_1^2 + y_2^2 - 2 \rho y_1 y_2}{2(1-\rho^2)}} \, dy_1 dy_2 </math>.

Since, by completing the square in <math>y_1</math> and evaluating the resulting Gaussian integral,

<math> \int_{-\infty}^{\infty} y_1 e^{-\frac{1}{2(1-\rho^2)} y_1^2 + \frac{\rho y_2}{1-\rho^2} y_1 } \, dy_1 = \rho \sqrt{2 \pi (1-\rho^2)} y_2 e^{ \frac{\rho^2 y_2^2}{2(1-\rho^2)} } </math>,

the correlation <math>\phi_{y_1 r_2}</math> may be simplified as

<math> \phi_{y_1 r_2} = \frac{\rho}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} y_2 Q(y_2) e^{-\frac{y_2^2}{2}} \, dy_2 </math>.

The integral above is seen to depend only on the distortion characteristic <math>Q(\cdot)</math> and is independent of <math>\rho</math>.

Remembering that <math>\rho=\phi_{y_1 y_2}</math>, we observe that for a given distortion characteristic <math>Q(\cdot)</math>, the ratio <math>\frac{\phi_{y_1 r_2}}{\phi_{y_1 y_2}}</math> is <math>K_Q=\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} y_2 Q(y_2) e^{-\frac{y_2^2}{2}} \, dy_2</math>.

Therefore, the correlation can be rewritten in the form

<math>\phi_{y_1 r_2} = K_Q \phi_{y_1 y_2}</math>.

The above equation is the mathematical expression of Bussgang's theorem.

If <math>Q(x) = \text{sign}(x)</math> (one-bit quantization), then <math>K_Q= \frac{2}{\sqrt{2\pi}} \int_{0}^{\infty} y_2 e^{-\frac{y_2^2}{2}} \, dy_2 = \sqrt{\frac{2}{\pi}}</math>.

[2][3][1][4]
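As a quick numerical sanity check (an assumed illustration, not part of the original derivation), one can draw correlated standard Gaussian pairs and compare the sample mean of <math>y_1 \, \text{sign}(y_2)</math> with <math>\sqrt{2/\pi} \, \rho</math>.

<syntaxhighlight lang="python">
import numpy as np

# Sanity check of K_Q = sqrt(2/pi) for one-bit quantization:
# E[y1 * sign(y2)] should equal sqrt(2/pi) * rho.
rng = np.random.default_rng(1)
n = 1_000_000
for rho in (0.2, 0.5, 0.9):
    y1 = rng.standard_normal(n)
    y2 = rho * y1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    print(rho, np.mean(y1 * np.sign(y2)), np.sqrt(2 / np.pi) * rho)
</syntaxhighlight>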

Arcsine law

If the two random variables are both distorted, i.e., <math>r_1 = Q(y_1), r_2 = Q(y_2)</math>, the correlation of <math>r_1</math> and <math>r_2</math> is

<math>\phi_{r_1 r_2}=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} Q(y_1) Q(y_2) p(y_1, y_2) \, dy_1 dy_2</math>.

When <math>Q(x) = \text{sign}(x)</math>, the expression becomes

<math>\phi_{r_1 r_2}=\frac{1}{2\pi \sqrt{1-\rho^2}} \left[ \int_{0}^{\infty} \int_{0}^{\infty} e^{-\alpha} \, dy_1 dy_2 + \int_{-\infty}^{0} \int_{-\infty}^{0} e^{-\alpha} \, dy_1 dy_2 - \int_{0}^{\infty} \int_{-\infty}^{0} e^{-\alpha} \, dy_1 dy_2 - \int_{-\infty}^{0} \int_{0}^{\infty} e^{-\alpha} \, dy_1 dy_2 \right]</math>

where <math>\alpha = \frac{y_1^2 + y_2^2 - 2\rho y_1 y_2}{2 (1-\rho^2)}</math>.

Noticing that

<math id="4-part">\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} p(y_1,y_2) \, dy_1 dy_2 = \frac{1}{2\pi \sqrt{1-\rho^2}} \left[ \int_{0}^{\infty} \int_{0}^{\infty} e^{-\alpha} \, dy_1 dy_2 + \int_{-\infty}^{0} \int_{-\infty}^{0} e^{-\alpha} \, dy_1 dy_2 + \int_{0}^{\infty} \int_{-\infty}^{0} e^{-\alpha} \, dy_1 dy_2 + \int_{-\infty}^{0} \int_{0}^{\infty} e^{-\alpha} \, dy_1 dy_2 \right]=1</math>,

and, by the symmetry of <math>\alpha</math> under <math>(y_1, y_2) \mapsto (-y_1, -y_2)</math>, <math>\int_{0}^{\infty} \int_{0}^{\infty} e^{-\alpha} \, dy_1 dy_2 = \int_{-\infty}^{0} \int_{-\infty}^{0} e^{-\alpha} \, dy_1 dy_2</math> and <math>\int_{0}^{\infty} \int_{-\infty}^{0} e^{-\alpha} \, dy_1 dy_2 = \int_{-\infty}^{0} \int_{0}^{\infty} e^{-\alpha} \, dy_1 dy_2</math>,

we can simplify the expression of <math>\phi_{r_1r_2}</math> as

<math>\phi_{r_1 r_2}=\frac{4}{2\pi \sqrt{1-\rho^2}} \int_{0}^{\infty} \int_{0}^{\infty} e^{-\alpha} \, dy_1 dy_2 - 1</math>.

Also, it is convenient to introduce polar coordinates <math>y_1 = R \cos \theta, \; y_2 = R \sin \theta</math>. It is thus found that

<math id="4-part">\phi_{r_1 r_2} =\frac{4}{2\pi \sqrt{1-\rho^2}} \int_{0}^{\pi/2} \int_{0}^{\infty} e^{-\frac{R^2 - 2R^2 \rho \cos \theta \sin \theta \ }{2(1-\rho^2)}} R \, dR d\theta-1=\frac{4}{2\pi \sqrt{1-\rho^2}} \int_{0}^{\pi/2} \int_{0}^{\infty} e^{-\frac{R^2 (1-\rho \sin 2\theta )}{2(1-\rho^2)}} R \, dR d\theta -1

  </math>.

Carrying out the integration over <math>R</math> and then over <math>\theta</math> gives

<math>\phi_{r_1 r_2}=\frac{2\sqrt{1-\rho^2}}{\pi} \int_{0}^{\pi/2} \frac{d\theta}{1-\rho \sin 2\theta} - 1 = - \frac{2}{\pi} \arctan \left( \frac{\rho-\tan\theta} {\sqrt{1-\rho^2}} \right) \Bigg|_{0}^{\pi/2} - 1 = \frac{2}{\pi} \arcsin(\rho)</math>.

This is called the "arcsine law", which was first found by J. H. Van Vleck in 1943 and republished in 1966.[2][3] The arcsine law can also be proved more simply by applying Price's theorem.[4][5]

The function <math> f(x)=\frac{2}{\pi} \arcsin x </math> can be approximated as <math> f(x) \approx \frac{2}{\pi} x </math> when <math> x </math> is small.
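The arcsine law is easy to verify by simulation. The following hedged sketch (an assumed example) compares the empirical mean of <math>\text{sign}(y_1)\,\text{sign}(y_2)</math> with <math>\frac{2}{\pi} \arcsin(\rho)</math>.

<syntaxhighlight lang="python">
import numpy as np

# Empirical check of E[sign(y1) sign(y2)] = (2/pi) * arcsin(rho).
rng = np.random.default_rng(2)
n = 1_000_000
for rho in (0.1, 0.5, 0.95):
    y1 = rng.standard_normal(n)
    y2 = rho * y1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    empirical = np.mean(np.sign(y1) * np.sign(y2))
    print(rho, empirical, 2 / np.pi * np.arcsin(rho))
</syntaxhighlight>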

Price's Theorem

Given two jointly normal random variables <math>y_1</math> and <math>y_2</math> with joint probability function

<math>{\displaystyle p(y_{1},y_{2})={\frac {1}{2\pi {\sqrt {1-\rho ^{2}}}}}e^{-{\frac {y_{1}^{2}+y_{2}^{2}-2\rho y_{1}y_{2}}{2(1-\rho ^{2})}}}}</math>,

we form the mean

<math>I(\rho)=E(g(y_1,y_2))=\int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} g(y_1, y_2) p(y_1, y_2) \, dy_1 dy_2</math>

of some function <math>g(y_1,y_2)</math> of <math>(y_1, y_2)</math>. If <math>g(y_1, y_2) p(y_1, y_2) \rightarrow 0</math> as <math>(y_1, y_2) \rightarrow \infty</math>, then

<math>\frac{\partial^n I(\rho)}{\partial \rho^n}=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \frac{\partial^{2n} g(y_1, y_2)}{\partial y_1^n \partial y_2^n} p(y_1, y_2) \, dy_1 dy_2 = E \left(\frac{\partial^{2n} g(y_1, y_2)}{\partial y_1^n \partial y_2^n} \right)</math>.
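As a concrete instance (an assumed example), take <math>n = 1</math> and the smooth choice <math>g(y_1, y_2) = y_1^2 y_2^2</math>. Isserlis' theorem gives <math>I(\rho) = E(y_1^2 y_2^2) = 1 + 2\rho^2</math>, so <math>\partial I / \partial \rho = 4\rho</math>, which must equal <math>E(\partial^2 g / \partial y_1 \partial y_2) = E(4 y_1 y_2) = 4\rho</math>. The sketch below checks this by Monte Carlo.

<syntaxhighlight lang="python">
import numpy as np

# Monte Carlo check of Price's theorem (n = 1) for g(y1, y2) = y1^2 * y2^2:
# dI/drho should equal E[d^2 g / (dy1 dy2)] = E[4*y1*y2] = 4*rho.
rng = np.random.default_rng(3)
n, rho, drho = 2_000_000, 0.6, 1e-3
y1 = rng.standard_normal(n)
z = rng.standard_normal(n)

def I(r):
    """Monte Carlo estimate of E[g(y1, y2)] at correlation r
    (common random numbers keep the finite difference stable)."""
    y2 = r * y1 + np.sqrt(1 - r**2) * z
    return np.mean(y1**2 * y2**2)

lhs = (I(rho + drho) - I(rho - drho)) / (2 * drho)  # dI/drho
y2 = rho * y1 + np.sqrt(1 - rho**2) * z
rhs = np.mean(4 * y1 * y2)                          # E[d^2 g / (dy1 dy2)]
print(lhs, rhs, 4 * rho)  # all three should be close to 2.4
</syntaxhighlight>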

Proof. The joint characteristic function of the random variables <math>y_1</math> and <math>y_2</math> is by definition the integral

<math>\Phi(\omega_1, \omega_2)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} p(y_1, y_2) e^{j (\omega_1 y_1 + \omega_2 y_2)} \, dy_1 dy_2 = \exp \left\{-\frac{\omega_1^2 + \omega_2^2 + 2\rho \omega_1 \omega_2}{2} \right\}</math>.
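(As a hedged aside, this closed form is easy to check numerically: the assumed sketch below compares it with the empirical mean of <math>e^{j(\omega_1 y_1 + \omega_2 y_2)}</math> over simulated Gaussian pairs.)

<syntaxhighlight lang="python">
import numpy as np

# Empirical check of the joint characteristic function against its
# closed form exp(-(w1^2 + w2^2 + 2*rho*w1*w2)/2).
rng = np.random.default_rng(5)
n, rho = 1_000_000, 0.4
y1 = rng.standard_normal(n)
y2 = rho * y1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
for w1, w2 in ((0.5, 0.7), (1.0, -0.3)):
    emp = np.mean(np.exp(1j * (w1 * y1 + w2 * y2)))  # imaginary part ~ 0
    closed = np.exp(-(w1**2 + w2**2 + 2 * rho * w1 * w2) / 2)
    print(emp, closed)
</syntaxhighlight>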

From the two-dimensional Fourier inversion formula, it follows that

<math>p(y_1, y_2) = \frac{1}{4 \pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \Phi(\omega_1, \omega_2) e^{-j (\omega_1 y_1 + \omega_2 y_2)} \, d\omega_1 d\omega_2 = \frac{1}{4 \pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \exp \left\{-\frac{\omega_1^2 + \omega_2^2 + 2\rho \omega_1 \omega_2}{2} \right\} e^{-j (\omega_1 y_1 + \omega_2 y_2)} \, d\omega_1 d\omega_2</math>.

Therefore, substituting the Fourier representation of <math>p(y_1, y_2)</math> into <math>I(\rho)</math> and differentiating <math>n</math> times with respect to <math>\rho</math>, we obtain

<math>\begin{align}
\frac{\partial^n I(\rho)}{\partial \rho^n} & = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(y_1, y_2) \frac{\partial^n p(y_1, y_2)}{\partial \rho^n} \, dy_1 dy_2 \\
& = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(y_1, y_2) \left(\frac{1}{4 \pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \frac{\partial^{n}\Phi(\omega_1, \omega_2)}{\partial \rho^n} e^{-j(\omega_1 y_1 + \omega_2 y_2)} \, d\omega_1 d\omega_2 \right) dy_1 dy_2 \\
& = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(y_1, y_2) \left(\frac{(-1)^n}{4 \pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \omega_1^n \omega_2^n \Phi(\omega_1, \omega_2) e^{-j(\omega_1 y_1 + \omega_2 y_2)} \, d\omega_1 d\omega_2 \right) dy_1 dy_2 \\
& = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(y_1, y_2) \left(\frac{1}{4 \pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \Phi(\omega_1, \omega_2) \frac{\partial^{2n} e^{-j(\omega_1 y_1 + \omega_2 y_2)}}{\partial y_1^n \partial y_2^n} \, d\omega_1 d\omega_2 \right) dy_1 dy_2 \\
& = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(y_1, y_2) \frac{\partial^{2n} p(y_1, y_2)}{\partial y_1^n \partial y_2^n} \, dy_1 dy_2
\end{align}</math>

where we used <math>\frac{\partial^n \Phi}{\partial \rho^n} = (-1)^n \omega_1^n \omega_2^n \Phi</math> and <math>(-1)^n \omega_1^n \omega_2^n e^{-j(\omega_1 y_1 + \omega_2 y_2)} = \frac{\partial^{2n}}{\partial y_1^n \partial y_2^n} e^{-j(\omega_1 y_1 + \omega_2 y_2)}</math>.

After repeated integration by parts, using the decay condition at <math>\infty</math>, we obtain Price's theorem:

<math>\begin{align}
\frac{\partial^n I(\rho)}{\partial \rho^n} & = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(y_1, y_2) \frac{\partial^{2n} p(y_1, y_2)}{\partial y_1^n \partial y_2^n} \, dy_1 dy_2 \\
& = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \frac{\partial^{2} g(y_1, y_2)}{\partial y_1 \partial y_2} \frac{\partial^{2n-2} p(y_1, y_2)}{\partial y_1^{n-1} \partial y_2^{n-1}} \, dy_1 dy_2 \\
& = \cdots \\
& = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \frac{\partial^{2n} g(y_1, y_2)}{\partial y_1^n \partial y_2^n} p(y_1, y_2) \, dy_1 dy_2
\end{align}</math>

[4][5]

Proof of Arcsine law by Price's Theorem

If <math>g(y_1, y_2) = \text{sign}(y_1) \text{sign} (y_2)</math>, then <math>\frac{\partial^2 g(y_1, y_2)}{\partial y_1 \partial y_2} = 4 \delta(y_1) \delta(y_2)</math>, where <math>\delta(\cdot)</math> is the Dirac delta function.

Substituting into Price's theorem, we obtain

<math>\frac{\partial E(\text{sign} (y_1) \text{sign}(y_2))}{\partial \rho} = \frac{\partial I(\rho)}{\partial \rho}= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} 4 \delta(y_1) \delta(y_2) p(y_1, y_2) \, dy_1 dy_2 = 4 p(0,0) = \frac{2}{\pi \sqrt{1-\rho^2}}</math>.

When <math>\rho=0</math>, <math>y_1</math> and <math>y_2</math> are independent, so <math>I(0)=E(\text{sign}(y_1))\,E(\text{sign}(y_2))=0</math>. Thus

<math>E \left(\text{sign}(y_1) \text{sign}(y_2) \right) = I(\rho)=\frac{2}{\pi} \int_{0}^{\rho} \frac{1}{\sqrt{1-t^2}} \, dt=\frac{2}{\pi} \arcsin(\rho)</math>,

which is Van Vleck's well-known arcsine law.

[2][3]

Application

This theorem implies that a simplified correlator can be designed: instead of having to multiply two signals, the cross-correlation problem reduces to gating one signal with the sign of the other.
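For instance, the following minimal sketch (a hypothetical polarity-coincidence-style correlator, assumed for illustration and using the one-bit constant <math>\sqrt{2/\pi}</math> derived above) estimates <math>R_X(\tau)</math> by gating <math>x(t)</math> with the sign of the delayed signal, so only sign flips and additions are needed in place of full multiplications.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical simplified correlator: by Bussgang's theorem,
# E[x(t) * sign(x(t + tau))] = sqrt(2/pi) * R_X(tau) / sigma,
# so gating x(t) with the polarity of the delayed signal and
# rescaling recovers R_X(tau) without a full multiplier.
rng = np.random.default_rng(4)
sigma, a, n = 2.0, 0.8, 500_000       # illustrative parameters

# Stationary Gaussian AR(1) test signal with R_X(tau) = sigma^2 * a**tau.
x = np.empty(n)
x[0] = sigma * rng.standard_normal()
for t in range(1, n):
    x[t] = a * x[t - 1] + sigma * np.sqrt(1 - a**2) * rng.standard_normal()

for tau in (1, 3, 8):
    gated = np.mean(x[:-tau] * np.sign(x[tau:]))   # sign-gating, no multiply
    estimate = gated * sigma * np.sqrt(np.pi / 2)  # undo the Bussgang constant
    print(tau, estimate, sigma**2 * a**tau)        # vs the true R_X(tau)
</syntaxhighlight>

In practice <math>\sigma</math> would itself have to be estimated; the point of the sketch is only that the multiplier is replaced by a sign operation.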

References

  1. Bussgang, J. J. (March 1952). "Cross-correlation function of amplitude-distorted Gaussian signals". Res. Lab. Elec., Mass. Inst. Technol., Cambridge, MA, Tech. Rep. 216.
  2. Van Vleck, J. H. (1943). "The Spectrum of Clipped Noise". Radio Research Laboratory Report No. 51, Harvard University.
  3. Van Vleck, J. H.; Middleton, D. (January 1966). "The spectrum of clipped noise". Proceedings of the IEEE. 54 (1): 2–19.
  4. Price, R. (June 1958). "A useful theorem for nonlinear devices having Gaussian inputs". IRE Transactions on Information Theory. 4 (2): 69–72.
  5. Papoulis, A.; Pillai, S. U. (2002). Probability, Random Variables and Stochastic Processes (4th ed.). McGraw-Hill.