Complex random vector


In probability theory and statistics, a complex random vector is typically a tuple of complex-valued random variables, and more generally a random variable taking values in a vector space over the field of complex numbers. If <math>Z_1,\ldots,Z_n</math> are complex-valued random variables, then the n-tuple <math>\left( Z_1,\ldots,Z_n \right)</math> is a complex random vector. Complex random vectors can always be considered as pairs of real random vectors: their real and imaginary parts.

Some concepts of real random vectors, such as the definition of the mean, generalize straightforwardly to complex random vectors. Other concepts are unique to complex random vectors.

Applications of complex random vectors are found in digital signal processing.


Definition

A complex random vector <math> \mathbf{Z} = (Z_1,\ldots,Z_n)^T </math> on the probability space <math>(\Omega,\mathcal{F},P)</math> is a function <math> \mathbf{Z} \colon \Omega \rightarrow \mathbb{C}^n </math> such that the vector <math>(\Re{(Z_1)},\Im{(Z_1)},\ldots,\Re{(Z_n)},\Im{(Z_n)})^T </math> is a real random vector on <math>(\Omega,\mathcal{F},P)</math>, where <math>\Re{(z)}</math> denotes the real part of <math>z</math> and <math>\Im{(z)}</math> denotes the imaginary part of <math>z</math>.[1]
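This identification can be made concrete numerically. The following is a minimal NumPy sketch (the Gaussian sample model is purely an illustrative assumption, not part of the definition) that interleaves real and imaginary parts to form the corresponding real random vector:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Illustrative samples of a 3-component complex random vector Z
# (real and imaginary parts drawn as standard normals, purely for illustration).
n, N = 3, 10_000
Z = rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))

# The equivalent real random vector (Re Z_1, Im Z_1, ..., Re Z_n, Im Z_n)^T,
# obtained by interleaving real and imaginary parts.
real_repr = np.empty((N, 2 * n))
real_repr[:, 0::2] = Z.real
real_repr[:, 1::2] = Z.imag
print(real_repr.shape)  # (10000, 6): each complex vector corresponds to a real vector in R^(2n)
</syntaxhighlight>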

Cumulative distribution function

The generalization of the cumulative distribution function from real to complex random variables is not obvious because expressions of the form <math> P(Z \leq 1+3i) </math> make no sense. However, expressions of the form <math> P(\Re{(Z)} \leq 1, \Im{(Z)} \leq 3) </math> make sense. Therefore, the cumulative distribution function <math>F_{\mathbf{Z}} : \mathbb{C}^n \to [0,1]</math> of a complex random vector <math>\mathbf{Z}=(Z_1,...,Z_n)^T </math> is defined as

<math>F_{\mathbf{Z}}(\mathbf{z}) = P(\Re{(Z_1)} \leq \Re{(z_1)}, \Im{(Z_1)} \leq \Im{(z_1)}, \ldots, \Re{(Z_n)} \leq \Re{(z_n)}, \Im{(Z_n)} \leq \Im{(z_n)})</math>

where <math>\mathbf{z} = (z_1,...,z_n)^T</math>.
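This definition can be estimated by Monte Carlo. The following is a minimal NumPy sketch (the sample distribution and the evaluation point are illustrative assumptions) applying the component-wise real and imaginary comparisons:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def ecdf_complex(samples: np.ndarray, z: np.ndarray) -> float:
    """Empirical estimate of F_Z(z) = P(Re Z_k <= Re z_k and Im Z_k <= Im z_k for all k)."""
    ok = (samples.real <= z.real) & (samples.imag <= z.imag)
    return ok.all(axis=1).mean()

# Illustration with a 2-component complex Gaussian vector.
samples = rng.standard_normal((100_000, 2)) + 1j * rng.standard_normal((100_000, 2))
print(ecdf_complex(samples, np.array([1 + 3j, 0.5 - 0.2j])))
</syntaxhighlight>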

Expectation

As in the real case, the expectation (also called expected value) of a complex random vector is taken component-wise.[1]

<math>\operatorname{E}[\mathbf{Z}] = (\operatorname{E}[Z_1],\ldots,\operatorname{E}[Z_n])^T</math>

where <math>\operatorname{E}[Z_i] = \operatorname{E}[\Re{(Z_i)}] + i \operatorname{E}[\Im{(Z_i)}]</math> for <math>i = 1,\ldots,n</math>.
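A short NumPy sketch (with an arbitrary illustrative sample) confirms that the component-wise mean of the complex samples agrees with combining the means of the real and imaginary parts:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((50_000, 3)) + 1j * rng.standard_normal((50_000, 3))  # illustrative samples

# Component-wise sample mean; equivalently, mean of the real parts plus i times mean of the imaginary parts.
mean_direct = Z.mean(axis=0)
mean_split = Z.real.mean(axis=0) + 1j * Z.imag.mean(axis=0)
assert np.allclose(mean_direct, mean_split)
print(mean_direct)  # close to the zero vector for this example
</syntaxhighlight>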

Covariance matrix and pseudo-covariance matrix


The covariance matrix (also called second central moment) <math> \operatorname{K}_{\mathbf{Z}\mathbf{Z}}</math> contains the covariances between all pairs of components. The covariance matrix of an <math>n \times 1</math> random vector is an <math>n \times n</math> matrix whose <math>(i,j)</math>-th element is the covariance between the i-th and the j-th random variables.[2] Unlike in the case of real random variables, the covariance between two complex random variables involves the complex conjugate of one of the two. Thus the covariance matrix is a Hermitian matrix.[1]

<math>\operatorname{K}_{\mathbf{Z}\mathbf{Z}} = \operatorname{E}[(\mathbf{Z}-\operatorname{E}[\mathbf{Z}]){(\mathbf{Z}-\operatorname{E}[\mathbf{Z}])}^H] = \operatorname{E}[\mathbf{Z}\mathbf{Z}^H]-\operatorname{E}[\mathbf{Z}]\operatorname{E}[\mathbf{Z}^H]</math>

<math>

\operatorname{K}_{\mathbf{Z}\mathbf{Z}}= \begin{bmatrix}

\mathrm{E}[(Z_1 - \operatorname{E}[Z_1])\overline{(Z_1 - \operatorname{E}[Z_1])}] & \mathrm{E}[(Z_1 - \operatorname{E}[Z_1])\overline{(Z_2 - \operatorname{E}[Z_2])}] & \cdots & \mathrm{E}[(Z_1 - \operatorname{E}[Z_1])\overline{(Z_n - \operatorname{E}[Z_n])}] \\ \\
\mathrm{E}[(Z_2 - \operatorname{E}[Z_2])\overline{(Z_1 - \operatorname{E}[Z_1])}] & \mathrm{E}[(Z_2 - \operatorname{E}[Z_2])\overline{(Z_2 - \operatorname{E}[Z_2])}] & \cdots & \mathrm{E}[(Z_2 - \operatorname{E}[Z_2])\overline{(Z_n - \operatorname{E}[Z_n])}] \\ \\
\vdots & \vdots & \ddots & \vdots \\ \\
\mathrm{E}[(Z_n - \operatorname{E}[Z_n])\overline{(Z_1 - \operatorname{E}[Z_1])}] & \mathrm{E}[(Z_n - \operatorname{E}[Z_n])\overline{(Z_2 - \operatorname{E}[Z_2])}] & \cdots & \mathrm{E}[(Z_n - \operatorname{E}[Z_n])\overline{(Z_n - \operatorname{E}[Z_n])}]

\end{bmatrix} </math>
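The defining formula translates directly into a sample estimate. In the following minimal NumPy sketch (the Gaussian sample model is an illustrative assumption), the conjugation of the second factor appears explicitly, and the Hermitian and positive semidefinite properties discussed below can be checked numerically:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_normal((100_000, 3)) + 1j * rng.standard_normal((100_000, 3))  # illustrative

# Sample estimate of K_ZZ = E[(Z - E[Z])(Z - E[Z])^H]; the second factor is conjugated.
centered = samples - samples.mean(axis=0)
K = centered.T @ centered.conj() / samples.shape[0]

# np.cov conjugates the second factor as well, so it yields the same estimate.
assert np.allclose(K, np.cov(samples.T, bias=True))

# The estimate is Hermitian and (numerically) positive semidefinite.
assert np.allclose(K, K.conj().T)
assert np.linalg.eigvalsh(K).min() > -1e-12
</syntaxhighlight>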

The pseudo-covariance matrix (also called relation matrix) is defined by replacing Hermitian transposition with transposition in the definition above.

<math>\operatorname{J}_{\mathbf{Z}\mathbf{Z}} = \operatorname{E}[(\mathbf{Z}-\operatorname{E}[\mathbf{Z}]){(\mathbf{Z}-\operatorname{E}[\mathbf{Z}])}^T] = \operatorname{E}[\mathbf{Z}\mathbf{Z}^T]-\operatorname{E}[\mathbf{Z}]\operatorname{E}[\mathbf{Z}^T]</math>

<math>

\operatorname{J}_{\mathbf{Z}\mathbf{Z}}= \begin{bmatrix}

\mathrm{E}[(Z_1 - \operatorname{E}[Z_1])(Z_1 - \operatorname{E}[Z_1])] & \mathrm{E}[(Z_1 - \operatorname{E}[Z_1])(Z_2 - \operatorname{E}[Z_2])] & \cdots & \mathrm{E}[(Z_1 - \operatorname{E}[Z_1])(Z_n - \operatorname{E}[Z_n])] \\ \\
\mathrm{E}[(Z_2 - \operatorname{E}[Z_2])(Z_1 - \operatorname{E}[Z_1])] & \mathrm{E}[(Z_2 - \operatorname{E}[Z_2])(Z_2 - \operatorname{E}[Z_2])] & \cdots & \mathrm{E}[(Z_2 - \operatorname{E}[Z_2])(Z_n - \operatorname{E}[Z_n])] \\ \\
\vdots & \vdots & \ddots & \vdots \\ \\
\mathrm{E}[(Z_n - \operatorname{E}[Z_n])(Z_1 - \operatorname{E}[Z_1])] & \mathrm{E}[(Z_n - \operatorname{E}[Z_n])(Z_2 - \operatorname{E}[Z_2])] & \cdots & \mathrm{E}[(Z_n - \operatorname{E}[Z_n])(Z_n - \operatorname{E}[Z_n])]

\end{bmatrix} </math>
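A companion NumPy sketch (same illustrative sample model as above) estimates the pseudo-covariance matrix; since no conjugation is applied, and this sample model has independent real and imaginary parts of equal variance, the estimate is close to the zero matrix:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_normal((100_000, 3)) + 1j * rng.standard_normal((100_000, 3))  # illustrative

# Sample estimate of J_ZZ = E[(Z - E[Z])(Z - E[Z])^T]; plain transpose, no conjugation.
centered = samples - samples.mean(axis=0)
J = centered.T @ centered / samples.shape[0]

# J is symmetric; for this particular sample model it is close to the zero matrix.
assert np.allclose(J, J.T)
print(np.abs(J).max())  # small, of the order of the Monte Carlo error
</syntaxhighlight>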

Properties

The covariance matrix is a Hermitian matrix, i.e.[1]

<math>\operatorname{K}_{\mathbf{Z}\mathbf{Z}}^H = \operatorname{K}_{\mathbf{Z}\mathbf{Z}}</math>.

The pseudo-covariance matrix is a symmetric matrix, i.e.

<math>\operatorname{J}_{\mathbf{Z}\mathbf{Z}}^T = \operatorname{J}_{\mathbf{Z}\mathbf{Z}}</math>.

The covariance matrix is a positive semidefinite matrix, i.e.

<math>\mathbf{a}^H \operatorname{K}_{\mathbf{Z}\mathbf{Z}} \mathbf{a} \ge 0 \quad \text{for all } \mathbf{a} \in \mathbb{C}^n</math>.

Covariance matrices of real and imaginary parts


By decomposing the random vector <math>\mathbf{Z}</math> into its real part <math>\mathbf{X} = \Re{(\mathbf{Z})}</math> and imaginary part <math>\mathbf{Y} = \Im{(\mathbf{Z})}</math> (i.e. <math>\mathbf{Z}=\mathbf{X}+i\mathbf{Y}</math>), the pair <math> (\mathbf{X},\mathbf{Y})</math> has a covariance matrix of the form:

<math>
 \begin{bmatrix} 
   \operatorname{K}_{\mathbf{X}\mathbf{X}} & \operatorname{K}_{\mathbf{X}\mathbf{Y}} \\ 
   \operatorname{K}_{\mathbf{Y}\mathbf{X}} & \operatorname{K}_{\mathbf{Y}\mathbf{Y}} 
 \end{bmatrix} 

</math>

The matrices <math>\operatorname{K}_{\mathbf{Z}\mathbf{Z}}</math> and <math>\operatorname{J}_{\mathbf{Z}\mathbf{Z}}</math> can be related to the covariance matrices of <math>\mathbf{X}</math> and <math>\mathbf{Y}</math> via the following expressions:

<math>\begin{align}
 & \operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{E}[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{X}-\operatorname{E}[\mathbf{X}])^\mathrm T] = \tfrac{1}{2}\operatorname{Re}(\operatorname{K}_{\mathbf{Z}\mathbf{Z}} + \operatorname{J}_{\mathbf{Z}\mathbf{Z}}) \\
 & \operatorname{K}_{\mathbf{Y}\mathbf{Y}} = \operatorname{E}[(\mathbf{Y}-\operatorname{E}[\mathbf{Y}])(\mathbf{Y}-\operatorname{E}[\mathbf{Y}])^\mathrm T] = \tfrac{1}{2}\operatorname{Re}(\operatorname{K}_{\mathbf{Z}\mathbf{Z}} - \operatorname{J}_{\mathbf{Z}\mathbf{Z}}) \\
 & \operatorname{K}_{\mathbf{Y}\mathbf{X}} = \operatorname{E}[(\mathbf{Y}-\operatorname{E}[\mathbf{Y}])(\mathbf{X}-\operatorname{E}[\mathbf{X}])^\mathrm T] = \tfrac{1}{2}\operatorname{Im}(\operatorname{J}_{\mathbf{Z}\mathbf{Z}} + \operatorname{K}_{\mathbf{Z}\mathbf{Z}}) \\
 & \operatorname{K}_{\mathbf{X}\mathbf{Y}} = \operatorname{E}[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{Y}-\operatorname{E}[\mathbf{Y}])^\mathrm T] = \tfrac{1}{2}\operatorname{Im}(\operatorname{J}_{\mathbf{Z}\mathbf{Z}} -\operatorname{K}_{\mathbf{Z}\mathbf{Z}}) \\
 \end{align}</math>

Conversely:

<math>\begin{align}
 & \operatorname{K}_{\mathbf{Z}\mathbf{Z}} = \operatorname{K}_{\mathbf{X}\mathbf{X}} + \operatorname{K}_{\mathbf{Y}\mathbf{Y}} + i(\operatorname{K}_{\mathbf{Y}\mathbf{X}} - \operatorname{K}_{\mathbf{X}\mathbf{Y}}) \\
 & \operatorname{J}_{\mathbf{Z}\mathbf{Z}} = \operatorname{K}_{\mathbf{X}\mathbf{X}} - \operatorname{K}_{\mathbf{Y}\mathbf{Y}} + i(\operatorname{K}_{\mathbf{Y}\mathbf{X}} + \operatorname{K}_{\mathbf{X}\mathbf{Y}})
 \end{align}</math>
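These relations are algebraic identities and therefore also hold exactly for sample estimates computed with the same centering. The following NumPy sketch (the correlated construction of <math>\mathbf{Y}</math> from <math>\mathbf{X}</math> is an illustrative assumption chosen to make the cross terms nonzero) checks all six of them:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# A correlated (non-circular) example: Y depends on X, purely for illustration.
X = rng.standard_normal((N, 2))
Y = 0.5 * X + rng.standard_normal((N, 2))
Z = X + 1j * Y

def cov(a, b):
    """Sample cross-covariance E[(a - E[a])(b - E[b])^T] for real-valued samples."""
    return (a - a.mean(axis=0)).T @ (b - b.mean(axis=0)) / a.shape[0]

Kxx, Kyy, Kxy, Kyx = cov(X, X), cov(Y, Y), cov(X, Y), cov(Y, X)

Zc = Z - Z.mean(axis=0)
Kzz = Zc.T @ Zc.conj() / N   # covariance (conjugate transpose)
Jzz = Zc.T @ Zc / N          # pseudo-covariance (plain transpose)

# The identities relating K_ZZ, J_ZZ and the real/imaginary covariance blocks.
assert np.allclose(Kxx, 0.5 * np.real(Kzz + Jzz))
assert np.allclose(Kyy, 0.5 * np.real(Kzz - Jzz))
assert np.allclose(Kyx, 0.5 * np.imag(Jzz + Kzz))
assert np.allclose(Kxy, 0.5 * np.imag(Jzz - Kzz))
assert np.allclose(Kzz, Kxx + Kyy + 1j * (Kyx - Kxy))
assert np.allclose(Jzz, Kxx - Kyy + 1j * (Kyx + Kxy))
</syntaxhighlight>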

Cross-covariance matrix and pseudo-cross-covariance matrix

The cross-covariance matrix between two complex random vectors <math>\mathbf{Z},\mathbf{W}</math> is defined as:

<math>\operatorname{K}_{\mathbf{Z}\mathbf{W}} = \operatorname{E}[(\mathbf{Z}-\operatorname{E}[\mathbf{Z}]){(\mathbf{W}-\operatorname{E}[\mathbf{W}])}^H] = \operatorname{E}[\mathbf{Z}\mathbf{W}^H]-\operatorname{E}[\mathbf{Z}]\operatorname{E}[\mathbf{W}^H]</math>

<math>\operatorname{K}_{\mathbf{Z}\mathbf{W}} =

\begin{bmatrix}

\mathrm{E}[(Z_1 - \operatorname{E}[Z_1])\overline{(W_1 - \operatorname{E}[W_1])}] & \mathrm{E}[(Z_1 - \operatorname{E}[Z_1])\overline{(W_2 - \operatorname{E}[W_2])}] & \cdots & \mathrm{E}[(Z_1 - \operatorname{E}[Z_1])\overline{(W_n - \operatorname{E}[W_n])}] \\ \\
\mathrm{E}[(Z_2 - \operatorname{E}[Z_2])\overline{(W_1 - \operatorname{E}[W_1])}] & \mathrm{E}[(Z_2 - \operatorname{E}[Z_2])\overline{(W_2 - \operatorname{E}[W_2])}] & \cdots & \mathrm{E}[(Z_2 - \operatorname{E}[Z_2])\overline{(W_n - \operatorname{E}[W_n])}] \\ \\
\vdots & \vdots & \ddots & \vdots \\ \\
\mathrm{E}[(Z_n - \operatorname{E}[Z_n])\overline{(W_1 - \operatorname{E}[W_1])}] & \mathrm{E}[(Z_n - \operatorname{E}[Z_n])\overline{(W_2 - \operatorname{E}[W_2])}] & \cdots & \mathrm{E}[(Z_n - \operatorname{E}[Z_n])\overline{(W_n - \operatorname{E}[W_n])}]

\end{bmatrix} </math>

And the pseudo-cross-covariance matrix is defined as:

<math>\operatorname{J}_{\mathbf{Z}\mathbf{W}} = \operatorname{E}[(\mathbf{Z}-\operatorname{E}[\mathbf{Z}]){(\mathbf{W}-\operatorname{E}[\mathbf{W}])}^T] = \operatorname{E}[\mathbf{Z}\mathbf{W}^T]-\operatorname{E}[\mathbf{Z}]\operatorname{E}[\mathbf{W}^T]</math>

<math>\operatorname{J}_{\mathbf{Z}\mathbf{W}} =

\begin{bmatrix}

\mathrm{E}[(Z_1 - \operatorname{E}[Z_1])(W_1 - \operatorname{E}[W_1])] & \mathrm{E}[(Z_1 - \operatorname{E}[Z_1])(W_2 - \operatorname{E}[W_2])] & \cdots & \mathrm{E}[(Z_1 - \operatorname{E}[Z_1])(W_n - \operatorname{E}[W_n])] \\ \\
\mathrm{E}[(Z_2 - \operatorname{E}[Z_2])(W_1 - \operatorname{E}[W_1])] & \mathrm{E}[(Z_2 - \operatorname{E}[Z_2])(W_2 - \operatorname{E}[W_2])] & \cdots & \mathrm{E}[(Z_2 - \operatorname{E}[Z_2])(W_n - \operatorname{E}[W_n])] \\ \\
\vdots & \vdots & \ddots & \vdots \\ \\
\mathrm{E}[(Z_n - \operatorname{E}[Z_n])(W_1 - \operatorname{E}[W_1])] & \mathrm{E}[(Z_n - \operatorname{E}[Z_n])(W_2 - \operatorname{E}[W_2])] & \cdots & \mathrm{E}[(Z_n - \operatorname{E}[Z_n])(W_n - \operatorname{E}[W_n])]

\end{bmatrix} </math>

Two complex random vectors <math>\mathbf{Z}</math> and <math>\mathbf{W}</math> are called uncorrelated if

<math>\operatorname{K}_{\mathbf{Z}\mathbf{W}}=\operatorname{J}_{\mathbf{Z}\mathbf{W}}=0</math>.
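A minimal NumPy sketch (the construction of <math>\mathbf{W}</math> from <math>\mathbf{Z}</math> is an illustrative assumption) estimates both matrices; conjugation enters only in the cross-covariance:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Two illustrative complex random vectors; W is partially correlated with Z.
Z = rng.standard_normal((N, 2)) + 1j * rng.standard_normal((N, 2))
W = 0.7 * Z + (rng.standard_normal((N, 2)) + 1j * rng.standard_normal((N, 2)))

Zc, Wc = Z - Z.mean(axis=0), W - W.mean(axis=0)
K_zw = Zc.T @ Wc.conj() / N   # cross-covariance: conjugate on the W factor
J_zw = Zc.T @ Wc / N          # pseudo-cross-covariance: no conjugation

# Here K_ZW is close to 0.7 * K_ZZ (roughly 1.4 * I), while J_ZW is close to zero.
print(np.round(K_zw, 2))
print(np.round(J_zw, 2))
</syntaxhighlight>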

Independence

Two complex random vectors <math>\mathbf{Z}=(Z_1,...,Z_m)^T</math> and <math>\mathbf{W}=(W_1,...,W_n)^T</math> are called independent if

<math>F_{\mathbf{Z,W}}(\mathbf{z,w}) = F_{\mathbf{Z}}(\mathbf{z}) \cdot F_{\mathbf{W}}(\mathbf{w}) \quad \text{for all } \mathbf{z},\mathbf{w}</math>

where <math>F_{\mathbf{Z}}(\mathbf{z})</math> and <math>F_{\mathbf{W}}(\mathbf{w})</math> denote the cumulative distribution functions of <math>\mathbf{Z}</math> and <math>\mathbf{W}</math> as defined above, and <math>F_{\mathbf{Z,W}}(\mathbf{z,w})</math> denotes their joint cumulative distribution function. Independence of <math>\mathbf{Z}</math> and <math>\mathbf{W}</math> is often denoted by <math>\mathbf{Z} \perp\!\!\!\perp \mathbf{W}</math>. Written component-wise, <math>\mathbf{Z}</math> and <math>\mathbf{W}</math> are called independent if

<math>F_{Z_1,\ldots,Z_m,W_1,\ldots,W_n}(z_1,\ldots,z_m,w_1,\ldots,w_n) = F_{Z_1,\ldots,Z_m}(z_1,\ldots,z_m) \cdot F_{W_1,\ldots,W_n}(w_1,\ldots,w_n) \quad \text{for all } z_1,\ldots,z_m,w_1,\ldots,w_n</math>.

Circular symmetry

A complex random vector <math> \mathbf{Z} </math> is called circularly symmetric if for every deterministic <math> \varphi \in [-\pi,\pi) </math> the distribution of <math> e^{\mathrm i \varphi}\mathbf{Z} </math> equals the distribution of <math> \mathbf{Z} </math>.[3]

Properties
  • The expectation of a circularly symmetric complex random vector is either zero or not defined.[3]
  • The pseudo-covariance matrix of a circularly symmetric complex random vector is zero.[3]
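A minimal NumPy sketch (using the standard circularly symmetric complex Gaussian as an illustrative example) rotates the samples by a fixed phase and shows that the sample covariance is essentially unchanged while the pseudo-covariance stays near zero, in line with the second property above:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
N, n = 200_000, 2

# A standard circularly symmetric complex Gaussian vector:
# independent real and imaginary parts with equal variance.
Z = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)

phi = 0.8                      # an arbitrary deterministic rotation angle
Zr = np.exp(1j * phi) * Z      # e^{i phi} Z has the same distribution as Z

for V in (Z, Zr):
    Vc = V - V.mean(axis=0)
    K = Vc.T @ Vc.conj() / N   # covariance: close to the identity for both
    J = Vc.T @ Vc / N          # pseudo-covariance: close to zero for both
    print(np.round(K, 2), np.round(np.abs(J).max(), 3))
</syntaxhighlight>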

Proper complex random vectors

A complex random vector <math>\mathbf{Z}</math> is called proper if the following three conditions are all satisfied:[1]

  • <math> \operatorname{E}[\mathbf{Z}] = 0 </math> (zero mean)
  • <math> \operatorname{var}[Z_1] < \infty , \ldots , \operatorname{var}[Z_n] < \infty </math> (all components have finite variance)
  • <math> \operatorname{E}[\mathbf{Z}\mathbf{Z}^T] = 0 </math>

Two complex random vectors <math>\mathbf{Z},\mathbf{W}</math> are called jointly proper if the composite random vector <math>(Z_1,Z_2,\ldots,Z_m,W_1,W_2,\ldots,W_n)^T</math> is proper.
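The three defining conditions can be checked on sample estimates. The following is a minimal NumPy sketch (the sample models and the tolerance are illustrative assumptions) contrasting a circularly symmetric Gaussian vector with a non-constant real random vector:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def is_proper(samples, tol=1e-2):
    """Check the three conditions on sample estimates: zero mean, finite variance, E[Z Z^T] = 0."""
    mean_ok = np.abs(samples.mean(axis=0)).max() < tol
    var_ok = np.isfinite(np.var(samples, axis=0)).all()
    pseudo = samples.T @ samples / samples.shape[0]
    return mean_ok and var_ok and np.abs(pseudo).max() < tol

Z = (rng.standard_normal((N, 3)) + 1j * rng.standard_normal((N, 3))) / np.sqrt(2)
X = rng.standard_normal((N, 3)).astype(complex)   # a non-constant real random vector

print(is_proper(Z))   # True: circularly symmetric with finite variance, hence proper
print(is_proper(X))   # False: E[X X^T] equals the (nonzero) covariance of X
</syntaxhighlight>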

Properties
  • A complex random vector <math>\mathbf{Z}</math> is proper if, and only if, for all (deterministic) vectors <math> \mathbf{c} \in \mathbb{C}^n</math> the complex random variable <math>\mathbf{c}^T \mathbf{Z}</math> is proper.[1]
  • Linear transformations of proper complex random vectors are proper, i.e. if <math>\mathbf{Z}</math> is a proper random vector with <math>n</math> components and <math>A</math> is a deterministic <math>m \times n</math> matrix, then the complex random vector <math>A \mathbf{Z}</math> is also proper.[1]
  • Every circularly symmetric complex random vector with finite variance of all its components is proper.[1]
  • There are proper complex random vectors that are not circularly symmetric.[1]
  • A real random vector is proper if and only if it is constant.
  • Two jointly proper complex random vectors are uncorrelated if and only if their covariance matrix is zero, i.e. if <math>\operatorname{K}_{\mathbf{Z}\mathbf{W}} = 0</math>.

Cauchy-Schwarz inequality

The Cauchy-Schwarz inequality for complex random vectors is

<math>\left| \operatorname{E}[\mathbf{Z}^H \mathbf{W}] \right|^2 \leq \operatorname{E}[\mathbf{Z}^H \mathbf{Z}] \operatorname{E}[\mathbf{W}^H \mathbf{W}]</math>.
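A quick NumPy sketch (with an illustrative pair of correlated vectors) compares sample estimates of both sides of the inequality:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Illustrative correlated pair of 2-component complex random vectors.
Z = rng.standard_normal((N, 2)) + 1j * rng.standard_normal((N, 2))
W = 0.3 * Z + rng.standard_normal((N, 2)) + 1j * rng.standard_normal((N, 2))

# Sample estimates of the expectations appearing in the inequality.
lhs = np.abs(np.mean(np.sum(Z.conj() * W, axis=1))) ** 2                      # |E[Z^H W]|^2
rhs = np.mean(np.sum(np.abs(Z) ** 2, axis=1)) * np.mean(np.sum(np.abs(W) ** 2, axis=1))
print(lhs <= rhs)   # True
</syntaxhighlight>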

Characteristic function

The characteristic function of a complex random vector <math> \mathbf{Z} </math> with <math> n </math> components is a function <math> \mathbb{C}^n \to \mathbb{C} </math> defined by:[1]

<math> \varphi_{\mathbf{Z}}(\mathbf{\omega}) = \operatorname{E} \left [ e^{i\Re{(\mathbf{\omega}^H \mathbf{Z})}} \right ] = \operatorname{E} \left [ e^{i( \Re{(\omega_1)}\Re{(Z_1)} + \Im{(\omega_1)}\Im{(Z_1)} + \cdots + \Re{(\omega_n)}\Re{(Z_n)} + \Im{(\omega_n)}\Im{(Z_n)} )} \right ]</math>
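The expectation can be estimated by Monte Carlo. The following is a minimal NumPy sketch (the circularly symmetric Gaussian input and the evaluation point <math>\mathbf{\omega}</math> are illustrative assumptions):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
N, n = 200_000, 2

# Standard circularly symmetric complex Gaussian with E[Z Z^H] = I (illustrative).
Z = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)

def char_fn(samples, omega):
    """Monte Carlo estimate of E[exp(i Re(omega^H Z))]."""
    return np.mean(np.exp(1j * np.real(samples @ omega.conj())))

omega = np.array([1.0 + 0.5j, -0.3j])
estimate = char_fn(Z, omega)

# For this Gaussian the characteristic function is exp(-||omega||^2 / 4).
print(estimate, np.exp(-np.linalg.norm(omega) ** 2 / 4))
</syntaxhighlight>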

References

Шаблон:Reflist