Centering matrix

In mathematics and multivariate statistics, the centering matrix[1] is a symmetric and idempotent matrix which, when multiplied with a vector, has the same effect as subtracting the mean of the components of the vector from every component of that vector.

Definition

The centering matrix of size n is defined as the n-by-n matrix

<math>C_n = I_n - \tfrac{1}{n}J_n </math>

where <math>I_n\,</math> is the identity matrix of size n and <math>J_n</math> is an n-by-n matrix of all 1's.

For example,

<math>C_1 = \begin{bmatrix} 0 \end{bmatrix},</math>

<math>C_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} - \frac{1}{2}\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{2} & -\tfrac{1}{2} \\ -\tfrac{1}{2} & \tfrac{1}{2} \end{bmatrix},</math>

<math>C_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} - \frac{1}{3}\begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} \tfrac{2}{3} & -\tfrac{1}{3} & -\tfrac{1}{3} \\ -\tfrac{1}{3} & \tfrac{2}{3} & -\tfrac{1}{3} \\ -\tfrac{1}{3} & -\tfrac{1}{3} & \tfrac{2}{3} \end{bmatrix}.</math>
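As an illustrative sketch (not part of the original article), the definition translates directly into code; `centering_matrix` is a hypothetical helper name, and exact fractions are used so the entries match the examples above:

```python
from fractions import Fraction

def centering_matrix(n):
    # C_n = I_n - (1/n) J_n, built entrywise with exact fractions
    return [[(1 if i == j else 0) - Fraction(1, n) for j in range(n)]
            for i in range(n)]

C3 = centering_matrix(3)
# Diagonal entries equal 2/3 and off-diagonal entries equal -1/3,
# matching the C_3 example above; every row sums to zero.
```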

Properties

Given a column vector <math>\mathbf{v}</math> of size n, the centering property of <math>C_n\,</math> can be expressed as

<math>C_n\,\mathbf{v} = \mathbf{v} - (\tfrac{1}{n}J_{n,1}^\textrm{T}\mathbf{v})J_{n,1}</math>

where <math>J_{n,1}</math> is a column vector of ones and <math>\tfrac{1}{n}J_{n,1}^\textrm{T}\mathbf{v}</math> is the mean of the components of <math>\mathbf{v}\,</math>.
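A small sketch of this property, using exact rational arithmetic (the helper names are illustrative, not from the article): multiplying by <math>C_n</math> gives the same result as subtracting the component mean directly.

```python
from fractions import Fraction

def centering_matrix(n):
    # C_n = I_n - (1/n) J_n
    return [[(1 if i == j else 0) - Fraction(1, n) for j in range(n)]
            for i in range(n)]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

v = [Fraction(x) for x in (3, 5, 10)]
mean = sum(v) / len(v)                        # mean of the components: 6
direct = [x - mean for x in v]                # subtract the mean from each component
via_Cn = matvec(centering_matrix(len(v)), v)  # multiply by C_n
assert direct == via_Cn                       # both give [-3, -1, 4]
```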

<math>C_n\,</math> is symmetric positive semi-definite.

<math>C_n\,</math> is idempotent, so that <math>C_n^k=C_n</math> for <math>k=1,2,\ldots</math>. Once the mean has been removed, it is zero, and removing it again has no effect.

<math>C_n\,</math> is singular. The effects of applying the transformation <math>C_n\,\mathbf{v}</math> cannot be reversed.

<math>C_n\,</math> has the eigenvalue 1 of multiplicity n − 1 and eigenvalue 0 of multiplicity 1.

<math>C_n\,</math> has a nullspace of dimension 1, spanned by the vector <math>J_{n,1}</math>.

<math>C_n\,</math> is an orthogonal projection matrix. That is, <math>C_n\mathbf{v}</math> is a projection of <math>\mathbf{v}\,</math> onto the (n − 1)-dimensional subspace that is orthogonal to the nullspace spanned by <math>J_{n,1}</math>. (This is the subspace of all n-vectors whose components sum to zero.)

The trace of <math>C_n</math> is <math>n(n-1)/n = n-1</math>.
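Several of the properties above can be verified mechanically; the following is an illustrative sketch with hypothetical helper names, checking symmetry, idempotence, the nullspace, and the trace for n = 4:

```python
from fractions import Fraction

def centering_matrix(n):
    # C_n = I_n - (1/n) J_n
    return [[(1 if i == j else 0) - Fraction(1, n) for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

n = 4
C = centering_matrix(n)

# Symmetry: C equals its transpose.
assert all(C[i][j] == C[j][i] for i in range(n) for j in range(n))

# Idempotence: C^2 = C.
assert matmul(C, C) == C

# The all-ones vector spans the nullspace: C * 1 = 0.
ones = [[Fraction(1)] for _ in range(n)]
assert matmul(C, ones) == [[Fraction(0)] for _ in range(n)]

# Trace = n - 1.
assert sum(C[i][i] for i in range(n)) == n - 1
```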

Application

Although multiplication by the centering matrix is not a computationally efficient way of removing the mean from a vector, it is a convenient analytical tool. It can be used not only to remove the mean of a single vector, but also of multiple vectors stored in the rows or columns of an m-by-n matrix <math>X</math>.

The left multiplication by <math>C_m</math> subtracts a corresponding mean value from each of the n columns, so that each column of the product <math>C_m\,X</math> has a zero mean. Similarly, the multiplication by <math>C_n</math> on the right subtracts a corresponding mean value from each of the m rows, and each row of the product <math>X\,C_n</math> has a zero mean. The multiplication on both sides creates a doubly centred matrix <math>C_m\,X\,C_n</math>, whose row and column means are equal to zero.
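These centering effects can be checked on a small 2-by-3 example; the sketch below uses illustrative helper names and exact fractions:

```python
from fractions import Fraction

def centering_matrix(n):
    # C_n = I_n - (1/n) J_n
    return [[(1 if i == j else 0) - Fraction(1, n) for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

X = [[Fraction(x) for x in row] for row in ([1, 2, 3], [4, 6, 8])]  # 2-by-3
m, n = len(X), len(X[0])

# Right multiplication: every row of X C_n has zero mean.
XC = matmul(X, centering_matrix(n))
assert all(sum(row) == 0 for row in XC)

# Left multiplication: every column of C_m X has zero mean.
CX = matmul(centering_matrix(m), X)
assert all(sum(CX[i][j] for i in range(m)) == 0 for j in range(n))

# Doubly centred: both row and column means of C_m X C_n are zero.
CXC = matmul(matmul(centering_matrix(m), X), centering_matrix(n))
assert all(sum(row) == 0 for row in CXC)
assert all(sum(CXC[i][j] for i in range(m)) == 0 for j in range(n))
```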

In particular, the centering matrix provides a succinct way to express the scatter matrix <math>S=(X-\mu J_{n,1}^{\mathrm{T}})(X-\mu J_{n,1}^{\mathrm{T}})^{\mathrm{T}}</math> of a data sample <math>X\,</math>, where <math>\mu=\tfrac{1}{n}X J_{n,1}</math> is the sample mean. With the centering matrix, the scatter matrix can be written more compactly as

<math>S=X\,C_n(X\,C_n)^{\mathrm{T}}=X\,C_n\,C_n\,X\,^{\mathrm{T}}=X\,C_n\,X\,^{\mathrm{T}}.</math>
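A sketch checking this identity on a toy sample (observations stored as the n columns of X, as in the formula above; helper names are illustrative):

```python
from fractions import Fraction

def centering_matrix(n):
    # C_n = I_n - (1/n) J_n
    return [[(1 if i == j else 0) - Fraction(1, n) for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

# Two variables, n = 3 observations stored as the columns of X.
X = [[Fraction(x) for x in row] for row in ([1, 2, 6], [2, 4, 6])]
n = len(X[0])

# Direct computation: subtract the sample mean mu = (1/n) X J_{n,1},
# then form S = (X - mu J^T)(X - mu J^T)^T.
mu = [sum(row) / n for row in X]
Xc = [[x - m for x in row] for row, m in zip(X, mu)]
S_direct = matmul(Xc, transpose(Xc))

# Compact form: S = X C_n X^T.
S_compact = matmul(matmul(X, centering_matrix(n)), transpose(X))
assert S_direct == S_compact
```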

<math>C_n</math> is the covariance matrix of the multinomial distribution, in the special case where the parameters of that distribution are <math>k=n</math>, and <math>p_1=p_2=\cdots=p_n=\frac{1}{n}</math>.

References



  1. John I. Marden, Analyzing and Modeling Rank Data, Chapman & Hall, 1995, p. 59.