Hankel matrix

In linear algebra, a Hankel matrix (or catalecticant matrix), named after Hermann Hankel, is a square matrix in which each ascending skew-diagonal from left to right is constant. For example,

<math display=block>\qquad\begin{bmatrix} a & b & c & d & e \\ b & c & d & e & f \\ c & d & e & f & g \\ d & e & f & g & h \\ e & f & g & h & i \\ \end{bmatrix}.</math>

More generally, a Hankel matrix is any <math>n \times n</math> matrix <math>A</math> of the form

<math display=block>A = \begin{bmatrix}
 a_0 & a_1 & a_2 & \ldots & a_{n-1} \\
 a_1 & a_2 & & & \vdots \\
 a_2 & & & & a_{2n-4} \\
 \vdots & & & a_{2n-4} & a_{2n-3} \\
 a_{n-1} & \ldots & a_{2n-4} & a_{2n-3} & a_{2n-2}
\end{bmatrix}.</math>

In terms of components, if the <math>i,j</math> element of <math>A</math> is denoted by <math>A_{ij}</math>, and assuming <math>i \le j</math>, then <math>A_{i,j} = A_{i+k,j-k}</math> for all <math>k = 0, \ldots, j-i.</math>
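A minimal sketch of this definition in NumPy: the matrix is determined by the <math>2n-1</math> values <math>a_0, \ldots, a_{2n-2}</math>, and the <math>(i,j)</math> entry depends only on <math>i+j</math>. (SciPy also provides `scipy.linalg.hankel(c, r)` with the same first-column/last-row calling convention.)

```python
import numpy as np

def hankel(first_col, last_row):
    """Build an n x n Hankel matrix from its first column a_0..a_{n-1}
    and its last row a_{n-1}..a_{2n-2}; the first entry of last_row
    must equal the last entry of first_col."""
    n = len(first_col)
    a = list(first_col) + list(last_row[1:])   # a_0, ..., a_{2n-2}
    return np.array([[a[i + j] for j in range(n)] for i in range(n)])

H = hankel([1, 2, 3], [3, 4, 5])
# Each ascending skew-diagonal is constant: H[i, j] == H[i+1, j-1].
assert (H == np.array([[1, 2, 3], [2, 3, 4], [3, 4, 5]])).all()
```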

Properties

  • Any Hankel matrix is symmetric.
  • Let <math>J_n</math> be the <math>n \times n</math> exchange matrix. If <math>H</math> is an <math>m \times n</math> Hankel matrix, then <math>H = T J_n</math> where <math>T</math> is an <math>m \times n</math> Toeplitz matrix.
    • If <math>T</math> is real symmetric, then <math>H = T J_n</math> will have the same eigenvalues as <math>T</math> up to sign.[1]
  • The Hilbert matrix is an example of a Hankel matrix.
  • The determinant of a Hankel matrix is called a catalecticant.
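The relation <math>H = T J_n</math> between Hankel and Toeplitz matrices can be checked numerically; multiplying by the exchange matrix on the right simply reverses the column order, turning constant descending diagonals into constant ascending ones. A small sketch:

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
# A Toeplitz matrix is constant along descending diagonals: T[i, j] = t[i - j].
t = rng.standard_normal(2 * n - 1)
T = np.array([[t[i - j + n - 1] for j in range(n)] for i in range(n)])

J = np.fliplr(np.eye(n))   # exchange matrix J_n (reversed identity)
H = T @ J                  # right-multiplication reverses the columns of T

# H is Hankel: constant along ascending skew-diagonals.
assert all(np.isclose(H[i, j], H[i + 1, j - 1])
           for i in range(n - 1) for j in range(1, n))
```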

Hankel operator

Given a formal Laurent series <math display="block">f(z) = \sum_{n=-\infty}^N a_n z^n</math> the corresponding Hankel operator is defined as[2]

<math display="block">H_f : \mathbf C[z] \to z^{-1} \mathbf C[[z^{-1}]].</math> This operator takes a polynomial <math>g \in \mathbf C[z]</math> and sends it to the product <math>fg</math>, but discards all powers of <math>z</math> with a non-negative exponent, so as to give an element of <math>z^{-1} \mathbf C[[z^{-1}]]</math>, the formal power series with strictly negative exponents. The map <math>H_f</math> is in a natural way <math>\mathbf C[z]</math>-linear, and its matrix with respect to the elements <math>1, z, z^2, \dots \in \mathbf C[z]</math> and <math>z^{-1}, z^{-2}, \dots \in z^{-1}\mathbf C[[z^{-1}]]</math> is the Hankel matrix

<math display=block>\begin{bmatrix}
 a_1 & a_2 & \ldots \\
 a_2 & a_3 & \ldots \\
 a_3 & a_4 & \ldots \\
 \vdots & \vdots & \ddots
\end{bmatrix}.</math>

Any Hankel matrix arises in this way. A theorem due to Kronecker says that the rank of this matrix is finite precisely when <math>f</math> is a rational function; that is, a fraction of two polynomials

<math display="block">f(z) = \frac{p(z)}{q(z)}.</math>
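Kronecker's theorem can be illustrated numerically. The choice of rational function below is an assumed example: the coefficients <math>a_k = 2^{-k}</math> come from the single-pole function <math>f(z) = 1/(1 - z/2)</math>, so the Hankel matrix <math>[a_{i+j}]</math> factors as an outer product of geometric vectors and has rank 1; two poles give rank 2.

```python
import numpy as np

# Coefficients a_k = 2**-k of the rational f(z) = 1/(1 - z/2):
# [a_{i+j}] = [2**-i * 2**-j] is an outer product, hence rank 1.
a = [2.0 ** -k for k in range(12)]
H = np.array([[a[i + j] for j in range(6)] for i in range(6)])
rank1 = np.linalg.matrix_rank(H)

# A sum of two geometric terms (two simple poles) gives rank 2.
b = [2.0 ** -k + 3.0 ** -k for k in range(12)]
K = np.array([[b[i + j] for j in range(6)] for i in range(6)])
rank2 = np.linalg.matrix_rank(K)

assert (rank1, rank2) == (1, 2)
```

More generally, the rank equals the McMillan degree of <math>f</math>, i.e. the number of poles counted with multiplicity.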

Approximations

We are often interested in approximations of the Hankel operators, possibly by low-order operators. In order to approximate the output of the operator, we can use the spectral norm (operator 2-norm) to measure the error of our approximation. This suggests singular value decomposition as a possible technique to approximate the action of the operator.

Note that the matrix <math>A</math> does not have to be finite. If it is infinite, traditional methods of computing individual singular vectors will not work directly. If we also require the approximant itself to be a Hankel matrix, the existence of such structured best approximations is the subject of AAK theory.
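For a finite Hankel matrix, the truncated SVD gives the best rank-<math>k</math> approximation in the spectral norm (the Eckart–Young theorem), with error equal to the <math>(k+1)</math>-th singular value. A sketch, with an arbitrarily chosen sequence; note the truncation is generally not itself Hankel:

```python
import numpy as np

# Assumed example sequence a_0 .. a_8 defining a 5 x 5 Hankel matrix.
a = np.array([1.0, 0.9, 0.7, 0.4, 0.2, 0.1, 0.05, 0.02, 0.01])
n = 5
H = np.array([[a[i + j] for j in range(n)] for i in range(n)])

U, s, Vt = np.linalg.svd(H)
k = 2
Hk = U[:, :k] * s[:k] @ Vt[:k]      # rank-k truncated SVD
err = np.linalg.norm(H - Hk, 2)     # spectral-norm (operator 2-norm) error

# Eckart-Young: the error equals the first discarded singular value.
assert np.isclose(err, s[k])
```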

Hankel matrix transform


The Hankel matrix transform, or simply Hankel transform, of a sequence <math>b_k</math> is the sequence of determinants of the Hankel matrices formed from <math>b_k</math>. Given an integer <math>n > 0</math>, define the corresponding <math>n \times n</math> Hankel matrix <math>B_n</math> by its elements <math>[B_n]_{i,j} = b_{i+j}.</math> Then the sequence <math>h_n</math> given by <math display="block">h_n = \det B_n</math>

is the Hankel transform of the sequence <math>b_k.</math> The Hankel transform is invariant under the binomial transform of a sequence. That is, if one writes <math display="block">c_n = \sum_{k=0}^n {n \choose k} b_k</math> as the binomial transform of the sequence <math>b_n</math>, then one has <math>\det B_n = \det C_n.</math>
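Both statements can be checked on a classical example: the Hankel transform of the Catalan numbers is the all-ones sequence, and it is unchanged when the Catalan numbers are replaced by their binomial transform.

```python
import math
import numpy as np

def catalan(k):
    return math.comb(2 * k, k) // (k + 1)

def hankel_det(seq, n):
    """Determinant of the n x n Hankel matrix [seq[i + j]], i, j = 0..n-1."""
    B = np.array([[seq[i + j] for j in range(n)] for i in range(n)], float)
    return np.linalg.det(B)

b = [catalan(k) for k in range(10)]          # 1, 1, 2, 5, 14, 42, ...
# Binomial transform c_n = sum_k C(n, k) b_k.
c = [sum(math.comb(m, k) * b[k] for k in range(m + 1)) for m in range(10)]

# Invariance under the binomial transform: det B_n == det C_n.
assert all(np.isclose(hankel_det(b, n), hankel_det(c, n)) for n in range(1, 5))
# The Hankel transform of the Catalan numbers is 1, 1, 1, ...
assert all(np.isclose(hankel_det(b, n), 1.0) for n in range(1, 5))
```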

Applications of Hankel matrices

Hankel matrices are formed when, given a sequence of output data, a realization of an underlying state-space or hidden Markov model is desired.[3] The singular value decomposition of the Hankel matrix provides a means of computing the A, B, and C matrices which define the state-space realization.[4] The Hankel matrix formed from the signal has been found useful for decomposition of non-stationary signals and time-frequency representation.
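The SVD-based realization step can be sketched as follows. This is an assumed Ho–Kalman-style construction for a noise-free single-input single-output system (the true matrices `A_true`, `B_true`, `C_true` are illustrative choices, not from the source): factor the Hankel matrix of Markov parameters <math>h_k = C A^k B</math> into observability and controllability factors, then solve for <math>A</math> using the shifted Hankel matrix.

```python
import numpy as np

# Assumed 2-state system generating the Markov parameters h_k = C A^k B.
A_true = np.array([[0.5, 0.1], [0.0, 0.3]])
B_true = np.array([[1.0], [1.0]])
C_true = np.array([[1.0, 0.0]])
h = [(C_true @ np.linalg.matrix_power(A_true, k) @ B_true).item()
     for k in range(10)]

m = 4
H0 = np.array([[h[i + j] for j in range(m)] for i in range(m)])      # Hankel
H1 = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])  # shifted

U, s, Vt = np.linalg.svd(H0)
r = 2                                      # model order = rank of H0
O = U[:, :r] * np.sqrt(s[:r])              # observability factor
Ctr = np.sqrt(s[:r])[:, None] * Vt[:r]     # controllability factor, H0 = O @ Ctr

A = np.linalg.pinv(O) @ H1 @ np.linalg.pinv(Ctr)   # from H1 = O @ A @ Ctr
B = Ctr[:, :1]                             # first column: A^0 B
C = O[:1, :]                               # first row: C A^0

# The recovered (A, B, C) reproduce the Markov parameters.
h_hat = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(10)]
assert np.allclose(h, h_hat)
```

The recovered state-space matrices agree with the true ones only up to a similarity transform, which is why the check compares Markov parameters rather than the matrices themselves.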

Method of moments for polynomial distributions

The method of moments applied to polynomial distributions results in a Hankel matrix that needs to be inverted in order to obtain the weight parameters of the polynomial distribution approximation.[5]

Positive Hankel matrices and the Hamburger moment problem

The Hankel matrices <math>[b_{i+j}]</math> of a sequence <math>b_k</math> are all positive semidefinite precisely when <math>b_k</math> is the moment sequence of some positive measure on the real line; this characterization is the content of the Hamburger moment problem.

Notes


References

  • Brent R.P. (1999), "Stability of fast algorithms for structured linear systems", Fast Reliable Algorithms for Matrices with Structure (editors—T. Kailath, A.H. Sayed), ch.4 (SIAM).


  1. Template:Cite journal
  2. Template:Harvnb
  3. Template:Cite book
  4. Template:Cite book
  5. J. Munkhammar, L. Mattsson, J. Rydén (2017) "Polynomial probability distribution estimation using the method of moments". PLoS ONE 12(4): e0174573. https://doi.org/10.1371/journal.pone.0174573