Entropy rate
In the mathematical theory of probability, the entropy rate or source information rate is a function assigning an entropy to a stochastic process.
For a strongly stationary process, the conditional entropy of the latest random variable eventually tends towards this rate value.
Definition
A process <math>X</math> with a countable index gives rise to the sequence of its joint entropies <math>H_n(X_1, X_2, \dots X_n)</math>. If the limit exists, the entropy rate is defined as
- <math>H(X) := \lim_{n \to \infty} \tfrac{1}{n} H_n.</math>
Note that given any sequence <math>(a_n)_n</math> with <math>a_0=0</math> and letting <math>\Delta a_k := a_k - a_{k-1}</math>, by telescoping one has <math>a_n={\textstyle \sum_{k=1}^n}\Delta a_k</math>. The entropy rate thus computes the mean of the first <math>n</math> such entropy changes, with <math>n</math> going to infinity. The behaviour of joint entropies from one index to the next also appears explicitly in some characterizations of entropy.
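By the chain rule for joint entropy, each such entropy change is itself a conditional entropy,
- <math>\Delta H_k = H_k - H_{k-1} = H(X_k|X_{k-1}, X_{k-2}, \dots X_1),</math>
so the entropy rate is the limiting average of the conditional entropies <math>H(X_k|X_{k-1}, \dots X_1)</math>.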
Discussion
While <math>X</math> may be understood as a sequence of random variables, the entropy rate <math>H(X)</math> represents the average entropy change per random variable, in the long run.
It can be thought of as a general property of stochastic sources; this is the subject of the asymptotic equipartition property.
For strongly stationary processes
A stochastic process also gives rise to a sequence of conditional entropies, each conditioning on more and more of the preceding random variables. For strongly stationary stochastic processes, the entropy rate equals the limit of that sequence
- <math>H(X) = \lim_{n \to \infty} H(X_n|X_{n-1}, X_{n-2}, \dots X_1)</math>
The quantity given by the limit on the right is also denoted <math>H'(X)</math>; the notation is motivated by the fact that it is again a rate associated with the process, in the above sense.
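A short sketch of why the two limits coincide: by the chain-rule identity above, <math>\tfrac{1}{n}H_n</math> is the average of the first <math>n</math> conditional entropies,
- <math>\tfrac{1}{n}H_n = \tfrac{1}{n}\sum_{k=1}^n H(X_k|X_{k-1}, X_{k-2}, \dots X_1).</math>
For a strongly stationary process this sequence of conditional entropies is non-increasing (by stationarity, together with the fact that conditioning cannot increase entropy) and bounded below by zero, hence convergent; the average of a convergent sequence converges to the same limit, giving <math>H(X) = H'(X)</math>.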
For Markov chains
Since a stochastic process defined by a Markov chain that is irreducible, aperiodic and positive recurrent has a stationary distribution, the entropy rate is independent of the initial distribution.
For example, consider a Markov chain defined on a countable number of states. Given its right stochastic transition matrix <math>P_{ij}</math> and an entropy
- <math>h_i := -\sum_{j} P_{ij} \log P_{ij}</math>
associated with each state, one finds
- <math>\displaystyle H(X) = \sum_{i} \mu_i h_i,</math>
where <math>\mu_i</math> is the asymptotic distribution of the chain.
In particular, it follows that the entropy rate of an i.i.d. stochastic process is the same as the entropy of any individual member in the process.
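As a concrete illustration of the formula above, here is a minimal sketch in Python (the two-state transition matrix is an illustrative assumption, not taken from the text): it computes the asymptotic distribution <math>\mu</math>, the per-state entropies <math>h_i</math>, and the entropy rate <math>H(X)=\sum_i \mu_i h_i</math>.
<syntaxhighlight lang="python">
import numpy as np

# Illustrative right-stochastic transition matrix P[i, j] = P(X_{t+1} = j | X_t = i);
# the particular numbers are an assumption chosen only for this example.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

# Asymptotic (stationary) distribution mu: left eigenvector of P for eigenvalue 1,
# i.e. mu P = mu, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
mu = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
mu = mu / mu.sum()

# Per-state entropies h_i = -sum_j P_ij log2 P_ij (in bits), with 0 log 0 taken as 0.
with np.errstate(divide="ignore", invalid="ignore"):
    terms = np.where(P > 0, P * np.log2(P), 0.0)
h = -terms.sum(axis=1)

# Entropy rate H(X) = sum_i mu_i h_i.
entropy_rate = float(mu @ h)
print("asymptotic distribution:", mu)
print(f"entropy rate: {entropy_rate:.4f} bits per step")
</syntaxhighlight>
For an i.i.d. process every row of <math>P</math> equals the marginal distribution, so every <math>h_i</math> equals the marginal entropy and the computation reduces to the statement above.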
The entropy rate of hidden Markov models (HMM) has no known closed-form solution. However, it has known upper and lower bounds. Let the underlying Markov chain <math>X_{1:\infty}</math> be stationary, and let <math>Y_{1:\infty}</math> be the observable states; then we have<math display="block">H(Y_n|X_1 , Y_{1:n-1} ) \leq H(Y) \leq H(Y_n|Y_{1:n-1} )</math>and in the limit <math>n \to \infty</math>, both sides converge to the middle.[1]
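For small alphabets the two bounds can be evaluated numerically. The following sketch (all HMM parameters are illustrative assumptions) computes the upper bound <math>H(Y_n|Y_{1:n-1}) = H(Y_{1:n}) - H(Y_{1:n-1})</math> and the lower bound <math>H(Y_n|X_1, Y_{1:n-1}) = H(X_1, Y_{1:n}) - H(X_1, Y_{1:n-1})</math> by exact enumeration of observation sequences, so that the two sides can be seen closing in on the entropy rate as <math>n</math> grows.
<syntaxhighlight lang="python">
import itertools
import numpy as np

# Illustrative 2-state, 2-symbol HMM; all parameter values are assumptions for this example.
A = np.array([[0.9, 0.1],   # A[i, j] = P(X_{t+1} = j | X_t = i)
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],   # B[i, k] = P(Y_t = k | X_t = i)
              [0.4, 0.6]])

# Stationary distribution of the underlying chain (mu A = mu).
eigvals, eigvecs = np.linalg.eig(A.T)
mu = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
mu = mu / mu.sum()

def seq_prob(y, x1=None):
    """P(Y_{1:n} = y) via the forward algorithm; if x1 is given, P(X_1 = x1, Y_{1:n} = y)."""
    alpha = mu * B[:, y[0]]                 # alpha[i] = P(X_1 = i, Y_1 = y_1)
    if x1 is not None:
        keep = np.zeros_like(alpha)
        keep[x1] = 1.0
        alpha = alpha * keep                # restrict to the given initial state
    for t in range(1, len(y)):
        alpha = (alpha @ A) * B[:, y[t]]    # forward recursion
    return float(alpha.sum())

def block_entropy(n, with_x1=False):
    """H(Y_{1:n}) or H(X_1, Y_{1:n}) in bits, by exact enumeration (small n only)."""
    probs = []
    for y in itertools.product(range(2), repeat=n):
        if with_x1:
            probs.extend(seq_prob(y, x1) for x1 in range(2))
        else:
            probs.append(seq_prob(y))
    p = np.array(probs)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Lower bound H(Y_n | X_1, Y_{1:n-1}) and upper bound H(Y_n | Y_{1:n-1}), via the chain rule.
for n in range(2, 9):
    upper = block_entropy(n) - block_entropy(n - 1)
    lower = block_entropy(n, True) - block_entropy(n - 1, True)
    print(f"n={n}:  {lower:.5f} <= H(Y) <= {upper:.5f}")
</syntaxhighlight>
Since exact enumeration scales exponentially in <math>n</math>, only small <math>n</math> are feasible this way; the point is merely to illustrate the squeezing behaviour of the two bounds.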
Applications
The entropy rate may be used to estimate the complexity of stochastic processes. It is used in diverse applications ranging from characterizing the complexity of languages, blind source separation, through to optimizing quantizers and data compression algorithms. For example, a maximum entropy rate criterion may be used for feature selection in machine learning.[2]
See also
- Information source (mathematics)
- Markov information source
- Asymptotic equipartition property
- Maximal entropy random walk - chosen to maximize entropy rate
References
- Cover, T. and Thomas, J. (1991) Elements of Information Theory, John Wiley and Sons, Inc. [1]