Approximate entropy
In statistics, approximate entropy (ApEn) is a technique used to quantify the amount of regularity and the unpredictability of fluctuations in time-series data.[1] For example, consider two series of data:
- Series A: (0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, ...), which alternates 0 and 1.
- Series B: (0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, ...), which has either a value of 0 or 1, chosen randomly, each with probability 1/2.
Moment statistics, such as mean and variance, will not distinguish between these two series. Nor will rank order statistics distinguish between these series. Yet series A is perfectly regular: knowing a term has the value of 1 enables one to predict with certainty that the next term will have the value of 0. In contrast, series B is randomly valued: knowing a term has the value of 1 gives no insight into what value the next term will have.
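This is easy to check numerically. The following minimal sketch (the series lengths and the random seed are arbitrary choices made here) shows that the moment statistics of the two series agree, while only series A is term-by-term predictable:

import numpy as np

rng = np.random.default_rng(0)      # fixed seed for reproducibility
A = np.tile([0, 1], 500)            # series A: strict alternation of 0 and 1
B = rng.integers(0, 2, size=1000)   # series B: fair coin flips

# Moment statistics agree: both have mean ~0.5 and variance ~0.25.
print(A.mean(), A.var())            # 0.5 0.25 exactly
print(B.mean(), B.var())            # approximately 0.5 and 0.25

# Yet only series A is predictable: its consecutive terms always differ.
print(np.all(A[1:] != A[:-1]))      # True
print(np.all(B[1:] != B[:-1]))      # False (almost surely)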
Regularity was originally measured by exact regularity statistics, which have mainly centered on various entropy measures.[1] However, accurate entropy calculation requires vast amounts of data, and the results are greatly influenced by system noise,[2] so it is not practical to apply these methods to experimental data. ApEn was developed by Steve M. Pincus to handle these limitations by modifying an exact regularity statistic, Kolmogorov–Sinai entropy. ApEn was initially developed to analyze medical data, such as heart rate,[1] and its applications later spread to finance,[3] physiology,[4] human factors engineering,[5] and climate sciences.[6]
Algorithm
A comprehensive step-by-step tutorial with an explanation of the theoretical foundations of Approximate Entropy is available.[7] The algorithm is:
- Step 1
- Assume a time series of data <math>u(1), u(2),\ldots, u(N)</math>. These are <math>N</math> raw data values from measurements equally spaced in time.
- Step 2
- Let <math>m \in \mathbb{Z}^+</math> be a positive integer, with <math>m \leq N</math>, which represents the length of a run of data (essentially a window).
Let <math>r \in \mathbb{R}^+</math> be a positive real number, which specifies a filtering level.
Let <math>n=N-m+1</math>.
- Step 3
- Define <math>\mathbf{x}(i) = \big[u(i),u(i+1),\ldots,u(i+m-1)\big]</math> for each <math>i</math> where <math>1 \leq i \leq n</math>. In other words, <math>\mathbf{x}(i)</math> is an <math>m</math>-dimensional vector that contains the run of data starting with <math>u(i)</math>.
Define the distance between two vectors <math>\mathbf{x}(i)</math> and <math>\mathbf{x}(j)</math> as the maximum of the distances between their respective components, given by
- <math>\begin{align}
d[\mathbf{x}(i),\mathbf{x}(j)] & = \max_k \big(|\mathbf{x}(i)_k - \mathbf{x}(j)_k|\big) \\
& = \max_k \big(|u(i+k-1) - u(j+k-1)|\big)
\end{align}</math>
- for <math>1 \leq k \leq m</math>.
- Step 4
- Define a count <math>C^m_i</math> as
- <math> C_i^m (r) = {(\text{number of } j \text{ such that } d[\mathbf{x}(i),\mathbf{x}(j)] \leq r) \over n}</math>
- for each <math>i</math> where <math>1 \leq i \leq n</math>, with <math>j</math> ranging over <math>1 \leq j \leq n</math>. Note that since <math>j</math> takes on all values between 1 and <math>n</math>, a match will be counted when <math>j=i</math> (i.e. when the test subsequence, <math>\mathbf{x}(j)</math>, is matched against itself, <math>\mathbf{x}(i)</math>).
- Step 5
- Define
- <math> \phi ^m (r) = {1 \over n} \sum_{i=1}^{n}\log (C_i^m (r))</math>
- where <math>\log</math> is the natural logarithm, for fixed <math>m</math>, <math>r</math>, and <math>n</math> as set in Step 2.
- Step 6
- Define approximate entropy (<math>\mathrm{ApEn}</math>) as
- <math> \mathrm{ApEn}(m,r,N)(u) = \phi ^m (r) - \phi^{m+1} (r) </math>
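The six steps translate almost directly into NumPy. The following is a minimal vectorized sketch (the function name apen and the broadcast pairwise-distance computation are choices made here, not part of the original algorithm statement); a plain-Python version appears under Python implementation below.

import numpy as np

def apen(u, m, r):
    # A sketch of Steps 1-6; not tuned for memory (forms an n x n x m array).
    u = np.asarray(u, dtype=float)                   # Step 1: the raw series
    N = len(u)

    def phi(m):
        n = N - m + 1                                # Step 2
        x = np.array([u[i:i + m] for i in range(n)]) # Step 3: runs of length m
        # Step 3: Chebyshev distance between every pair of runs
        d = np.abs(x[:, None, :] - x[None, :, :]).max(axis=2)
        C = (d <= r).sum(axis=1) / n                 # Step 4: self-match included
        return np.log(C).mean()                      # Step 5

    return phi(m) - phi(m + 1)                       # Step 6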
- Parameter selection
- Typically, choose <math>m=2</math> or <math>m=3</math>, whereas <math>r</math> depends greatly on the application.
An implementation on Physionet,[8] which is based on Pincus,[2] uses <math>d[\mathbf{x}(i), \mathbf{x}(j)] < r</math> instead of <math>d[\mathbf{x}(i), \mathbf{x}(j)] \le r</math> in Step 4. The strict inequality matters only for artificially constructed examples and is usually not a concern in practice.
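A common rule of thumb in the applied literature, offered here as general guidance rather than as part of the original algorithm, is to set <math>r</math> to roughly 0.2 times the standard deviation of the data, so that the filtering level scales with the data:

import numpy as np

u = np.array([85, 80, 89] * 17, dtype=float)
r = 0.2 * np.std(u)   # rule-of-thumb filtering level, ~20% of the SD
print(r)              # about 0.74 for this sequence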
Example
Consider a sequence of <math>N=51 </math> samples of heart rate equally spaced in time:
- <math> \ S_N = \{85, 80, 89, 85, 80, 89, \ldots\} </math>
Note the sequence is periodic with a period of 3. Let's choose <math>m=2</math> and <math>r=3</math> (for this periodic sequence, the values of <math>m</math> and <math>r</math> can be varied without affecting the result).
Form a sequence of vectors:
- <math>\begin{align}
\mathbf{x}(1) & = [u(1) \ u(2)] = [85 \ 80]\\
\mathbf{x}(2) & = [u(2) \ u(3)] = [80 \ 89]\\
\mathbf{x}(3) & = [u(3) \ u(4)] = [89 \ 85]\\
\mathbf{x}(4) & = [u(4) \ u(5)] = [85 \ 80]\\
& \ \ \vdots
\end{align}</math>
Distance is calculated repeatedly as follows. In the first calculation,
- <math>\ d[\mathbf{x}(1), \mathbf{x}(1)]=\max_k |\mathbf{x}(1)_k - \mathbf{x}(1)_k|=0 </math> which is less than <math>r </math>.
In the second calculation, note that <math>|u(2)-u(3)| > |u(1)-u(2)|</math>, so
- <math>\ d[\mathbf{x}(1), \mathbf{x}(2)]=\max_k |\mathbf{x}(1)_k-\mathbf{x}(2)_k|=|u(2)-u(3)|=9 </math> which is greater than <math>r </math>.
Similarly,
- <math>\begin{align}
d[\mathbf{x}(1), \mathbf{x}(3)] & = |u(2)-u(4)| = 5 > r\\
d[\mathbf{x}(1), \mathbf{x}(4)] & = |u(1)-u(4)| = |u(2)-u(5)| = 0 < r\\
& \ \ \vdots
\end{align}</math>
The result is a total of 17 terms <math>\mathbf{x}(j)</math> such that <math>d[\mathbf{x}(1), \mathbf{x}(j)] \le r</math>, namely <math>\mathbf{x}(1), \mathbf{x}(4), \mathbf{x}(7), \ldots, \mathbf{x}(49)</math>. In these cases, <math>C^m_i(r)</math> is
- <math>\ C_1^2 (3)=\frac{17}{50}</math>
- <math>\ C_2^2 (3)=\frac{17}{50}</math>
- <math>\ C_3^2 (3)=\frac{16}{50}</math>
- <math>\ C_4^2 (3)=\frac{17}{50}\ \cdots</math>
Note that in Step 4, <math>1 \leq j \leq n</math> and here <math>n = 50</math>, so the matches for <math>\mathbf{x}(3)</math> stop at <math>\mathbf{x}(48)</math>: the terms <math>\mathbf{x}(j)</math> such that <math>d[\mathbf{x}(3), \mathbf{x}(j)] \leq r</math> are <math>\mathbf{x}(3), \mathbf{x}(6), \mathbf{x}(9), \ldots, \mathbf{x}(48)</math>, for a total of 16.
At the end of these calculations, we have
- <math>\phi^2 (3) = {1 \over 50} \sum_{i=1}^{50}\log(C_i^2(3))\approx-1.0982 </math>
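For this periodic sequence the sum can be evaluated directly: of the 50 windows, 34 (those starting at <math>i \equiv 1, 2 \pmod 3</math>) have <math>C_i^2(3) = 17/50</math> and the remaining 16 have <math>C_i^2(3) = 16/50</math>, so
- <math>\phi^2(3) = \frac{1}{50}\left[34 \log\frac{17}{50} + 16 \log\frac{16}{50}\right] \approx -1.0982</math>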
Then we repeat the above steps for <math>m=3 </math>. First form a sequence of vectors:
- <math>\begin{align}
\mathbf{x}(1) & = [u(1) \ u(2) \ u(3)] = [85 \ 80 \ 89]\\
\mathbf{x}(2) & = [u(2) \ u(3) \ u(4)] = [80 \ 89 \ 85]\\
\mathbf{x}(3) & = [u(3) \ u(4) \ u(5)] = [89 \ 85 \ 80]\\
\mathbf{x}(4) & = [u(4) \ u(5) \ u(6)] = [85 \ 80 \ 89]\\
& \ \ \vdots
\end{align}</math>
By calculating the distances between vectors <math>\mathbf{x}(i)</math> and <math>\mathbf{x}(j)</math> for <math>1 \le i, j \le 49</math>, we find that the vectors satisfying the filtering level have the following characteristic:
- <math>d[\mathbf{x}(i), \mathbf{x}(i+3)]=0<r </math>
Therefore,
- <math>\ C_1^3 (3)=\frac{17}{49}</math>
- <math>\ C_2^3 (3)=\frac{16}{49}</math>
- <math>\ C_3^3 (3)=\frac{16}{49}</math>
- <math>\ C_4^3 (3)=\frac{17}{49}\ \cdots</math>
At the end of these calculations, we have
- <math>\phi^3 (3)={1 \over 49} \sum_{i=1}^{49}\log(C_i^3(3))\approx-1.0982 </math>
Finally,
- <math> \mathrm{ ApEn}=\phi^2 (3)-\phi^3 (3)\approx0.000010997</math>
The value is very small, which implies the sequence is regular and predictable, consistent with the observation.
Python implementation
import numpy as np


def ApEn(U, m, r) -> float:
    """Approximate entropy of the series U for run length m and filter level r."""

    def _maxdist(x_i, x_j):
        # Step 3: Chebyshev distance between two runs of data.
        return max(abs(ua - va) for ua, va in zip(x_i, x_j))

    def _phi(m):
        # Step 3: form the n = N - m + 1 runs of length m.
        x = [[U[j] for j in range(i, i + m)] for i in range(N - m + 1)]
        # Step 4: for each run, the fraction of runs within distance r
        # (each run matches itself, so no count is ever zero).
        C = [
            len([1 for x_j in x if _maxdist(x_i, x_j) <= r]) / (N - m + 1.0)
            for x_i in x
        ]
        # Step 5: average the logarithms of the counts.
        return (N - m + 1.0) ** (-1) * sum(np.log(C))

    N = len(U)
    # Step 6 (up to sign): ApEn(m, r, N) = phi_m - phi_{m+1}.
    return abs(_phi(m + 1) - _phi(m))
Usage example:
>>> U = np.array([85, 80, 89] * 17)
>>> print(ApEn(U, 2, 3))
1.0996541105257052e-05
>>> randU = np.random.choice([85, 80, 89], size=17*3)
>>> print(ApEn(randU, 2, 3))
0.8626664154888908
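Because randU is drawn at random, the second value will vary from run to run; the point of the comparison is only that it is far larger than the near-zero value obtained for the periodic sequence.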
MATLAB implementation
- Fast Approximate Entropy from MATLAB Central
- approximateEntropy
Interpretation
The presence of repetitive patterns of fluctuation in a time series renders it more predictable than a time series in which such patterns are absent. ApEn reflects the likelihood that similar patterns of observations will not be followed by additional similar observations.[9] A time series containing many repetitive patterns has a relatively small ApEn; a less predictable process has a higher ApEn.
Advantages
The advantages of ApEn include:[2]
- Lower computational demand. ApEn can be designed to work for small data samples (<math> N < 50</math> points) and can be applied in real time.
- Less effect from noise. If data is noisy, the ApEn measure can be compared to the noise level in the data to determine what quality of true information may be present in the data.
Limitations
The ApEn algorithm counts each sequence as matching itself to avoid the occurrence of <math> \log(0)</math> in the calculations. This step might introduce bias in ApEn, which causes ApEn to have two poor properties in practice:[10]
- ApEn is heavily dependent on the record length and is uniformly lower than expected for short records (see the sketch after this list).
- It lacks relative consistency. That is, if ApEn of one data set is higher than that of another, it should, but does not, remain higher for all conditions tested.
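The record-length dependence can be checked numerically with the ApEn function from the Python implementation above. In this sketch (the three-level process, record lengths, and trial count are arbitrary choices made here), shorter random records average a noticeably lower ApEn than longer records drawn from the same process:

import numpy as np

rng = np.random.default_rng(0)

def mean_apen(n, trials=20):
    # Average ApEn over random sequences of length n drawn from the
    # same three-level process used in the worked example above.
    return np.mean([ApEn(rng.choice([85, 80, 89], size=n), 2, 3)
                    for _ in range(trials)])

print(mean_apen(50))   # short records: biased well below ln(3) ~ 1.0986
print(mean_apen(400))  # longer records: closer to the asymptotic value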
Applications
ApEn has been applied to classify electroencephalography (EEG) in psychiatric diseases, such as schizophrenia,[11] epilepsy,[12] and addiction.[13]
See also
References
- ↑ 1.0 1.1 1.2 Template:Cite journal
- ↑ 2.0 2.1 2.2 Template:Cite journal
- ↑ Template:Cite journal
- ↑ Template:Cite journal
- ↑ Template:Cite journal
- ↑ Template:Cite journal
- ↑ Template:Cite journal
- ↑ Template:Cite web
- ↑ Template:Cite journal
- ↑ Template:Cite journal
- ↑ Template:Cite journal
- ↑ Template:Cite journal
- ↑ Template:Cite journal