Fisher transformation
In statistics, the Fisher transformation (or Fisher z-transformation) of a Pearson correlation coefficient is its inverse hyperbolic tangent (artanh). When the sample correlation coefficient r is near 1 or −1, its distribution is highly skewed, which makes it difficult to estimate confidence intervals and apply tests of significance for the population correlation coefficient ρ.[1][2][3] The Fisher transformation solves this problem by yielding a variable whose distribution is approximately normal, with a variance that is stable over different values of r.
Definition
Given a set of N bivariate sample pairs (Xi, Yi), i = 1, ..., N, the sample correlation coefficient r is given by
- <math>r = \frac{\operatorname{cov}(X,Y)}{\sigma_X \sigma_Y} = \frac{\sum ^N _{i=1}(X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum ^N _{i=1}(X_i - \bar{X})^2} \sqrt{\sum ^N _{i=1}(Y_i - \bar{Y})^2}}.</math>
Here <math>\operatorname{cov}(X,Y)</math> stands for the covariance between the variables <math>X</math> and <math>Y</math> and <math>\sigma</math> stands for the standard deviation of the respective variable. Fisher's z-transformation of r is defined as
- <math>z = {1 \over 2}\ln\left({1+r \over 1-r}\right) = \operatorname{artanh}(r),</math>
where "ln" is the natural logarithm function and "artanh" is the inverse hyperbolic tangent function.
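The two formulas above can be evaluated directly. A minimal sketch, using made-up sample values for illustration:

```python
import math

# Hypothetical sample pairs (X_i, Y_i), invented for this example.
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [1.2, 1.9, 3.2, 3.8, 5.1]
N = len(X)

# Sample correlation coefficient r, term by term as in the definition.
mx = sum(X) / N
my = sum(Y) / N
cov = sum((x - mx) * (y - my) for x, y in zip(X, Y))
sx = math.sqrt(sum((x - mx) ** 2 for x in X))
sy = math.sqrt(sum((y - my) ** 2 for y in Y))
r = cov / (sx * sy)

# Fisher z-transformation: z = (1/2) ln((1 + r)/(1 - r)) = artanh(r).
z = 0.5 * math.log((1 + r) / (1 - r))
print(r, z, math.atanh(r))  # z and math.atanh(r) agree to floating-point precision
```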
If (X, Y) has a bivariate normal distribution with correlation ρ and the pairs (Xi, Yi) are independent and identically distributed, then z is approximately normally distributed with mean
- <math>{1 \over 2}\ln\left({{1+\rho} \over {1-\rho}}\right),</math>
and standard deviation
- <math>{1 \over \sqrt{N-3}},</math>
where N is the sample size, and ρ is the true correlation coefficient.
This transformation, and its inverse
- <math>r = \frac{\exp(2z)-1}{\exp(2z)+1} = \operatorname{tanh}(z),</math>
can be used to construct a large-sample confidence interval for the population correlation ρ using standard normal theory and derivations. See also application to partial correlation.
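A minimal sketch of this confidence-interval construction (the helper name and the inputs below are illustrative, not from the source):

```python
import math

def fisher_ci(r, N, conf=0.95):
    """Large-sample confidence interval for the population correlation rho,
    based on z = artanh(r) being ~ Normal(artanh(rho), 1/(N - 3))."""
    # Two-sided standard-normal quantiles for a few common levels.
    zcrit = {0.90: 1.644854, 0.95: 1.959964, 0.99: 2.575829}[conf]
    z = math.atanh(r)
    se = 1.0 / math.sqrt(N - 3)
    # Interval on the z scale, then mapped back with the inverse tanh(z).
    return math.tanh(z - zcrit * se), math.tanh(z + zcrit * se)

lo, hi = fisher_ci(0.6, 30)
```

Note that the interval is symmetric on the z scale but, after mapping back through tanh, asymmetric on the r scale, reflecting the skewness the transformation removes.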
Derivation
Hotelling gives a concise derivation of the Fisher transformation.[4]
To derive the Fisher transformation, one starts by considering an arbitrary increasing, twice-differentiable function of <math>r</math>, say <math>G(r)</math>. Finding the first term in the large-<math>N</math> expansion of the corresponding skewness <math>\kappa_3</math> results[5] in
- <math>\kappa_3=\frac{6\rho -3(1-\rho ^{2})G^{\prime \prime }(\rho )/G^{\prime }(\rho )}{\sqrt{N}}+O(N^{-3/2}).</math>
Setting <math>\kappa_3=0</math> and solving the corresponding differential equation for <math>G</math> yields <math>G(\rho)=\operatorname{artanh}(\rho)</math>, the inverse hyperbolic tangent function.
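Concretely, setting the numerator of the leading term to zero gives the differential equation
- <math>\frac{G''(\rho)}{G'(\rho)} = \frac{2\rho}{1-\rho^2},</math>
and integrating once gives <math>G'(\rho) = c/(1-\rho^2)</math>; integrating again gives <math>G(\rho) = c\operatorname{artanh}(\rho) + d</math>, which is the inverse hyperbolic tangent up to an affine transformation that does not affect the resulting statistic.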
Similarly expanding the mean m and variance v of <math>\operatorname{artanh}(r)</math>, one gets
- <math>m = \operatorname{artanh}(\rho )+\frac{\rho }{2N}+O(N^{-2})</math>
and
- <math>v = \frac{1}{N}+\frac{6-\rho ^{2}}{2N^{2}}+O(N^{-3})</math>
respectively.
The extra terms are not part of the usual Fisher transformation. For large values of <math>\rho </math> and small values of <math>N</math> they represent a large improvement of accuracy at minimal cost, although they greatly complicate the computation of the inverse – a closed-form expression is not available. The near-constant variance of the transformation is the result of removing its skewness – the actual improvement is achieved by the latter, not by the extra terms. Including the extra terms, i.e., computing <math>(z-m)/\sqrt{v}</math>, yields:
- <math>\frac{z-\operatorname{artanh}(\rho )-\frac{\rho }{2N}}{\sqrt{\frac{1}{N}+\frac{6-\rho ^{2}}{2N^{2}}}}</math>
which has, to an excellent approximation, a standard normal distribution.[6]
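A sketch of this skew-corrected standardization, with the higher-order mean and variance terms written out as above (the function name is illustrative):

```python
import math

def standardized_z(r, rho, N):
    """Standardize z = artanh(r) using Hotelling's higher-order terms:
    m = artanh(rho) + rho/(2N) and v = 1/N + (6 - rho^2)/(2N^2)."""
    z = math.atanh(r)
    m = math.atanh(rho) + rho / (2 * N)
    v = 1.0 / N + (6.0 - rho ** 2) / (2.0 * N ** 2)
    # To an excellent approximation, this ratio is standard normal.
    return (z - m) / math.sqrt(v)
```

When r equals ρ the statistic is close to zero (off only by the small-order mean correction), and the leading 1/N term in v recovers the usual large-sample variance.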
Application
The application of Fisher's transformation can be enhanced using a software calculator as shown in the figure. Assuming that the r-squared value found is 0.80, that there are 30 data points, and accepting a 90% confidence interval, the r-squared value in another random sample from the same population may range from 0.588 to 0.921. When an observed r-squared falls outside this range, the sample is taken to come from a population with a different correlation.
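The kind of computation such a calculator performs can be sketched as follows. This is an assumption-laden illustration: it treats 0.80 as the r-squared value (so r = √0.80) and uses the standard z-interval from the Definition section; the exact endpoints quoted above depend on the conventions of the specific tool.

```python
import math

# Assumed inputs: r-squared = 0.80 treated as r = sqrt(0.80); N = 30 points.
r = math.sqrt(0.80)
N = 30
z = math.atanh(r)
se = 1.0 / math.sqrt(N - 3)
zc = 1.644854  # two-sided 90% standard-normal quantile

# Interval on the z scale, mapped back to r, then squared to get r-squared.
lo_r = math.tanh(z - zc * se)
hi_r = math.tanh(z + zc * se)
lo_r2, hi_r2 = lo_r ** 2, hi_r ** 2
print(lo_r2, hi_r2)
```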
Discussion
The Fisher transformation is an approximate variance-stabilizing transformation for r when X and Y follow a bivariate normal distribution. This means that the variance of z is approximately constant for all values of the population correlation coefficient ρ. Without the Fisher transformation, the variance of r grows smaller as |ρ| gets closer to 1. Since the Fisher transformation is approximately the identity function when |r| < 1/2, it is sometimes useful to remember that the variance of r is well approximated by 1/N as long as |ρ| is not too large and N is not too small. This is related to the fact that the asymptotic variance of r is 1 for bivariate normal data.
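The variance-stabilizing behavior can be checked by simulation. A minimal sketch, using only the standard library (sample sizes, seeds, and helper names are arbitrary choices for the demonstration):

```python
import math
import random

def pearson_r(xs, ys):
    """Sample correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def simulate(rho, n=50, reps=1000, rng=random.Random(0)):
    """Sample correlations r from bivariate normal data with correlation rho."""
    rs = []
    for _ in range(reps):
        xs = [rng.gauss(0, 1) for _ in range(n)]
        ys = [rho * x + math.sqrt(1 - rho * rho) * rng.gauss(0, 1) for x in xs]
        rs.append(pearson_r(xs, ys))
    return rs

r_lo, r_hi = simulate(0.0), simulate(0.9)
var_r_lo, var_r_hi = variance(r_lo), variance(r_hi)          # shrinks as |rho| -> 1
var_z_lo = variance([math.atanh(r) for r in r_lo])           # roughly 1/(n-3)
var_z_hi = variance([math.atanh(r) for r in r_hi])           # roughly 1/(n-3)
```

The variance of r collapses as |ρ| approaches 1, while the variance of z = artanh(r) stays near 1/(N − 3) for both values of ρ.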
The behavior of this transform has been extensively studied since Fisher introduced it in 1915. Fisher himself found the exact distribution of z for data from a bivariate normal distribution in 1921; Gayen in 1951[8] determined the exact distribution of z for data from a bivariate Type A Edgeworth distribution. Hotelling in 1953 calculated the Taylor series expressions for the moments of z and several related statistics[9] and Hawkins in 1989 discovered the asymptotic distribution of z for data from a distribution with bounded fourth moments.[10]
An alternative to the Fisher transformation is to use the exact confidence distribution density for ρ given by[11][12] <math display="block">\pi (\rho | r) = \frac{\Gamma(\nu+1)}{\sqrt{2\pi}\Gamma(\nu + \frac{1}{2})} (1 - r^2)^{\frac{\nu - 1}{2}} \cdot (1 - \rho^2)^{\frac{\nu - 2}{2}} \cdot (1 - r \rho )^{\frac{1-2\nu}{2}} F\!\left(\frac{3}{2},-\frac{1}{2}; \nu + \frac{1}{2}; \frac{1 + r \rho}{2}\right)</math> where <math>F</math> is the Gaussian hypergeometric function and <math>\nu = N-1 > 1</math> .
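The density above can be evaluated numerically. The sketch below implements the Gauss hypergeometric function by direct summation of its series (convergent here since its argument lies in (0, 1)); the function names and the check that the density integrates to one are this example's own choices, not from the source.

```python
import math

def hyp2f1(a, b, c, x, terms=400):
    """Gauss hypergeometric series sum_k (a)_k (b)_k / ((c)_k k!) x^k,
    summed directly; convergent for |x| < 1."""
    s, t = 1.0, 1.0
    for k in range(terms):
        t *= (a + k) * (b + k) / ((c + k) * (k + 1)) * x
        s += t
    return s

def confidence_density(rho, r, nu):
    """Exact confidence density pi(rho | r) quoted above, nu = N - 1."""
    coef = math.gamma(nu + 1) / (math.sqrt(2 * math.pi) * math.gamma(nu + 0.5))
    return (coef
            * (1 - r * r) ** ((nu - 1) / 2)
            * (1 - rho * rho) ** ((nu - 2) / 2)
            * (1 - r * rho) ** ((1 - 2 * nu) / 2)
            * hyp2f1(1.5, -0.5, nu + 0.5, (1 + r * rho) / 2))

# Sanity check: as a density in rho, it should integrate to ~1 over (-1, 1).
n_grid, r, nu = 2000, 0.5, 11
h = 2.0 / n_grid
total = sum(confidence_density(-1 + (i + 0.5) * h, r, nu) * h for i in range(n_grid))
```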
Other uses
While the Fisher transformation is mainly associated with the Pearson product-moment correlation coefficient for bivariate normal observations, it can also be applied to Spearman's rank correlation coefficient in more general cases.[13] A similar result for the asymptotic distribution applies, but with a minor adjustment factor: see the cited article for details.
See also
- Data transformation (statistics)
- Meta-analysis (this transformation is used in meta-analysis for stabilizing the variance)
- Partial correlation
References
- ↑ Template:Cite journal
- ↑ Template:Cite journal
- ↑ Rick Wicklin. "Fisher's transformation of the correlation coefficient." September 20, 2017. https://blogs.sas.com/content/iml/2017/09/20/fishers-transformation-correlation.html. Accessed Feb 15, 2022.
- ↑ Template:Cite journal
- ↑ Template:Cite journal
- ↑ Template:Cite journal
- ↑ r-squared calculator
- ↑ Template:Cite journal
- ↑ Template:Cite journal
- ↑ Template:Cite journal
- ↑ Template:Cite journal
- ↑ Template:Cite journal
- ↑ Template:Cite encyclopedia