Asymptotic theory (statistics)


In statistics, asymptotic theory, or large sample theory, is a framework for assessing properties of estimators and statistical tests. Within this framework, it is often assumed that the sample size <math>n</math> may grow indefinitely; the properties of estimators and tests are then evaluated under the limit of <math>n \to \infty</math>. In practice, a limit evaluation is considered to be approximately valid for large finite sample sizes too.[1]

Overview

Most statistical problems begin with a dataset of size <math>n</math>. The asymptotic theory proceeds by assuming that it is possible (in principle) to keep collecting additional data, so that the sample size grows infinitely, i.e. <math>n \to \infty</math>. Under this assumption, many results can be obtained that are unavailable for samples of finite size. An example is the weak law of large numbers. The law states that for a sequence of independent and identically distributed (IID) random variables <math>X_1, X_2, \ldots</math>, if one value is drawn from each random variable and the average of the first <math>n</math> values is computed as <math>\bar{X}_n</math>, then <math>\bar{X}_n</math> converges in probability to the population mean <math>\operatorname{E}[X_i]</math> as <math>n \to \infty</math>.[2]
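The law can be illustrated numerically. The following minimal Python/NumPy sketch (an illustrative addition; the exponential sampling model and the seed are arbitrary choices, not taken from the cited sources) draws samples of increasing size and shows the sample mean settling near the population mean:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=0)

# Assumed sampling model for illustration: exponential draws with
# scale 1, so the population mean E[X_i] equals 1.
mu = 1.0
for n in [10, 1_000, 100_000]:
    sample = rng.exponential(scale=mu, size=n)
    print(f"n = {n:>7,}: sample mean = {sample.mean():.4f} (E[X_i] = {mu})")
</syntaxhighlight>

As <math>n</math> grows, the printed sample means concentrate around 1, as the law predicts.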

In asymptotic theory, the standard approach is <math>n \to \infty</math>. For some statistical models, slightly different approaches of asymptotics may be used. For example, with panel data, it is commonly assumed that one dimension in the data remains fixed, whereas the other dimension grows: <math>T = \text{constant}</math> and <math>n \to \infty</math>, or vice versa.[2]

Besides the standard approach to asymptotics, several alternative approaches exist.

In many cases, highly accurate results for finite samples can be obtained via numerical methods (i.e. computers); even in such cases, though, asymptotic analysis can be useful.

Modes of convergence of random variables

Further information: Convergence of random variables

Asymptotic properties

Estimators

Consistency

A sequence of estimates is said to be consistent if it converges in probability to the true value of the parameter being estimated:

<math>\hat\theta_n\ \xrightarrow{p}\ \theta_0.</math>

That is, roughly speaking, with an infinite amount of data the estimator (the formula for generating the estimates) would give the correct result for the parameter being estimated with probability arbitrarily close to one.[2]
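Consistency can be checked by simulation: the defining property is that, for any fixed tolerance, the probability of the estimate falling within that tolerance of the true value tends to one. The sketch below (an illustrative addition assuming a Bernoulli sampling model; the tolerance, replication count, and seed are arbitrary choices) estimates that probability by Monte Carlo for the sample proportion:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=1)

theta0 = 0.5   # assumed true parameter: Bernoulli success probability
eps = 0.05     # fixed tolerance around theta0
reps = 2_000   # Monte Carlo replications per sample size

# Estimator: the sample proportion of n Bernoulli(theta0) draws,
# a consistent estimator of theta0.
for n in [50, 500, 5_000]:
    estimates = rng.binomial(n, theta0, size=reps) / n
    inside = np.mean(np.abs(estimates - theta0) < eps)
    print(f"n = {n:>5,}: P(|estimate - theta0| < {eps}) ~ {inside:.3f}")
</syntaxhighlight>

The printed fractions rise toward 1 as <math>n</math> increases, which is exactly the convergence-in-probability statement above.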

Asymptotic distribution

If it is possible to find sequences of non-random constants <math>\{a_n\}</math> and <math>\{b_n\}</math> (possibly depending on the value of <math>\theta_0</math>), and a non-degenerate distribution <math>G</math> such that

<math>b_n(\hat\theta_n - a_n)\ \xrightarrow{d}\ G,</math>

then the sequence of estimators <math>\textstyle\hat\theta_n</math> is said to have the asymptotic distribution <math>G</math>.

Most often, the estimators encountered in practice are asymptotically normal, meaning their asymptotic distribution is the normal distribution, with <math>a_n = \theta_0</math>, <math>b_n = \sqrt{n}</math>, and <math>G = \mathcal{N}(0, V)</math>:

<math>\sqrt{n}(\hat\theta_n - \theta_0)\ \xrightarrow{d}\ \mathcal{N}(0, V).</math>
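This convergence can also be checked by simulation. The sketch below (an illustrative addition; the uniform sampling model is an arbitrary choice, giving <math>\theta_0 = 1/2</math> and <math>V = \operatorname{Var}(X_i) = 1/12</math>) compares the empirical moments of <math>\sqrt{n}(\hat\theta_n - \theta_0)</math> with those of the limiting normal distribution:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=2)

# Assumed model for illustration: theta_hat_n is the sample mean of
# n Uniform(0, 1) draws, so theta0 = 1/2 and V = Var(X_i) = 1/12.
theta0, V = 0.5, 1.0 / 12.0
n, reps = 10_000, 2_000

means = np.array([rng.uniform(size=n).mean() for _ in range(reps)])
z = np.sqrt(n) * (means - theta0)  # scaled estimation errors

print(f"simulated mean     = {z.mean():+.4f} (asymptotic: 0)")
print(f"simulated variance = {z.var():.5f} (asymptotic V = {V:.5f})")
</syntaxhighlight>

A histogram of <math>z</math> would likewise resemble the density of <math>\mathcal{N}(0, 1/12)</math>.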

Asymptotic confidence regions

Asymptotic theorems

See also

References

1. Höpfner, R. (2014). Asymptotic Statistics. Walter de Gruyter. 286 pp.
2. DasGupta, A. (2008). Asymptotic Theory of Statistics and Probability. Springer.

