Hypoexponential distribution


In probability theory, the hypoexponential distribution or the generalized Erlang distribution is a continuous distribution that has found use in the same fields as the Erlang distribution, such as queueing theory, teletraffic engineering and, more generally, stochastic processes. It is called the hypoexponential distribution because it has a coefficient of variation less than one, in contrast to the hyper-exponential distribution, which has a coefficient of variation greater than one, and the exponential distribution, which has a coefficient of variation of exactly one.

Overview

The Erlang distribution is the distribution of the sum of k independent exponential random variables that all have the same rate <math>\lambda</math>. The hypoexponential distribution generalizes this by giving each term its own rate <math>\lambda_{i}</math>, the rate of the <math>i^{\text{th}}</math> exponential random variable. If we have k independent exponentially distributed random variables <math>\boldsymbol{X}_{i}</math>, then the random variable

<math>\boldsymbol{X}=\sum^{k}_{i=1}\boldsymbol{X}_{i}</math>

is hypoexponentially distributed. The hypoexponential distribution has a minimum coefficient of variation of <math>1/\sqrt{k}</math>, attained when all the rates are equal (the Erlang case).
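
The sum representation can be checked numerically. The following is a minimal NumPy sketch, with arbitrarily chosen illustrative rates, that simulates <math>\boldsymbol{X}</math> as a sum of independent exponentials and compares the sample mean and coefficient of variation with the theoretical values:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
rates = [2.0, 5.0, 9.0]          # illustrative rates lambda_1, ..., lambda_k
k = len(rates)

# X is the sum of k independent exponential random variables with the given rates
samples = sum(rng.exponential(scale=1.0 / lam, size=100_000) for lam in rates)

mean = samples.mean()
cv = samples.std() / mean        # sample coefficient of variation

print(mean, sum(1.0 / lam for lam in rates))   # empirical vs. exact mean
print(cv, 1.0 / np.sqrt(k))                    # CV stays above 1/sqrt(k)
</syntaxhighlight>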

Relation to the phase-type distribution

As a consequence of the definition, it is easier to consider this distribution as a special case of the phase-type distribution.[1] The phase-type distribution is the distribution of the time to absorption of a finite-state Markov process. If we have a process with k+1 states, where the first k states are transient and state k+1 is absorbing, then the distribution of the time from the start of the process until the absorbing state is reached is phase-type distributed. This becomes the hypoexponential distribution if the process starts in state 1 and moves skip-free from state i to state i+1 with rate <math>\lambda_{i}</math>, until state k transitions with rate <math>\lambda_{k}</math> to the absorbing state k+1. This can be written in the form of a subgenerator matrix,

<math>
\left[\begin{matrix}-\lambda_{1}&\lambda_{1}&0&\dots&0&0\\
0&-\lambda_{2}&\lambda_{2}&\ddots&0&0\\
\vdots&\ddots&\ddots&\ddots&\ddots&\vdots\\
0&0&\ddots&-\lambda_{k-2}&\lambda_{k-2}&0\\
0&0&\dots&0&-\lambda_{k-1}&\lambda_{k-1}\\
0&0&\dots&0&0&-\lambda_{k}
\end{matrix}\right]\; .
</math>

For simplicity, denote the above matrix <math>\Theta\equiv\Theta(\lambda_{1},\dots,\lambda_{k})</math>. If the vector of probabilities of starting in each of the k states is

<math>\boldsymbol{\alpha}=(1,0,\dots,0)</math>

then <math>Hypo(\lambda_{1},\dots,\lambda_{k})=PH(\boldsymbol{\alpha},\Theta).</math>
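
As a rough sketch of this construction (the rates and the helper name below are illustrative, not from the article), the subgenerator matrix <math>\Theta(\lambda_{1},\dots,\lambda_{k})</math> and the starting vector <math>\boldsymbol{\alpha}</math> can be built with NumPy as follows:

<syntaxhighlight lang="python">
import numpy as np

def subgenerator(rates):
    """Bidiagonal subgenerator Theta(lambda_1, ..., lambda_k)."""
    theta = np.diag([-lam for lam in rates])   # -lambda_i on the diagonal
    theta += np.diag(rates[:-1], k=1)          # lambda_i on the superdiagonal
    return theta

rates = [2.0, 5.0, 9.0]                        # illustrative rates
theta = subgenerator(rates)
alpha = np.zeros(len(rates))
alpha[0] = 1.0                                 # the process starts in state 1
print(theta)
</syntaxhighlight>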

Two parameter case

When the distribution has two parameters (<math>\lambda_1 \neq \lambda_2</math>), the explicit forms of the probability functions and the associated statistics are:[2]

CDF: <math>F(x) = 1 - \frac{\lambda_2}{\lambda_2-\lambda_1}e^{-\lambda_1x} + \frac{\lambda_1}{\lambda_2-\lambda_1}e^{-\lambda_2x}</math>

PDF: <math>f(x) = \frac{\lambda_1\lambda_2}{\lambda_1-\lambda_2}( e^{-x \lambda_2} - e^{-x \lambda_1} )</math>

Mean: <math>\frac{1}{\lambda_1}+\frac{1}{\lambda_2}</math>

Variance: <math>\frac{1}{\lambda_1^2}+\frac{1}{\lambda_2^2}</math>

Coefficient of variation: <math>\frac{\sqrt{\lambda_1^2 + \lambda_2^2}}{ \lambda_1 + \lambda_2 }</math>

The coefficient of variation is always less than 1.

Given the sample mean (<math>\bar{x}</math>) and sample coefficient of variation (<math>c</math>), the parameters <math>\lambda_1</math> and <math>\lambda_2</math> can be estimated as follows:

<math>\lambda_1= \frac{ 2}{ \bar{x} } \left[ 1 + \sqrt{ 1 + 2 ( c^2 - 1 ) } \right]^{-1}</math>

<math>\lambda_2 = \frac{ 2 }{ \bar{x} } \left[ 1 - \sqrt{ 1 + 2 ( c^2 - 1 ) } \right]^{-1}</math>

These estimators can be derived from the method of moments by setting <math>\frac{1}{\lambda_1}+\frac{1}{\lambda_2}=\bar x</math> and <math>\frac{\sqrt{\lambda_1^2+\lambda_2^2}}{\lambda_1+\lambda_2}=c</math>.

The resulting parameters <math>\lambda_1</math> and <math>\lambda_2</math> are real values if <math>c^2\in[0.5,1]</math>.
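
A minimal Python sketch of this moment-based fit (the function name and the rates used for the check are illustrative) recovers the rates from the exact mean and coefficient of variation:

<syntaxhighlight lang="python">
import numpy as np

def fit_two_rates(mean, cv):
    """Moment-based estimates of lambda_1, lambda_2; requires 0.5 <= cv**2 <= 1."""
    root = np.sqrt(1.0 + 2.0 * (cv**2 - 1.0))
    lam1 = 2.0 / (mean * (1.0 + root))
    lam2 = 2.0 / (mean * (1.0 - root))
    return lam1, lam2

# Round-trip check with illustrative rates lambda_1 = 2, lambda_2 = 5
lam1, lam2 = 2.0, 5.0
m = 1 / lam1 + 1 / lam2
c = np.sqrt(1 / lam1**2 + 1 / lam2**2) / m
print(fit_two_rates(m, c))   # recovers (2.0, 5.0) up to floating point
</syntaxhighlight>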

Characterization

A random variable <math>\boldsymbol{X}\sim Hypo(\lambda_{1},\dots,\lambda_{k})</math> has cumulative distribution function given by,

<math>F(x)=1-\boldsymbol{\alpha}e^{x\Theta}\boldsymbol{1}</math>

and density function,

<math>f(x)=-\boldsymbol{\alpha}e^{x\Theta}\Theta\boldsymbol{1}\; ,</math>

where <math>\boldsymbol{1}</math> is a column vector of ones of size k and <math>e^{A}</math> is the matrix exponential of A. When <math>\lambda_{i} \ne \lambda_{j}</math> for all <math>i \ne j</math>, the density function can be written as

<math>f(x) = \sum_{i=1}^k \lambda_i e^{-x \lambda_i} \left(\prod_{j=1, j \ne i}^k \frac{\lambda_j}{\lambda_j - \lambda_i}\right) = \sum_{i=1}^k \ell_i(0) \lambda_i e^{-x \lambda_i}</math> where <math>\ell_1(x), \dots, \ell_k(x)</math> are the Lagrange basis polynomials associated with the points <math>\lambda_1,\dots,\lambda_k</math>.
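
Assuming pairwise distinct rates, the closed-form density can be checked against the phase-type form <math>-\boldsymbol{\alpha}e^{x\Theta}\Theta\boldsymbol{1}</math>. The following sketch uses NumPy with SciPy's matrix exponential and illustrative rates:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

rates = [2.0, 5.0, 9.0]                       # illustrative, pairwise distinct
k = len(rates)
theta = np.diag([-r for r in rates]) + np.diag(rates[:-1], k=1)
alpha = np.zeros(k); alpha[0] = 1.0
ones = np.ones(k)

def pdf_matrix(x):
    # f(x) = -alpha exp(x Theta) Theta 1
    return float(-alpha @ expm(x * theta) @ theta @ ones)

def pdf_closed_form(x):
    # f(x) = sum_i lambda_i e^{-x lambda_i} prod_{j != i} lambda_j / (lambda_j - lambda_i)
    total = 0.0
    for i, li in enumerate(rates):
        weight = np.prod([lj / (lj - li) for j, lj in enumerate(rates) if j != i])
        total += weight * li * np.exp(-x * li)
    return total

print(pdf_matrix(0.3), pdf_closed_form(0.3))  # the two forms agree up to floating point
</syntaxhighlight>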

The distribution has Laplace transform

<math>\mathcal{L}\{f(x)\}=-\boldsymbol{\alpha}(sI-\Theta)^{-1}\Theta\boldsymbol{1}</math>

This can be used to find moments,

<math>E[X^{n}]=(-1)^{n}n!\boldsymbol{\alpha}\Theta^{-n}\boldsymbol{1}\; .</math>
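
A short NumPy sketch of this moment formula, again with illustrative rates, recovers the mean <math>\sum_i 1/\lambda_i</math> and the variance <math>\sum_i 1/\lambda_i^2</math>:

<syntaxhighlight lang="python">
import math
import numpy as np

rates = [2.0, 5.0, 9.0]                        # illustrative rates
k = len(rates)
theta = np.diag([-r for r in rates]) + np.diag(rates[:-1], k=1)
alpha = np.zeros(k); alpha[0] = 1.0
ones = np.ones(k)

def moment(n):
    # E[X^n] = (-1)^n n! alpha Theta^{-n} 1
    theta_pow = np.linalg.matrix_power(np.linalg.inv(theta), n)
    return (-1) ** n * math.factorial(n) * float(alpha @ theta_pow @ ones)

mean = moment(1)
var = moment(2) - mean**2
print(mean, sum(1 / r for r in rates))         # equals the sum of 1/lambda_i
print(var, sum(1 / r**2 for r in rates))       # equals the sum of 1/lambda_i^2
</syntaxhighlight>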

General case

In the general case, the distribution is a sum of exponential random variables with <math>a</math> distinct rates <math>\lambda_1,\lambda_2,\cdots,\lambda_a</math>, where the rate <math>\lambda_j</math> appears in <math>r_j</math> of the terms, for <math>j = 1,2,\cdots,a</math>. The cumulative distribution function for <math>t\geq0</math> is given by

<math>F(t) = 1 - \left(\prod_{j=1}^a \lambda_j^{r_j} \right) \sum_{k=1}^a \sum_{l=1}^{r_k} \frac{\Psi_{k,l}(-\lambda_k) t^{r_k-l} \exp(-\lambda_k t)} {(r_k-l)!(l-1)!} \, ,</math>

with

<math>\Psi_{k,l}(x) = -\frac{\partial^{l-1}}{\partial x^{l-1}} \left(\prod_{j=0,j\neq k}^a \left(\lambda_j+x\right)^{-r_j} \right) ,</math> with the additional convention <math>\lambda_0 = 0, r_0 = 1</math>.[3]
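
As a sketch of how this formula can be evaluated symbolically (the function name and the rate and multiplicity values are illustrative), the following uses SymPy and checks the Erlang special case of a single rate repeated twice:

<syntaxhighlight lang="python">
import sympy as sp

def general_cdf(rates, reps, t):
    """CDF for distinct rates `rates` with multiplicities `reps`, using the lambda_0 = 0, r_0 = 1 convention."""
    x = sp.symbols('x')
    a = len(rates)
    lam = [sp.Integer(0)] + [sp.nsimplify(v) for v in rates]   # lambda_0 = 0
    r = [1] + list(reps)                                       # r_0 = 1
    prefactor = sp.Mul(*[lam[j] ** r[j] for j in range(1, a + 1)])
    total = sp.Integer(0)
    for k in range(1, a + 1):
        prod_k = sp.Mul(*[(lam[j] + x) ** (-r[j]) for j in range(a + 1) if j != k])
        for l in range(1, r[k] + 1):
            psi = (-sp.diff(prod_k, x, l - 1)).subs(x, -lam[k])   # Psi_{k,l}(-lambda_k)
            total += psi * t ** (r[k] - l) * sp.exp(-lam[k] * t) / (
                sp.factorial(r[k] - l) * sp.factorial(l - 1))
    return sp.simplify(1 - prefactor * total)

t = sp.symbols('t', positive=True)
print(general_cdf([3], [2], t))   # Erlang case: 1 - (1 + 3 t) e^{-3 t}
</syntaxhighlight>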

Uses

This distribution has been used in population genetics,[4] cell biology,[5][6] and queuing theory.[7][8]

See also

References


Further reading

  • M. F. Neuts (1981). Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach, Chapter 2: Probability Distributions of Phase Type. Dover Publications Inc.
  • G. Latouche, V. Ramaswami (1999). Introduction to Matrix Analytic Methods in Stochastic Modelling, 1st edition, Chapter 2: PH Distributions. ASA SIAM.
  • Colm A. O'Cinneide (1999). Phase-type distributions: open problems and a few properties. Communications in Statistics – Stochastic Models, 15(4), 731–757.
  • L. Leemis and J. McQueston (2008). Univariate distribution relationships. The American Statistician, 62(1), 45–53.
  • S. Ross (2007). Introduction to Probability Models, 9th edition. New York: Academic Press.

