Bernoulli distribution


Summary of key quantities (from the distribution infobox): excess kurtosis <math>\frac{1 - 6pq}{pq}</math>, entropy <math>-q\ln q - p\ln p</math>, moment-generating function <math>q+pe^t</math>, characteristic function <math>q+pe^{it}</math>, probability-generating function <math>q+pz</math>, and Fisher information <math>\frac{1}{pq}</math>.
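As a quick numerical illustration of these quantities, here is a minimal Python sketch using SciPy; the value <math>p = 0.3</math> is an arbitrary choice:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import bernoulli

p = 0.3      # arbitrary success probability, for illustration only
q = 1 - p

# Entropy in nats: -q*ln(q) - p*ln(p)
print(bernoulli.entropy(p))             # SciPy's built-in entropy
print(-q * np.log(q) - p * np.log(p))   # same value from the formula

# Moment-generating function M(t) = q + p*e^t, evaluated at t = 1
t = 1.0
print(q + p * np.exp(t))

# Fisher information 1/(pq)
print(1 / (p * q))
</syntaxhighlight>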

In probability theory and statistics, the Bernoulli distribution, named after Swiss mathematician Jacob Bernoulli,[1] is the discrete probability distribution of a random variable which takes the value 1 with probability <math>p</math> and the value 0 with probability <math>q = 1-p</math>. Less formally, it can be thought of as a model for the set of possible outcomes of any single experiment that asks a yes–no question. Such questions lead to outcomes that are Boolean-valued: a single bit whose value is success/yes/true/one with probability <math>p</math> and failure/no/false/zero with probability <math>q</math>. It can be used to represent a (possibly biased) coin toss, where 1 and 0 would represent "heads" and "tails", respectively, and <math>p</math> would be the probability of the coin landing on heads (or vice versa, where 1 would represent tails and <math>p</math> the probability of tails). In particular, unfair coins would have <math>p \neq 1/2.</math>

The Bernoulli distribution is a special case of the binomial distribution where a single trial is conducted (so n would be 1 for such a binomial distribution). It is also a special case of the two-point distribution, for which the possible outcomes need not be 0 and 1.[2]
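Since <math>\operatorname{B}(1, p)</math> and <math>\mathrm{Bernoulli}(p)</math> assign the same probabilities, the equivalence is easy to check numerically; a minimal SciPy sketch (the value of <math>p</math> is arbitrary):

<syntaxhighlight lang="python">
from scipy.stats import bernoulli, binom

p = 0.3  # arbitrary success probability
for k in (0, 1):
    # Bernoulli(p) and Binomial(n=1, p) give identical pmf values
    print(k, bernoulli.pmf(k, p), binom.pmf(k, 1, p))
</syntaxhighlight>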

Properties

If <math>X</math> is a random variable with a Bernoulli distribution, then:

<math>\Pr(X=1) = p = 1 - \Pr(X=0) = 1 - q.</math>

The probability mass function <math>f</math> of this distribution, over possible outcomes k, is

<math> f(k;p) = \begin{cases}
  p & \text{if } k = 1, \\
  q = 1-p & \text{if } k = 0.
\end{cases}</math>[3]

This can also be expressed as

<math>f(k;p) = p^k (1-p)^{1-k} \quad \text{for } k\in\{0,1\}</math>

or as

<math>f(k;p)=pk+(1-p)(1-k) \quad \text{for } k\in\{0,1\}.</math>
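All three expressions describe the same function on <math>k\in\{0,1\}</math>, which a few lines of plain Python confirm (the value of <math>p</math> is arbitrary):

<syntaxhighlight lang="python">
p = 0.3  # arbitrary success probability

def pmf_cases(k, p):
    return p if k == 1 else 1 - p        # the case definition above

def pmf_power(k, p):
    return p**k * (1 - p)**(1 - k)       # p^k (1-p)^(1-k)

def pmf_linear(k, p):
    return p*k + (1 - p)*(1 - k)         # pk + (1-p)(1-k)

for k in (0, 1):
    assert pmf_cases(k, p) == pmf_power(k, p) == pmf_linear(k, p)
</syntaxhighlight>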

The Bernoulli distribution is a special case of the binomial distribution with <math>n = 1.</math>[4]

The kurtosis goes to infinity for high and low values of <math>p,</math> but for <math>p=1/2</math> the two-point distributions, including the Bernoulli distribution, have a lower excess kurtosis, namely −2, than any other probability distribution.
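A brief numerical check of this behavior, using the excess-kurtosis expression <math>\frac{1-6pq}{pq}</math> from above and SciPy's built-in moments:

<syntaxhighlight lang="python">
from scipy.stats import bernoulli

for p in (0.5, 0.1, 0.01):   # p = 1/2, then two increasingly extreme values
    q = 1 - p
    print(p, (1 - 6*p*q) / (p*q), float(bernoulli.stats(p, moments='k')))
# At p = 0.5 the excess kurtosis is exactly -2; it grows without bound
# as p approaches 0 or 1.
</syntaxhighlight>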

The Bernoulli distributions for <math>0 \le p \le 1</math> form an exponential family.
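Concretely, the probability mass function can be rewritten in the standard exponential-family form with natural parameter <math>\eta = \ln\frac{p}{1-p}</math> (a routine rewriting of <math>p^k(1-p)^{1-k}</math>, added here for illustration):

<math>f(k;p) = \exp\!\left(k \ln\frac{p}{1-p} + \ln(1-p)\right), \quad k\in\{0,1\}.</math>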

The maximum likelihood estimator of <math>p</math> based on a random sample is the sample mean.
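Equivalently, the maximum likelihood estimate is the observed fraction of ones; a minimal NumPy simulation (sample size, seed, and <math>p</math> are arbitrary choices):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)                  # fixed seed for reproducibility
p_true = 0.3                                    # arbitrary true parameter
sample = rng.binomial(1, p_true, size=10_000)   # i.i.d. Bernoulli(p_true) draws

p_hat = sample.mean()   # the MLE of p is the sample mean
print(p_hat)            # close to 0.3
</syntaxhighlight>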

[Figure: the probability mass function of a Bernoulli distribution together with its corresponding cumulative distribution function.]

Mean

The expected value of a Bernoulli random variable <math>X</math> is

<math>\operatorname{E}[X]=p.</math>

This follows because, for a Bernoulli-distributed random variable <math>X</math> with <math>\Pr(X=1)=p</math> and <math>\Pr(X=0)=q</math>, we find

<math>\operatorname{E}[X] = \Pr(X=1)\cdot 1 + \Pr(X=0)\cdot 0 = p \cdot 1 + q\cdot 0 = p.</math>[3]

Variance

The variance of a Bernoulli-distributed <math>X</math> is

<math>\operatorname{Var}[X] = pq = p(1-p).</math>

We first find

<math>\operatorname{E}[X^2] = \Pr(X=1)\cdot 1^2 + \Pr(X=0)\cdot 0^2 = p \cdot 1^2 + q\cdot 0^2 = p = \operatorname{E}[X].</math>

From this it follows that

<math>\operatorname{Var}[X] = \operatorname{E}[X^2]-\operatorname{E}[X]^2 = \operatorname{E}[X]-\operatorname{E}[X]^2 = p-p^2 = p(1-p) = pq.</math>[3]

From this result it is easy to see that, for any Bernoulli distribution, the variance lies in the interval <math>[0,1/4]</math>.
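A quick numerical confirmation that <math>p(1-p)</math> stays within this interval, peaking at <math>p=1/2</math>:

<syntaxhighlight lang="python">
import numpy as np

ps = np.linspace(0, 1, 101)      # grid over all admissible p
variances = ps * (1 - ps)        # Var[X] = p(1 - p)
print(variances.min(), variances.max())   # 0.0 and 0.25 (the latter at p = 0.5)
</syntaxhighlight>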

Skewness

The skewness is <math>\frac{q-p}{\sqrt{pq}}=\frac{1-2p}{\sqrt{pq}}</math>. When we take the standardized Bernoulli-distributed random variable <math>\frac{X-\operatorname{E}[X]}{\sqrt{\operatorname{Var}[X]}},</math> we find that this random variable attains <math>\frac{q}{\sqrt{pq}}</math> with probability <math>p</math> and attains <math>-\frac{p}{\sqrt{pq}}</math> with probability <math>q</math>. Thus we get

<math>\begin{align}
\gamma_1 &= \operatorname{E} \left[\left(\frac{X-\operatorname{E}[X]}{\sqrt{\operatorname{Var}[X]}}\right)^3\right] \\
&= p \cdot \left(\frac{q}{\sqrt{pq}}\right)^3 + q \cdot \left(-\frac{p}{\sqrt{pq}}\right)^3 \\
&= \frac{1}{\sqrt{pq}^3} \left(pq^3-qp^3\right) \\
&= \frac{pq}{\sqrt{pq}^3} (q-p) \\
&= \frac{q-p}{\sqrt{pq}}.
\end{align}</math>
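The closed form matches SciPy's skewness for the Bernoulli distribution; a brief check (the test values of <math>p</math> are arbitrary):

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import bernoulli

for p in (0.2, 0.5, 0.8):   # arbitrary test values
    q = 1 - p
    print(p, (q - p) / np.sqrt(p*q), float(bernoulli.stats(p, moments='s')))
# Positive skewness for p < 1/2, zero at p = 1/2, negative for p > 1/2.
</syntaxhighlight>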

Higher moments and cumulants

The raw moments are all equal because <math>1^k=1</math> and <math>0^k=0</math>:

<math>\operatorname{E}[X^k] = \Pr(X=1)\cdot 1^k + \Pr(X=0)\cdot 0^k = p \cdot 1 + q\cdot 0 = p = \operatorname{E}[X].</math>
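A direct enumeration over the two outcomes confirms that every raw moment equals <math>p</math>:

<syntaxhighlight lang="python">
p = 0.3  # arbitrary success probability

for k in range(1, 7):
    raw_moment = p * 1**k + (1 - p) * 0**k   # E[X^k] over outcomes {0, 1}
    assert raw_moment == p
</syntaxhighlight>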

The central moment of order <math>k</math> is given by

<math>\mu_k = (1-p)(-p)^k + p(1-p)^k.</math>

The first six central moments are

<math>\begin{align}
\mu_1 &= 0, \\
\mu_2 &= p(1-p), \\
\mu_3 &= p(1-p)(1-2p), \\
\mu_4 &= p(1-p)(1-3p(1-p)), \\
\mu_5 &= p(1-p)(1-2p)(1-2p(1-p)), \\
\mu_6 &= p(1-p)(1-5p(1-p)(1-p(1-p))).
\end{align}</math>
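These six expressions follow from the general formula above; a short symbolic verification with SymPy:

<syntaxhighlight lang="python">
import sympy as sp

p = sp.symbols('p', positive=True)
q = 1 - p

def mu(k):
    # central moment: mu_k = (1-p)(-p)^k + p(1-p)^k
    return q * (-p)**k + p * q**k

claimed = {
    1: sp.Integer(0),
    2: p*q,
    3: p*q*(1 - 2*p),
    4: p*q*(1 - 3*p*q),
    5: p*q*(1 - 2*p)*(1 - 2*p*q),
    6: p*q*(1 - 5*p*q*(1 - p*q)),
}
for k, expr in claimed.items():
    assert sp.simplify(mu(k) - expr) == 0
</syntaxhighlight>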

The higher central moments can be expressed more compactly in terms of <math>\mu_2</math> and <math>\mu_3</math>:

<math>\begin{align}
\mu_4 &= \mu_2 (1-3\mu_2), \\
\mu_5 &= \mu_3 (1-2\mu_2), \\
\mu_6 &= \mu_2 (1-5\mu_2 (1-\mu_2)).
\end{align}</math>

The first six cumulants are

<math>\begin{align}
\kappa_1 &= p, \\
\kappa_2 &= \mu_2, \\
\kappa_3 &= \mu_3, \\
\kappa_4 &= \mu_2 (1-6\mu_2), \\
\kappa_5 &= \mu_3 (1-12\mu_2), \\
\kappa_6 &= \mu_2 (1-30\mu_2 (1-4\mu_2)).
\end{align}</math>
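Both the compact central-moment forms and these cumulant expressions can be verified symbolically, using the standard relations between cumulants and central moments (<math>\kappa_4 = \mu_4 - 3\mu_2^2</math>, <math>\kappa_5 = \mu_5 - 10\mu_2\mu_3</math>, <math>\kappa_6 = \mu_6 - 15\mu_2\mu_4 - 10\mu_3^2 + 30\mu_2^3</math>); a SymPy sketch:

<syntaxhighlight lang="python">
import sympy as sp

p = sp.symbols('p', positive=True)
q = 1 - p

def mu(k):
    return q * (-p)**k + p * q**k   # central moments of Bernoulli(p)

m2, m3 = mu(2), mu(3)

# Compact central-moment forms
assert sp.simplify(mu(4) - m2*(1 - 3*m2)) == 0
assert sp.simplify(mu(5) - m3*(1 - 2*m2)) == 0
assert sp.simplify(mu(6) - m2*(1 - 5*m2*(1 - m2))) == 0

# Cumulants from central moments via the standard relations
k4 = mu(4) - 3*m2**2
k5 = mu(5) - 10*m2*m3
k6 = mu(6) - 15*m2*mu(4) - 10*m3**2 + 30*m2**3
assert sp.simplify(k4 - m2*(1 - 6*m2)) == 0
assert sp.simplify(k5 - m3*(1 - 12*m2)) == 0
assert sp.simplify(k6 - m2*(1 - 30*m2*(1 - 4*m2))) == 0
</syntaxhighlight>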

Related distributions

The Bernoulli distribution is simply <math>\operatorname{B}(1, p)</math>, the binomial distribution with a single trial; it is also written as <math display="inline">\mathrm{Bernoulli}(p).</math>


References

Template:Reflist
