Copula (probability theory)


In probability theory and statistics, a copula is a multivariate cumulative distribution function for which the marginal probability distribution of each variable is uniform on the interval [0, 1]. Copulas are used to describe/model the dependence (inter-correlation) between random variables.[1] Their name, introduced by applied mathematician Abe Sklar in 1959, comes from the Latin for "link" or "tie", similar but unrelated to grammatical copulas in linguistics. Copulas have been used widely in quantitative finance to model and minimize tail risk[2] and portfolio-optimization applications.[3]

Sklar's theorem states that any multivariate joint distribution can be written in terms of univariate marginal distribution functions and a copula which describes the dependence structure between the variables.

Copulas are popular in high-dimensional statistical applications as they allow one to easily model and estimate the distribution of random vectors by estimating marginals and copulae separately. There are many parametric copula families available, which usually have parameters that control the strength of dependence. Some popular parametric copula models are outlined below.

Two-dimensional copulas are known in some other areas of mathematics under the name permutons and doubly-stochastic measures.

Mathematical definition

Consider a random vector <math>(X_1,X_2,\dots,X_d)</math>. Suppose its marginals are continuous, i.e. the marginal CDFs <math>F_i(x) = \Pr[X_i\leq x] </math> are continuous functions. By applying the probability integral transform to each component, the random vector

<math>(U_1,U_2,\dots,U_d)=\left(F_1(X_1),F_2(X_2),\dots,F_d(X_d)\right)</math>

has marginals that are uniformly distributed on the interval [0, 1].

The copula of <math>(X_1,X_2,\dots,X_d)</math> is defined as the joint cumulative distribution function of <math>(U_1,U_2,\dots,U_d)</math>:

<math>C(u_1,u_2,\dots,u_d)=\Pr[U_1\leq u_1,U_2\leq u_2,\dots,U_d\leq u_d].</math>

The copula C contains all information on the dependence structure between the components of <math>(X_1,X_2,\dots,X_d)</math> whereas the marginal cumulative distribution functions <math>F_i</math> contain all information on the marginal distributions of <math>X_i</math>.

The reverse of these steps can be used to generate pseudo-random samples from general classes of multivariate probability distributions. That is, given a procedure to generate a sample <math>(U_1,U_2,\dots,U_d)</math> from the copula function, the required sample can be constructed as

<math>(X_1,X_2,\dots,X_d) = \left(F_1^{-1}(U_1),F_2^{-1}(U_2),\dots,F_d^{-1}(U_d)\right).</math>

The generalized inverses <math>F_i^{-1}</math> are unproblematic almost surely, since the <math>F_i</math> were assumed to be continuous. Furthermore, the above formula for the copula function can be rewritten as:

<math>C(u_1,u_2,\dots,u_d)=\Pr[X_1\leq F_1^{-1}(u_1),X_2\leq F_2^{-1}(u_2),\dots,X_d\leq F_d^{-1}(u_d)] .</math>
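The two transformations above can be illustrated with a short numerical sketch (assuming NumPy and SciPy; the bivariate normal starting point, the correlation value and the target marginals are arbitrary choices for illustration):

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Draw a correlated bivariate normal sample (X1, X2); its marginal CDFs are standard normal.
rho = 0.7
x = rng.multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]], size=10_000)

# Probability integral transform: U_i = F_i(X_i) has uniform marginals on [0, 1],
# while the dependence structure (the copula) is preserved.
u = stats.norm.cdf(x)

# Reverse step: push the uniform sample through other inverse marginal CDFs to obtain
# a sample with exponential and Student-t marginals but the same copula.
y1 = stats.expon.ppf(u[:, 0], scale=2.0)   # F_1^{-1}
y2 = stats.t.ppf(u[:, 1], df=5)            # F_2^{-1}

print("rank correlation is preserved:",
      round(stats.spearmanr(x[:, 0], x[:, 1])[0], 3),
      round(stats.spearmanr(y1, y2)[0], 3))
</syntaxhighlight>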

Definition

In probabilistic terms, <math>C:[0,1]^d\rightarrow [0,1]</math> is a d-dimensional copula if C is a joint cumulative distribution function of a d-dimensional random vector on the unit cube <math>[0,1]^d</math> with uniform marginals.[4]

In analytic terms, <math>C:[0,1]^d\rightarrow [0,1]</math> is a d-dimensional copula if

  • <math>C(u_1,\dots,u_{i-1},0,u_{i+1},\dots,u_d)=0 </math>, the copula is zero if any one of the arguments is zero,
  • <math>C(1,\dots,1,u,1,\dots,1)=u </math>, the copula is equal to u if one argument is u and all others 1,
  • C is d-non-decreasing, i.e., for each hyperrectangle <math>B=\prod_{i=1}^{d}[x_i,y_i]\subseteq [0,1]^d </math> the C-volume of B is non-negative:
    <math> \int_B \mathrm{d} C(u) =\sum_{\mathbf z\in \prod_{i=1}^{d}\{x_i,y_i\}} (-1)^{N(\mathbf z)} C(\mathbf z)\ge 0,</math>
where <math>N(\mathbf z)=\#\{k : z_k=x_k\}</math> is the number of components of <math>\mathbf z</math> equal to the lower endpoint <math>x_k</math> of the corresponding interval.

For instance, in the bivariate case, <math>C:[0,1] \times[0,1]\rightarrow [0,1]</math> is a bivariate copula if <math>C(0,u) = C(u,0) = 0 </math>, <math>C(1,u) = C(u,1) = u </math> and <math>C(u_2,v_2)-C(u_2,v_1)-C(u_1,v_2)+C(u_1,v_1) \geq 0 </math> for all <math>0 \leq u_1 \leq u_2 \leq 1</math> and <math>0 \leq v_1 \leq v_2 \leq 1</math>.
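The 2-increasing condition can be checked numerically for a candidate function; a minimal sketch (the independence copula <math>uv</math>, the comonotone copula and the chosen rectangle are arbitrary examples):

<syntaxhighlight lang="python">
# Rectangle inequality (2-increasing property) for candidate bivariate copulas.
def c_volume(C, u1, u2, v1, v2):
    """C-volume of the rectangle [u1, u2] x [v1, v2]."""
    return C(u2, v2) - C(u2, v1) - C(u1, v2) + C(u1, v1)

independence = lambda u, v: u * v        # independence copula
comonotone = lambda u, v: min(u, v)      # upper Fréchet–Hoeffding bound M

for C in (independence, comonotone):
    volume = c_volume(C, 0.2, 0.7, 0.1, 0.9)
    assert volume >= 0                   # required for every rectangle in [0,1]^2
    print(volume)
</syntaxhighlight>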

Sklar's theorem

File:Gaussian copula gaussian marginals.png — Density and contour plot of a bivariate Gaussian distribution
File:Biv gumbel dist.png — Density and contour plot of two normal marginals joined by a Gumbel copula

Sklar's theorem, named after Abe Sklar, provides the theoretical foundation for the application of copulas.[5][6] Sklar's theorem states that every multivariate cumulative distribution function

<math>H(x_1,\dots,x_d)=\Pr[X_1\leq x_1,\dots,X_d\leq x_d]</math>

of a random vector <math>(X_1,X_2,\dots,X_d)</math> can be expressed in terms of its marginals <math>F_i(x_i) = \Pr[X_i\leq x_i] </math> and a copula <math>C</math>. Indeed:

<math>H(x_1,\dots,x_d) = C\left(F_1(x_1),\dots,F_d(x_d) \right). </math>

If the multivariate distribution has a density <math>h</math>, and if this density is available, it also holds that

<math>h(x_1,\dots,x_d)= c(F_1(x_1),\dots,F_d(x_d))\cdot f_1(x_1)\cdot\dots\cdot f_d(x_d),</math>

where <math>c</math> is the density of the copula.

The theorem also states that, given <math>H</math>, the copula is unique on <math> \operatorname{Ran}(F_1)\times\cdots\times \operatorname{Ran}(F_d) </math> which is the cartesian product of the ranges of the marginal cdf's. This implies that the copula is unique if the marginals <math>F_i</math> are continuous.

The converse is also true: given a copula <math>C:[0,1]^d\rightarrow [0,1] </math> and marginals <math>F_i(x)</math> then <math>C\left(F_1(x_1),\dots,F_d(x_d) \right)</math> defines a d-dimensional cumulative distribution function with marginal distributions <math>F_i(x)</math>.
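The converse direction can be sketched in a few lines: any copula combined with arbitrary continuous marginals yields a valid joint CDF. In the sketch below the Clayton copula (listed in the table of Archimedean copulas further down) and the exponential/normal marginals are arbitrary illustrative choices, assuming SciPy is available:

<syntaxhighlight lang="python">
from scipy import stats

def clayton(u, v, theta=2.0):
    """Bivariate Clayton copula C_theta(u, v) for theta > 0."""
    return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

def joint_cdf(x1, x2):
    """H(x1, x2) = C(F1(x1), F2(x2)) with exponential and standard normal marginals."""
    u = stats.expon.cdf(x1, scale=1.0)   # F_1
    v = stats.norm.cdf(x2)               # F_2
    return clayton(u, v)

print(joint_cdf(1.0, 0.0))               # value of the joint CDF at the point (1, 0)
</syntaxhighlight>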

Stationarity condition

Copulas mainly work when time series are stationary[7] and continuous.[8] An important pre-processing step is therefore to check for auto-correlation, trend and seasonality within the time series.

When time series are auto-correlated, they may generate spurious dependence between sets of variables and result in an incorrect copula dependence structure.[9]
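A minimal pre-processing sketch, assuming only NumPy (in practice, formal tests for unit roots, trend and seasonality would be used): inspect the lag-1 autocorrelation of each series before fitting a copula.

<syntaxhighlight lang="python">
import numpy as np

def lag1_autocorr(x):
    """Sample autocorrelation of a univariate series at lag 1."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

rng = np.random.default_rng(1)
iid_series = rng.normal(size=1000)               # approximately uncorrelated
random_walk = np.cumsum(rng.normal(size=1000))   # strongly autocorrelated, non-stationary

print(lag1_autocorr(iid_series))    # close to 0
print(lag1_autocorr(random_walk))   # close to 1: transform or difference before copula fitting
</syntaxhighlight>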

Fréchet–Hoeffding copula bounds

File:Copule ord.svg — Graphs of the bivariate Fréchet–Hoeffding copula limits and of the independence copula (in the middle).

The Fréchet–Hoeffding theorem (after Maurice René Fréchet and Wassily Hoeffding[10]) states that for any copula <math>C:[0,1]^d\rightarrow [0,1]</math> and any <math>(u_1,\dots,u_d)\in[0,1]^d</math> the following bounds hold:

<math>W(u_1,\dots,u_d) \leq C(u_1,\dots,u_d) \leq M(u_1,\dots,u_d).</math>

The function <math>W</math> is called the lower Fréchet–Hoeffding bound and is defined as

<math> W(u_1,\ldots,u_d) = \max\left\{1-d+\sum\limits_{i=1}^d {u_i} ,\, 0 \right\}.</math>

The function <math>M</math> is called the upper Fréchet–Hoeffding bound and is defined as

<math> M(u_1,\ldots,u_d) = \min \{u_1,\dots,u_d\}.</math>

The upper bound is sharp: <math>M</math> is always a copula; it corresponds to comonotone random variables.

The lower bound is point-wise sharp, in the sense that for fixed u, there is a copula <math>\tilde{C}</math> such that <math>\tilde{C}(u) = W(u)</math>. However, <math>W</math> is a copula only in two dimensions, in which case it corresponds to countermonotonic random variables.

In two dimensions, i.e. the bivariate case, the Fréchet–Hoeffding theorem states

<math>\max\{u+v-1, \,0\} \leq C(u,v) \leq \min\{u,v\}.</math>
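A short numerical check of the bivariate bounds, with the independence copula <math>\Pi(u,v)=uv</math> standing in for an arbitrary copula (NumPy assumed; the grid size and the small tolerance for floating-point rounding are implementation choices):

<syntaxhighlight lang="python">
import numpy as np

def W(u, v):   # lower Fréchet–Hoeffding bound
    return np.maximum(u + v - 1.0, 0.0)

def M(u, v):   # upper Fréchet–Hoeffding bound
    return np.minimum(u, v)

def Pi(u, v):  # independence copula, used here as the example copula C
    return u * v

u, v = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
assert np.all(W(u, v) <= Pi(u, v) + 1e-12)
assert np.all(Pi(u, v) <= M(u, v) + 1e-12)
print("W <= C <= M holds on the whole grid")
</syntaxhighlight>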

Families of copulas

Several families of copulas have been described.

Gaussian copula

File:Copula gaussian.svg — Cumulative distribution and density of a Gaussian copula with ρ = 0.4

The Gaussian copula is a distribution over the unit hypercube <math>[0,1]^d</math>. It is constructed from a multivariate normal distribution over <math>\mathbb{R}^d</math> by using the probability integral transform.

For a given correlation matrix <math>R\in[-1, 1]^{d\times d}</math>, the Gaussian copula with parameter matrix <math>R</math> can be written as

<math> C_R^{\text{Gauss}}(u) = \Phi_R\left(\Phi^{-1}(u_1),\dots, \Phi^{-1}(u_d) \right), </math>

where <math>\Phi^{-1}</math> is the inverse cumulative distribution function of a standard normal and <math>\Phi_R</math> is the joint cumulative distribution function of a multivariate normal distribution with mean vector zero and covariance matrix equal to the correlation matrix <math>R</math>. While there is no simple analytical formula for the copula function, <math>C_R^{\text{Gauss}}(u)</math>, it can be upper or lower bounded, and approximated using numerical integration.[11][12] The density can be written as[13]

<math> c_R^{\text{Gauss}}(u) = \frac{1}{\sqrt{\det R}}\exp\left(-\frac{1}{2} \begin{pmatrix}\Phi^{-1}(u_1)\\ \vdots \\ \Phi^{-1}(u_d)\end{pmatrix}^T \cdot \left(R^{-1}-I\right) \cdot \begin{pmatrix}\Phi^{-1}(u_1)\\ \vdots \\ \Phi^{-1}(u_d)\end{pmatrix} \right), </math>

where <math>I</math> is the identity matrix.
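A sketch of sampling from the Gaussian copula and evaluating the density formula above, assuming NumPy/SciPy (the dimension and the correlation value are arbitrary):

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rho = 0.4
R = np.array([[1.0, rho], [rho, 1.0]])          # correlation (parameter) matrix

# Sampling: push multivariate normal draws through the standard normal CDF.
rng = np.random.default_rng(0)
z = rng.multivariate_normal(mean=np.zeros(2), cov=R, size=5)
u = stats.norm.cdf(z)                           # rows are samples from C_R^Gauss

def gaussian_copula_density(u, R):
    """Density c_R^Gauss(u) from the formula above; u is an (n, d) array."""
    q = stats.norm.ppf(u)                       # Phi^{-1} applied componentwise
    A = np.linalg.inv(R) - np.eye(R.shape[0])   # R^{-1} - I
    quad = np.einsum("ij,jk,ik->i", q, A, q)    # q_i^T (R^{-1} - I) q_i for each row
    return np.exp(-0.5 * quad) / np.sqrt(np.linalg.det(R))

print(gaussian_copula_density(u, R))
</syntaxhighlight>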

Archimedean copulas

Archimedean copulas are an associative class of copulas. Most common Archimedean copulas admit an explicit formula, something not possible for instance for the Gaussian copula. In practice, Archimedean copulas are popular because they allow modeling dependence in arbitrarily high dimensions with only one parameter, governing the strength of dependence.

A copula C is called Archimedean if it admits the representation[14]

<math> C(u_1,\dots,u_d;\theta) = \psi^{[-1]}\left(\psi(u_1;\theta)+\cdots+\psi(u_d;\theta);\theta\right) </math>

where <math>\psi\!:[0,1]\times\Theta \rightarrow [0,\infty)</math> is a continuous, strictly decreasing and convex function such that <math>\psi(1;\theta)=0</math>, and <math>\theta</math> is a parameter within some parameter space <math>\Theta</math>. Here <math>\psi</math> is the so-called generator function and <math>\psi^{[-1]}</math> is its pseudo-inverse, defined by

<math> \psi^{[-1]}(t;\theta) = \left\{\begin{array}{ll} \psi^{-1}(t;\theta) & \mbox{if }0 \leq t \leq \psi(0;\theta) \\ 0 & \mbox{if }\psi(0;\theta) \leq t \leq\infty. \end{array}\right. </math>

Moreover, the above formula for C yields a copula for <math>\psi^{-1}</math> if and only if <math>\psi^{-1}</math> is d-monotone on <math>[0,\infty)</math>.[15] That is, if it is <math>d-2</math> times differentiable and the derivatives satisfy

<math> (-1)^k\psi^{-1,(k)}(t;\theta) \geq 0 </math>

for all <math>t\geq 0</math> and <math>k=0,1,\dots,d-2</math> and <math>(-1)^{d-2}\psi^{-1,(d-2)}(t;\theta)</math> is nonincreasing and convex.
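A sketch of this construction for <math>d=2</math>, using the Clayton generator from the table below (the parameter value is arbitrary; see the sampling sketch after the table for how such copulas can be simulated):

<syntaxhighlight lang="python">
theta = 2.0   # Clayton dependence parameter (theta > 0)

def psi(t):
    """Clayton generator psi(t; theta) = (t**(-theta) - 1) / theta."""
    return (t ** (-theta) - 1.0) / theta

def psi_inv(s):
    """Pseudo-inverse of the generator; for theta > 0, psi(0;theta) is infinite, so it is the plain inverse."""
    return (1.0 + theta * s) ** (-1.0 / theta)

def archimedean_copula(u, v):
    """C(u, v) = psi^{[-1]}(psi(u) + psi(v))."""
    return psi_inv(psi(u) + psi(v))

print(archimedean_copula(0.3, 0.8))   # matches the closed-form Clayton expression
print(archimedean_copula(1.0, 0.8))   # boundary condition: equals 0.8
</syntaxhighlight>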

Most important Archimedean copulas

The following list gives the most prominent bivariate Archimedean copulas, with their corresponding generators. Not all of them are completely monotone, i.e. d-monotone for all <math>d\in\mathbb{N}</math>; some are d-monotone only for certain <math>\theta \in \Theta</math>.

The most important Archimedean copulas, with their parameter range, generator <math>\psi_{\theta}(t)</math> and generator inverse <math>\psi_{\theta}^{-1}(t)</math>, are:[14]

  • Ali–Mikhail–Haq:[16] <math>C_\theta(u,v)=\frac{uv}{1-\theta (1-u)(1-v)}</math>, with <math>\theta\in[-1,1]</math>, generator <math>\log\!\left[\frac{1-\theta (1-t)}{t}\right]</math> and inverse <math>\frac{1-\theta}{\exp(t)-\theta}</math>
  • Clayton:[17] <math>C_\theta(u,v)=\left[ \max\left\{ u^{-\theta} + v^{-\theta} -1 ; 0 \right\} \right]^{-1/\theta}</math>, with <math>\theta\in[-1,\infty)\backslash\{0\}</math>, generator <math>\frac{1}{\theta}\,(t^{-\theta}-1)</math> and inverse <math>\left(1+\theta t\right)^{-1/\theta}</math>
  • Frank: <math>C_\theta(u,v)=-\frac{1}{\theta} \log\!\left[ 1+\frac{(\exp(-\theta u)-1)(\exp(-\theta v)-1)}{\exp(-\theta)-1} \right]</math>, with <math>\theta\in \mathbb{R}\backslash\{0\}</math>, generator <math display="inline">-\log\!\left(\frac{\exp(-\theta t)-1}{\exp(-\theta)-1}\right)</math> and inverse <math>-\frac{1}{\theta}\,\log(1+\exp(-t)(\exp(-\theta)-1))</math>
  • Gumbel: <math display="inline">C_\theta(u,v)=\exp\!\left[ -\left( (-\log(u))^\theta + (-\log(v))^\theta \right)^{1/\theta} \right]</math>, with <math>\theta\in[1,\infty)</math>, generator <math>\left(-\log(t)\right)^\theta</math> and inverse <math>\exp\!\left(-t^{1/\theta}\right)</math>
  • Independence: <math display="inline">C(u,v)=uv</math>, with no parameter, generator <math>-\log(t)</math> and inverse <math>\exp(-t)</math>
  • Joe: <math display="inline">C_\theta(u,v)=1-\left[ (1-u)^\theta + (1-v)^\theta - (1-u)^\theta(1-v)^\theta \right]^{1/\theta}</math>, with <math>\theta\in[1,\infty)</math>, generator <math>-\log\!\left(1-(1-t)^\theta\right)</math> and inverse <math>1-\left(1-\exp(-t)\right)^{1/\theta}</math>
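For the Clayton family with <math>\theta>0</math>, the generator inverse in the table equals (up to rescaling of the generator) the Laplace transform of a Gamma(1/θ) random variable, which yields the Marshall–Olkin sampling recipe sketched below (assuming NumPy; the parameter values are arbitrary):

<syntaxhighlight lang="python">
import numpy as np

def sample_clayton(n, d, theta, seed=0):
    """Marshall–Olkin sampler for the d-dimensional Clayton copula, theta > 0."""
    rng = np.random.default_rng(seed)
    v = rng.gamma(shape=1.0 / theta, scale=1.0, size=(n, 1))   # mixing variable V
    e = rng.exponential(size=(n, d))                           # iid Exp(1) variables
    # U = (1 + E / V)**(-1/theta) has uniform marginals and Clayton dependence.
    return (1.0 + e / v) ** (-1.0 / theta)

u = sample_clayton(n=5, d=3, theta=2.0)
print(u)           # each row is one sample from the copula
</syntaxhighlight>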

Expectation for copula models and Monte Carlo integration

In statistical applications, many problems can be formulated in the following way. One is interested in the expectation of a response function <math>g:\mathbb{R}^d\rightarrow\mathbb{R}</math> applied to some random vector <math>(X_1,\dots,X_d)</math>.[18] If we denote the CDF of this random vector with <math>H</math>, the quantity of interest can thus be written as

<math> \operatorname{E}\left[ g(X_1,\dots,X_d) \right] = \int_{\mathbb{R}^d} g(x_1,\dots,x_d) \, \mathrm{d}H(x_1,\dots,x_d).</math>

If <math>H</math> is given by a copula model, i.e.,

<math>H(x_1,\dots,x_d)=C(F_1(x_1),\dots,F_d(x_d))</math>

this expectation can be rewritten as

<math>\operatorname{E}\left[g(X_1,\dots,X_d)\right]=\int_{[0,1]^d}g(F_1^{-1}(u_1),\dots,F_d^{-1}(u_d)) \, \mathrm{d}C(u_1,\dots,u_d).</math>

If the copula C is absolutely continuous, i.e. C has a density c, this equation can be written as

<math>\operatorname{E}\left[g(X_1,\dots,X_d)\right]=\int_{[0,1]^d}g(F_1^{-1}(u_1),\dots,F_d^{-1}(u_d))\cdot c(u_1,\dots,u_d) \, \mathrm{d}u_1\cdots \mathrm{d}u_d,</math>

and if each marginal distribution has the density <math>f_i</math> it holds further that

<math>\operatorname{E}\left[g(X_1,\dots,X_d)\right]=\int_{\mathbb{R}^d}g(x_1,\dots x_d)\cdot c(F_1(x_1),\dots,F_d(x_d))\cdot f_1(x_1)\cdots f_d(x_d) \, \mathrm{d}x_1\cdots \mathrm{d}x_d.</math>

If copula and marginals are known (or if they have been estimated), this expectation can be approximated through the following Monte Carlo algorithm:

  1. Draw a sample <math>(U_1^k,\dots,U_d^k)\sim C\;\;(k=1,\dots,n)</math> of size n from the copula C
  2. By applying the inverse marginal cdf's, produce a sample of <math>(X_1,\dots,X_d)</math> by setting <math>(X_1^k,\dots,X_d^k)=(F_1^{-1}(U_1^k),\dots,F_d^{-1}(U_d^k))\sim H\;\;(k=1,\dots,n)</math>
  3. Approximate <math>\operatorname{E}\left[g(X_1,\dots,X_d)\right]</math> by its empirical value:
<math>\operatorname{E}\left[g(X_1,\dots,X_d)\right]\approx \frac{1}{n}\sum_{k=1}^n g(X_1^k,\dots,X_d^k)</math>
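A sketch of these three steps for a bivariate Gaussian copula with lognormal and exponential marginals and an illustrative response function <math>g(x_1,x_2)=\max(x_1-x_2,0)</math> (all of these choices, and the NumPy/SciPy dependence, are assumptions of the sketch):

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000

# Step 1: draw a sample from the copula C (here a bivariate Gaussian copula, rho = 0.6).
rho = 0.6
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = stats.norm.cdf(z)

# Step 2: apply the inverse marginal CDFs to obtain a sample from H.
x1 = stats.lognorm.ppf(u[:, 0], s=0.5)       # F_1^{-1}: lognormal marginal
x2 = stats.expon.ppf(u[:, 1], scale=2.0)     # F_2^{-1}: exponential marginal

# Step 3: approximate E[g(X1, X2)] by the empirical mean.
g_values = np.maximum(x1 - x2, 0.0)
print("Monte Carlo estimate of E[g]:", g_values.mean())
</syntaxhighlight>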

Empirical copulas

When studying multivariate data, one might want to investigate the underlying copula. Suppose we have observations

<math>(X_1^i,X_2^i,\dots,X_d^i), \, i=1,\dots,n</math>

from a random vector <math>(X_1,X_2,\dots,X_d)</math> with continuous marginals. The corresponding “true” copula observations would be

<math>(U_1^i,U_2^i,\dots,U_d^i)=\left(F_1(X_1^i),F_2(X_2^i),\dots,F_d(X_d^i)\right), \, i=1,\dots,n.</math>

However, the marginal distribution functions <math>F_i</math> are usually not known. Therefore, one can construct pseudo copula observations by using the empirical distribution functions

<math>F_k^n(x)=\frac{1}{n} \sum_{i=1}^n \mathbf{1}(X_k^i\leq x)</math>

instead. Then, the pseudo copula observations are defined as

<math>(\tilde{U}_1^i,\tilde{U}_2^i,\dots,\tilde{U}_d^i)=\left(F_1^n(X_1^i),F_2^n(X_2^i),\dots,F_d^n(X_d^i)\right), \, i=1,\dots,n.</math>

The corresponding empirical copula is then defined as

<math>C^n(u_1,\dots,u_d) = \frac{1}{n} \sum_{i=1}^n \mathbf{1}\left(\tilde{U}_1^i\leq u_1,\dots,\tilde{U}_d^i\leq u_d\right).</math>

The components of the pseudo copula samples can also be written as <math>\tilde{U}_k^i=R_k^i/n</math>, where <math>R_k^i</math> is the rank of the observation <math>X_k^i</math>:

<math>R_k^i=\sum_{j=1}^n \mathbf{1}(X_k^j\leq X_k^i)</math>

Therefore, the empirical copula can be seen as the empirical distribution of the rank transformed data.

The sample version of Spearman's rho is[19]

<math>r=\frac{12}{n^2-1}\sum_{i=1}^n\sum_{j=1}^n \left[C^n \left(\frac{i}{n},\frac{j}{n}\right)-\frac{i}{n}\cdot\frac{j}{n}\right]</math>
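A sketch of computing pseudo-observations, the empirical copula and the sample Spearman's rho above (NumPy/SciPy assumed; the simulated data and the sample size are arbitrary):

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200

# Simulated dependent data whose marginals are treated as unknown.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=n)
x = np.column_stack([np.exp(z[:, 0]), z[:, 1] ** 3])

# Pseudo copula observations via ranks: U~_k^i = R_k^i / n.
u_tilde = stats.rankdata(x, axis=0) / n

def empirical_copula(u1, u2):
    """C^n(u1, u2): fraction of pseudo-observations with both components below (u1, u2)."""
    return np.mean((u_tilde[:, 0] <= u1) & (u_tilde[:, 1] <= u2))

# Sample Spearman's rho from the empirical copula, as in the formula above.
grid = np.arange(1, n + 1) / n
Cn = np.array([[empirical_copula(a, b) for b in grid] for a in grid])
gi, gj = np.meshgrid(grid, grid, indexing="ij")
rho_hat = 12.0 / (n ** 2 - 1) * np.sum(Cn - gi * gj)

print(rho_hat, stats.spearmanr(x[:, 0], x[:, 1])[0])   # the two values are close
</syntaxhighlight>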

Applications

Quantitative finance

Examples of bivariate copulæ used in finance.

In quantitative finance, copulas are applied to risk management, to portfolio management and optimization, and to derivatives pricing.

For risk management, copulas are used to perform stress tests and robustness checks that are especially important during "downside/crisis/panic regimes" where extreme downside events may occur (e.g., the global financial crisis of 2007–2008). The Gaussian copula formula was also adapted for financial markets and was used to estimate the probability distribution of losses on pools of loans or bonds.

During a downside regime, a large number of investors who have held positions in riskier assets such as equities or real estate may seek refuge in 'safer' investments such as cash or bonds. This is also known as a flight-to-quality effect and investors tend to exit their positions in riskier assets in large numbers in a short period of time. As a result, during downside regimes, correlations across equities are greater on the downside as opposed to the upside and this may have disastrous effects on the economy.[22][23] For example, anecdotally, we often read financial news headlines reporting the loss of hundreds of millions of dollars on the stock exchange in a single day; however, we rarely read reports of positive stock market gains of the same magnitude and in the same short time frame.

Copulas aid in analyzing the effects of downside regimes by allowing the modelling of the marginals and dependence structure of a multivariate probability model separately. For example, consider the stock exchange as a market consisting of a large number of traders each operating with his/her own strategies to maximize profits. The individualistic behaviour of each trader can be described by modelling the marginals. However, as all traders operate on the same exchange, each trader's actions have an interaction effect with other traders'. This interaction effect can be described by modelling the dependence structure. Therefore, copulas allow us to analyse the interaction effects which are of particular interest during downside regimes as investors tend to herd their trading behaviour and decisions. (See also agent-based computational economics, where price is treated as an emergent phenomenon, resulting from the interaction of the various market participants, or agents.)

The users of the formula have been criticized for creating "evaluation cultures" that continued to use simple copulæ despite the simple versions being acknowledged as inadequate for that purpose.[24][25] Previously, scalable copula models for large dimensions only allowed the modelling of elliptical dependence structures (i.e., Gaussian and Student-t copulas) that do not allow for correlation asymmetries where correlations differ on the upside or downside regimes. However, the development of vine copulas[26] (also known as pair copulas) enables the flexible modelling of the dependence structure for portfolios of large dimensions.[27] The Clayton canonical vine copula allows for the occurrence of extreme downside events and has been successfully applied in portfolio optimization and risk management applications. The model is able to reduce the effects of extreme downside correlations and produces improved statistical and economic performance compared to scalable elliptical dependence copulas such as the Gaussian and Student-t copula.[28]

Other models developed for risk management applications are panic copulas that are glued with market estimates of the marginal distributions to analyze the effects of panic regimes on the portfolio profit and loss distribution. Panic copulas are created by Monte Carlo simulation, mixed with a re-weighting of the probability of each scenario.[29]

As regards derivatives pricing, dependence modelling with copula functions is widely used in applications of financial risk assessment and actuarial analysis – for example in the pricing of collateralized debt obligations (CDOs).[30] Some believe the methodology of applying the Gaussian copula to credit derivatives to be one of the reasons behind the global financial crisis of 2008–2009.[31][32][33]

Despite this perception, there are documented attempts within the financial industry, occurring before the crisis, to address the limitations of the Gaussian copula and of copula functions more generally, specifically the lack of dependence dynamics. The Gaussian copula is lacking as it only allows for an elliptical dependence structure, as dependence is only modeled using the variance-covariance matrix.[28] This methodology is limited such that it does not allow for dependence to evolve as the financial markets exhibit asymmetric dependence, whereby correlations across assets significantly increase during downturns compared to upturns. Therefore, modeling approaches using the Gaussian copula exhibit a poor representation of extreme events.[28][34] There have been attempts to propose models rectifying some of the copula limitations.[34][35][36]

In addition to CDOs, copulas have been applied to other asset classes as a flexible tool in analyzing multi-asset derivative products. The first such application outside credit was to use a copula to construct a basket implied volatility surface,[37] taking into account the volatility smile of the basket components. Copulas have since gained popularity in pricing and risk management[38] of options on multi-assets in the presence of a volatility smile, in equity, foreign exchange and fixed income derivatives.

Civil engineering

Recently, copula functions have been successfully applied to the database formulation for the reliability analysis of highway bridges, and to various multivariate simulation studies in civil engineering,[39] reliability of wind and earthquake engineering,[40] and mechanical & offshore engineering.[41] Researchers are also trying these functions in the field of transportation to understand the interaction between the behaviors of individual drivers which, in totality, shape traffic flow.

Reliability engineering

Copulas are being used for reliability analysis of complex systems of machine components with competing failure modes.[42]

Warranty data analysis

Copulas are being used for warranty data analysis in which the tail dependence is analysed.[43]

Turbulent combustion

Copulas are used in modelling turbulent partially premixed combustion, which is common in practical combustors.[44][45]

Medicine

Copulæ have many applications in the area of medicine, for example,

  1. Copulæ have been used in the field of magnetic resonance imaging (MRI), for example, to segment images,[46] to fill a vacancy of graphical models in imaging genetics in a study on schizophrenia,[47] and to distinguish between normal and Alzheimer patients.[48]
  2. Copulæ have been used in the area of brain research based on EEG signals, for example, to detect drowsiness during daytime nap,[49] to track changes in instantaneous equivalent bandwidths (IEBWs),[50] to derive synchrony for early diagnosis of Alzheimer's disease,[51] to characterize dependence in oscillatory activity between EEG channels,[52] and to assess the reliability of using methods to capture dependence between pairs of EEG channels using their time-varying envelopes.[53] Copula functions have been successfully applied to the analysis of neuronal dependencies[54] and spike counts in neuroscience.[55]
  3. A copula model has been developed in the field of oncology, for example, to jointly model genotypes, phenotypes, and pathways to reconstruct a cellular network to identify interactions between a specific phenotype and multiple molecular features (e.g. mutations and gene expression change). Bao et al.[56] used NCI60 cancer cell line data to identify several subsets of molecular features that jointly perform as the predictors of clinical phenotypes. The proposed copula may have an impact on biomedical research, ranging from cancer treatment to disease prevention. Copulas have also been used to predict the histological diagnosis of colorectal lesions from colonoscopy images,[57] and to classify cancer subtypes.[58]
  4. A copula-based analysis model has been developed in the field of heart and cardiovascular disease, for example, to predict heart rate (HR) variation. HR is one of the most critical health indicators for monitoring exercise intensity and load. Therefore, an accurate short-term HR prediction technique can deliver efficient early warning for human health and decrease harmful events. Namazi (2022)[59] used a novel hybrid algorithm to predict HR.

Geodesy

The combination of SSA and copula-based methods has been applied for the first time as a novel stochastic tool for Earth Orientation Parameters prediction.[60][61]

Hydrology research

Copulas have been used in both theoretical and applied analyses of hydroclimatic data. Theoretical studies adopted the copula-based methodology for instance to gain a better understanding of the dependence structures of temperature and precipitation, in different parts of the world.[9][62][63] Applied studies adopted the copula-based methodology to examine e.g., agricultural droughts[64] or joint effects of temperature and precipitation extremes on vegetation growth.[65]

Climate and weather research

Copulas have been extensively used in climate- and weather-related research.[66][67]

Solar irradiance variability

Copulas have been used to estimate the solar irradiance variability in spatial networks and temporally for single locations.[68][69]

Random vector generation

Large synthetic traces of vectors and stationary time series can be generated using the empirical copula while preserving the entire dependence structure of small datasets.[70] Such empirical traces are useful in various simulation-based performance studies.[71]

Ranking of electrical motors

Copulas have been used for quality ranking in the manufacturing of electronically commutated motors.[72]

Signal processing

Copulas are important because they represent the dependence structure without reference to the marginal distributions. Copulas have been widely used in finance, but their use in signal processing is relatively new. Copulas have been employed in the field of wireless communication for classifying radar signals, change detection in remote sensing applications, and EEG signal processing in medicine. In this section, a short mathematical derivation of the copula density function is presented, followed by a list of copula density functions with their relevant signal processing applications.

Astronomy

Copulas have been used for determining the core radio luminosity function of active galactic nuclei (AGNs),[73] which cannot be determined using traditional methods because of difficulties with sample completeness.

Mathematical derivation of copula density function

For any two random variables X and Y, the joint cumulative distribution function can be written as

<math>F_{XY}(x,y) = \Pr[ X \leq x,\, Y\leq y ], </math>

where <math display="inline">F_X(x) = \Pr[ X \leq x ] </math> and <math display="inline"> F_Y(y) = \Pr[ Y \leq y ] </math> are the marginal cumulative distribution functions of the random variables X and Y, respectively.

The copula distribution function <math>C(u, v)</math> can then be defined using Sklar's theorem[74][75] as:

<math>F_{XY}(x,y) = C( F_X (x) , F_Y (y) ) \triangleq C( u, v ), </math>

where <math>u = F_X(x) </math> and <math>v = F_Y(y) </math> are the marginal distribution functions, <math> F_{XY}(x,y) </math> is the joint distribution function, and <math> u, v \in (0,1) </math>.

Assuming <math>F_{XY}(\cdot,\cdot) </math> is almost everywhere twice differentiable, the copula density follows from the relationship between the joint probability density function (PDF), the joint cumulative distribution function (CDF) and its partial derivatives:

<math>\begin{align}
f_{XY}(x,y) &= {\partial^2 F_{XY}(x,y) \over\partial x\,\partial y } \\
&= {\partial^2 C(F_X(x),F_Y(y)) \over\partial x\,\partial y} \\
&= {\partial^2 C(u,v) \over\partial u\,\partial v} \cdot {\partial F_X(x) \over\partial x} \cdot {\partial F_Y(y) \over\partial y} \\
&= c(u,v)\, f_X(x)\, f_Y(y),
\end{align}</math>

so that

<math>c(u,v) = \frac{f_{XY}(x,y)}{f_X(x)\, f_Y(y)},</math>

where <math>c(u,v)</math> is the copula density function, and <math>f_X(x) </math> and <math>f_Y(y) </math> are the marginal probability density functions of X and Y, respectively. This equation involves four elements (the joint density, the copula density, and the two marginal densities); if any three of them are known, the fourth can be calculated. For example, it may be used as follows:

  • when the joint probability density function, the copula density function and one of the two marginal density functions are known, the other marginal density function can be calculated; or
  • when the two marginal density functions and the copula density function are known, the joint probability density function between the two random variables can be calculated; or
  • when the two marginal density functions and the joint probability density function between the two random variables are known, the copula density function can be calculated.
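A numerical sanity check of the last relationship for a bivariate standard normal pair (SciPy assumed; the correlation value and evaluation point are arbitrary): the copula density obtained as the ratio of joint to marginal densities matches the bivariate Gaussian copula density formula given in the list below.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rho = 0.6
x, y = 0.3, -1.2    # an arbitrary evaluation point

# Joint and marginal densities of a bivariate standard normal pair with correlation rho.
f_xy = stats.multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]]).pdf([x, y])
f_x, f_y = stats.norm.pdf(x), stats.norm.pdf(y)

# Copula density from the ratio c(u, v) = f_XY(x, y) / (f_X(x) * f_Y(y)).
c_ratio = f_xy / (f_x * f_y)

# Same value from the closed-form Gaussian copula density, with a = Phi^{-1}(u) = x and b = Phi^{-1}(v) = y.
a, b = x, y
c_formula = np.exp(-((a**2 + b**2) * rho**2 - 2 * a * b * rho) / (2 * (1 - rho**2))) / np.sqrt(1 - rho**2)

print(c_ratio, c_formula)   # the two values agree
</syntaxhighlight>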

List of copula density functions and applications

Various bivariate copula density functions are important in the area of signal processing. Here <math>u=F_X(x) </math> and <math>v=F_Y(y) </math> are the marginal distribution functions and <math>f_X(x) </math> and <math>f_Y(y) </math> are the marginal density functions. Extensions and generalizations of copulas for statistical signal processing have been used to construct new bivariate copulas for exponential, Weibull, and Rician distributions.[76] Zeng et al.[77] presented algorithms, simulation, optimal selection, and practical applications of these copulas in signal processing.

Copula density functions c(u, v) and their signal processing applications:

  • Gaussian:
    <math>\begin{align} c(u,v) = {} & \frac{1}{\sqrt{1-\rho^2}} \exp\left (-\frac{(a^2+b^2)\rho^2-2 ab\rho}{ 2(1-\rho^2) } \right ) \\ & \text{where } \rho\in (-1,1),\ a=\sqrt{2} \operatorname{erf}^{-1}(2u-1),\ b =\sqrt{2}\operatorname{erf}^{-1}(2v-1), \\ & \text{and } \operatorname{erf}(z) = \frac{2}{\sqrt\pi} \int\limits_0^z \exp (-t^2) \, dt \end{align} </math>
    Applications: supervised classification of synthetic aperture radar (SAR) images,[78] validating biometric authentication,[79] modeling stochastic dependence in large-scale integration of wind power,[80] unsupervised classification of radar signals[81]
  • Exponential:
    <math>\begin{align} c(u,v) = {} & \frac{1}{1-\rho} \exp\left ( \frac{\rho(\ln(1-u)+\ln(1-v))}{1-\rho} \right ) \cdot I_0\left ( \frac{2\sqrt{\rho \ln(1-u)\ln(1-v)}}{1-\rho} \right )\\ & \text{where } x=F_X^{-1}(u)=-\ln(1-u)/\lambda \\ & \text{and } y=F_Y^{-1}(v)=-\ln(1-v)/\mu \end{align}</math>
    Application: queuing system with infinitely many servers[82]
  • Rayleigh: the bivariate exponential, Rayleigh, and Weibull copulas have been proved to be equivalent.[83][84][85] Application: change detection from SAR images[86]
  • Weibull: the bivariate exponential, Rayleigh, and Weibull copulas have been proved to be equivalent.[83][84][85] Application: digital communication over fading channels[87]
  • Log-normal: the bivariate log-normal copula and the Gaussian copula are equivalent.[85][84] Application: shadow fading along with multipath effect in wireless channels[88][89]
  • Farlie–Gumbel–Morgenstern (FGM):
    <math>\begin{align} c(u,v) = {} & 1+\theta(1-2u)(1-2v) \\ & \text{where } \theta \in[-1,1] \end{align}</math>
    Application: information processing of uncertainty in knowledge-based systems[90]
  • Clayton:
    <math>\begin{align} c(u,v) = {} & (1+\theta)(uv)^{(-1-\theta)}(-1 +u^{-\theta} + v^{-\theta})^{(-2-1/\theta)} \\ & \text{where } \theta \in(-1,\infty),\ \theta\neq0 \end{align} </math>
    Applications: location estimation of random signal source and hypothesis testing using heterogeneous data[91][92]
  • Frank:
    <math>\begin{align} c(u,v) = {} & \frac {-\theta e^{-\theta(u+v)}(e^{-\theta}-1)} {(e^{-\theta}-e^{-\theta u}-e^{-\theta v}+e^{-\theta(u+v)})^2}\\ & \text{where } \theta \in(-\infty,+\infty),\ \theta\neq0 \end{align} </math>
    Application: quantitative risk assessment of geo-hazards[93]
  • Student's t:
    <math>\begin{align} c(u,v) = {} & \frac{\Gamma(0.5v)\Gamma(0.5v+1)\left( 1+(t_v^{-2}(u)+t_v^{-2}(v) -2 \rho t_v^{-1}(u) t_v^{-1}(v))/(v(1-\rho^2))\right)^{-0.5(v+2)} } {\sqrt{1-\rho^2} \cdot \Gamma(0.5(v+1))^2 (1+ t_v^{-2}(u)/v)^{-0.5(v+1)} (1+ t_v^{-2}(v)/v)^{-0.5(v+1)} } \\ & \text{where } \rho\in (-1,1), \\ & \phi(z)= \frac{1}{\sqrt{2\pi}} \int\limits_{-\infty}^z \exp \left(\frac{-t^2}{2}\right) \, dt, \\ & t_v(x\mid v)= \int\limits_{-\infty}^x \frac{\Gamma{(0.5(v+1))}}{\sqrt{v\pi}\,\Gamma(0.5v)(1+v^{-1}t^2)^{0.5(v+1)}} dt,\\ & v=\text{degrees of freedom, and } \Gamma \text{ is the Gamma function} \end{align}</math>
    Applications: supervised SAR image classification,[86] fusion of correlated sensor decisions[94]
  • Nakagami-m
  • Rician


Further reading

  • The standard reference for an introduction to copulas. Covers all fundamental aspects, summarizes the most popular copula classes, and provides proofs for the important theorems related to copulas:
Roger B. Nelsen (1999), "An Introduction to Copulas", Springer.
  • A book covering current topics in mathematical research on copulas:
Piotr Jaworski, Fabrizio Durante, Wolfgang Karl Härdle, Tomasz Rychlik (Editors) (2010): "Copula Theory and Its Applications", Lecture Notes in Statistics, Springer.
  • A reference for sampling applications and stochastic models related to copulas:
Jan-Frederik Mai, Matthias Scherer (2012): Simulating Copulas (Stochastic Models, Sampling Algorithms and Applications). World Scientific.
  • A paper covering the historic development of copula theory, by the person associated with the "invention" of copulas, Abe Sklar:
Abe Sklar (1997): "Random variables, distribution functions, and copulas – a personal look backward and forward" in Rüschendorf, L., Schweizer, B. and Taylor, M. (eds) Distributions With Fixed Marginals & Related Topics (Lecture Notes – Monograph Series Number 28).
  • The standard reference for multivariate models and copula theory in the context of financial and insurance models:
Alexander J. McNeil, Rudiger Frey and Paul Embrechts (2005) "Quantitative Risk Management: Concepts, Techniques, and Tools", Princeton Series in Finance.

References

  1. Thorsten Schmidt (2006) "Coping with Copulas", https://web.archive.org/web/20100705040514/http://www.tu-chemnitz.de/mathematik/fima/publikationen/TSchmidt_Copulas.pdf
  2. Шаблон:Cite journal
  3. Шаблон:Cite journal
  4. Шаблон:Citation
  5. Шаблон:Citation
  6. Шаблон:Citation
  7. Шаблон:Cite journal
  8. Шаблон:Cite journal
  9. Шаблон:Cite journal
  10. Шаблон:Cite web
  11. Шаблон:Cite journal
  12. Шаблон:Cite web
  13. Шаблон:Cite journal
  14. Шаблон:Cite book
  15. Шаблон:Cite journal
  16. Шаблон:Citation
  17. Шаблон:Cite journal
  18. Alexander J. McNeil, Rudiger Frey and Paul Embrechts (2005) "Quantitative Risk Management: Concepts, Techniques, and Tools", Princeton Series in Finance
  19. Шаблон:Cite book
  20. Шаблон:Cite journal
  21. Шаблон:Cite journal
  22. Шаблон:Citation
  23. Шаблон:Citation
  24. Шаблон:Cite magazine
  25. Шаблон:Cite journal
  26. Шаблон:Cite book
  27. Шаблон:Citation
  28. Шаблон:Citation
  29. Шаблон:Citation
  30. Шаблон:Citation
  31. "Recipe for Disaster: The Formula That Killed Wall Street", Wired, 2/23/2009
  32. Шаблон:Citation
  33. Шаблон:Citation
  34. Шаблон:Cite book
  35. Шаблон:Cite journal
  36. Шаблон:Cite book
  37. Шаблон:Cite journal
  38. Шаблон:Cite journal
  39. Шаблон:Citation
  40. Шаблон:Cite journal
  41. Шаблон:Cite journal
  42. Шаблон:Citation
  43. Шаблон:Citation
  44. Шаблон:Citation
  45. Шаблон:Citation
  46. Шаблон:Cite journal
  47. Шаблон:Cite book
  48. Шаблон:Cite book
  49. Шаблон:Cite journal
  50. Шаблон:Cite book
  51. Шаблон:Cite book
  52. Шаблон:Cite book
  53. Шаблон:Cite book
  54. Шаблон:Citation
  55. Шаблон:Citation
  56. Шаблон:Cite book
  57. Шаблон:Cite book
  58. Шаблон:Cite book
  59. Шаблон:Cite journal
  60. Шаблон:Cite journal
  61. Шаблон:Cite journal
  62. Шаблон:Cite journal
  63. Шаблон:Cite journal
  64. Шаблон:Cite journal
  65. Шаблон:Cite journal
  66. Шаблон:Cite journal
  67. Шаблон:Cite journal
  68. Шаблон:Cite journal
  69. Шаблон:Cite journal
  70. Шаблон:Cite conference
  71. Шаблон:Cite book
  72. Шаблон:Cite journal
  73. Шаблон:Cite journal
  74. Шаблон:Cite book
  75. Шаблон:Cite journal
  76. Шаблон:Cite journal
  77. Шаблон:Cite journal
  78. Шаблон:Cite journal
  79. Шаблон:Cite journal
  80. Шаблон:Cite journal
  81. Шаблон:Cite journal
  82. Шаблон:Cite book
  83. Шаблон:Cite journal
  84. Шаблон:Cite journal
  85. Шаблон:Cite journal
  86. Шаблон:Cite journal
  87. Шаблон:Citation
  88. Шаблон:Cite journal
  89. Шаблон:Cite journal
  90. Шаблон:Citation
  91. Шаблон:Cite journal
  92. Шаблон:Cite journal
  93. Шаблон:Cite journal
  94. Шаблон:Cite journal