First-order second-moment method

In probability theory, the first-order second-moment (FOSM) method, also referred to as the mean value first-order second-moment (MVFOSM) method, is a probabilistic method to determine the stochastic moments of a function with random input variables. The name is based on the derivation, which uses a first-order Taylor series and the first and second moments of the input variables.[1]

Approximation

Consider the objective function <math>g(x)</math>, where the input vector <math>x</math> is a realization of the random vector <math>X</math> with probability density function <math>f_X(x)</math>. Because <math>X</math> is randomly distributed, <math>g</math> is also randomly distributed. Following the FOSM method, the mean value of <math>g</math> is approximated by

<math> \mu_g \approx g(\mu) </math>

The variance of <math>g</math> is approximated by

<math> \sigma^2_g \approx \sum_{i=1}^n \sum_{j=1}^n \frac{\partial g(\mu)}{\partial x_i} \frac{\partial g(\mu)}{\partial x_j} \operatorname{cov}\left(X_i, X_j\right) </math>

where <math>n</math> is the dimension of <math>x</math> and <math display="inline"> \frac{\partial g( \mu)}{\partial x_i} </math> is the partial derivative of <math>g</math> with respect to the i-th entry of <math>x</math>, evaluated at the mean vector <math>\mu</math>. More accurate, second-order second-moment approximations are also available.[2]
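The two approximation formulas above translate directly into a few lines of code. The following sketch (the function name <code>fosm_moments</code>, the example objective and all numbers are illustrative, not from the article) evaluates the FOSM mean and variance given the objective function, its gradient, the mean vector and the covariance matrix:

```python
import numpy as np

def fosm_moments(g, grad_g, mu, cov):
    """FOSM approximation of the mean and variance of g(X).

    g      : objective function (n-vector -> scalar)
    grad_g : gradient of g (n-vector -> n-vector)
    mu     : mean vector of the random input X
    cov    : covariance matrix of X
    """
    mu = np.asarray(mu, dtype=float)
    d = np.asarray(grad_g(mu), dtype=float)  # partial derivatives at mu
    mean_g = g(mu)                           # mu_g ~ g(mu)
    var_g = d @ np.asarray(cov) @ d          # double sum over cov(X_i, X_j)
    return mean_g, var_g

# Example: g(x) = x1 * x2 with correlated inputs (illustrative numbers)
mu = [2.0, 3.0]
cov = [[0.04, 0.01], [0.01, 0.09]]
m, v = fosm_moments(lambda x: x[0] * x[1],
                    lambda x: np.array([x[1], x[0]]), mu, cov)
# gradient at mu is (3, 2): v = 9*0.04 + 2*3*2*0.01 + 4*0.09 = 0.84
```

The quadratic form <code>d @ cov @ d</code> is exactly the double sum in the variance formula, written in matrix notation.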

Derivation

The objective function is approximated by a Taylor series at the mean vector <math>\mu</math>.

<math>

g(x) = g(\mu) + \sum_{i=1}^n \frac{\partial g(\mu)}{\partial x_i} (x_i - \mu _i) + \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \frac{\partial^2 g(\mu)}{\partial x_i \, \partial x_j} (x_i - \mu _i)(x_j - \mu _j) + \cdots </math>

The mean value of <math>g</math> is given by the integral

<math>\mu_g = E[ g(x)] = \int_{-\infty}^\infty g(x) f_X(x)\, dx</math>

Inserting the first-order Taylor series yields

<math>\begin{align}

\mu_g & \approx \int_{-\infty}^\infty \left[ g(\mu) + \sum_{i=1}^n \frac{\partial g(\mu)}{\partial x_i} (x_i - \mu_i) \right] f_X(x) \, dx \\

& = \int_{-\infty}^\infty g(\mu) f_X(x) \, dx + \int_{-\infty}^\infty \sum_{i=1}^n \frac{\partial g(\mu)}{\partial x_i} (x_i - \mu_i) f_X(x) \, dx \\

& = g(\mu) \underbrace{\int_{-\infty}^\infty f_X(x) \, dx }_1

   + \sum_{i=1}^n \frac{\partial g(\mu)}{\partial x_i} \underbrace{\int_{-\infty}^\infty (x_i - \mu_i) f_X(x) \, dx}_0 \\

& = g(\mu).

\end{align}</math>

The variance of <math>g</math> is given by the integral

<math>

\sigma_g^2 = E\left([g(x) - \mu_g]^2\right) = \int_{-\infty}^\infty [g(x) - \mu_g]^2 f_X(x) \, dx. </math>

According to the computational formula for the variance, this can be written as

<math>

\sigma _g^2 = E\left([g(x) - \mu_g]^2\right) = E\left(g(x)^2\right) - \mu_g^2 = \int_{-\infty}^\infty g(x)^2 f_X(x) \, dx - \mu_g^2 </math>

Inserting the Taylor series yields

<math>\begin{align}

\sigma_g^2 & \approx \int_{-\infty}^\infty \left[ g(\mu) + \sum_{i = 1}^n \frac{\partial g(\mu)}{\partial x_i} (x_i - \mu _i) \right]^2 f_X (x) \, dx - \mu _g^2 \\
& = \int_{-\infty}^\infty \left\{ g(\mu)^2 + 2 g(\mu) \sum_{i = 1}^n \frac{\partial g(\mu)}{\partial x_i} (x_i - \mu_i) + \left[ \sum_{i = 1}^n \frac{\partial g(\mu)}{\partial x_i} (x_i - \mu _i) \right]^2 \right\} f_X (x) \, dx - \mu _g^2 \\
& = \int_{-\infty}^\infty g(\mu)^2 f_X (x) \, dx + \int_{-\infty}^\infty 2\,g(\mu) \sum_{i = 1}^n \frac{\partial g(\mu)}{\partial x_i} (x_i - \mu _i) f_X (x) \,dx \\
& \quad {}+ \int_{-\infty}^\infty \left[\sum_{i = 1}^n \frac{\partial g(\mu)}{\partial x_i} (x_i - \mu_i) \right]^2 f_X (x) \, dx - \mu _g^2 \\
& = g(\mu)^2 \underbrace{\int_{-\infty}^\infty f_X (x) \, dx}_1 + 2 g(\mu) \sum_{i = 1}^n \frac{\partial g(\mu)}{\partial x_i} \underbrace{\int_{-\infty}^\infty (x_i - \mu_i) f_X (x) \, dx}_0 \\
& \quad {}+ \int_{-\infty}^\infty \left[ \sum_{i = 1}^n \sum_{j = 1}^n \frac{\partial g(\mu)}{\partial x_i} \frac{\partial g(\mu)}{\partial x_j} (x_i - \mu _i)(x_j - \mu_j) \right] f_X (x) \, dx - \mu_g^2 \\
& = \underbrace{ g(\mu)^2 }_{\mu_g^2} + \sum_{i = 1}^n \sum_{j = 1}^n \frac{\partial g(\mu)}{\partial x_i} \frac{\partial g(\mu)}{\partial x_j} \underbrace{\int_{-\infty}^\infty (x_i - \mu_i)(x_j - \mu _j) f_X (x) \, dx }_{\operatorname{cov}\left(X_i, X_j\right)} - \mu_g^2 \\
& = \sum_{i = 1}^n \sum_{j = 1}^n \frac{\partial g(\mu)}{\partial x_i} \frac{\partial g(\mu)}{\partial x_j} \operatorname{cov}\left(X_i, X_j\right). \end{align}</math>
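Because every term above first order is dropped in this derivation, the approximation becomes exact when <math>g</math> is linear. This can be checked numerically; the sketch below (with assumed example numbers) compares the FOSM result for a linear objective with a Monte Carlo estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = np.array([1.5, -2.0]), 0.5      # g(x) = a.x + b is linear, so FOSM is exact
mu = np.array([1.0, 2.0])
cov = np.array([[0.3, 0.1], [0.1, 0.2]])

# FOSM: mean = g(mu), variance = a^T cov a (the gradient of a linear g is a)
fosm_mean = a @ mu + b
fosm_var = a @ cov @ a

# Monte Carlo estimate for comparison (Gaussian inputs assumed)
x = rng.multivariate_normal(mu, cov, size=200_000)
gx = x @ a + b
# gx.mean() and gx.var() should agree with fosm_mean and fosm_var
# up to sampling noise
```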

Higher-order approaches

The following abbreviations are introduced.

<math>\begin{align}
      g_\mu &= g(\mu), &
     g_{,i} &= \frac{\partial g(\mu)}{\partial x_i}, &
    g_{,ij} &= \frac{\partial^2 g (\mu)}{\partial x_i \, \partial x_j }, &
 \mu _{i,j} &= E\left[ (x_i  - \mu _i)^j \right]

\end{align}</math>

In the following, the entries of the random vector <math>X</math> are assumed to be independent. Considering also the second-order terms of the Taylor expansion, the approximation of the mean value is given by

<math>

\mu _g \approx g_\mu + \frac{1}{2}\sum_{i = 1}^n g_{,ii} \;\mu _{i,2} </math>

The second-order approximation of the variance is given by

<math>\begin{align}
 \sigma _g^2 \approx{}
 g_\mu^2 &+ \sum_{i = 1}^n g_{,i}^2 \,\mu _{i,2}  + \frac{1}{4}\sum_{i = 1}^n g_{,ii}^2 \,\mu _{i,4} + g_\mu \sum_{i = 1}^n g_{,ii} \,\mu _{i,2} + \sum_{i = 1}^n g_{,i} \,g_{,ii} \,\mu _{i,3} \\ 
         &+ \frac{1}{2}\sum_{i = 1}^n \sum_{j = i + 1}^n g_{,ii} \,g_{,jj} \,\mu _{i,2} \,\mu _{j,2} + \sum_{i = 1}^n \sum_{j = i + 1}^n g_{,ij}^2 \,\mu _{i,2} \,\mu _{j,2}  - \mu _g^2

\end{align}</math>

The skewness of <math>g</math> can be determined from the third central moment <math>\mu_{g,3} </math>. When considering only linear terms of the Taylor series, but higher-order moments, the third central moment is approximated by

<math>

\mu _{g,3} \approx \sum_{i = 1}^n g_{,i}^3 \;\mu _{i,3} </math>

For the second-order approximations of the third central moment as well as for the derivation of all higher-order approximations, see Appendix D of Ref.[3] Taking into account the quadratic terms of the Taylor series and the third moments of the input variables is referred to as the second-order third-moment method.[4] Note, however, that the full second-order approximation of the variance (given above) also involves fourth-order moments of the input parameters,[5] the full second-order approximation of the skewness involves moments up to sixth order,[3][6] and the full second-order approximation of the kurtosis involves moments up to eighth order.[6]
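The second-order formulas for independent inputs can be exercised on a case where the exact moments are known: for <math>g(x) = x^2</math> with <math>X \sim N(\mu, \sigma^2)</math>, the exact mean is <math>\mu^2 + \sigma^2</math> and the exact variance is <math>4\mu^2\sigma^2 + 2\sigma^4</math>, and both second-order approximations reproduce these values since <math>g</math> is quadratic. A sketch (the function name is illustrative; derivatives are taken by central differences, an assumption, whereas the formulas above use exact derivatives):

```python
import numpy as np

def second_order_moments(g, mu, m2, m3, m4, h=1e-4):
    """Second-order approximation of mean and variance for independent inputs.

    mu          : mean vector
    m2, m3, m4  : central moments E[(X_i - mu_i)^k] per component
    """
    mu = np.asarray(mu, dtype=float)
    n = mu.size
    g0 = g(mu)
    g1 = np.empty(n)          # first derivatives g_{,i}
    g2 = np.empty((n, n))     # second derivatives g_{,ij}
    for i in range(n):
        ei = np.zeros(n); ei[i] = h
        g1[i] = (g(mu + ei) - g(mu - ei)) / (2 * h)
        g2[i, i] = (g(mu + ei) - 2 * g0 + g(mu - ei)) / h**2
        for j in range(i + 1, n):
            ej = np.zeros(n); ej[j] = h
            g2[i, j] = g2[j, i] = (g(mu + ei + ej) - g(mu + ei - ej)
                                   - g(mu - ei + ej) + g(mu - ei - ej)) / (4 * h**2)
    # second-order mean: g_mu + (1/2) sum_i g_{,ii} mu_{i,2}
    mean_g = g0 + 0.5 * np.sum(np.diag(g2) * m2)
    # second-order variance, term by term as in the formula above
    var_g = (g0**2 + np.sum(g1**2 * m2) + 0.25 * np.sum(np.diag(g2)**2 * m4)
             + g0 * np.sum(np.diag(g2) * m2) + np.sum(g1 * np.diag(g2) * m3))
    for i in range(n):
        for j in range(i + 1, n):
            var_g += 0.5 * g2[i, i] * g2[j, j] * m2[i] * m2[j]
            var_g += g2[i, j]**2 * m2[i] * m2[j]
    var_g -= mean_g**2
    return mean_g, var_g

# Check: g(x) = x^2 with X ~ N(1, 0.5^2); exact mean 1.25, exact variance 1.125
m, v = second_order_moments(lambda x: x[0]**2, [1.0],
                            m2=np.array([0.25]),      # sigma^2
                            m3=np.array([0.0]),       # normal: zero skew
                            m4=np.array([3 * 0.25**2]))  # normal: 3 sigma^4
```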

Practical application

There are several examples in the literature where the FOSM method is employed to estimate the stochastic distribution of the buckling load of axially compressed structures (see e.g. Ref.[7][8][9][10]). For structures which are very sensitive to deviations from the ideal structure (like cylindrical shells), it has been proposed to use the FOSM method as a design approach. Often the applicability is checked by comparison with a Monte Carlo simulation. Two comprehensive application examples of the full second-order method, specifically oriented towards fatigue crack growth in a metal railway axle, are discussed and checked by comparison with a Monte Carlo simulation in Ref.[5][6]

In engineering practice, the objective function is often not given as an analytic expression but, for instance, as the result of a finite-element simulation. The derivatives of the objective function then need to be estimated, e.g. by the central difference method, in which case the number of evaluations of the objective function equals <math> 2 n + 1</math>. Depending on the number of random variables, this can still be significantly fewer evaluations than a Monte Carlo simulation requires. However, when using the FOSM method as a design procedure, a lower bound must be estimated, which the FOSM approach does not actually provide. Therefore, a type of distribution needs to be assumed for the objective function, taking into account the approximated mean value and standard deviation.
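The <math>2n + 1</math> evaluation count is easy to make concrete: one evaluation at the mean vector plus two per random variable for the central differences. A sketch with a call counter (function name and example numbers are illustrative; in a finite-element setting each call to <code>g</code> would be one simulation run):

```python
import numpy as np

def fosm_central_differences(g, mu, cov, h=1e-6):
    """FOSM mean/variance with a central-difference gradient.

    Returns the approximated mean, variance, and the number of
    objective evaluations, which is 2n + 1.
    """
    calls = 0
    def g_counted(x):
        nonlocal calls
        calls += 1
        return g(x)
    mu = np.asarray(mu, dtype=float)
    n = mu.size
    mean_g = g_counted(mu)                  # 1 evaluation at the mean
    grad = np.empty(n)
    for i in range(n):                      # 2 evaluations per variable
        e = np.zeros(n); e[i] = h
        grad[i] = (g_counted(mu + e) - g_counted(mu - e)) / (2 * h)
    var_g = grad @ np.asarray(cov) @ grad
    return mean_g, var_g, calls

m, v, calls = fosm_central_differences(lambda x: x[0]**2 + 3 * x[1],
                                       [1.0, 2.0],
                                       [[0.1, 0.0], [0.0, 0.2]])
# calls == 2*2 + 1 == 5
```

For comparison, a Monte Carlo simulation of the same problem would typically need thousands of evaluations to estimate the variance with comparable accuracy.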

References


  1. A. Haldar and S. Mahadevan, Probability, Reliability, and Statistical Methods in Engineering Design. John Wiley & Sons New York/Chichester, UK, 2000.
  2. Template:Cite journal
  3. B. Kriegesmann, "Probabilistic Design of Thin-Walled Fiber Composite Structures", Mitteilungen des Instituts für Statik und Dynamik der Leibniz Universität Hannover 15/2012, Gottfried Wilhelm Leibniz Universität Hannover, Hannover, Germany, 2012.
  4. Y. J. Hong, J. Xing, and J. B. Wang, "A Second-Order Third-Moment Method for Calculating the Reliability of Fatigue", Int. J. Press. Vessels Pip., 76 (8), pp 567–570, 1999.
  5. Mallor C, Calvo S, Núñez JL, Rodríguez-Barrachina R, Landaberea A. "Full second-order approach for expected value and variance prediction of probabilistic fatigue crack growth life." International Journal of Fatigue 2020;133:105454. https://doi.org/10.1016/j.ijfatigue.2019.105454.
  6. Mallor C, Calvo S, Núñez JL, Rodríguez-Barrachina R, Landaberea A. "Uncertainty propagation using the full second-order approach for probabilistic fatigue crack growth life." International Journal of Numerical Methods for Calculation and Design in Engineering (RIMNI) 2020:11. https://doi.org/10.23967/j.rimni.2020.07.004.
  7. I. Elishakoff, S. van Manen, P. G. Vermeulen, and J. Arbocz, "First-Order Second-Moment Analysis of the Buckling of Shells with Random Imperfections", AIAA J., 25 (8), pp 1113–1117, 1987.
  8. I. Elishakoff, "Uncertain Buckling: Its Past, Present and Future", Int. J. Solids Struct., 37 (46–47), pp 6869–6889, Nov. 2000.
  9. J. Arbocz and M. W. Hilburger, "Toward a Probabilistic Preliminary Design Criterion for Buckling Critical Composite Shells", AIAA J., 43 (8), pp 1823–1827, 2005.
  10. B. Kriegesmann, R. Rolfes, C. Hühne, and A. Kling, "Fast Probabilistic Design Procedure for Axially Compressed Composite Cylinders", Compos. Struct., 93, pp 3140–3149, 2011.