Euler–Maclaurin formula


In mathematics, the Euler–Maclaurin formula is a formula for the difference between an integral and a closely related sum. It can be used to approximate integrals by finite sums, or conversely to evaluate finite sums and infinite series using integrals and the machinery of calculus. For example, many asymptotic expansions are derived from the formula, and Faulhaber's formula for the sum of powers is an immediate consequence.

The formula was discovered independently by Leonhard Euler and Colin Maclaurin around 1735. Euler needed it to compute slowly converging infinite series while Maclaurin used it to calculate integrals. It was later generalized to Darboux's formula.

The formula

If <math>m</math> and <math>n</math> are natural numbers and <math>f(x)</math> is a real or complex valued continuous function for real numbers <math>x</math> in the interval <math>[m, n]</math>, then the integral <math display=block>I = \int_m^n f(x)\,dx</math> can be approximated by the sum (or vice versa) <math display=block>S = f(m + 1) + \cdots + f(n - 1) + f(n)</math> (see rectangle method). The Euler–Maclaurin formula provides expressions for the difference between the sum and the integral in terms of the higher derivatives <math>f^{(k)}(x)</math> evaluated at the endpoints of the interval, that is to say <math>x = m</math> and <math>x = n</math>.

Explicitly, for <math>p</math> a positive integer and a function <math>f(x)</math> that is <math>p</math> times continuously differentiable on the interval <math>[m, n]</math>, we have <math display=block>S - I = \sum_{k=1}^p {\frac{B_k}{k!} \left(f^{(k - 1)}(n) - f^{(k - 1)}(m)\right)} + R_p,</math> where <math>B_k</math> is the <math>k</math>th Bernoulli number (with <math>B_1 = \tfrac{1}{2}</math>) and <math>R_p</math> is an error term which depends on <math>n</math>, <math>m</math>, <math>p</math>, and <math>f</math> and is usually small for suitable values of <math>p</math>.

The formula is often written with the subscript taking only even values, since the odd Bernoulli numbers are zero except for <math>B_1</math>. In this case we have[1][2] <math display=block>\sum_{i=m}^n f(i) =

   \int^n_m f(x)\,dx + \frac{f(n) + f(m)}{2} +
   \sum_{k=1}^{\left\lfloor \frac{p}{2}\right\rfloor} \frac{B_{2k}}{(2k)!} \left(f^{(2k - 1)}(n) - f^{(2k - 1)}(m)\right) + R_p,

</math> or alternatively <math display=block>\sum_{i=m+1}^n f(i) =

   \int^n_m f(x)\,dx + \frac{f(n) - f(m)}{2} +
   \sum_{k=1}^{\left\lfloor \frac{p}{2}\right\rfloor} \frac{B_{2k}}{(2k)!} \left(f^{(2k - 1)}(n) - f^{(2k - 1)}(m)\right) + R_p.

</math>
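As an illustration, the following sketch checks the first even-index form numerically. The test function <math>f(x) = e^{-x}</math> on <math>[0, 10]</math> and the choice <math>p = 6</math> are arbitrary; the Bernoulli numbers <math>B_2, B_4, B_6</math> are hard-coded, and the leftover difference is the remainder <math>R_p</math>.

<syntaxhighlight lang="python">
# A minimal numerical check of the even-index form above, using the arbitrary
# smooth test function f(x) = exp(-x) on [m, n] = [0, 10] (a sketch, p = 6).
from math import exp, factorial

def f(x):        return exp(-x)
def deriv(j, x): return (-1)**j * exp(-x)       # j-th derivative of exp(-x)

m, n = 0, 10
B = {2: 1/6, 4: -1/30, 6: 1/42}                 # Bernoulli numbers B_2, B_4, B_6

lhs = sum(f(i) for i in range(m, n + 1))        # f(m) + f(m+1) + ... + f(n)
rhs = (1 - exp(-10)) + (f(n) + f(m)) / 2        # exact integral + boundary term
for k in (1, 2, 3):
    rhs += B[2*k] / factorial(2*k) * (deriv(2*k - 1, n) - deriv(2*k - 1, m))

print(lhs, rhs, lhs - rhs)                      # the leftover difference is R_p
</syntaxhighlight>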

The remainder term

The remainder term arises because the integral is usually not exactly equal to the sum. The formula may be derived by applying repeated integration by parts to successive intervals <math>[r, r + 1]</math> for <math>r = m, m + 1, \ldots, n - 1</math>. The boundary terms in these integrations lead to the main terms of the formula, and the leftover integrals form the remainder term.

The remainder term has an exact expression in terms of the periodized Bernoulli functions <math>P_k(x)</math>. The Bernoulli polynomials may be defined recursively by <math>B_0(x) = 1</math> and, for <math>k \geq 1</math>, <math display=block>\begin{align}

 B_k'(x) &= kB_{k - 1}(x), \\
 \int_0^1 B_k(x)\,dx &= 0.

\end{align}</math> The periodized Bernoulli functions are defined as <math display=block>P_k(x) = B_k\bigl(x - \lfloor x\rfloor\bigr),</math> where <math>\lfloor x\rfloor</math> denotes the largest integer less than or equal to <math>x</math>, so that <math>x - \lfloor x\rfloor</math> always lies in the interval <math>[0, 1)</math>.
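The recursive definition can be carried out symbolically. The following sketch uses the sympy library (an illustrative choice) to generate the first few Bernoulli polynomials directly from the two conditions above, the derivative relation and the zero-mean condition on <math>[0, 1]</math>.

<syntaxhighlight lang="python">
# A small sketch building the Bernoulli polynomials from the recursive
# definition above (B_k' = k B_{k-1}, zero mean on [0, 1]) with sympy.
import sympy as sp

x = sp.symbols('x')

def bernoulli_poly(k):
    if k == 0:
        return sp.Integer(1)
    prev = bernoulli_poly(k - 1)
    poly = k * sp.integrate(prev, x)          # antiderivative fixes B_k up to a constant
    c = -sp.integrate(poly, (x, 0, 1))        # choose the constant so the mean on [0, 1] is zero
    return sp.expand(poly + c)

for k in range(5):
    print(k, bernoulli_poly(k))               # 1, x - 1/2, x**2 - x + 1/6, ...
</syntaxhighlight>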

With this notation, the remainder term <math>R_p</math> equals <math display="block">R_{p} = (-1)^{p+1}\int_m^n f^{(p)}(x) \frac{P_p(x)}{p!}\,dx. </math>

When <math>k > 0</math>, it can be shown that for <math>0 \le x \le 1</math>, <math display=block>\bigl|B_k(x)\bigr| \le \frac{2 \cdot k!}{(2\pi)^k}\zeta(k),</math> where <math>\zeta</math> denotes the Riemann zeta function; one approach to prove this inequality is to obtain the Fourier series for the polynomials <math>B_k(x)</math>. The bound is achieved for even <math>k</math> when <math>x</math> is zero. The term <math>\zeta(k)</math> may be omitted for odd <math>k</math> but the proof in this case is more complex (see Lehmer).[3] Using this inequality, the size of the remainder term can be estimated as <math display=block>\left|R_p\right| \leq \frac{2 \zeta(p)}{(2\pi)^p}\int_m^n \left|f^{(p)}(x)\right|\,dx.</math>
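As a rough illustration of this estimate, the sketch below computes the exact remainder <math>R_p</math> for the arbitrary test function <math>f(x) = 1/x^2</math> with <math>p = 4</math> on <math>[10, 100]</math>, using the second form of the formula above, and compares its size with the stated bound.

<syntaxhighlight lang="python">
# A rough numerical illustration of the remainder bound (sketch, with the
# arbitrary test function f(x) = 1/x^2 and p = 4 on [m, n] = [10, 100]).
from math import pi, factorial

def f(x): return 1.0 / x**2
def deriv(j, x):                       # j-th derivative of 1/x^2
    return (-1)**j * factorial(j + 1) / x**(j + 2)

m, n, p = 10, 100, 4
S = sum(f(i) for i in range(m + 1, n + 1))            # f(m+1) + ... + f(n)
I = 1/m - 1/n                                          # exact integral of x^{-2}
series = (f(n) - f(m)) / 2
for k, B2k in ((1, 1/6), (2, -1/30)):                 # B_2, B_4
    series += B2k / factorial(2*k) * (deriv(2*k - 1, n) - deriv(2*k - 1, m))
R = S - I - series                                     # exact remainder R_p

zeta4 = pi**4 / 90
integral_abs = factorial(p + 1) / (p + 1) * (m**-(p + 1) - n**-(p + 1))  # integral of |f^(4)|
bound = 2 * zeta4 / (2*pi)**p * integral_abs
print(abs(R), bound, abs(R) <= bound)
</syntaxhighlight>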

Low-order cases

The Bernoulli numbers from <math>B_1</math> to <math>B_7</math> are <math>\tfrac{1}{2}, \tfrac{1}{6}, 0, -\tfrac{1}{30}, 0, \tfrac{1}{42}, 0</math>. Therefore the low-order cases of the Euler–Maclaurin formula are: <math display=block>\begin{align} \sum_{i=m}^n f(i) - \int_m^n f(x)\,dx &= \frac{f(m)+f(n)}{2} + \int_m^n f'(x)P_1(x)\,dx \\ &=\frac{f(m)+f(n)}{2} + \frac{1}{6}\frac{f'(n) - f'(m)}{2!} - \int_m^n f''(x)\frac{P_2(x)}{2!}\,dx \\ &=\frac{f(m)+f(n)}{2} + \frac{1}{6}\frac{f'(n) - f'(m)}{2!} + \int_m^n f'''(x)\frac{P_3(x)}{3!}\,dx \\ &=\frac{f(m)+f(n)}{2} + \frac{1}{6}\frac{f'(n) - f'(m)}{2!} - \frac{1}{30}\frac{f'''(n) - f'''(m)}{4!}-\int_m^n f^{(4)}(x) \frac{P_4(x)}{4!}\, dx \\ &=\frac{f(m)+f(n)}{2} + \frac{1}{6}\frac{f'(n) - f'(m)}{2!} - \frac{1}{30}\frac{f'''(n) - f'''(m)}{4!} + \int_m^n f^{(5)}(x)\frac{P_5(x)}{5!}\,dx \\ &=\frac{f(m)+f(n)}{2} + \frac{1}{6}\frac{f'(n) - f'(m)}{2!} - \frac{1}{30}\frac{f'''(n) - f'''(m)}{4!} + \frac{1}{42}\frac{f^{(5)}(n) - f^{(5)}(m)}{6!} - \int_m^n f^{(6)}(x)\frac{P_6(x)}{6!}\,dx \\ &=\frac{f(m)+f(n)}{2} + \frac{1}{6}\frac{f'(n) - f'(m)}{2!} - \frac{1}{30}\frac{f'''(n) - f'''(m)}{4!} + \frac{1}{42}\frac{f^{(5)}(n) - f^{(5)}(m)}{6!} + \int_m^n f^{(7)}(x)\frac{P_7(x)}{7!}\,dx. \end{align}</math>

Applications

The Basel problem

The Basel problem is to determine the sum <math display=block> 1 + \frac14 + \frac19 + \frac1{16} + \frac1{25} + \cdots = \sum_{n=1}^\infty \frac{1}{n^2}. </math>

Euler computed this sum to 20 decimal places with only a few terms of the Euler–Maclaurin formula in 1735. This probably convinced him that the sum equals <math>\tfrac{\pi^2}{6}</math>, which he proved in the same year.[4]
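In modern notation, the idea can be reproduced as follows: sum a modest number of terms of the series directly and then add Euler–Maclaurin correction terms at the cut-off. The sketch below is not Euler's actual computation; the cut-off <math>N = 20</math>, the use of Bernoulli numbers up to <math>B_{14}</math>, and the mpmath library for extra precision are all choices made for illustration.

<syntaxhighlight lang="python">
# A sketch of Euler's approach in modern notation: sum N terms of 1/k^2
# directly and add Euler–Maclaurin corrections at the cut-off N.
from mpmath import mp, mpf, pi

mp.dps = 30                                    # work with 30 significant digits
N = 20
bernoulli = [(2, mpf(1)/6), (4, mpf(-1)/30), (6, mpf(1)/42), (8, mpf(-1)/30),
             (10, mpf(5)/66), (12, mpf(-691)/2730), (14, mpf(7)/6)]

partial = sum(mpf(1)/k**2 for k in range(1, N + 1))
tail = mpf(1)/N - mpf(1)/(2*N**2)              # integral + boundary correction for the tail
for twok, B in bernoulli:
    tail += B / N**(twok + 1)                  # B_{2k} / N^{2k+1}

print(partial + tail)                          # ~ 1.64493406684822643647...
print(pi**2 / 6)                               # exact value for comparison
</syntaxhighlight>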

Sums involving a polynomial

If <math>f</math> is a polynomial and <math>p</math> is big enough, then the remainder term vanishes. For instance, if <math>f(x) = x^3</math>, we can choose <math>p = 2</math> to obtain, after simplification, <math display=block>\sum_{i=0}^n i^3 = \left(\frac{n(n + 1)}{2}\right)^2.</math>
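A quick sketch verifying this closed form for small <math>n</math> (the upper limit 7 is an arbitrary choice):

<syntaxhighlight lang="python">
# A quick check of the closed form above for the sum of cubes (sketch).
for n in range(0, 8):
    direct = sum(i**3 for i in range(0, n + 1))
    closed = (n * (n + 1) // 2) ** 2
    assert direct == closed
    print(n, direct, closed)
</syntaxhighlight>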

Approximation of integrals

The formula provides a means of approximating a finite integral. Let <math>a < b</math> be the endpoints of the interval of integration. Fix <math>N</math>, the number of points to use in the approximation, and denote the corresponding step size by <math>h = \tfrac{b - a}{N - 1}</math>. Set <math>x_i = a + (i - 1)h</math>, so that <math>x_1 = a</math> and <math>x_N = b</math>. Then:[5] <math display=block> \begin{align} I & = \int_a^b f(x)\,dx \\ &\sim h\left(\frac{f(x_1)}{2} + f(x_2) + \cdots + f(x_{N-1}) + \frac{f(x_N)}{2}\right) + \frac{h^2}{12}\bigl[f'(x_1) - f'(x_N)\bigr] - \frac{h^4}{720}\bigl[f'''(x_1) - f'''(x_N)\bigr] + \cdots \end{align} </math>

This may be viewed as an extension of the trapezoid rule by the inclusion of correction terms. Note that this asymptotic expansion is usually not convergent; there is some <math>p</math>, depending upon <math>f</math> and <math>h</math>, such that the terms past order <math>p</math> increase rapidly. Thus, the remainder term generally demands close attention.[5]
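The following sketch applies the first two correction terms to the trapezoidal estimate of the arbitrary test integral <math>\int_0^1 e^x\,dx</math>; since <math>f(x) = e^x</math>, its first and third derivatives coincide with <math>f</math> itself, which keeps the example short.

<syntaxhighlight lang="python">
# A sketch of the trapezoid rule with the first two Euler–Maclaurin correction
# terms, for the arbitrary test integral of exp(x) over [0, 1].
from math import exp

def f(x):  return exp(x)                    # f' = f''' = exp as well

a, b, N = 0.0, 1.0, 11                      # 11 points -> step size h = 0.1
h = (b - a) / (N - 1)
xs = [a + i*h for i in range(N)]            # xs[0] = x_1 = a, xs[-1] = x_N = b

trap = h * (f(xs[0])/2 + sum(f(x) for x in xs[1:-1]) + f(xs[-1])/2)
corr2 = trap + h**2/12 * (f(xs[0]) - f(xs[-1]))       # uses f'(x_1) - f'(x_N)
corr4 = corr2 - h**4/720 * (f(xs[0]) - f(xs[-1]))     # uses f'''(x_1) - f'''(x_N)

exact = exp(1) - 1
for name, val in (("trapezoid", trap), ("+h^2 term", corr2), ("+h^4 term", corr4)):
    print(name, val, abs(val - exact))      # each correction shrinks the error
</syntaxhighlight>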

The Euler–Maclaurin formula is also used for detailed error analysis in numerical quadrature. It explains the superior performance of the trapezoidal rule on smooth periodic functions and is used in certain extrapolation methods. Clenshaw–Curtis quadrature is essentially a change of variables to cast an arbitrary integral in terms of integrals of periodic functions where the Euler–Maclaurin approach is very accurate (in that particular case the Euler–Maclaurin formula takes the form of a discrete cosine transform). This technique is known as a periodizing transformation.

Asymptotic expansion of sums

In the context of computing asymptotic expansions of sums and series, usually the most useful form of the Euler–Maclaurin formula is <math display=block>\sum_{n=a}^b f(n) \sim \int_a^b f(x)\,dx + \frac{f(b) + f(a)}{2} + \sum_{k=1}^\infty \,\frac{B_{2k}}{(2k)!} \left(f^{(2k - 1)}(b) - f^{(2k - 1)}(a)\right),</math>

where <math>a</math> and <math>b</math> are integers.[6] Often the expansion remains valid even after taking the limits <math>a \to -\infty</math> or <math>b \to +\infty</math> or both. In many cases the integral on the right-hand side can be evaluated in closed form in terms of elementary functions even though the sum on the left-hand side cannot. Then all the terms in the asymptotic series can be expressed in terms of elementary functions. For example, <math display=block>\sum_{k=0}^\infty \frac{1}{(z + k)^2} \sim \underbrace{\int_0^\infty\frac{1}{(z + k)^2}\,dk}_{= \dfrac{1}{z}} + \frac{1}{2z^2} + \sum_{t = 1}^\infty \frac{B_{2t}}{z^{2t + 1}}.</math>

Here the left-hand side is equal to <math>\psi^{(1)}(z)</math>, namely the first-order polygamma function defined by

<math>\psi^{(1)}(z) = \frac{d^2}{dz^2}\log \Gamma(z);</math>

the gamma function <math>\Gamma(z)</math> is equal to <math>(z - 1)!</math> when <math>z</math> is a positive integer. This results in an asymptotic expansion for <math>\psi^{(1)}(z)</math>. That expansion, in turn, serves as the starting point for one of the derivations of precise error estimates for Stirling's approximation of the factorial function.
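As a numerical illustration, the sketch below compares the truncated asymptotic series with a direct evaluation of <math>\psi^{(1)}(z)</math> via scipy.special.polygamma; the test point <math>z = 10</math> and the truncation after <math>B_6</math> are arbitrary choices.

<syntaxhighlight lang="python">
# A sketch comparing the truncated asymptotic series above with a direct
# evaluation of the trigamma function psi^{(1)}(z) from scipy.
from scipy.special import polygamma

z = 10.0
asymptotic = 1/z + 1/(2*z**2)
for twot, B in ((2, 1/6), (4, -1/30), (6, 1/42)):     # B_2, B_4, B_6
    asymptotic += B / z**(twot + 1)

exact = polygamma(1, z)          # psi^{(1)}(z) = sum_{k>=0} 1/(z+k)^2
print(asymptotic, exact, abs(asymptotic - exact))
</syntaxhighlight>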

Examples

If <math>s</math> is an integer greater than 1 we have: <math display=block>\sum_{k=1}^n \frac{1}{k^s} \approx \frac 1{s-1}+\frac 12-\frac 1{(s-1)n^{s-1}}+\frac 1{2n^s}+\sum_{i=1}\frac{B_{2i}}{(2i)!}\left[\frac{(s+2i-2)!}{(s-1)!}-\frac{(s+2i-2)!}{(s-1)!n^{s+2i-1}}\right].</math>

Collecting the constants into a value of the Riemann zeta function, we can write an asymptotic expansion: <math display=block>\sum_{k=1}^n \frac{1}{k^s} \sim\zeta(s)-\frac 1{(s-1)n^{s-1}}+\frac 1{2n^s}-\sum_{i=1}\frac{B_{2i}}{(2i)!}\frac{(s+2i-2)!}{(s-1)!n^{s+2i-1}}.</math>

For <math>s</math> equal to 2 this simplifies to <math display=block>\sum_{k=1}^n \frac{1}{k^2} \sim\zeta(2)-\frac 1n+\frac 1{2n^2}-\sum_{i=1}\frac{B_{2i}}{n^{2i+1}},</math> or <math display=block>\sum_{k=1}^n \frac{1}{k^2} \sim \frac{\pi^2}{6} -\frac{1}{n} +\frac{1}{2n^2} -\frac{1}{6n^3}+\frac{1}{30n^5}-\frac{1}{42n^7} + \cdots.</math>

When <math>s = 1</math>, the corresponding technique gives an asymptotic expansion for the harmonic numbers: <math display=block>\sum_{k=1}^n \frac{1}{k} \sim \gamma + \log n + \frac{1}{2n} - \sum_{k=1}^\infty \frac{B_{2k}}{2kn^{2k}},</math> where <math>\gamma \approx 0.5772</math> is the Euler–Mascheroni constant.
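A short numerical check of this expansion (a sketch; the decimal value of <math>\gamma</math> is hard-coded and <math>n = 100</math> is an arbitrary choice):

<syntaxhighlight lang="python">
# A numerical illustration of the harmonic-number expansion above (sketch).
from math import log

gamma = 0.5772156649015329          # Euler–Mascheroni constant, hard-coded
n = 100
H_n = sum(1/k for k in range(1, n + 1))
approx = gamma + log(n) + 1/(2*n)
for k, B2k in ((1, 1/6), (2, -1/30), (3, 1/42)):      # B_2, B_4, B_6
    approx -= B2k / (2*k * n**(2*k))
print(H_n, approx, abs(H_n - approx))                  # agreement to machine precision
</syntaxhighlight>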

Proofs

Derivation by mathematical induction

We outline the argument given in Apostol.[1]

The Bernoulli polynomials <math>B_n(x)</math> and the periodic Bernoulli functions <math>P_n(x)</math> for <math>n = 0, 1, 2, \ldots</math> were introduced above.

The first several Bernoulli polynomials are <math display=block>\begin{align}

 B_0(x) &= 1, \\
 B_1(x) &= x - \tfrac{1}{2}, \\
 B_2(x) &= x^2 - x + \tfrac{1}{6}, \\
 B_3(x) &= x^3 - \tfrac{3}{2}x^2 + \tfrac{1}{2}x, \\
 B_4(x) &= x^4 - 2x^3 + x^2 - \tfrac{1}{30}, \\
        &\,\,\,\vdots

\end{align}</math>

The values <math>B_n(1)</math> are the Bernoulli numbers <math>B_n</math>. Notice that for <math>n \neq 1</math> we have <math display=block>B_n = B_n(1) = B_n(0),</math> and for <math>n = 1</math>, <math display=block>B_1 = B_1(1) = -B_1(0).</math>

The functions <math>P_n</math> agree with the Bernoulli polynomials on the interval <math>[0, 1]</math> and are periodic with period 1. Furthermore, except when <math>n = 1</math>, they are also continuous. Thus, <math display=block> P_n(0) = P_n(1) = B_n \quad \text{for }n \neq 1.</math>

Let <math>k</math> be an integer, and consider the integral <math display=block> \int_k^{k + 1} f(x)\,dx = \int_k^{k + 1} u\,dv,</math> where <math display=block>\begin{align}

  u &= f(x), \\
 du &= f'(x)\,dx, \\
 dv &= P_0(x)\,dx & \text{since }P_0(x) &= 1, \\
  v &= P_1(x).

\end{align}</math>

Integrating by parts, we get <math display=block>\begin{align}

\int_k^{k + 1} f(x)\,dx &= \bigl[uv\bigr]_k^{k + 1} - \int_k^{k + 1} v\,du \\
                        &= \bigl[f(x)P_1(x)\bigr]_k^{k + 1} - \int_k^{k+1} f'(x)P_1(x)\,dx \\
                        &= B_1(1)f(k+1)-B_1(0)f(k) - \int_k^{k+1} f'(x)P_1(x)\,dx.

\end{align}</math>

Using <math>B_1(0) = -\tfrac{1}{2}</math>, <math>B_1(1) = \tfrac{1}{2}</math>, and summing the above from <math>k = 0</math> to <math>k = n - 1</math>, we get <math display=block>\begin{align} \int_0^n f(x)\, dx &= \int_0^1 f(x)\,dx + \cdots + \int_{n-1}^n f(x)\,dx \\ &= \frac{f(0)}{2}+ f(1) + \dotsb + f(n-1) + \frac{f(n)}{2} - \int_0^n f'(x) P_1(x)\,dx. \end{align}</math>

Adding <math>\tfrac{f(n) - f(0)}{2}</math> to both sides and rearranging, we have <math display=block> \sum_{k=1}^n f(k) = \int_0^n f(x)\,dx + \frac{f(n) - f(0)}{2} + \int_0^n f'(x) P_1(x)\,dx.</math>
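As a sanity check, this identity can be verified numerically; the sketch below uses the arbitrary test function <math>f(x) = e^{-x}</math> on <math>[0, 10]</math>, with scipy.integrate.quad applied interval by interval so that the jumps of <math>P_1</math> at the integers do not affect the quadrature.

<syntaxhighlight lang="python">
# A direct numerical check of the identity above (sketch), with per-interval
# quadrature because P_1 jumps at each integer.
from math import exp, floor
from scipy.integrate import quad

def f(x):       return exp(-x)
def fprime(x):  return -exp(-x)
def P1(x):      return x - floor(x) - 0.5        # periodized Bernoulli function

n = 10
lhs = sum(f(k) for k in range(1, n + 1))

integral_f = quad(f, 0, n)[0]
integral_P = sum(quad(lambda x: fprime(x) * P1(x), k, k + 1)[0] for k in range(n))
rhs = integral_f + (f(n) - f(0)) / 2 + integral_P

print(lhs, rhs, abs(lhs - rhs))                  # agreement up to quadrature error
</syntaxhighlight>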

The formula above is the <math>p = 1</math> case of the summation formula. To continue the induction, we apply integration by parts to the error term: <math display=block>\int_k^{k+1} f'(x)P_1(x)\,dx = \int_k^{k + 1} u\,dv,</math> where <math display=block>\begin{align}

  u &= f'(x), \\
 du &= f''(x)\,dx, \\
 dv &= P_1(x)\,dx, \\
  v &= \tfrac{1}{2}P_2(x).

\end{align}</math>

The result of integrating by parts is <math display=block>\begin{align}

 \bigl[uv\bigr]_k^{k + 1} - \int_k^{k + 1} v\,du &= \left[\frac{f'(x)P_2(x)}{2} \right]_k^{k+1} - \frac{1}{2}\int_k^{k+1} f''(x)P_2(x)\,dx \\
                 &= \frac{B_2}{2}(f'(k + 1) - f'(k)) - \frac{1}{2}\int_k^{k + 1} f''(x)P_2(x)\,dx.

\end{align}</math>

Summing from <math>k = 0</math> to <math>k = n - 1</math> and substituting this for the lower order error term results in the <math>p = 2</math> case of the formula, <math display=block>\sum_{k=1}^n f(k) = \int_0^n f(x)\,dx + \frac{f(n) - f(0)}{2} + \frac{B_2}{2}\bigl(f'(n) - f'(0)\bigr) - \frac{1}{2}\int_0^n f''(x)P_2(x)\,dx.</math>

This process can be iterated. In this way we get a proof of the Euler–Maclaurin summation formula which can be formalized by mathematical induction, in which the induction step relies on integration by parts and on identities for periodic Bernoulli functions.
