Differential of a function

In calculus, the differential represents the principal part of the change in a function <math>y = f(x)</math> with respect to changes in the independent variable. The differential <math>dy</math> is defined by <math display="block">dy = f'(x)\,dx,</math> where <math>f'(x)</math> is the derivative of <math>f</math> with respect to <math>x</math>, and <math>dx</math> is an additional real variable (so that <math>dy</math> is a function of <math>x</math> and <math>dx</math>). The notation is such that the equation

<math display="block">dy = \frac{dy}{dx}\, dx</math>

holds, where the derivative is represented in the Leibniz notation <math>dy/dx</math>, and this is consistent with regarding the derivative as the quotient of the differentials. One also writes

<math display="block">df(x) = f'(x)\,dx.</math>

The precise meaning of the variables <math>dy</math> and <math>dx</math> depends on the context of the application and the required level of mathematical rigor. The domain of these variables may take on a particular geometrical significance if the differential is regarded as a particular differential form, or analytical significance if the differential is regarded as a linear approximation to the increment of a function. Traditionally, the variables <math>dx</math> and <math>dy</math> are considered to be very small (infinitesimal), and this interpretation is made rigorous in non-standard analysis.
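
As a quick illustration (a standard worked example, not part of the original text), take <math>y = f(x) = x^2</math>, so that <math>dy = 2x\,dx</math>. At <math>x = 1</math> with <math>dx = 0.01</math> the differential gives <math>dy = 0.02</math>, while the exact increment is <math display="block">\Delta y = (1.01)^2 - 1^2 = 0.0201,</math> so the differential reproduces the change up to an error of order <math>(dx)^2</math>.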

History and usage

The differential was first introduced via an intuitive or heuristic definition by Isaac Newton and furthered by Gottfried Leibniz, who thought of the differential <math>dy</math> as an infinitely small (or infinitesimal) change in the value <math>y</math> of the function, corresponding to an infinitely small change <math>dx</math> in the function's argument <math>x</math>. For that reason, the instantaneous rate of change of <math>y</math> with respect to <math>x</math>, which is the value of the derivative of the function, is denoted by the fraction

<math display="block"> \frac{dy}{dx} </math> in what is called the Leibniz notation for derivatives. The quotient <math>dy/dx</math> is not infinitely small; rather it is a real number.

The use of infinitesimals in this form was widely criticized, for instance by the famous pamphlet The Analyst by Bishop Berkeley. Augustin-Louis Cauchy (1823) defined the differential without appeal to the atomism of Leibniz's infinitesimals.[1][2] Instead, Cauchy, following d'Alembert, inverted the logical order of Leibniz and his successors: the derivative itself became the fundamental object, defined as a limit of difference quotients, and the differentials were then defined in terms of it. That is, one was free to define the differential <math>dy</math> by an expression <math display="block">dy = f'(x)\,dx</math> in which <math>dy</math> and <math>dx</math> are simply new variables taking finite real values,[3] not fixed infinitesimals as they had been for Leibniz.[4]

According to Template:Harvtxt, Cauchy's approach was a significant logical improvement over the infinitesimal approach of Leibniz because, instead of invoking the metaphysical notion of infinitesimals, the quantities <math>dy</math> and <math>dx</math> could now be manipulated in exactly the same manner as any other real quantities in a meaningful way. Cauchy's overall conceptual approach to differentials remains the standard one in modern analytical treatments,[5] although the final word on rigor, a fully modern notion of the limit, was ultimately due to Karl Weierstrass.[6]

In physical treatments, such as those applied to the theory of thermodynamics, the infinitesimal view still prevails. Template:Harvtxt reconcile the physical use of infinitesimal differentials with their mathematical impossibility as follows. The differentials represent finite non-zero values that are smaller than the degree of accuracy required for the particular purpose for which they are intended. Thus "physical infinitesimals" need not appeal to a corresponding mathematical infinitesimal in order to have a precise sense.

Following twentieth-century developments in mathematical analysis and differential geometry, it became clear that the notion of the differential of a function could be extended in a variety of ways. In real analysis, it is more desirable to deal directly with the differential as the principal part of the increment of a function. This leads directly to the notion that the differential of a function at a point is a linear functional of an increment <math>\Delta x</math>. This approach allows the differential (as a linear map) to be developed for a variety of more sophisticated spaces, ultimately giving rise to such notions as the Fréchet or Gateaux derivative. Likewise, in differential geometry, the differential of a function at a point is a linear function of a tangent vector (an "infinitely small displacement"), which exhibits it as a kind of one-form: the exterior derivative of the function. In non-standard calculus, differentials are regarded as infinitesimals, which can themselves be put on a rigorous footing (see differential (infinitesimal)).

Definition

[File:Sentido geometrico del diferencial de una funcion.png — The differential of a function <math>f(x)</math> at a point <math>x_0</math>.]

The differential is defined in modern treatments of differential calculus as follows.[7] The differential of a function <math>f(x)</math> of a single real variable <math>x</math> is the function <math>df</math> of two independent real variables <math>x</math> and <math>\Delta x</math> given by

<math display="block">df(x, \Delta x) \ \stackrel{\mathrm{def}}{=} \ f'(x)\,\Delta x.</math>

One or both of the arguments may be suppressed, i.e., one may see <math>df(x)</math> or simply <math>df</math>. If <math>y = f(x)</math>, the differential may also be written as <math>dy</math>. Since <math>dx(x,\Delta x)=\Delta x</math>, it is conventional to write <math>dx=\Delta x</math> so that the following equality holds:

<math display="block">df(x) = f'(x) \, dx</math>

This notion of differential is broadly applicable when a linear approximation to a function is sought, provided the value of the increment <math>\Delta x</math> is small enough. More precisely, if <math>f</math> is a differentiable function at <math>x</math>, then the difference in <math>y</math>-values

<math display="block">\Delta y \ \stackrel{\rm{def}}{=}\ f(x+\Delta x) - f(x)</math>

satisfies

<math display="block">\Delta y = f'(x)\,\Delta x + \varepsilon = df(x) + \varepsilon\,</math>

where the error <math>\varepsilon</math> in the approximation satisfies <math>\varepsilon /\Delta x\rightarrow 0</math> as <math>\Delta x\rightarrow 0</math>. In other words, one has the approximate identity

<math display="block">\Delta y \approx dy</math>

in which the error can be made as small as desired relative to <math>\Delta x</math> by constraining <math>\Delta x</math> to be sufficiently small; that is to say, <math display="block">\frac{\Delta y - dy}{\Delta x}\to 0</math> as <math>\Delta x\rightarrow 0</math>. For this reason, the differential of a function is known as the principal (linear) part in the increment of a function: the differential is a linear function of the increment <math>\Delta x</math>, and although the error <math>\varepsilon</math> may be nonlinear, it tends to zero rapidly as <math>\Delta x</math> tends to zero.
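
A minimal numerical sketch of this limit (the sample function <math>f(x) = \sin x</math> and the step sizes are illustrative choices, not part of the original text):

<syntaxhighlight lang="python">
import math

# Compare the increment Δy = f(x+Δx) - f(x) with the differential
# dy = f'(x)·Δx, and check that ε = Δy - dy vanishes faster than Δx.
def f(x):
    return math.sin(x)

def fprime(x):
    return math.cos(x)

x = 1.0
for dx in (0.1, 0.01, 0.001, 0.0001):
    delta_y = f(x + dx) - f(x)   # actual increment Δy
    dy = fprime(x) * dx          # differential dy = f'(x)·dx
    eps = delta_y - dy           # approximation error ε
    print(f"dx = {dx:<8} eps/dx = {eps / dx:.6f}")
# eps/dx shrinks roughly in proportion to dx, i.e. ε/Δx → 0.
</syntaxhighlight>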

Differentials in several variables

{| class="wikitable"
|-
! Operator / Function
! <math>f(x)</math>
! <math>f(x, y, u(x, y), v(x, y))</math>
|-
! Differential
| 1: <math>df \,\overset{\underset{\mathrm{def}}{}}{=}\, f'_x\,dx</math>
| 2: <math>d_x f \,\overset{\underset{\mathrm{def}}{}}{=}\, f'_x\,dx</math><br />3: <math>df \,\overset{\underset{\mathrm{def}}{}}{=}\, f'_x\,dx + f'_y\,dy + f'_u\,du + f'_v\,dv</math>
|-
! Partial derivative
| <math>f'_x \,\overset{\underset{\mathrm{(1)}}{}}{=}\, \frac{df}{dx}</math>
| <math>f'_x \,\overset{\underset{\mathrm{(2)}}{}}{=}\, \frac{d_x f}{dx} = \frac{\partial f}{\partial x}</math>
|-
! Total derivative
| <math>\frac{df}{dx} \,\overset{\underset{\mathrm{(1)}}{}}{=}\, f'_x</math>
| <math>\frac{df}{dx} \,\overset{\underset{\mathrm{(3)}}{}}{=}\, f'_x + f'_u \frac{du}{dx} + f'_v \frac{dv}{dx};\quad \left(f'_y \frac{dy}{dx} = 0\right)</math>
|}

Following Template:Harvtxt, for functions of more than one independent variable,

<math display="block"> y = f(x_1,\dots,x_n), </math>

the partial differential of <math>y</math> with respect to any one of the variables <math>x_1</math> is the principal part of the change in <math>y</math> resulting from a change <math>dx_1</math> in that one variable. The partial differential is therefore

<math display="block"> \frac{\partial y}{\partial x_1} dx_1 </math>

involving the partial derivative of <math>y</math> with respect to <math>x_1</math>. The sum of the partial differentials with respect to all of the independent variables is the total differential

<math display="block"> dy = \frac{\partial y}{\partial x_1} dx_1 + \cdots + \frac{\partial y}{\partial x_n} dx_n, </math>

which is the principal part of the change in <math>y</math> resulting from changes in the independent variables <math>x_i</math>.

More precisely, in the context of multivariable calculus, following Template:Harvtxt, if <math>y = f(x_1,\dots,x_n)</math> is a differentiable function, then by the definition of differentiability, the increment

<math display="block">\begin{align} \Delta y &{}~\stackrel{\mathrm{def}}{=}~ f(x_1+\Delta x_1, \dots, x_n+\Delta x_n) - f(x_1,\dots,x_n)\\ &{}= \frac{\partial y}{\partial x_1} \Delta x_1 + \cdots + \frac{\partial y}{\partial x_n} \Delta x_n + \varepsilon_1\Delta x_1 +\cdots+\varepsilon_n\Delta x_n \end{align}</math>

where the error terms <math>\varepsilon_i</math> tend to zero as the increments <math>\Delta x_i</math> jointly tend to zero. The total differential is then rigorously defined as

<math display="block">dy = \frac{\partial y}{\partial x_1} \Delta x_1 + \cdots + \frac{\partial y}{\partial x_n} \Delta x_n.</math>

Since, with this definition, <math display="block">dx_i(\Delta x_1,\dots,\Delta x_n) = \Delta x_i,</math> one has <math display="block">dy = \frac{\partial y}{\partial x_1}\,d x_1 + \cdots + \frac{\partial y}{\partial x_n}\,d x_n.</math>

As in the case of one variable, the approximate identity holds

<math display="block">dy \approx \Delta y</math>

in which the total error can be made as small as desired relative to <math display="inline">\sqrt{\Delta x_1^2+\cdots +\Delta x_n^2}</math> by confining attention to sufficiently small increments.
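
The same behaviour can be checked numerically in two variables; in this sketch the function <math>f(x, y) = x^2 y</math> and the chosen increments are illustrative, not anything from the original text:

<syntaxhighlight lang="python">
import math

# For f(x, y) = x²·y the total differential is
#   df = (∂f/∂x)·Δx + (∂f/∂y)·Δy = 2xy·Δx + x²·Δy.
# Compare it with the exact increment Δf.
def f(x, y):
    return x * x * y

x, y = 2.0, 3.0
fx = 2 * x * y   # ∂f/∂x at (x, y)
fy = x * x       # ∂f/∂y at (x, y)

for h in (0.1, 0.01, 0.001):
    dx, dy = h, h                          # increments Δx, Δy
    delta_f = f(x + dx, y + dy) - f(x, y)  # exact increment
    df = fx * dx + fy * dy                 # total differential
    norm = math.hypot(dx, dy)              # √(Δx² + Δy²)
    print(f"h = {h}: (Δf - df)/norm = {(delta_f - df) / norm:.6f}")
# The ratio tends to zero, so df is the principal part of Δf.
</syntaxhighlight>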

Application of the total differential to error estimation

In measurement, the total differential is used in estimating the error <math>\Delta f</math> of a function <math>f</math> based on the errors <math>\Delta x,\Delta y,\ldots </math> of the parameters <math>x, y, \ldots</math>. Assuming that the interval is short enough for the change to be approximately linear:

<math display="block">\Delta f(x)=f'(x)\Delta x</math>

and that all variables are independent, then for all variables,

<math display="block">\Delta f = f_x \Delta x + f_y \Delta y + \cdots</math>

This is because the derivative <math>f_x</math> with respect to the particular parameter <math>x</math> gives the sensitivity of the function <math>f</math> to a change in <math>x</math>, in particular the error <math>\Delta x</math>. As they are assumed to be independent, the analysis describes the worst-case scenario. The absolute values of the component errors are used, because after simple computation, the derivative may have a negative sign. From this principle the error rules of summation, multiplication etc. are derived, e.g., for a product <math>f = ab</math>: <math display="block">\frac{\Delta f}{f} = \frac{\Delta(ab)}{ab} = \frac{b\,\Delta a + a\,\Delta b}{ab} = \frac{\Delta a}{a} + \frac{\Delta b}{b}.</math> That is to say, in multiplication, the total relative error is the sum of the relative errors of the parameters.

To illustrate how this depends on the function considered, consider the case where the function is <math>f(a,b)=a\ln b</math> instead. Then, it can be computed that the error estimate is <math display="block">\frac{\Delta f}{f} = \frac{\Delta a}{a} + \frac{\Delta b}{b \ln b}</math> with an extra factor of <math>\ln b</math> in the denominator that is not found in the case of a simple product. This additional factor tends to make the error smaller: for <math>b > e</math> one has <math>\ln b > 1</math>, so the contribution of <math>\Delta b</math> is divided by the extra <math>\ln b</math>.
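
A short sketch of the two estimates (the measured values and errors below are made-up sample numbers):

<syntaxhighlight lang="python">
import math

a, b = 5.0, 100.0    # sample measured values
da, db = 0.05, 1.0   # sample measurement errors Δa, Δb

# Simple product f = a·b:   Δf/f = Δa/a + Δb/b
rel_product = da / a + db / b

# f = a·ln b:               Δf/f = Δa/a + Δb/(b·ln b)
rel_log = da / a + db / (b * math.log(b))

print(f"relative error for f = a*b   : {rel_product:.4f}")
print(f"relative error for f = a*ln b: {rel_log:.4f}")  # smaller: extra 1/ln b
</syntaxhighlight>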

Higher-order differentials

Higher-order differentials of a function <math>y = f(x)</math> of a single variable <math>x</math> can be defined via:[8] <math display="block">d^2y = d(dy) = d(f'(x)dx) = (df'(x))dx = f''(x)\,(dx)^2,</math> and, in general, <math display="block">d^ny = f^{(n)}(x)\,(dx)^n.</math> Informally, this motivates Leibniz's notation for higher-order derivatives <math display="block">f^{(n)}(x) = \frac{d^n f}{dx^n}.</math> When the independent variable <math>x</math> itself is permitted to depend on other variables, then the expression becomes more complicated, as it must include also higher order differentials in <math>x</math> itself. Thus, for instance, <math display="block"> \begin{align} d^2 y &= f''(x)\,(dx)^2 + f'(x)d^2x \\[1ex] d^3 y &= f'''(x)\, (dx)^3 + 3f''(x)dx\,d^2x + f'(x)d^3x \end{align}</math> and so forth.

Similar considerations apply to defining higher order differentials of functions of several variables. For example, if <math>f</math> is a function of two variables <math>x</math> and <math>y</math>, then <math display="block">d^nf = \sum_{k=0}^n \binom{n}{k}\frac{\partial^n f}{\partial x^k \partial y^{n-k}}(dx)^k(dy)^{n-k},</math> where <math display="inline">\binom{n}{k}</math> is a binomial coefficient. In more variables, an analogous expression holds, but with an appropriate multinomial expansion rather than binomial expansion.[9]
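
For instance, writing out the case <math>n = 2</math> gives the expression that also appears (parenthesized) in the more general formula below: <math display="block">d^2f = \frac{\partial^2 f}{\partial x^2}(dx)^2 + 2\frac{\partial^2 f}{\partial x\,\partial y}\,dx\,dy + \frac{\partial^2 f}{\partial y^2}(dy)^2.</math>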

Higher order differentials in several variables also become more complicated when the independent variables are themselves allowed to depend on other variables. For instance, for a function <math>f</math> of <math>x</math> and <math>y</math> which are allowed to depend on auxiliary variables, one has <math display="block">d^2f = \left(\frac{\partial^2f}{\partial x^2} (dx)^2 + 2\frac{\partial^2f}{\partial x\partial y} dx\,dy + \frac{\partial^2f}{\partial y^2} (dy)^2\right) + \frac{\partial f}{\partial x}d^2x + \frac{\partial f}{\partial y} d^2y.</math>

Because of this notational awkwardness, the use of higher order differentials was roundly criticized by Template:Harvtxt, who concluded: "Finally, what is meant, or represented, by the equality [...]? In my opinion, nothing at all." In spite of this skepticism, higher order differentials did emerge as an important tool in analysis.[10]

In these contexts, the <math>n</math>-th order differential of the function <math>f</math> applied to an increment <math>\Delta x</math> is defined by <math display="block">d^nf(x,\Delta x) = \left.\frac{d^n}{dt^n} f(x+t\Delta x)\right|_{t=0}</math> or an equivalent expression, such as <math display="block">\lim_{t\to 0}\frac{\Delta^n_{t\Delta x} f}{t^n}</math> where <math>\Delta^n_{t\Delta x} f</math> is an <math>n</math>-th forward difference with increment <math>t\Delta x</math>.

This definition makes sense as well if <math>f</math> is a function of several variables (for simplicity taken here as a vector argument). Then the <math>n</math>-th differential defined in this way is a homogeneous function of degree <math>n</math> in the vector increment <math>\Delta x</math>. Furthermore, the Taylor series of <math>f</math> at the point <math>x</math> is given by <math display="block">f(x+\Delta x) \sim f(x) + df(x,\Delta x) + \frac{1}{2}d^2f(x,\Delta x) + \cdots + \frac{1}{n!} d^n f(x,\Delta x) + \cdots</math> The higher order Gateaux derivative generalizes these considerations to infinite dimensional spaces.
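
The directional definition above is easy to check symbolically; in this sketch the sample function <math>f = \sin</math> is an illustrative choice, not part of the original text:

<syntaxhighlight lang="python">
import sympy as sp

# The n-th differential of f at x with increment Δx is the n-th
# derivative of t ↦ f(x + t·Δx), evaluated at t = 0.
x, dx, t = sp.symbols('x Delta_x t')

def nth_differential(f, n):
    return sp.diff(f(x + t * dx), t, n).subs(t, 0)

print(nth_differential(sp.sin, 1))  # cos(x)*Delta_x      = f'(x)·Δx
print(nth_differential(sp.sin, 2))  # -sin(x)*Delta_x**2  = f''(x)·(Δx)²
print(nth_differential(sp.sin, 3))  # -cos(x)*Delta_x**3  = f'''(x)·(Δx)³
</syntaxhighlight>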

Properties

A number of properties of the differential follow in a straightforward manner from the corresponding properties of the derivative, partial derivative, and total derivative. These include:[11]

  • Linearity: for constants <math>a</math> and <math>b</math> and differentiable functions <math>f</math> and <math>g</math>, <math display="block">d(af + bg) = a\,df + b\,dg.</math>
  • Product rule: for two differentiable functions <math>f</math> and <math>g</math>, <math display="block">d(fg) = f\,dg + g\,df.</math>

An operation <math>d</math> with these two properties is known in abstract algebra as a derivation. They imply the power rule <math display="block"> d( f^n ) = n f^{n-1} df. </math> In addition, various forms of the chain rule hold, in increasing level of generality (a symbolic check of these identities appears after the list below):[12]

  • If <math>y = f(x_1, \dots, x_n)</math> and all of the variables <math>x_1, \dots, x_n</math> depend on another variable <math>t</math>, then <math display="block">\begin{align} dy = \frac{dy}{dt}\, dt &= \frac{\partial y}{\partial x_1}\, dx_1 + \cdots + \frac{\partial y}{\partial x_n}\, dx_n\\[1ex] &= \frac{\partial y}{\partial x_1} \frac{dx_1}{dt} \, dt + \cdots + \frac{\partial y}{\partial x_n} \frac{dx_n}{dt} \, dt. \end{align}</math> Heuristically, the chain rule for several variables can itself be understood by dividing through both sides of this equation by the infinitely small quantity <math>dt</math>.

  • More general analogous expressions hold, in which the intermediate variables <math>x_i</math> depend on more than one variable.
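
The derivation properties mentioned above can be checked symbolically; in this sketch the functions <math>f = \sin x</math> and <math>g = e^x</math> are illustrative choices:

<syntaxhighlight lang="python">
import sympy as sp

# The operator d(h) = h'(x)·dx is linear and obeys the Leibniz
# product rule; together these imply the power rule.
x, dx = sp.symbols('x dx')
a, b = sp.symbols('a b')   # constants
f = sp.sin(x)
g = sp.exp(x)

def d(h):
    return sp.diff(h, x) * dx   # differential of h

# Linearity: d(af + bg) = a·df + b·dg
assert sp.simplify(d(a * f + b * g) - (a * d(f) + b * d(g))) == 0
# Product rule: d(fg) = f·dg + g·df
assert sp.simplify(d(f * g) - (f * d(g) + g * d(f))) == 0
# Power rule: d(f^5) = 5·f^4·df
assert sp.simplify(d(f**5) - 5 * f**4 * d(f)) == 0
print("derivation properties verified")
</syntaxhighlight>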

General formulation

A consistent notion of differential can be developed for a function <math>f : \mathbf{R}^n \to \mathbf{R}^m</math> between two Euclidean spaces. Let <math>\mathbf{x}, \Delta\mathbf{x} \in \mathbf{R}^n</math> be a pair of Euclidean vectors. The increment in the function <math>f</math> is <math display="block">\Delta f = f(\mathbf{x}+\Delta\mathbf{x}) - f(\mathbf{x}).</math> If there exists an <math>m \times n</math> matrix <math>A</math> such that <math display="block">\Delta f = A\Delta\mathbf{x} + \|\Delta\mathbf{x}\|\boldsymbol{\varepsilon}</math> in which the vector <math>\boldsymbol{\varepsilon} \to 0</math> as <math>\Delta\mathbf{x} \to 0</math>, then <math>f</math> is by definition differentiable at the point <math>\mathbf{x}</math>. The matrix <math>A</math> is sometimes known as the Jacobian matrix, and the linear transformation that associates to the increment <math>\Delta\mathbf{x} \in \mathbf{R}^n</math> the vector <math>A\Delta\mathbf{x} \in \mathbf{R}^m</math> is, in this general setting, known as the differential <math>df(\mathbf{x})</math> of <math>f</math> at the point <math>\mathbf{x}</math>. This is precisely the Fréchet derivative, and the same construction can be made to work for a function between any Banach spaces.

Another fruitful point of view is to define the differential directly as a kind of directional derivative: <math display="block">df(\mathbf{x},\mathbf{h}) = \lim_{t\to 0}\frac{f(\mathbf{x}+t\mathbf{h})-f(\mathbf{x})}{t} = \left.\frac{d}{dt} f(\mathbf{x}+t\mathbf{h})\right|_{t=0},</math> which is the approach already taken for defining higher order differentials (and is most nearly the definition set forth by Cauchy). If <math>t</math> represents time and <math>\mathbf{x}</math> position, then <math>\mathbf{h}</math> represents a velocity instead of a displacement as we have heretofore regarded it. This yields yet another refinement of the notion of differential: that it should be a linear function of a kinematic velocity. The set of all velocities through a given point of space is known as the tangent space, and so <math>df(\mathbf{x})</math> gives a linear function on the tangent space: a differential form. With this interpretation, the differential of <math>f</math> is known as the exterior derivative, and has broad application in differential geometry because the notion of velocities and the tangent space makes sense on any differentiable manifold. If, in addition, the output value of <math>f</math> also represents a position (in a Euclidean space), then a dimensional analysis confirms that the output value of <math>df</math> must be a velocity. If one treats the differential in this manner, then it is known as the pushforward since it "pushes" velocities from a source space into velocities in a target space.
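
A numerical sketch of this general formulation (the map <math>f</math> and the step sizes below are illustrative choices, not part of the original text):

<syntaxhighlight lang="python">
import numpy as np

# For f : R² → R², the differential at x is the linear map given by
# the Jacobian matrix A; the remainder Δf - A·Δx is o(‖Δx‖).
def f(v):
    x, y = v
    return np.array([x**2 * y, np.sin(x) + y])

def jacobian(v):
    x, y = v
    return np.array([[2 * x * y, x**2],
                     [np.cos(x), 1.0]])

x0 = np.array([1.0, 2.0])
A = jacobian(x0)

for h in (1e-1, 1e-2, 1e-3):
    dx = np.array([h, -h])          # increment Δx
    delta_f = f(x0 + dx) - f(x0)    # exact increment
    err = np.linalg.norm(delta_f - A @ dx) / np.linalg.norm(dx)
    print(f"h = {h:.0e}: |Δf - AΔx| / |Δx| = {err:.2e}")
# The ratio tends to zero, exhibiting Δx ↦ A·Δx as the differential df(x).
</syntaxhighlight>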

Other approaches

Although the notion of having an infinitesimal increment <math>dx</math> is not well-defined in modern mathematical analysis, a variety of techniques exist for defining the infinitesimal differential so that the differential of a function can be handled in a manner that does not clash with the Leibniz notation. These include:

  • Differentials as linear maps, the approach underlying the definition of the derivative and the exterior derivative in differential geometry.
  • Differentials as nilpotent elements of commutative rings, an approach popular in algebraic geometry.
  • Differentials in smooth models of set theory (synthetic differential geometry or smooth infinitesimal analysis), in which ideas from topos theory are used to introduce nilpotent infinitesimals.
  • Differentials as infinitesimals in hyperreal number systems, the approach of non-standard analysis pioneered by Abraham Robinson.

Examples and applications

Differentials may be effectively used in numerical analysis to study the propagation of experimental errors in a calculation, and thus the overall numerical stability of a problem Template:Harv. Suppose that the variable <math>x</math> represents the outcome of an experiment and <math>y</math> is the result of a numerical computation applied to <math>x</math>. The question is to what extent errors in the measurement of <math>x</math> influence the outcome of the computation of <math>y</math>. If <math>x</math> is known to within <math>\Delta x</math> of its true value, then Taylor's theorem gives the following estimate on the error <math>\Delta y</math> in the computation of <math>y</math>: <math display="block">\Delta y = f'(x)\Delta x + \frac{(\Delta x)^2}{2}f''(\xi)</math> where <math>\xi = x + \theta\Delta x</math> for some <math>0 < \theta < 1</math>. If <math>\Delta x</math> is small, then the second order term is negligible, so that <math>\Delta y</math> is, for practical purposes, well-approximated by <math>dy = f'(x)\Delta x</math>.

The differential is often useful to rewrite a differential equation <math display="block"> \frac{dy}{dx} = g(x) </math> in the form <math display="block"> dy = g(x)\,dx, </math> in particular when one wants to separate the variables.
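
A small symbolic sketch of this manipulation (the right-hand side <math>g(x) = \cos x \, e^x</math> is an illustrative choice):

<syntaxhighlight lang="python">
import sympy as sp

# Rewriting dy/dx = g(x) as dy = g(x)·dx and integrating both
# sides gives y = ∫ g(x) dx + C.
x, C = sp.symbols('x C')
g = sp.cos(x) * sp.exp(x)

y = sp.integrate(g, x) + C
print(y)   # exp(x)*sin(x)/2 + exp(x)*cos(x)/2 + C

# Check by differentiating back:
assert sp.simplify(sp.diff(y, x) - g) == 0
</syntaxhighlight>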

Notes

  1. For a detailed historical account of the differential, see Template:Harvnb, especially page 275 for Cauchy's contribution on the subject. An abbreviated account appears in Template:Harvnb.
  2. Cauchy explicitly denied the possibility of actual infinitesimal and infinite quantities Template:Harv, and took the radically different point of view that "a variable quantity becomes infinitely small when its numerical value decreases indefinitely in such a way as to converge to zero" (Template:Harvnb; translation from Template:Harvnb).
  3. Template:Harvnb
  4. Template:Harvnb: "The differentials as thus defined are only new variables, and not fixed infinitesimals..."
  5. Template:Harvnb: "Here we remark merely in passing that it is possible to use this approximate representation of the increment <math>\Delta y</math> by the linear expression <math>hf'(x)</math> to construct a logically satisfactory definition of a "differential", as was done by Cauchy in particular."
  6. Template:Harvnb
  7. See, for instance, the influential treatises of Template:Harvnb, Template:Harvnb, Template:Harvnb, and Template:Harvnb. Tertiary sources for this definition include also Template:Harvnb and Template:Harvnb.
  8. Template:Harvnb. See also, for instance, Template:Harvnb.
  9. Template:Harvnb
  10. In particular to infinite dimensional holomorphy Template:Harv and numerical analysis via the calculus of finite differences.
  11. Template:Harvnb
  12. Template:Harvnb
  13. Template:Harvnb.
  14. See Template:Harvnb and Template:Harvnb.
  15. See Template:Harvnb and Template:Harvnb.
