Huber loss


In statistics, the Huber loss is a loss function used in robust regression that is less sensitive to outliers in data than the squared error loss. A variant for classification is also sometimes used.

Definition

File:Huber loss.svg
Huber loss (green, <math>\delta=1</math>) and squared error loss (blue) as a function of <math>y - f(x)</math>

The Huber loss function describes the penalty incurred by an estimation procedure <math>f</math>. Huber (1964) defines the loss function piecewise by[1]

<math>
L_\delta (a) = \begin{cases}
\frac{1}{2}a^2 & \text{for } |a| \le \delta, \\
\delta \cdot \left(|a| - \frac{1}{2}\delta\right) & \text{otherwise.}
\end{cases}
</math>

This function is quadratic for small values of <math>a</math>, and linear for large values, with equal values and slopes of the different sections at the two points where <math>|a| = \delta</math>. The variable <math>a</math> often refers to the residuals, that is, to the difference between the observed and predicted values <math>a = y - f(x)</math>, so the former can be expanded to[2]

<math>
L_\delta(y, f(x)) = \begin{cases}
\frac{1}{2}(y - f(x))^2 & \text{for } |y - f(x)| \le \delta, \\
\delta \cdot \left(|y - f(x)| - \frac{1}{2}\delta\right) & \text{otherwise.}
\end{cases}
</math>
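In code, this piecewise definition translates directly; the following is a minimal NumPy sketch (the function name and its vectorized form are illustrative, not part of any standard library):

<syntaxhighlight lang="python">
import numpy as np

def huber_loss(a, delta=1.0):
    """Huber loss of the residuals a = y - f(x), applied elementwise."""
    a = np.asarray(a, dtype=float)
    quadratic = 0.5 * a**2                       # |a| <= delta branch
    linear = delta * (np.abs(a) - 0.5 * delta)   # |a| > delta branch
    return np.where(np.abs(a) <= delta, quadratic, linear)

print(huber_loss([0.5, 3.0], delta=1.0))  # [0.125, 2.5]
</syntaxhighlight>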

The Huber loss is the convolution of the absolute value function with the rectangular function, scaled and translated. Thus it "smooths out" the former's corner at the origin.
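This can be checked numerically. The sketch below assumes the rectangular function is the uniform density on <math>[-\delta, \delta]</math> and uses the scaling <math>\delta</math> and the translation <math>-\tfrac{1}{2}\delta^2</math> (one consistent choice of the scaling and translation):

<syntaxhighlight lang="python">
import numpy as np

delta, dx = 1.0, 1e-3
x = np.arange(-10.0, 10.0 + dx, dx)                          # grid for |x|
kernel = np.full(int(2 * delta / dx) + 1, 1 / (2 * delta))   # uniform density on [-delta, delta]

# discrete convolution of the absolute value with the rectangular kernel
conv = np.convolve(np.abs(x), kernel, mode="same") * dx
smoothed = delta * conv - 0.5 * delta**2                     # scale and translate

huber = np.where(np.abs(x) <= delta, 0.5 * x**2,
                 delta * (np.abs(x) - 0.5 * delta))

interior = np.abs(x) < 8                                     # avoid edge effects of mode="same"
print(np.max(np.abs(smoothed[interior] - huber[interior])))  # close to zero (grid error only)
</syntaxhighlight>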

File:Comparison of loss functions.png
Comparison of Huber loss with other loss functions used for robust regression.

Motivation

Two very commonly used loss functions are the squared loss, <math>L(a) = a^2</math>, and the absolute loss, <math>L(a)=|a|</math>. The squared loss function results in an arithmetic mean-unbiased estimator, and the absolute-value loss function results in a median-unbiased estimator (in the one-dimensional case, and a geometric median-unbiased estimator for the multi-dimensional case). The squared loss has the disadvantage that it tends to be dominated by outliers: when summing over a set of <math>a</math>'s (as in <math display="inline">\sum_{i=1}^n L(a_i) </math>), the sample mean is influenced too much by a few particularly large <math>a</math>-values when the distribution is heavy-tailed; in terms of estimation theory, the asymptotic relative efficiency of the mean is poor for heavy-tailed distributions.
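A small numerical illustration of this trade-off (a sketch: the heavy-tailed sample is artificial, the helper names are illustrative, and <math>\delta = 1.345</math> is a tuning constant commonly used in robust statistics):

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
# mostly standard normal residuals, plus a few gross outliers
a = np.concatenate([rng.normal(0, 1, 97), [50.0, 60.0, 80.0]])

def huber(r, delta=1.345):
    return np.where(np.abs(r) <= delta, 0.5 * r**2,
                    delta * (np.abs(r) - 0.5 * delta))

mean_hat = a.mean()                 # minimizes the summed squared loss
median_hat = np.median(a)           # minimizes the summed absolute loss
huber_hat = minimize_scalar(lambda m: huber(a - m).sum()).x

# the mean is dragged toward the outliers; the median and Huber estimates are not
print(mean_hat, median_hat, huber_hat)
</syntaxhighlight>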

As defined above, the Huber loss function is strongly convex in a uniform neighborhood of its minimum <math>a=0</math>; at the boundary of this neighborhood, at the points <math>a=-\delta</math> and <math>a=\delta</math>, it extends differentiably to an affine function. These properties allow it to combine much of the sensitivity of the mean-unbiased, minimum-variance estimator of the mean (obtained with the quadratic loss function) with the robustness of the median-unbiased estimator (obtained with the absolute value function).
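Concretely, the derivative of the Huber loss is the residual clipped to <math>[-\delta, \delta]</math>: it matches the derivative <math>a</math> of the quadratic piece inside the neighborhood and saturates at <math>\pm\delta</math> outside it. A minimal sketch:

<syntaxhighlight lang="python">
import numpy as np

def huber_grad(a, delta=1.0):
    """Derivative of the Huber loss: a for |a| <= delta, delta*sign(a) otherwise."""
    return np.clip(a, -delta, delta)

# finite-difference check against the piecewise definition
def huber(a, delta=1.0):
    return np.where(np.abs(a) <= delta, 0.5 * a**2, delta * (np.abs(a) - 0.5 * delta))

a, eps = np.linspace(-3, 3, 13), 1e-6
print(np.max(np.abs((huber(a + eps) - huber(a - eps)) / (2 * eps) - huber_grad(a))))  # ~0
</syntaxhighlight>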

Pseudo-Huber loss function

The Pseudo-Huber loss function can be used as a smooth approximation of the Huber loss function. It combines the best properties of L2 squared loss and L1 absolute loss by being strongly convex close to the target/minimum and less steep for extreme values. The value of <math>\delta</math> controls both the scale at which the Pseudo-Huber loss transitions from L2 loss near the minimum to L1 loss for extreme values, and the steepness at those extreme values. The Pseudo-Huber loss function has continuous derivatives of all orders. It is defined as[3][4]

<math>L_\delta (a) = \delta^2\left(\sqrt{1+(a/\delta)^2}-1\right).</math>

As such, this function approximates <math>a^2/2</math> for small values of <math>a</math>, and approximates a straight line with slope <math>\delta</math> for large values of <math>a</math>.
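Both limiting behaviours are easy to observe numerically; a minimal sketch (the function name is illustrative):

<syntaxhighlight lang="python">
import numpy as np

def pseudo_huber(a, delta=1.0):
    return delta**2 * (np.sqrt(1.0 + (a / delta)**2) - 1.0)

print(pseudo_huber(0.01))             # ~5.0e-05, i.e. a^2/2 near the minimum
print(pseudo_huber(1000.0) / 1000.0)  # ~1.0, i.e. slope delta for large |a|
</syntaxhighlight>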

While the above is the most common form, other smooth approximations of the Huber loss function also exist.[5]

Variant for classification

For classification purposes, a variant of the Huber loss called modified Huber is sometimes used. Given a prediction <math>f(x)</math> (a real-valued classifier score) and a true binary class label <math>y \in \{+1, -1\}</math>, the modified Huber loss is defined as[6]

<math>
L(y, f(x)) = \begin{cases}
\max(0, 1 - y \, f(x))^2 & \text{for } y \, f(x) > -1, \\
-4 y \, f(x) & \text{otherwise.}
\end{cases}
</math>

The term <math>\max(0, 1 - y \, f(x))</math> is the hinge loss used by support vector machines; the quadratically smoothed hinge loss is a generalization of <math>L</math>.[6]
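A direct translation of this definition (a minimal sketch; the function name is illustrative):

<syntaxhighlight lang="python">
import numpy as np

def modified_huber(y, score):
    """Modified Huber loss for labels y in {+1, -1} and real-valued scores f(x)."""
    z = y * score  # the margin y * f(x)
    return np.where(z > -1.0, np.maximum(0.0, 1.0 - z)**2, -4.0 * z)

print(modified_huber(1, 0.3))   # ~0.49: inside the margin, quadratic penalty
print(modified_huber(1, -2.0))  # 8.0: badly misclassified, linear penalty
</syntaxhighlight>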

Applications

The Huber loss function is used in robust statistics, M-estimation and additive modelling.[7]
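For instance, scikit-learn exposes the Huber loss for regression as HuberRegressor (whose threshold parameter is named epsilon) and the modified Huber loss for classification via SGDClassifier with loss="modified_huber"; a minimal usage sketch on artificial data:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.linear_model import HuberRegressor, SGDClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=100)
y[:5] += 30.0                                   # contaminate with a few gross outliers

reg = HuberRegressor(epsilon=1.35).fit(X, y)    # epsilon plays the role of delta
print(reg.coef_)                                # close to 3 despite the outliers

clf = SGDClassifier(loss="modified_huber").fit(X, np.sign(X[:, 0]))
print(clf.predict([[1.0], [-1.0]]))             # [ 1. -1.]
</syntaxhighlight>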

References


  1. Huber, Peter J. (1964). "Robust Estimation of a Location Parameter". Annals of Mathematical Statistics. 35 (1): 73–101.
  2. Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome. The Elements of Statistical Learning. Springer. Compared to Hastie et al., the loss is scaled by a factor of ½, to be consistent with Huber's original definition given earlier.
  3. Template:Cite journal
  4. Template:Cite book
  5. Template:Cite journal
  6. Template:Cite conference
  7. Template:Cite journal