Covariance intersection

Covariance intersection (CI) is an algorithm for combining two or more estimates of state variables in a Kalman filter when the correlation between them is unknown.[1][2][3][4]

Formulation

Items of information a and b are known and are to be fused into information item c. We know that a and b have means and covariances <math>\hat a</math>, <math>A</math> and <math>\hat b</math>, <math>B</math>, respectively, but the cross-correlation between them is not known. The covariance intersection update gives the mean and covariance of c as

<math>C^{-1} = \omega A^{-1} + (1-\omega) B^{-1} \, ,</math>
<math>\hat c = C(\omega A^{-1} \hat a + (1-\omega)B^{-1} \hat b) \, .</math>

where <math>\omega \in [0, 1]</math> is computed to minimize a selected measure of <math>C</math>, e.g., the trace or the logarithm of the determinant. While an optimization problem must be solved in higher dimensions, closed-form solutions exist for lower dimensions.[5]
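The update above can be implemented directly. The following is a minimal sketch in Python, assuming NumPy and SciPy are available; the function name covariance_intersection and the choice of the trace as the minimized measure are illustrative and not taken from any particular library.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(a_hat, A, b_hat, B):
    """Fuse estimates (a_hat, A) and (b_hat, B) with unknown cross-correlation.

    Implements C^{-1} = w A^{-1} + (1 - w) B^{-1} and
    c_hat = C (w A^{-1} a_hat + (1 - w) B^{-1} b_hat),
    with w chosen in [0, 1] to minimize the trace of C.
    """
    A_inv = np.linalg.inv(A)
    B_inv = np.linalg.inv(B)

    def fused_cov(w):
        return np.linalg.inv(w * A_inv + (1.0 - w) * B_inv)

    # One-dimensional optimization over the mixing weight w in [0, 1].
    result = minimize_scalar(lambda w: np.trace(fused_cov(w)),
                             bounds=(0.0, 1.0), method="bounded")
    w = result.x

    C = fused_cov(w)
    c_hat = C @ (w * A_inv @ a_hat + (1.0 - w) * B_inv @ b_hat)
    return c_hat, C, w
</syntaxhighlight>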

Application

CI can be used in place of the conventional Kalman update equations to ensure that the resulting estimate is conservative, regardless of the correlation between the two estimates, with covariance strictly non-increasing according to the chosen measure. The use of a fixed measure is necessary for rigor to ensure that a sequence of updates does not cause the filtered covariance to increase.[1][6]
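As an illustration of this substitution, the covariance_intersection sketch from the Formulation section can stand in for the usual Kalman measurement update when a prior estimate and an incoming estimate of the same state may be correlated in an unknown way; the numbers below are arbitrary.

<syntaxhighlight lang="python">
import numpy as np

# Prior estimate and an incoming estimate whose cross-correlation with the
# prior is unknown (e.g. both depend on shared past measurements).
prior_mean = np.array([1.0, 0.0])
prior_cov = np.array([[2.0, 0.3],
                      [0.3, 1.0]])

new_mean = np.array([1.2, -0.1])
new_cov = np.array([[1.0, 0.4],
                    [0.4, 1.5]])

# CI in place of the Kalman update: the fused covariance upper-bounds the
# true error covariance for every admissible cross-covariance, so repeated
# fusions cannot become overconfident the way naive fusion can.
fused_mean, fused_cov, w = covariance_intersection(prior_mean, prior_cov,
                                                   new_mean, new_cov)
print(w, fused_mean, np.trace(fused_cov))
</syntaxhighlight>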

Advantages

According to a survey paper[7] and related work,[8] covariance intersection has the following advantages:

  1. The identification and computation of the cross covariances are completely avoided.
  2. It yields a consistent fused estimate, and thus a non-divergent filter is obtained.
  3. The fused estimate is more accurate than each local estimate.
  4. It gives a common upper bound on the actual estimation error variances, which is robust with respect to unknown correlations.

These advantages have been demonstrated in the case of simultaneous localization and mapping (SLAM) involving over a million map features/beacons.[9]

Motivation

It is widely believed that unknown correlations exist in a diverse range of multi-sensor fusion problems. Neglecting the effects of unknown correlations can result in severe performance degradation, and even divergence, so the problem has attracted and sustained the attention of researchers for decades. However, owing to its intricate, unknown nature, it is not easy to come up with a satisfying scheme for fusion under unknown correlations. Ignoring the correlations outright, the so-called "naive fusion",[10] may lead to filter divergence. To compensate for this kind of divergence, a common sub-optimal approach is to artificially increase the system noise. However, this heuristic requires considerable expertise and compromises the integrity of the Kalman filter framework.[11]

References


  1. Uhlmann, Jeffrey (1995). Dynamic Map Building and Localization: New Theoretical Foundations (Ph.D. thesis). University of Oxford.
  2. Template:Cite conference
  3. Template:Cite journal
  4. Template:Cite conference
  5. Template:Cite conference
  6. Template:Cite journal
  7. Wangyan Li, Zidong Wang, Guoliang Wei, Lifeng Ma, Jun Hu, and Derui Ding. "A Survey on Multi-Sensor Fusion and Consensus Filtering for Sensor Networks." Discrete Dynamics in Nature and Society, vol. 2015, Article ID 683701, 12 pages, 2015.
  8. Template:Cite journal
  9. Template:Cite conference
  10. Template:Cite journal
  11. Template:Cite book