Camera resectioning
Camera resectioning is the process of estimating the parameters of a pinhole camera model approximating the camera that produced a given photograph or video; it determines which incoming light ray is associated with each pixel on the resulting image. In essence, the process determines the pose of the pinhole camera.
Usually, the camera parameters are represented in a 3 × 4 projection matrix called the camera matrix. The extrinsic parameters define the camera pose (position and orientation) while the intrinsic parameters specify the camera image format (focal length, pixel size, and image origin).
This process is often called geometric camera calibration or simply camera calibration, although that term may also refer to photometric camera calibration or be restricted to the estimation of the intrinsic parameters only. Exterior orientation and interior orientation refer to the determination of only the extrinsic and intrinsic parameters, respectively.
Classic camera calibration requires special calibration objects in the scene, which camera auto-calibration does not. Camera resectioning is often used in stereo vision, where the camera projection matrices of two cameras are used to calculate the 3D world coordinates of a point viewed by both cameras.
Formulation
The camera projection matrix is derived from the intrinsic and extrinsic parameters of the camera, and is often represented as a series of transformations; e.g., a matrix of camera intrinsic parameters, a 3 × 3 rotation matrix, and a translation vector. The camera projection matrix can be used to associate points in a camera's image space with locations in 3D world space.
Homogeneous coordinates
In this context, we use <math>[u\ v\ 1]^T</math> to represent a 2D point position in pixel coordinates and <math>[x_w\ y_w\ z_w\ 1]^T</math> is used to represent a 3D point position in world coordinates. In both cases, they are represented in homogeneous coordinates (i.e. they have an additional last component, which is initially, by convention, a 1), which is the most common notation in robotics and rigid body transforms.
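The homogeneous convention can be sketched with a pair of small helper functions (the names `to_homogeneous` and `from_homogeneous` are illustrative, not a standard API):

```python
import numpy as np

def to_homogeneous(p):
    """Append a 1 to a point, e.g. [x, y] -> [x, y, 1]."""
    return np.append(p, 1.0)

def from_homogeneous(p):
    """Divide by the last component and drop it."""
    return p[:-1] / p[-1]

pixel = to_homogeneous(np.array([320.0, 240.0]))   # [320., 240., 1.]
world = to_homogeneous(np.array([1.0, 2.0, 3.0]))  # [1., 2., 3., 1.]
```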
Projection
Referring to the pinhole camera model, a camera matrix <math>M</math> is used to denote a projective mapping from world coordinates to pixel coordinates.
- <math>z_{c}\begin{bmatrix}
u\\ v\\ 1\end{bmatrix}=K\, \begin{bmatrix} R & T\end{bmatrix}\begin{bmatrix} x_{w}\\ y_{w}\\ z_{w}\\ 1\end{bmatrix} =M \begin{bmatrix} x_{w}\\ y_{w}\\ z_{w}\\ 1\end{bmatrix} </math>
where <math> M = K\, \begin{bmatrix} R & T\end{bmatrix}</math>. By convention, <math> u,v </math> are the x and y coordinates of the pixel in the image, <math>K</math> is the intrinsic matrix as described below, and <math>R</math> and <math>T</math> form the extrinsic matrix as described below. <math>x_{w},y_{w},z_{w}</math> are the world coordinates, relative to the world origin, of the point from which the light ray hits the camera sensor. Dividing the matrix product by <math>z_{c}</math>, the depth of the point expressed in the camera coordinate frame, yields the theoretical pixel coordinates.
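A minimal numerical sketch of this projection, with illustrative (assumed) values for <math>K</math>, <math>R</math> and <math>T</math>:

```python
import numpy as np

# Hypothetical intrinsics for illustration: 800-pixel focal length,
# principal point at (320, 240), zero skew.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                         # camera aligned with the world axes
T = np.zeros((3, 1))                  # camera at the world origin

M = K @ np.hstack([R, T])             # 3x4 camera matrix M = K [R | T]

Xw = np.array([0.5, 0.25, 2.0, 1.0])  # world point in homogeneous coords
uvw = M @ Xw                          # [z_c * u, z_c * v, z_c]
u, v = uvw[:2] / uvw[2]               # divide by z_c to get pixel coords
# u = 800*0.5/2 + 320 = 520,  v = 800*0.25/2 + 240 = 340
```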
Intrinsic parameters
- <math>K=\begin{bmatrix}
\alpha_{x} & \gamma & u_{0}\\ 0 & \alpha_{y} & v_{0}\\ 0 & 0 & 1\end{bmatrix}</math>
The matrix <math>K</math> contains the 5 intrinsic parameters of the specific camera model. These parameters encompass focal length, image sensor format, and camera principal point. The parameters <math>\alpha_{x} = f \cdot m_{x}</math> and <math>\alpha_{y} = f \cdot m_{y}</math> represent focal length in terms of pixels, where <math>m_{x}</math> and <math>m_{y}</math> are the inverses of the width and height of a pixel on the projection plane and <math>f</math> is the focal length in terms of distance. [1] <math>\gamma</math> represents the skew coefficient between the x and the y axis, and is often 0. <math>u_{0}</math> and <math>v_{0}</math> represent the principal point, which would ideally be in the center of the image.
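Assembling <math>K</math> from assumed example values (a 4 mm lens with 5 µm square pixels, i.e. <math>m_x = m_y = 200\,000</math> pixels per metre) might look like:

```python
import numpy as np

# Illustrative, assumed sensor values.
f = 0.004               # focal length in metres (4 mm)
m_x = 200_000.0         # pixels per metre along x (inverse pixel width)
m_y = 200_000.0         # pixels per metre along y (inverse pixel height)
gamma = 0.0             # skew coefficient, usually 0
u0, v0 = 320.0, 240.0   # principal point in pixels

alpha_x = f * m_x       # focal length in x-pixels (here 800)
alpha_y = f * m_y       # focal length in y-pixels (here 800)

K = np.array([[alpha_x, gamma,   u0],
              [0.0,     alpha_y, v0],
              [0.0,     0.0,     1.0]])
```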
Nonlinear intrinsic parameters such as lens distortion are also important, although they cannot be included in the linear camera model described by the intrinsic parameter matrix. Many modern camera calibration algorithms estimate these parameters as well, using non-linear optimisation techniques; jointly optimising the camera and distortion parameters in this way is generally known as bundle adjustment.
Extrinsic parameters
<math>{}\begin{bmatrix}R_{3 \times 3} & T_{3 \times 1} \\ 0_{1 \times 3} & 1\end{bmatrix}_{4 \times 4}</math>
<math>R,T</math> are the extrinsic parameters which denote the coordinate system transformations from 3D world coordinates to 3D camera coordinates. Equivalently, the extrinsic parameters define the position of the camera center and the camera's heading in world coordinates. <math>T</math> is the position of the origin of the world coordinate system expressed in coordinates of the camera-centered coordinate system. <math>T</math> is often mistakenly considered the position of the camera. The position, <math>C</math>, of the camera expressed in world coordinates is <math>C = -R^{-1}T = -R^T T</math> (since <math>R</math> is a rotation matrix).
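The relation <math>C = -R^T T</math> can be checked numerically with an assumed example pose:

```python
import numpy as np

# Assumed example pose: rotate 90 degrees about the z axis, then translate.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([1.0, 2.0, 3.0])   # world origin expressed in camera coords

# Camera centre in world coordinates: C = -R^T T (R^-1 = R^T for rotations).
C = -R.T @ T

# Sanity check: mapping C into camera coordinates gives the zero vector,
# i.e. the camera centre really sits at the origin of its own frame.
assert np.allclose(R @ C + T, 0.0)
```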
Camera calibration is often used as an early stage in computer vision.
When a camera is used, light from the environment is focused on an image plane and captured. This process reduces the dimensions of the data taken in by the camera from three to two (light from a 3D scene is stored on a 2D image). Each pixel on the image plane therefore corresponds to a shaft of light from the original scene.
Algorithms
There are many different approaches to calculate the intrinsic and extrinsic parameters for a specific camera setup. The most common ones are:
- Direct linear transformation (DLT) method
- Zhang's method
- Tsai's method
- Selby's method (for X-ray cameras)
Zhang's method
Zhang's model[2][3] is a camera calibration method that combines traditional calibration techniques (known calibration points) and self-calibration techniques (correspondences between the calibration points as they appear in different positions). To perform a full calibration by Zhang's method, at least three different images of the calibration target/gauge are required, obtained either by moving the gauge or the camera itself. If some of the intrinsic parameters are given as data (orthogonality of the image axes or the optical centre coordinates), the number of required images can be reduced to two.
In a first step, an approximation of the projection matrix <math>H</math> between the calibration target and the image plane is estimated using the DLT method.[4] Subsequently, self-calibration techniques are applied to obtain the image of the absolute conic. The main contribution of Zhang's method is a way to extract a constrained intrinsic matrix <math>K</math>, together with <math>n</math> sets of rotation and translation parameters <math>R</math> and <math>T</math>, from <math>n</math> poses of the calibration target.
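The DLT homography step can be sketched as follows, assuming ideal (noise-free) point correspondences between the planar target and the image:

```python
import numpy as np

def estimate_homography(world_pts, image_pts):
    """DLT estimate of the homography H mapping planar target points
    to image points. world_pts, image_pts: (n, 2) arrays, n >= 4."""
    A = []
    for (X, Y), (u, v) in zip(world_pts, image_pts):
        # Each correspondence contributes two linear equations in h.
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    # h is the null vector of A: right singular vector with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Synthetic check: points related by a known homography are recovered.
H_true = np.array([[1.2, 0.1, 5.0],
                   [0.0, 0.9, -2.0],
                   [0.001, 0.0, 1.0]])
world = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [2, 3]], float)
img_h = (H_true @ np.c_[world, np.ones(len(world))].T).T
image = img_h[:, :2] / img_h[:, 2:]
H_est = estimate_homography(world, image)
```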
Derivation
Assume we have a homography <math>\textbf{H}</math> that maps points <math>x_\pi</math> on a "probe plane" <math>\pi</math> to points <math>x</math> on the image.
The circular points <math>I, J = \begin{bmatrix}1 & \pm j & 0\end{bmatrix}^{\mathrm{T}}</math> lie on both our probe plane <math>\pi</math> and on the absolute conic <math>\Omega_\infty</math>. Lying on <math>\Omega_\infty</math> of course means they are also projected onto the image of the absolute conic (IAC) <math>\omega</math>, thus <math>x_1^T \omega x_1= 0</math> and <math>x_2^T \omega x_2= 0</math>. The circular points project as
- <math>
\begin{align} x_1 & = \textbf{H} I = \begin{bmatrix} h_1 & h_2 & h_3 \end{bmatrix} \begin{bmatrix} 1 \\ j \\ 0 \end{bmatrix} = h_1 + j h_2 \\ x_2 & = \textbf{H} J = \begin{bmatrix} h_1 & h_2 & h_3 \end{bmatrix} \begin{bmatrix} 1 \\ -j \\ 0 \end{bmatrix} = h_1 - j h_2 \end{align} </math>.
Since the constraint obtained from <math>x_2</math> is the complex conjugate of the one obtained from <math>x_1</math> and therefore carries no extra information, we can ignore <math>x_2</math> and substitute only our new expression for <math>x_1</math>, as follows:
- <math>
\begin{align} x_1^T \omega x_1 &= \left ( h_1 + j h_2 \right )^T \omega \left ( h_1 + j h_2 \right ) \\
&= \left ( h_1^T + j h_2^T \right ) \omega \left ( h_1 + j h_2 \right ) \\ &= h_1^T \omega h_1 - h_2^T \omega h_2 + j \left ( 2 h_1^T \omega h_2 \right ) \\ &= 0
\end{align} </math>
using the symmetry of <math>\omega</math> in the last step. Setting both the real and the imaginary part to zero yields the two linear constraints on <math>\omega</math> per view used by Zhang's method: <math>h_1^T \omega h_2 = 0</math> and <math>h_1^T \omega h_1 = h_2^T \omega h_2</math>.
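These constraints can be verified numerically on synthetic data (assumed intrinsics and pose; <math>\omega = K^{-T}K^{-1}</math> is the image of the absolute conic):

```python
import numpy as np

# Assumed intrinsics; omega is the image of the absolute conic K^-T K^-1.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
omega = np.linalg.inv(K).T @ np.linalg.inv(K)

# A homography induced by one target pose: H = K [r1 r2 t].
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
t = np.array([0.1, -0.2, 2.0])
H = K @ np.c_[Rz[:, 0], Rz[:, 1], t]
h1, h2 = H[:, 0], H[:, 1]

# Real and imaginary parts of x1^T omega x1 = 0 give Zhang's constraints:
assert np.isclose(h1 @ omega @ h2, 0.0)              # h1^T w h2 = 0
assert np.isclose(h1 @ omega @ h1, h2 @ omega @ h2)  # equal "norms"
```

The checks hold because <math>h_i = K r_i</math>, so <math>h_i^T \omega h_j = r_i^T r_j</math>, which is 0 for <math>i \neq j</math> and 1 for <math>i = j</math> by the orthonormality of the rotation columns.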
Tsai's algorithm
Tsai's algorithm is a two-stage method: the first stage computes the pose (3D orientation, and the x- and y-axis translation); the second stage computes the focal length, the distortion coefficients and the z-axis translation.[5]
Selby's method (for X-ray cameras)
Selby's camera calibration method[6] addresses the auto-calibration of X-ray camera systems. An X-ray camera system, consisting of an X-ray generating tube and a solid-state detector, can be modelled as a pinhole camera system, comprising 9 intrinsic and extrinsic camera parameters. Intensity-based registration of an arbitrary X-ray image against a reference model (such as a tomographic dataset) can then be used to determine the relative camera parameters without the need for a special calibration body or any ground-truth data.
See also
- 3D pose estimation
- Augmented reality
- Augmented virtuality
- Eight-point algorithm
- Mixed reality
- Pinhole camera model
- Perspective-n-Point
- Rational polynomial coefficient
References
- ↑ Шаблон:Cite book
- ↑ Z. Zhang, "A flexible new technique for camera calibration", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 11, pages 1330–1334, 2000
- ↑ P. Sturm and S. Maybank, "On plane-based camera calibration: a general algorithm, singularities, applications", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 432–437, Fort Collins, CO, USA, June 1999
- ↑ Abdel-Aziz, Y.I., Karara, H.M., "Direct linear transformation from comparator coordinates into object space coordinates in close-range photogrammetry", Proceedings of the Symposium on Close-Range Photogrammetry (pp. 1–18), Falls Church, VA: American Society of Photogrammetry, 1971
- ↑ Roger Y. Tsai, "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses", IEEE Journal of Robotics and Automation, Vol. RA-3, No. 4, August 1987
- ↑ Boris Peter Selby et al., "Patient positioning with X-ray detector self-calibration for image guided therapy", Australasian Physical & Engineering Science in Medicine, Vol. 34, No. 3, pages 391–400, 2011
External links
- Zhang's Camera Calibration Method with Software
- Camera Calibration - Augmented reality lecture at TU Muenchen, Germany