CORDIC

CORDIC (coordinate rotation digital computer), also known as Volder's algorithm or the digit-by-digit method, and in its variants as circular CORDIC (Jack E. Volder),[1][2] linear CORDIC, hyperbolic CORDIC (John Stephen Walther),[3][4] and generalized hyperbolic CORDIC (GH CORDIC) (Yuanyong Luo et al.),[5][6] is a simple and efficient algorithm to calculate trigonometric functions, hyperbolic functions, square roots, multiplications, divisions, and exponentials and logarithms with arbitrary base, typically converging with one digit (or bit) per iteration. CORDIC is therefore an example of a digit-by-digit algorithm. CORDIC and closely related methods known as pseudo-multiplication and pseudo-division or factor combining are commonly used when no hardware multiplier is available (e.g. in simple microcontrollers and FPGAs), as the only operations they require are addition, subtraction, bit shifts and lookup tables. As such, they all belong to the class of shift-and-add algorithms. In computer science, CORDIC is often used to implement floating-point arithmetic when the target platform lacks a hardware multiplier for cost or space reasons.

History

Similar mathematical techniques were published by Henry Briggs as early as 1624[7][8] and Robert Flower in 1771,[9] but CORDIC is better optimized for low-complexity finite-state CPUs.

CORDIC was conceived in 1956[10][11] by Jack E. Volder at the aeroelectronics department of Convair out of necessity to replace the analog resolver in the B-58 bomber's navigation computer with a more accurate and faster real-time digital solution.[11] Therefore, CORDIC is sometimes referred to as a digital resolver.[12][13]

In his research Volder was inspired by a formula in the 1946 edition of the CRC Handbook of Chemistry and Physics:[11]

<math>\begin{align}
K_n R \sin(\theta \pm \varphi) &= R \sin(\theta) \pm 2^{-n} R \cos(\theta), \\
K_n R \cos(\theta \pm \varphi) &= R \cos(\theta) \mp 2^{-n} R \sin(\theta),
\end{align}</math>
where <math>\varphi</math> is such that <math>\tan(\varphi) = 2^{-n}</math>, and <math>K_n := \sqrt{1 + 2^{-2n}}</math>.
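A short numeric check of this identity can be written in the style of the Python example given later in the article; the sample values of R, theta and n below are arbitrary demonstration choices, not taken from Volder's report:

from math import sin, cos, atan, sqrt

# Sample values (arbitrary, for demonstration only).
R, theta, n = 2.5, 0.7, 3
phi = atan(2 ** -n)                   # so that tan(phi) = 2**-n
K_n = sqrt(1 + 2 ** (-2 * n))

lhs = K_n * R * sin(theta + phi)
rhs = R * sin(theta) + 2 ** -n * R * cos(theta)
print(lhs, rhs)                       # both print the same value (up to rounding)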

His research led to an internal technical report proposing the CORDIC algorithm to solve sine and cosine functions and a prototypical computer implementing it.[10][11] The report also discussed the possibility to compute hyperbolic coordinate rotation, logarithms and exponential functions with modified CORDIC algorithms.[10][11] Utilizing CORDIC for multiplication and division was also conceived at this time.[11] Based on the CORDIC principle, Dan H. Daggett, a colleague of Volder at Convair, developed conversion algorithms between binary and binary-coded decimal (BCD).[11][14]

In 1958, Convair finally started to build a demonstration system to solve radar fix-taking problems named CORDIC I, completed in 1960 without Volder, who had left the company already.[1][11] More universal CORDIC II models A (stationary) and B (airborne) were built and tested by Daggett and Harry Schuss in 1962.[11][15]

Volder's CORDIC algorithm was first described in public in 1959,[1][2][11][13][16] which caused it to be incorporated into navigation computers by companies including Martin-Orlando, Computer Control, Litton, Kearfott, Lear-Siegler, Sperry, Raytheon, and Collins Radio.[11]

Volder teamed up with Malcolm McMillan to build Athena, a fixed-point desktop calculator utilizing his binary CORDIC algorithm.[17] The design was introduced to Hewlett-Packard in June 1965, but not accepted.[17] Still, McMillan introduced David S. Cochran (HP) to Volder's algorithm and when Cochran later met Volder he referred him to a similar approach John E. Meggitt (IBM[18]) had proposed as pseudo-multiplication and pseudo-division in 1961.[18][19] Meggitt's method also suggested the use of base 10[18] rather than base 2, as used by Volder's CORDIC so far. These efforts led to the ROMable logic implementation of a decimal CORDIC prototype machine inside of Hewlett-Packard in 1966,[20][19] built by and conceptually derived from Thomas E. Osborne's prototypical Green Machine, a four-function, floating-point desktop calculator he had completed in DTL logic[17] in December 1964.[21] This project resulted in the public demonstration of Hewlett-Packard's first desktop calculator with scientific functions, the HP 9100A in March 1968, with series production starting later that year.[17][21][22][23]

When Wang Laboratories found that the HP 9100A used an approach similar to the factor combining method in their earlier LOCI-1[24] (September 1964) and LOCI-2 (January 1965)[25][26] Logarithmic Computing Instrument desktop calculators,[27] they unsuccessfully accused Hewlett-Packard of infringement of one of An Wang's patents in 1968.[19][28][29][30]

John Stephen Walther at Hewlett-Packard generalized the algorithm into the Unified CORDIC algorithm in 1971, allowing it to calculate hyperbolic functions, natural exponentials, natural logarithms, multiplications, divisions, and square roots.[31][3][4][32] The CORDIC subroutines for trigonometric and hyperbolic functions could share most of their code.[28] This development resulted in the first scientific handheld calculator, the HP-35 in 1972.[28][33][34][35][36][37] Based on hyperbolic CORDIC, Yuanyong Luo et al. further proposed a Generalized Hyperbolic CORDIC (GH CORDIC) to directly compute logarithms and exponentials with an arbitrary fixed base in 2019.[5][6][38][39][40] Theoretically, Hyperbolic CORDIC is a special case of GH CORDIC.[5]

Originally, CORDIC was implemented only in the binary numeral system. Although Meggitt had suggested the use of the decimal system for his pseudo-multiplication approach, decimal CORDIC remained mostly unheard of for several more years, so that Hermann Schmid and Anthony Bogacki still presented it as a novelty as late as 1973,[16][13][41][42][43] and it was only found later that Hewlett-Packard had already implemented it in 1966.[11][13][20][28]

Decimal CORDIC became widely used in pocket calculators,[13] most of which operate in binary-coded decimal (BCD) rather than binary. This change in the input and output format did not alter CORDIC's core calculation algorithms. CORDIC is particularly well-suited for handheld calculators, in which low cost – and thus low chip gate count – is much more important than speed.

CORDIC has been implemented in the ARM-based STM32G4, Intel 8087,[43][44][45][46][47] 80287,[47][48] 80387[47][48] up to the 80486[43] coprocessor series as well as in the Motorola 68881[43][44] and 68882 for some kinds of floating-point instructions, mainly as a way to reduce the gate counts (and complexity) of the FPU sub-system.

Applications

CORDIC uses simple shift-add operations for several computing tasks such as the calculation of trigonometric, hyperbolic and logarithmic functions, real and complex multiplications, division, square-root calculation, solution of linear systems, eigenvalue estimation, singular value decomposition, QR factorization and many others. As a consequence, CORDIC has been used for applications in diverse areas such as signal and image processing, communication systems, robotics and 3D graphics apart from general scientific and technical computation.[49][50]

Hardware

The algorithm was used in the navigational system of the Apollo program's Lunar Roving Vehicle to compute bearing and range, or distance from the Lunar module.[51][52] CORDIC was used to implement the Intel 8087 math coprocessor in 1980, avoiding the need to implement hardware multiplication.[53]

CORDIC is generally faster than other approaches when a hardware multiplier is not available (e.g., a microcontroller), or when the number of gates required to implement the functions it supports should be minimized (e.g., in an FPGA or ASIC). In fact, CORDIC is a standard drop-in IP block in FPGA development tools such as Xilinx Vivado, whereas power-series implementations are not, because CORDIC is general purpose: the same block can compute many different functions, while a hardware multiplier configured to evaluate a power series can only compute the function it was designed for.

On the other hand, when a hardware multiplier is available (e.g., in a DSP microprocessor), table-lookup methods and power series are generally faster than CORDIC. In recent years, the CORDIC algorithm has been used extensively for various biomedical applications, especially in FPGA implementations.[citation needed]

The STM32G4 series and certain STM32H7 series of MCUs implement a CORDIC module to accelerate computations in various mixed-signal applications such as graphics for human-machine interfaces and field-oriented control of motors. While not as fast as a power-series approximation, CORDIC is faster than interpolating table-based implementations such as those provided by the ARM CMSIS and C standard libraries,[54] though the results may be slightly less accurate, as the CORDIC modules provided only achieve 20 bits of precision in the result. Most of the performance difference compared to the ARM implementation is due to the overhead of the interpolation algorithm, which achieves full floating-point precision (24 bits) and can likely achieve relative error to that precision.[55] Another benefit is that the CORDIC module is a coprocessor and can be run in parallel with other CPU tasks.

The issue with using Taylor series is that while they do provide small absolute error, they do not exhibit well behaved relative error.[56] Other means of polynomial approximation, such as minimax optimization, may be used to control both kinds of error.
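As a hedged illustration of this point (the degree-8 truncation and the sample arguments below are demonstration choices, not from the cited source), a truncated Taylor polynomial for cosine keeps the absolute error small near 90°, but the relative error grows because the true value itself approaches zero:

from math import cos, factorial

def cos_taylor(x, terms=5):
    # Degree-8 Taylor polynomial of cos(x) around 0 (first 5 terms).
    return sum((-1) ** k * x ** (2 * k) / factorial(2 * k) for k in range(terms))

for x in (0.5, 1.0, 1.5, 1.57):
    approx, exact = cos_taylor(x), cos(x)
    abs_err = abs(approx - exact)
    rel_err = abs_err / abs(exact)
    print(f"x={x:5.2f}  abs_err={abs_err:.2e}  rel_err={rel_err:.2e}")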

Software

Many older systems with integer-only CPUs have implemented CORDIC to varying extents as part of their IEEE floating-point libraries. As most modern general-purpose CPUs have floating-point registers with common operations such as add, subtract, multiply, divide, sine, cosine, square root, log10 and natural log, the need to implement CORDIC in software is nearly non-existent. Only microcontrollers or special safety- and time-constrained software applications would need to consider using CORDIC.

Modes of operation

Rotation mode

CORDIC can be used to calculate a number of different functions. This explanation shows how to use CORDIC in rotation mode to calculate the sine and cosine of an angle, assuming that the desired angle is given in radians and represented in a fixed-point format. To determine the sine or cosine for an angle <math>\beta</math>, the y or x coordinate of a point on the unit circle corresponding to the desired angle must be found. Using CORDIC, one would start with the vector <math>v_0</math>:

<math>v_0 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}.</math>
[File:CORDIC-illustration.png – An illustration of the CORDIC algorithm in progress]

In the first iteration, this vector is rotated 45° counterclockwise to get the vector <math>v_1</math>. Successive iterations rotate the vector in one or the other direction by size-decreasing steps, until the desired angle has been achieved. Each step angle is <math>\gamma_i = \arctan{(2^{-i})}</math> for <math>i = 0, 1, 2, \dots</math>.

More formally, every iteration calculates a rotation, which is performed by multiplying the vector <math>v_i</math> with the rotation matrix <math>R_{i}</math>:

<math>v_{i+1} = R_i v_i.</math>

The rotation matrix is given by

<math>R_i = \begin{bmatrix}
\cos(\gamma_i) & -\sin(\gamma_i) \\
\sin(\gamma_i) & \cos(\gamma_i)

\end{bmatrix}.</math>

Using the following two trigonometric identities:

<math>\begin{align}
\cos(\gamma_i) &= \frac{1}{\sqrt{1 + \tan^2(\gamma_i)}}, \\
\sin(\gamma_i) &= \frac{\tan(\gamma_i)}{\sqrt{1 + \tan^2(\gamma_i)}},

\end{align}</math>

the rotation matrix becomes

<math>R_i = \frac{1}{\sqrt{1 + \tan^2(\gamma_i)}} \begin{bmatrix}
1 & -\tan(\gamma_i) \\
\tan(\gamma_i) & 1

\end{bmatrix}.</math>

The expression for the rotated vector <math>v_{i+1} = R_i v_i</math> then becomes

<math>\begin{bmatrix}
x_{i+1} \\
y_{i+1}

\end{bmatrix} = \frac{1}{\sqrt{1 + \tan^2(\gamma_i)}} \begin{bmatrix}

1 & -\tan(\gamma_i) \\
\tan(\gamma_i) & 1

\end{bmatrix} \begin{bmatrix}

x_i \\
y_i

\end{bmatrix},</math>

where <math>x_i</math> and <math>y_i</math> are the components of <math>v_i</math>. Restricting the angles <math>\gamma_i</math> such that <math>\tan(\gamma_i) = \pm 2^{-i}</math>, the multiplication with the tangent can be replaced by a division by a power of two, which is efficiently done in digital computer hardware using a bit shift. The expression then becomes

<math>\begin{bmatrix}
x_{i+1} \\
y_{i+1}

\end{bmatrix} = K_i \begin{bmatrix}

1 & -\sigma_i 2^{-i} \\
\sigma_i 2^{-i} & 1

\end{bmatrix} \begin{bmatrix}

x_i \\
y_i

\end{bmatrix},</math>

where

<math>K_i = \frac{1}{\sqrt{1 + 2^{-2i}}},</math>

and <math>\sigma_i</math> determines the direction of the rotation: <math>\sigma_i = +1</math> corresponds to a counterclockwise (positive-angle) rotation and <math>\sigma_i = -1</math> to a clockwise (negative-angle) rotation; how its value is chosen at each iteration is described below.

All <math>K_i</math> factors can be ignored in the iterative process and then applied all at once afterwards with a scaling factor <math>K(n)</math>

<math>K(n) = \prod_{i=0}^{n-1} K_i = \prod_{i=0}^{n-1} \frac{1}{\sqrt{1 + 2^{-2i}}},</math>

which is calculated in advance and stored in a table or as a single constant, if the number of iterations is fixed. This correction could also be made in advance, by scaling <math>v_0</math> and hence saving a multiplication. Additionally, it can be noted that[43]

<math>K = \lim_{n \to \infty} K(n) \approx 0.6072529350088812561694</math>

to allow further reduction of the algorithm's complexity. Some applications may avoid correcting for <math>K</math> altogether, resulting in a processing gain <math>A</math>:[57]

<math>A = \frac{1}{K} = \lim_{n \to \infty} \prod_{i=0}^{n-1} \sqrt{1 + 2^{-2i}} \approx 1.64676025812107.</math>

After a sufficient number of iterations, the vector's angle will be close to the wanted angle <math>\beta</math>. For most ordinary purposes, 40 iterations (n = 40) are sufficient to obtain the correct result to the 10th decimal place.

The only task left is to determine whether the rotation should be clockwise or counterclockwise at each iteration (choosing the value of <math>\sigma_i</math>). This is done by keeping track of the residual angle that still remains to be rotated, starting from the wanted angle <math>\beta</math>: if the residual <math>\beta_i</math> is positive, the rotation is counterclockwise (<math>\sigma_i = +1</math>), otherwise it is clockwise (<math>\sigma_i = -1</math>):

<math>\beta_0 = \beta </math>
<math>\beta_{i+1} = \beta_i - \sigma_i \gamma_i, \quad \gamma_i = \arctan(2^{-i}).</math>

The values of <math>\gamma_i = \arctan(2^{-i})</math> must also be precomputed and stored. For large <math>i</math> the angles are small and <math>\arctan(2^{-i}) \approx 2^{-i}</math> in fixed-point representation, which allows the table to be shortened.

As can be seen in the illustration above, the sine of the angle <math>\beta</math> is the y coordinate of the final vector <math>v_n,</math> while the x coordinate is the cosine value.
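In fixed-point arithmetic the updates above reduce to pure shift-and-add operations. The following is a minimal sketch under assumed choices (a 16-bit fraction width, 16 iterations, and the function name cordic_sincos are illustrative, not a reference implementation); it pre-scales the starting vector by K instead of correcting afterwards:

from math import atan, radians

FRAC_BITS = 16                    # fixed-point scale: value = integer / 2**FRAC_BITS
N = 16                            # number of iterations
ONE = 1 << FRAC_BITS

# Precomputed angle table arctan(2**-i), converted to fixed point.
ANGLES = [round(atan(2 ** -i) * ONE) for i in range(N)]
# Scaling factor K, applied to the starting vector instead of afterwards.
K = 0.6072529350088812561694

def cordic_sincos(beta):
    """Return (cos(beta), sin(beta)) for beta in radians, |beta| <= pi/2."""
    x, y = round(K * ONE), 0      # start from the pre-scaled vector (K, 0)
    z = round(beta * ONE)         # residual angle beta_i in fixed point
    for i in range(N):
        sigma = 1 if z >= 0 else -1
        # The multiplications by 2**-i become arithmetic right shifts.
        x, y = x - sigma * (y >> i), y + sigma * (x >> i)
        z -= sigma * ANGLES[i]
    return x / ONE, y / ONE

print(cordic_sincos(radians(30)))  # approximately (0.8660, 0.5000)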

Vectoring mode

The rotation-mode algorithm described above can rotate any vector (not only a unit vector aligned along the x axis) by an angle between −90° and +90°. Decisions on the direction of the rotation depend on <math>\beta_i</math> being positive or negative.

The vectoring mode of operation requires a slight modification of the algorithm. It starts with a vector whose x coordinate is positive whereas the y coordinate is arbitrary. Successive rotations have the goal of rotating the vector to the x axis (and therefore reducing the y coordinate to zero). At each step, the value of y determines the direction of the rotation. The final value of <math>\beta_i</math> contains the total angle of rotation. The final value of x will be the magnitude of the original vector scaled by K. So, an obvious use of the vectoring mode is the transformation from rectangular to polar coordinates, as in the sketch below.
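A minimal floating-point sketch of vectoring mode, written in the same style as the rotation-mode Python example in the Implementation section below (the function name and the 16-iteration choice are illustrative assumptions):

from math import atan

N = 16
ANGLES = [atan(2 ** -i) for i in range(N)]
K = 0.6072529350088812561694       # limit of K(n), compensates the vector growth

def cordic_vectoring(x, y):
    """Rotate (x, y), with x > 0, onto the positive x axis.

    Returns (magnitude, angle), i.e. a rectangular-to-polar conversion.
    """
    angle = 0.0
    p2i = 1.0                      # 2**-i
    for gamma in ANGLES:
        sigma = -1 if y >= 0 else +1   # rotate toward the x axis
        x, y = x - sigma * y * p2i, y + sigma * x * p2i
        angle -= sigma * gamma         # accumulate the total rotation, negated
        p2i /= 2
    return x * K, angle

print(cordic_vectoring(3.0, 4.0))  # approximately (5.0, 0.9273), i.e. (hypot, atan2(4, 3))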

Implementation

In Java the Math class has a scalb(double x, int scale) method to perform the shift by a power of two used in each CORDIC iteration,[58] C has the ldexp function,[59] and the x86 class of processors has the fscale floating-point operation.[60]
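Python's standard library offers the analogous math.ldexp; as an illustrative aside (not part of the original listing), the repeated halving of P2i in the example below could equivalently be expressed as a binary-exponent shift:

from math import ldexp

# ldexp(x, -i) computes x * 2**-i, the power-of-two scaling used in each CORDIC step.
print(ldexp(1.0, -3))   # 0.125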

Software Example (Python)

from math import atan2, sqrt, sin, cos, radians

ITERS = 16
theta_table = [atan2(1, 2**i) for i in range(ITERS)]

def compute_K(n):
    """
    Compute K(n) for n = ITERS. This could also be
    stored as an explicit constant if ITERS above is fixed.
    """
    k = 1.0
    for i in range(n):
        k *= 1 / sqrt(1 + 2 ** (-2 * i))
    return k

def CORDIC(alpha, n):
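    """Rotation-mode CORDIC: return (cos(alpha), sin(alpha)) for alpha in radians."""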
    K_n = compute_K(n)
    theta = 0.0
    x = 1.0
    y = 0.0
    P2i = 1  # This will be 2**(-i) in the loop below
    for arc_tangent in theta_table:
        sigma = +1 if theta < alpha else -1
        theta += sigma * arc_tangent
        x, y = x - sigma * y * P2i, sigma * P2i * x + y
        P2i /= 2
    return x * K_n, y * K_n

if __name__ == "__main__":
    # Print a table of computed sines and cosines, from -90° to +90°, in steps of 15°,
    # comparing against the available math routines.
    print("  x       sin(x)     diff. sine     cos(x)    diff. cosine ")
    for x in range(-90, 91, 15):
        cos_x, sin_x = CORDIC(radians(x), ITERS)
        print(
            f"{x:+05.1f}°  {sin_x:+.8f} ({sin_x-sin(radians(x)):+.8f}) {cos_x:+.8f} ({cos_x-cos(radians(x)):+.8f})"
        )

Output

$ python cordic.py
  x       sin(x)     diff. sine     cos(x)    diff. cosine
-90.0°  -1.00000000 (+0.00000000) -0.00001759 (-0.00001759)
-75.0°  -0.96592181 (+0.00000402) +0.25883404 (+0.00001499)
-60.0°  -0.86601812 (+0.00000729) +0.50001262 (+0.00001262)
-45.0°  -0.70711776 (-0.00001098) +0.70709580 (-0.00001098)
-30.0°  -0.50001262 (-0.00001262) +0.86601812 (-0.00000729)
-15.0°  -0.25883404 (-0.00001499) +0.96592181 (-0.00000402)
+00.0°  +0.00001759 (+0.00001759) +1.00000000 (-0.00000000)
+15.0°  +0.25883404 (+0.00001499) +0.96592181 (-0.00000402)
+30.0°  +0.50001262 (+0.00001262) +0.86601812 (-0.00000729)
+45.0°  +0.70709580 (-0.00001098) +0.70711776 (+0.00001098)
+60.0°  +0.86601812 (-0.00000729) +0.50001262 (+0.00001262)
+75.0°  +0.96592181 (-0.00000402) +0.25883404 (+0.00001499)
+90.0°  +1.00000000 (-0.00000000) -0.00001759 (-0.00001759)

Hardware example

The number of logic gates for the implementation of a CORDIC is roughly comparable to the number required for a multiplier as both require combinations of shifts and additions. The choice for a multiplier-based or CORDIC-based implementation will depend on the context. The multiplication of two complex numbers represented by their real and imaginary components (rectangular coordinates), for example, requires 4 multiplications, but could be realized by a single CORDIC operating on complex numbers represented by their polar coordinates, especially if the magnitude of the numbers is not relevant (multiplying a complex vector with a vector on the unit circle actually amounts to a rotation). CORDICs are often used in circuits for telecommunications such as digital down converters.
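As a hedged illustration of the last point (the rotation-mode variant below, which starts from an arbitrary vector rather than from (1, 0), uses names and an iteration count chosen here for demonstration), multiplying a complex number by a unit-magnitude complex number reduces to a single CORDIC rotation:

from math import atan, radians

N = 16
ANGLES = [atan(2 ** -i) for i in range(N)]
K = 0.6072529350088812561694

def cordic_rotate(x, y, beta):
    """Rotate the vector (x, y) by beta radians (|beta| <= pi/2)."""
    p2i = 1.0
    for gamma in ANGLES:
        sigma = 1 if beta >= 0 else -1
        x, y = x - sigma * y * p2i, y + sigma * x * p2i
        beta -= sigma * gamma
        p2i /= 2
    return x * K, y * K

# (3 + 4j) * exp(1j * 30 degrees): one rotation instead of four real multiplications.
print(cordic_rotate(3.0, 4.0, radians(30)))   # approximately (0.598, 4.964)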

Double iterations CORDIC

In two of the publications by Vladimir Baykov,[61][62] it was proposed to use the double-iteration method for the implementation of the functions arcsine, arccosine, natural logarithm and exponential, as well as for the calculation of the hyperbolic functions. In the double-iteration method, unlike classical CORDIC, where the iteration step value changes on every iteration, each step value is repeated twice and changes only every other iteration. Hence the index sequence for double iterations is <math>i = 0, 0, 1, 1, 2, 2, \dots</math>, whereas with ordinary iterations it is <math>i = 0, 1, 2, \dots</math>. The double-iteration method guarantees the convergence of the algorithm throughout the valid range of argument changes.

The generalization of the CORDIC convergence problems to an arbitrary positional number system with radix <math>R</math> showed[63] that for the functions sine, cosine and arctangent, it is enough to perform <math>R - 1</math> iterations for each value of <math>i</math> (<math>i</math> = 0 or 1 to <math>n</math>, where <math>n</math> is the number of digits), i.e. for each digit of the result. For the natural logarithm, the exponential, and the hyperbolic sine, cosine and arctangent, <math>R</math> iterations should be performed for each value of <math>i</math>. For the arcsine and arccosine functions, <math>2(R - 1)</math> iterations should be performed for each result digit, i.e. for each value of <math>i</math>.[63]

For the inverse hyperbolic sine and cosine functions, the number of iterations is <math>2R</math> for each <math>i</math>, that is, for each result digit.

Related algorithms

CORDIC is part of the class of "shift-and-add" algorithms, as are the logarithm and exponential algorithms derived from Henry Briggs' work. Another shift-and-add algorithm which can be used for computing many elementary functions is the BKM algorithm, which is a generalization of the logarithm and exponential algorithms to the complex plane. For instance, BKM can be used to compute the sine and cosine of a real angle <math>x</math> (in radians) by computing the exponential of <math>0+ix</math>, which is <math>\operatorname{cis}(x) = \cos(x) + i \sin(x)</math>. The BKM algorithm is slightly more complex than CORDIC, but has the advantage that it does not need a scaling factor (K).


References


  1. Volder_1959_1
  2. Volder_1959_2
  3. Walther_1971
  4. Walther_2000
  5. Luo_2019_TVLSI
  6. Luo_2019_TVLSI_c
  7. Briggs_1624
  8. Laporte_2014_Briggs
  9. Flower_1771
  10. Volder_1956
  11. Volder_2000
  12. Perle_1971
  13. Schmid_1983
  14. Daggett_1959
  15. ASG_1962
  16. Schmid_1974
  17. Leibson_2010_2
  18. Meggitt_1962
  19. Cochran_2010_2
  20. Cochran_1966
  21. Osborne_1994
  22. Leibson_2010_1
  23. Cochran_1968
  24. Wang_1964_LOCI-1
  25. Bensene_2013
  26. Wang_1967_LOCI
  27. Bensene_2004
  28. Cochran_2010_1
  29. Wang_US3402285
  30. Wang_DE1499281B1
  31. Swartzlander_1990
  32. Petrocelli_1972
  33. Cochran_1972
  34. Laporte_2005_Trig
  35. Laporte_2005_Secret
  36. Laporte_2012_Digit
  37. Laporte_2012_HP35Log
  38. Wang_2020_tvlsi
  39. Mopuri_2019_Nth
  40. Vachhani_2020
  41. Schmid_1973
  42. Franke_1973
  43. Muller_2006
  44. Nave_1983
  45. Palmer_1984
  46. Glass_1990
  47. Jarvis_1990
  48. Yuen_1988
  49. Meher_2009
  50. Meher_2013_CORDIC
  51. Heffron-LaPiana_1970
  52. Smith-Mastin_1973
  53. Shirriff_2020
  54. STM_2021
  55. ARM_2021
  56. Error_2021
  57. Andraka_1998
  58. Java_Math
  59. ldexp
  60. Intel_2016
  61. Baykov (web reference)
  62. Baykov (web reference)
  63. Web reference (untitled)