AI accelerator


An AI accelerator or neural processing unit is a class of specialized hardware accelerator[1] or computer system[2][3] designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and machine vision. Typical applications include algorithms for robotics, the Internet of Things, and other data-intensive or sensor-driven tasks.[4] They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures, or in-memory computing capability. As of 2018, a typical AI integrated circuit chip contains billions of MOSFET transistors.[5] A number of vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design.

History

Computer systems have frequently complemented the CPU with special-purpose accelerators for specialized tasks, known as coprocessors. Notable application-specific hardware units include video cards for graphics, sound cards, graphics processing units and digital signal processors. As deep learning and artificial intelligence workloads rose in prominence in the 2010s, specialized hardware units were developed or adapted from existing products to accelerate these tasks. Benchmarks such as MLPerf may be used to evaluate the performance of AI accelerators.[6]

Early attempts

Early attempts, such as Intel's ETANN 80170NX, incorporated analog circuits to compute neural functions.[7]

Later all-digital chips like the Nestor/Intel Ni1000 followed. As early as 1993, digital signal processors were used as neural network accelerators to accelerate optical character recognition software.[8]

By 1988, Wei Zhang et al. had discussed fast optical implementations of convolutional neural networks for alphabet recognition.[9][10]

In the 1990s, there were also attempts to create parallel high-throughput systems for workstations aimed at various applications, including neural network simulations.[11][12]

One presentation covering past attempts at neural-network accelerators noted their similarity to the modern SLI GPGPU setup and argued that general-purpose vector accelerators (such as the RISC-V Hwacha project) are the way forward, on the grounds that neural networks amount to dense and sparse matrix operations, one of several recurring classes of algorithms.[13]

FPGA-based accelerators were also first explored in the 1990s for both inference and training.[14][15]

Smartphones began incorporating AI accelerators starting with the Qualcomm Snapdragon 820 in 2015.[16][17]

Heterogeneous computing

Heterogeneous computing incorporates many specialized processors in a single system, or on a single chip, each optimized for a specific type of task. Architectures such as the Cell microprocessor[18] have features that significantly overlap with AI accelerators, including support for packed low-precision arithmetic, dataflow architecture, and prioritizing throughput over latency. The Cell microprocessor has been applied to a number of tasks,[19][20][21] including AI.[22][23][24]

In the 2000s, CPUs also gained increasingly wide SIMD units, driven by video and gaming workloads, as well as support for packed low-precision data types.[25] Due to the increasing performance of CPUs, they are also used for running AI workloads. CPUs are superior for DNNs with small or medium-scale parallelism, for sparse DNNs, and in low-batch-size scenarios.
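
As a rough illustration (not any specific vendor's instruction sequence) of the packed low-precision arithmetic such SIMD units provide, the following NumPy sketch quantizes activations and weights to 8-bit integers and accumulates their dot product in 32-bit integers, the pattern that instruction-set extensions such as AVX-512 VNNI execute many lanes at a time; the `quantize_int8` helper and the example sizes are assumptions for illustration only.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric int8 quantization: scale float values into [-127, 127]."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

# Hypothetical activations and weights for one neuron.
rng = np.random.default_rng(0)
a = rng.standard_normal(64).astype(np.float32)
w = rng.standard_normal(64).astype(np.float32)

qa, sa = quantize_int8(a)
qw, sw = quantize_int8(w)

# Packed low-precision dot product: int8 multiplies, int32 accumulation,
# then a single rescale back to float. SIMD units do this many lanes at a time.
acc = np.dot(qa.astype(np.int32), qw.astype(np.int32))
approx = acc * sa * sw

print(float(np.dot(a, w)), float(approx))  # full-precision reference vs. quantized result
```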

Use of GPU

Graphics processing units or GPUs are specialized hardware for the manipulation of images and the calculation of local image properties. The mathematical basis of neural networks and image manipulation is similar: both are embarrassingly parallel tasks involving matrices, which has led GPUs to become increasingly used for machine learning tasks.[26][27] GPUs are popular for AI work, and they continue to evolve in a direction that facilitates deep learning, both for training and for inference in devices such as self-driving cars.[28][29] GPU developers such as Nvidia are adding connective capability, for example NVLink, for the kind of dataflow workloads AI benefits from. As GPUs have been increasingly applied to AI acceleration, GPU manufacturers have incorporated neural-network-specific hardware to further accelerate these tasks.[30][31] Tensor cores are intended to speed up the training of neural networks.[31]
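
To make the matrix connection concrete, here is a minimal sketch in plain NumPy (not a GPU API) showing that a fully connected layer's forward pass is a matrix product whose output elements can all be computed independently, i.e. the work is embarrassingly parallel; the layer sizes and variable names below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, n_in, n_out = 32, 784, 256

x = rng.standard_normal((batch, n_in)).astype(np.float32)   # input activations
W = rng.standard_normal((n_in, n_out)).astype(np.float32)   # layer weights
b = np.zeros(n_out, dtype=np.float32)                        # biases

# One layer's forward pass: a matrix-matrix product followed by an elementwise
# nonlinearity. Each of the 32 x 256 outputs depends only on one row of x and
# one column of W, so a GPU can map them onto thousands of parallel threads.
y = np.maximum(x @ W + b, 0.0)   # ReLU(x W + b)
print(y.shape)                    # (32, 256)
```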

Use of FPGAs

Deep learning frameworks are still evolving, making it hard to design custom hardware. Reconfigurable devices such as field-programmable gate arrays (FPGAs) make it easier to evolve hardware, frameworks, and software alongside each other.[32][14][15][33]

Microsoft has used FPGA chips to accelerate inference.[34]

Emergence of dedicated AI accelerator ASICs

While GPUs and FPGAs perform far better than CPUs for AI-related tasks, a factor of up to 10 in efficiency[35][36] may be gained with a more specific design, via an application-specific integrated circuit (ASIC).[citation needed] These accelerators employ strategies such as optimized memory use[citation needed] and the use of lower-precision arithmetic to accelerate calculation and increase throughput of computation.[37][38] Some low-precision floating-point formats used for AI acceleration are half-precision and the bfloat16 floating-point format.[39][40][41][42][43][44][45] Companies such as Google, Qualcomm, Amazon, Apple, Facebook, AMD and Samsung are all designing their own AI ASICs.[46][47][48][49][50][51] Cerebras Systems has built a dedicated AI accelerator based on the largest processor in the industry, the second-generation Wafer Scale Engine (WSE-2), to support deep learning workloads.[52][53]
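
As a sketch of the bfloat16 format mentioned above (assuming simple truncation rather than the round-to-nearest behavior real hardware typically implements), the snippet below keeps float32's sign bit and full 8-bit exponent but only the top 7 mantissa bits, so the dynamic range of float32 is preserved while storage and multiplier width are halved; the helper names are hypothetical.

```python
import numpy as np

def float32_to_bfloat16_bits(x):
    """Truncate float32 to bfloat16 by dropping the low 16 mantissa bits."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits >> 16).astype(np.uint16)   # 1 sign + 8 exponent + 7 mantissa bits

def bfloat16_bits_to_float32(b):
    """Re-expand bfloat16 bit patterns to float32 for computation."""
    return (b.astype(np.uint32) << 16).view(np.float32)

x = np.array([3.14159265, 1e-20, 6.5e30], dtype=np.float32)
bf = float32_to_bfloat16_bits(x)
# Same exponent range as float32, but only ~2-3 decimal digits of precision.
print(bfloat16_bits_to_float32(bf))
```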

Ongoing research

In-memory computing architectures

In June 2017, IBM researchers announced an architecture, in contrast to the Von Neumann architecture, based on in-memory computing with phase-change memory arrays and applied to temporal correlation detection, with the intention of generalizing the approach to heterogeneous computing and massively parallel systems.[54] In October 2018, IBM researchers announced an architecture based on in-memory processing and modeled on the human brain's synaptic network to accelerate deep neural networks.[55] The system is based on phase-change memory arrays.[56]

In-memory computing with analog resistive memories

In 2019, researchers from Politecnico di Milano found a way to solve systems of linear equations in a few tens of nanoseconds via a single operation. Their approach is based on in-memory computing with analog resistive memories, which performs the matrix–vector multiplication in one step using Ohm's law and Kirchhoff's law, achieving high time and energy efficiency. The researchers showed that a feedback circuit with cross-point resistive memories can solve algebraic problems such as systems of linear equations, matrix eigenvectors, and differential equations in just one step. Such an approach improves computational times drastically in comparison with digital algorithms.[57]
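
A minimal numerical sketch of the underlying primitive (an idealized crossbar model, not the authors' actual circuit): the matrix is stored as an array of conductances G, the input vector is applied as voltages, and Ohm's law plus Kirchhoff's current law yield the matrix–vector product as the currents summed on each row wire. The matrix, voltages, and conductance scale below are illustrative assumptions.

```python
import numpy as np

# Idealized resistive crossbar: each matrix entry A[i, j] is programmed as a
# conductance G[i, j] (siemens); the input vector x is applied as column voltages.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # matrix "stored" in the crossbar
x = np.array([0.5, -1.0])           # input voltages

G = A * 1e-6                         # conductances, scaled to a microsiemens range

# Ohm's law gives the current through each cell, and Kirchhoff's current law
# sums the currents along each row wire -- one analog step computes G @ x.
I = G @ x

print(I / 1e-6)                      # rescaled: equals A @ x = [0.0, -2.5]
```

In the feedback configuration described by the paper, such a cross-point array is placed in the feedback loop of an amplifier so that the circuit settles directly on the solution of a linear system rather than just the forward product; the open-loop multiply above is the core one-step operation.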

Atomically thin semiconductors

In 2020, Marega et al. published experiments with a large-area active channel material for developing logic-in-memory devices and circuits based on floating-gate field-effect transistors (FGFETs).[58] Such atomically thin semiconductors are considered promising for energy-efficient machine learning applications, where the same basic device structure is used for both logic operations and data storage. The authors used two-dimensional materials such as semiconducting molybdenum disulfide.[58]

Integrated photonic tensor core

In 2021, J. Feldmann et al. proposed an integrated photonic hardware accelerator for parallel convolutional processing.[59] The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer through wavelength division multiplexing in conjunction with frequency combs, and (2) extremely high data modulation speeds.[59] Their system can execute trillions of multiply-accumulate operations per second, indicating the potential of integrated photonics in data-heavy AI applications.[59]
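
As a back-of-the-envelope sketch of where that throughput comes from (all numbers below are illustrative assumptions, not figures from the paper), the aggregate multiply-accumulate rate of a wavelength-multiplexed design scales as the number of WDM channels times the dot-product work per channel times the per-channel modulation rate.

```python
# Idealized throughput model of a wavelength-multiplexed photonic MAC unit.
# All values are illustrative assumptions, not measured results.
wavelengths = 16            # parallel WDM channels on one waveguide
macs_per_channel = 16       # dot-product length handled per channel
modulation_rate_hz = 14e9   # assumed data modulation rate per channel

macs_per_second = wavelengths * macs_per_channel * modulation_rate_hz
print(f"{macs_per_second:.2e} MAC/s")   # on the order of trillions of MACs per second
```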

Nomenclature

As of 2016, the field is still in flux and vendors are pushing their own marketing terms for what amounts to an "AI accelerator", in the hope that their designs and APIs will become the dominant design. There is no consensus on the boundary between these devices, nor on the exact form they will take; however, several designs clearly aim to fill this new space, with a fair amount of overlap in capabilities.

In the past, when consumer graphics accelerators emerged, the industry eventually adopted Nvidia's self-assigned term "GPU"[60] as the collective noun for graphics accelerators, which had taken many forms before settling on an overall pipeline implementing a model presented by Direct3D.

All models of Intel Meteor Lake processors have a Versatile Processor Unit (VPU) built-in for accelerating inference for computer vision and deep learning.[61]

Potential applications

See also

References


External links


  1. Template:Cite web
  2. Template:Cite web
  3. Template:Citation
  4. Template:Cite web Google using its own AI accelerators.
  5. Template:Cite web
  6. Template:Cite web
  7. Template:Cite web
  8. Template:Cite web
  9. Template:Cite journal
  10. Template:Cite journal
  11. Template:Cite journal
  12. Template:Cite web
  13. Template:Cite book
  14. Template:Cite web
  15. Template:Cite book
  16. Template:Cite web
  17. Template:Cite web
  18. Template:Cite journal
  19. Template:Cite journal
  20. Template:Cite book
  21. Template:Cite book
  22. Template:Cite web
  23. Template:Cite book
  24. Template:Cite book
  25. Template:Cite web
  26. Template:Cite web
  27. Template:Cite journal
  28. Template:Cite web
  29. Template:Cite web
  30. "A Survey on Optimized Implementation of Deep Learning Models on the NVIDIA Jetson Platform", 2019
  31. Template:Cite web
  32. Template:Cite book
  33. Template:Cite web
  34. Template:Cite web
  35. Template:Cite web
  36. Template:Cite web
  37. Template:Cite web
  38. Template:Cite arXiv
  39. Template:Cite web
  40. Template:Cite web
  41. Template:Cite web
  42. Template:Cite web
  43. Template:Cite web
  44. Template:Cite web (dead link)
  45. Template:Cite report
  46. Template:Cite web
  47. Template:Cite web
  48. Template:Cite web (dead link)
  49. Template:Cite news
  50. Template:Cite web
  51. Template:Cite web
  52. Template:Cite web
  53. Template:Cite web
  54. Template:Cite journal
  55. Template:Cite news
  56. Template:Cite journal
  57. Template:Cite journal
  58. Template:Cite journal
  59. Template:Cite journal
  60. Template:Cite web
  61. Template:Cite web
  62. Template:Cite web
  63. Template:Cite web
  64. Template:Cite web
  65. Template:Cite web