Iron law of processor performance
In computer architecture, the iron law of processor performance (or simply iron law of performance) describes the performance trade-off between complexity and the number of primitive instructions that processors use to perform calculations.[1] This formulation of the trade-off spurred the development of Reduced Instruction Set Computers (RISC), whose instruction set architectures (ISAs) leverage a smaller set of core instructions to improve performance. The term was coined by Douglas Clark[2] based on research performed by Clark and Joel Emer in the 1980s.[3]
Explanation
The performance of a processor is the time it takes to execute a program: <math>\mathrm{\tfrac{Time}{Program}}</math>. This can be further broken down into three factors:[4]
<math display="block">\mathrm{\frac{Instructions}{Program} \times \frac{Clock Cycles}{Instruction} \times \frac{Time}{Clock Cycles}}</math>Selection of an instruction set architecture affects <math>\mathrm{\tfrac{Instructions}{Program} \times \tfrac{Clock Cycles}{Instruction}}</math>, whereas <math>\mathrm{\tfrac{Time}{Clock Cycles}}</math> is largely determined by the manufacturing technology. Classic Complex Instruction Set Computer (CISC) ISAs optimized <math>\mathrm{\tfrac{Instructions}{Program}}</math> by providing a larger set of more complex CPU instructions. Generally speaking, however, complex instructions inflate the number of clock cycles per instruction <math>\mathrm{\tfrac{ClockCycles}{Instruction}}</math> because they must be decoded into simpler micro-operations actually performed by the hardware. After converting X86 binary to the micro-operations used internally, the total number of operations is close to what is produced for a comparable RISC ISA.[5] The iron law of processor performance makes this trade-off explicit and pushes for optimization of <math>\mathrm{\tfrac{Time}{Program}}</math>as a whole, not just a single component.
While the iron law is credited for sparking the development of RISC architectures, it does not imply that a simpler ISA is always faster. If that were the case, the fastest ISA would consist of simple binary logic. A single CISC instruction can be faster than the equivalent set of RISC instructions when it enables multiple micro-operations to be performed in a single clock cycle. In practice, however, the regularity of RISC instructions allowed a pipelined implementation in which the total execution time of an instruction was typically around five clock cycles, but each instruction followed the previous one about one clock cycle later. CISC processors can also achieve higher performance using techniques such as modular extensions, predictive logic, compressed instructions, and macro-operation fusion.[6][5][7]
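The effect of pipelining on <math>\mathrm{\tfrac{Clock\ Cycles}{Instruction}}</math> can be sketched with a simplified model that ignores stalls, hazards, and memory delays. If a 5-stage pipeline starts one instruction per cycle, executing <math>N</math> instructions takes roughly
<math display="block">\mathrm{Total\ cycles} \approx 5 + (N - 1), \qquad \mathrm{\frac{Clock\ Cycles}{Instruction}} \approx \frac{5 + (N - 1)}{N}</math>
which approaches one as <math>N</math> grows. For one million instructions this is about 1,000,004 cycles instead of the 5,000,000 an unpipelined design would need, so although each individual instruction still takes about five cycles to complete, the effective number of clock cycles per instruction approaches one.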
References
- ↑ Template:Cite book
- ↑ Template:Citation
- ↑ Emer, Joel S.; Clark, Douglas W. (1984). "A Characterization of Processor Performance in the VAX-11/780". IEEE.
- ↑ Template:Cite web
- ↑ 5.0 5.1 Template:Cite arXiv
- ↑ Template:Cite web
- ↑ Template:Citation