End-to-end principle
The end-to-end principle is a design framework in computer networking. In networks designed according to this principle, guaranteeing certain application-specific features, such as reliability and security, requires that they reside in the communicating end nodes of the network. Intermediary nodes such as gateways and routers, which exist to establish the network, may implement these features to improve efficiency but cannot guarantee end-to-end correctness.
The essence of what would later be called the end-to-end principle was contained in the work of Paul Baran and Donald Davies on packet-switched networks in the 1960s. Louis Pouzin pioneered the use of the end-to-end strategy in the CYCLADES network in the 1970s.[1] The principle was first articulated explicitly in 1981 by Saltzer, Reed, and Clark.[2] The meaning of the end-to-end principle has been continuously reinterpreted ever since its initial articulation, and noteworthy formulations of it can be found before the seminal 1981 Saltzer, Reed, and Clark paper.[3]
A basic premise of the principle is that the payoffs from adding certain features required by the end application to the communication subsystem quickly diminish; for correctness, the end hosts have to implement these functions themselves. Implementing a specific function incurs some resource penalties regardless of whether the function is used or not, and implementing a specific function in the network adds these penalties to all clients, whether they need the function or not.
Concept
The fundamental notion behind the end-to-end principle is that, for two processes communicating with each other via some communication means, the reliability obtained from that means cannot be expected to be perfectly aligned with the reliability requirements of the processes. In particular, meeting or exceeding very high reliability requirements of communicating processes separated by networks of nontrivial size is more costly than obtaining the required degree of reliability by positive end-to-end acknowledgments and retransmissions (referred to as PAR or ARQ). Put differently, it is far easier to obtain reliability beyond a certain margin by mechanisms in the end hosts of a network than in the intermediary nodes, especially when the latter are beyond the control of, and not accountable to, the former. Positive end-to-end acknowledgments with infinite retries can obtain arbitrarily high reliability from any network with a nonzero probability of successfully transmitting data from one end to the other.
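To make the retry argument concrete (a simple illustration, not a formula from the original papers): if each end-to-end attempt succeeds independently with probability p > 0, the chance that all of the first n attempts fail shrinks geometrically with n:

```latex
% Per-attempt success probability p > 0, attempts independent.
% The probability that n successive attempts all fail:
\Pr[\text{all } n \text{ attempts fail}] = (1 - p)^n \longrightarrow 0 \quad (n \to \infty)
% Eventual delivery, 1 - (1 - p)^n, therefore approaches 1 with retries,
% at an expected cost of 1/p attempts per message.
```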
The end-to-end principle does not trivially extend to functions beyond end-to-end error control and correction, and security. For example, no straightforward end-to-end arguments can be made for communication parameters such as latency and throughput. In a 2001 paper, Blumenthal and Clark note: "[F]rom the beginning, the end-to-end arguments revolved around requirements that could be implemented correctly at the endpoints; if implementation inside the network is the only way to accomplish the requirement, then an end-to-end argument isn't appropriate in the first place."[4]
The end-to-end principle is closely related, and sometimes seen as a direct precursor, to the principle of net neutrality.[5]
History
In the 1960s, Paul Baran and Donald Davies, in their pre-ARPANET elaborations of networking, made comments about reliability that capture the essence of the later end-to-end principle. To quote from a 1964 Baran paper, "Reliability and raw error rates are secondary. The network must be built with the expectation of heavy damage anyway. Powerful error removal methods exist."[6] Similarly, Davies notes on end-to-end error control, "It is thought that all users of the network will provide themselves with some kind of error control and that without difficulty this could be made to show up a missing packet. Because of this, loss of packets, if it is sufficiently rare, can be tolerated."[7]
The ARPANET was the first large-scale general-purpose packet-switching network, implementing several of the basic notions previously touched on by Baran and Davies.
Davies had worked on the simulation of datagram networks.[8][9] Building on this idea, Louis Pouzin's CYCLADES network was the first to implement datagrams and make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself.[1] Concepts implemented in this network influenced TCP/IP architecture.[10]
Applications
ARPANET
The ARPANET demonstrated several important aspects of the end-to-end principle.
- Packet switching pushes some logical functions toward the communication endpoints
- If the basic premise of a distributed network is packet switching, then functions such as reordering and duplicate detection inevitably have to be implemented at the logical endpoints of such a network. Consequently, the ARPANET featured two distinct levels of functionality:
- a lower level concerned with transporting data packets between neighboring network nodes (called Interface Message Processors or IMPs), and
- a higher level concerned with various end-to-end aspects of the data transmission.
- Dave Clark, one of the authors of the end-to-end principle paper, concludes: "The discovery of packets is not a consequence of the end-to-end argument. It is the success of packets that make the end-to-end argument relevant."[11]
- No arbitrarily reliable data transfer without end-to-end acknowledgment and retransmission mechanisms
- The ARPANET was designed to provide reliable data transport between any two endpoints of the network – much like a simple I/O channel between a computer and a nearby peripheral device. To remedy any potential failures of packet transmission, normal ARPANET messages were handed from one node to the next with a positive acknowledgment and retransmission scheme; after a successful handover they were discarded, and no source-to-destination retransmission in case of packet loss was catered for. However, in spite of significant efforts, the perfect reliability envisaged in the initial ARPANET specification turned out to be impossible to provide – a reality that became increasingly obvious once the ARPANET grew well beyond its initial four-node topology. The ARPANET thus provided a strong case for the inherent limits of network-based hop-by-hop reliability mechanisms in pursuit of true end-to-end reliability.
- Trade-off between reliability, latency, and throughput
- The pursuit of perfect reliability may hurt other relevant parameters of a data transmission – most importantly latency and throughput. This is particularly important for applications that value predictable throughput and low latency over reliability – the classic example being interactive real-time voice applications. This use case was catered for in the ARPANET by providing a raw message service that dispensed with various reliability measures to provide a faster, lower-latency data transmission service to the end hosts.
TCP/IP
Internet Protocol (IP) is a connectionless datagram service with no delivery guarantees. On the Internet, IP is used for nearly all communications. End-to-end acknowledgment and retransmission is the responsibility of the connection-oriented Transmission Control Protocol (TCP), which sits on top of IP. The functional split between IP and TCP exemplifies the proper application of the end-to-end principle to transport protocol design.
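This division of labor can be sketched in a few lines of code. The following is a minimal stop-and-wait scheme built entirely by the endpoints over UDP (an unreliable datagram service, standing in here for IP); it illustrates the principle, not TCP's actual machinery, which adds per-byte sequence numbering, sliding windows, adaptive timeouts, and congestion control. The address, port, and framing are illustrative choices.

```python
# Minimal sketch (not real TCP): endpoint-implemented reliability
# on top of an unreliable datagram service.
import socket
import struct

TIMEOUT = 0.5               # seconds before the sender retransmits
ADDR = ("127.0.0.1", 9999)  # illustrative receiver address

def send_reliably(sock: socket.socket, seq: int, payload: bytes) -> None:
    """Sender endpoint: retransmit (seq, payload) until the matching ACK arrives."""
    packet = struct.pack("!I", seq) + payload
    sock.settimeout(TIMEOUT)
    while True:  # infinite retries: arbitrary reliability from an unreliable network
        sock.sendto(packet, ADDR)
        try:
            ack, _ = sock.recvfrom(4)
            if struct.unpack("!I", ack)[0] == seq:
                return       # positive end-to-end acknowledgment received
        except socket.timeout:
            pass             # data or ACK lost in the network: try again

def receive_loop(sock: socket.socket) -> None:
    """Receiver endpoint (sock bound to ADDR): ACK every copy, deliver each seq once."""
    expected = 0
    while True:
        packet, sender = sock.recvfrom(65535)
        seq = struct.unpack("!I", packet[:4])[0]
        sock.sendto(struct.pack("!I", seq), sender)  # always ACK, even duplicates
        if seq == expected:                          # duplicate detection at the endpoint
            print("delivered:", packet[4:])
            expected += 1
```

With unbounded retries this loop realizes the arbitrarily high reliability argued for above, while the network beneath it remains free to drop, duplicate, or delay datagrams.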
File transfer
An example of the end-to-end principle is that of an arbitrarily reliable file transfer between two endpoints in a distributed network of a varying, nontrivial size:[3] The only way two endpoints can obtain a completely reliable transfer is by transmitting and acknowledging a checksum for the entire data stream; in such a setting, lesser checksum and acknowledgment (ACK/NACK) protocols are justified only for the purpose of optimizing performance – they are useful to the vast majority of clients, but are not enough to fulfill the reliability requirement of this particular application. A thorough checksum is hence best done at the endpoints, and the network maintains a relatively low level of complexity and reasonable performance for all clients.[3]
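A minimal sketch of the endpoint check (file names here are illustrative): each side hashes the complete stream, and the receiver accepts the transfer only if the digests match.

```python
# End-to-end integrity check for a file transfer: whatever the
# intermediate hops did, only a checksum computed over the whole
# file at both ends can confirm the transfer succeeded.
import hashlib

def file_digest(path: str) -> str:
    """SHA-256 over the entire file, streamed in 64 KiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# The sender publishes file_digest("original.dat"); the receiver
# accepts only if file_digest("received.dat") matches, otherwise
# it requests an end-to-end retransmission. Per-hop checks below
# this level can only optimize, never replace, this comparison.
```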
Limitations
The most important limitation of the end-to-end principle is that its basic premise, placing functions in the application endpoints rather than in the intermediary nodes, is not trivial to implement.
An example of the limitations of the end-to-end principle exists in mobile devices, for instance with mobile IPv6.[12] Pushing service-specific complexity to the endpoints can cause issues with mobile devices if the device has unreliable access to network channels.[13]
Further problems can be seen with a decrease in network transparency from the addition of network address translation (NAT), which IPv4 relies on to combat address exhaustion.[14] With the introduction of IPv6, users once again have unique identifiers, allowing for true end-to-end connectivity. Unique identifiers may be based on a physical address, or can be generated randomly by the host.[15]
The end-to-end principle advocates pushing coordination-related functionality ever higher, ultimately into the application layer. The premise is that application-level information enables flexible coordination between the application endpoints and yields better performance because the coordination would be exactly what is needed. This leads to the idea of modeling each application via its own application-specific protocol that supports the desired coordination between its endpoints while assuming only a simple lower-layer communication service. Broadly, this idea is known as application semantics.
Multiagent systems offer approaches based on application semantics that enable distributed applications to be implemented conveniently without requiring message ordering and delivery guarantees from the underlying communication services. A basic idea in these approaches is to model the coordination between application endpoints via an information protocol[16] and then implement the endpoints (agents) based on the protocol. Information protocols can be enacted over lossy, unordered communication services. A middleware based on information protocols and the associated programming model abstracts message reception away from the underlying network and lets endpoint programmers focus on the business logic for sending messages.
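As a hypothetical sketch of this programming model (the roles, message fields, and class names below are invented for illustration and do not correspond to any particular middleware's API), an endpoint can act on accumulated information rather than on the order in which messages arrive:

```python
# Hypothetical agent endpoint tolerating lossy, unordered delivery:
# decisions are driven by what is known, not by arrival order.
from collections import defaultdict

class SellerEndpoint:
    """Illustrative seller agent: ships once item, price, and payment
    are all known, regardless of message ordering or duplication."""

    def __init__(self) -> None:
        self.known = defaultdict(dict)  # information state per order ID
        self.shipped = set()            # orders already acted upon

    def on_message(self, order_id: int, bindings: dict) -> None:
        """Messages may be lost, duplicated, or reordered by the
        underlying service; duplicates simply rebind the same values."""
        self.known[order_id].update(bindings)
        state = self.known[order_id]
        if {"item", "price", "payment"} <= state.keys() and order_id not in self.shipped:
            self.shipped.add(order_id)
            print(f"order {order_id}: shipping {state['item']}")

seller = SellerEndpoint()
seller.on_message(7, {"payment": "card:ok"})        # arrives "early": buffered
seller.on_message(7, {"item": "book", "price": 12}) # completes the information: ships
seller.on_message(7, {"payment": "card:ok"})        # duplicate: safely ignored
```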
References
1. Template:Cite web
2. Saltzer, J. H.; Reed, D. P.; Clark, D. D. (1981). "End-to-End Arguments in System Design". Proceedings of the Second International Conference on Distributed Computing Systems. IEEE. pp. 509–512.
3. Saltzer, J. H.; Reed, D. P.; Clark, D. D. (1984). "End-to-end arguments in system design". ACM Transactions on Computer Systems. 2 (4): 277–288.
4. Blumenthal, M. S.; Clark, D. D. (2001). "Rethinking the design of the Internet: the end-to-end arguments vs. the brave new world". ACM Transactions on Internet Technology. 1 (1): 70–109.
5. Template:Cite web
6. Baran, P. (1964). "On Distributed Communications Networks". IEEE Transactions on Communications Systems. 12 (1): 1–9.
7. Davies, D. W.; Bartlett, K. A.; Scantlebury, R. A.; Wilkinson, P. T. (1967). "A digital communication network for computers giving rapid response at remote terminals". Proceedings of the ACM Symposium on Operating System Principles. Gatlinburg, TN.
8. Template:Cite book
9. Template:Cite book
10. Template:Cite news
11. Clark, D. D. (2007).
12. Template:Cite IETF
13. Template:Cite web
14. Template:Cite news
15. Template:Cite web
16. Template:Cite web