
End-to-end principle

The end-to-end principle is one of the central design principles of the Internet and is implemented in the design of the underlying methods and protocols in the Internet Protocol Suite. It is also used in other distributed systems. The principle states that, whenever possible, communications protocol operations should be defined to occur at the end-points of a communications system, or as close as possible to the resource being controlled.

According to the end-to-end principle, protocol features are justified in the lower layers of a system only if they are a performance optimization. Hence, Transmission Control Protocol (TCP) retransmission for reliability is still justified, but efforts to improve TCP reliability should stop once peak performance has been reached.

History

The concept and research of end-to-end connectivity and network intelligence at the end-nodes reach back to the packet-switching networks of the 1970s, cf. CYCLADES. A 1981 presentation entitled End-to-end arguments in system design[1] by Jerome H. Saltzer, David P. Reed, and David D. Clark argued that reliable systems tend to require end-to-end processing to operate correctly, in addition to any processing in the intermediate system. They pointed out that most features in the lowest level of a communications system impose costs on all higher-layer clients, even those that do not need the features, and are redundant if the clients have to reimplement the features on an end-to-end basis.

This leads to the model of a dumb, minimal network with smart terminals, a completely different model from the previous paradigm of the smart network with dumb terminals. However, the end-to-end principle was always meant as a pragmatic engineering philosophy for network system design that merely prefers putting intelligence toward the end points. It does not forbid intelligence in the network itself if it makes more practical sense to put certain intelligence in the network rather than at the end points. David D. Clark, together with Marjory S. Blumenthal, wrote in 2001 in Rethinking the design of the Internet: The end to end arguments vs. the brave new world[2]:

from the beginning, the end to end arguments revolved around requirements that could be implemented correctly at the end-points; if implementation inside the network is the only way to accomplish the requirement, then an end to end argument isn't appropriate in the first place.

Indeed, as noted in RFC 1958, edited by Brian Carpenter in June 1996 and entitled “Architectural Principles of the Internet”: “[i]n searching for Internet architectural principles, we must remember that technical change is continuous in the information technology industry. The Internet reflects this. ... In this environment, some architectural principles inevitably change. Principles that seemed inviolable a few years ago are deprecated tomorrow. The principle of constant change is perhaps the only principle of the Internet that should survive indefinitely.” This is particularly true with respect to the so-called “end-to-end” principle.

As noted by Bob Kahn, co-inventor of the Internet Protocol:

The original Internet involved three individual networks, namely the ARPANET, the Packet Radio network and the Packet Satellite network, all three of which had been developed with DARPA support. One early consideration that was rejected was to change each of these networks to be able to interpret and route internet packets so that there would be no need for external devices to route the traffic. However, this would have required major changes to all three networks and would have required synchronized changes in all three to accommodate protocol evolutions. Instead, it was decided to create what were called “gateways,” the forerunner of today’s routers, to handle the IP protocol-based networks. Reliable packet communication was handled by a combination of factors, but, ultimately, the TCP protocol provided an end-to-end means of reassembly of packet fragments, error checking and acknowledgment back to the source. The resulting fact that no changes were needed in the individual networks was interpreted by some as implying that the Internet design assumed only dumb networks with all the smarts being at the boundaries. Nothing could have been further from the truth. The initial choice of using gateways/routers was purely pragmatic and should imply nothing about how the Internet might operate in the future.

In 1995, the Federal Networking Council adopted a resolution defining the Internet as a “global information system” that is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons; is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and provides, uses or makes accessible, either publicly or privately, high level services layered on this communications and related infrastructure [1].

In comments submitted by Patrice Lyons to the United Nations Working Group on Internet Governance (November 4, 2004), entitled “The End-End Principle and the Definition of Internet,” on behalf of Bob Kahn’s non-profit research organization, the Corporation for National Research Initiatives (CNRI), it was noted that:

To argue today that the only stateful elements that may be active in the Internet environment should be located at the edges of the Internet is to ignore the evolution of software and other technologies to provide a host of services throughout the Internet. The layering approach has many advantages and should be retained along with more integrated system architectures; the approach was a practical way of overlaying the Internet architecture over existing networks when it was difficult to coordinate the modification of these networks, if indeed such modifications could have been agreed upon and implemented. For some newer applications, maintaining state information within the network may now be desirable for efficiency if not overall performance effectiveness. In addition, current research efforts may need to draw upon innovative methods to increase security of communications, develop new forms of structuring data, create and deploy dynamic metadata repositories, or real-time authentication of the information itself [2].

Specifically, CNRI proposed that, in the third element of the FNC definition of Internet, after the phrase "high level services layered on", it is advisable to add the following words: "or integrated with", and observed that this point is "directly relevant to the ongoing discussions about the so-called ‘end-to-end’ principle that is often viewed as essential to an understanding of the Internet". Further, while the end-to-end principle may have been relevant in the environment where the Internet originated, it has not been critical for a number of years going back "at least to the early work on mobile programs, distributed searching, and certain aspects of collaborative computing".

Examples

In the Internet Protocol Suite, the Internet Protocol is a simple ("dumb"), stateless protocol that moves datagrams across the network, and TCP is a smart transport protocol providing error detection, retransmission, congestion control, and flow control end-to-end. The network itself (the routers) needs only to support the simple, lightweight IP; the endpoints run the heavier TCP on top of it when needed.
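The division of labor can be illustrated with a short sketch (Python standard library only; the loopback address and port number are illustrative assumptions, not taken from any cited source). The two endpoints below open a TCP connection over loopback; sequencing, acknowledgment, and retransmission are handled entirely by the endpoints' TCP stacks, while the IP layer beneath them merely moves datagrams.

    import socket
    import threading

    HOST, PORT = "127.0.0.1", 5050       # hypothetical loopback endpoint
    ready = threading.Event()

    def server():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            ready.set()                   # now listening; safe for the client to connect
            conn, _ = srv.accept()
            with conn:
                conn.sendall(conn.recv(1024))   # echo the received data back

    threading.Thread(target=server, daemon=True).start()
    ready.wait()

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"end-to-end")        # the endpoint's TCP stack handles reliability
        print(cli.recv(1024))             # b'end-to-end'

Nothing on the path between the two sockets needs to understand TCP; a router forwarding these packets sees only IP datagrams.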

A second canonical example is that of file transfer. Every reliable file transfer protocol and file transfer program should include a checksum, which is validated only after everything has been successfully stored on disk. Disk errors, router errors, and file transfer software errors all make an end-to-end checksum necessary. It follows that there is a limit to how strong the TCP checksum needs to be, because any robust end-to-end application has to reimplement the check in any case.
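A minimal sketch of such an end-to-end check, in Python and not drawn from the original paper (the function names and the file path in the usage comment are hypothetical), recomputes a digest over the file as it actually sits on disk and compares it with a digest computed by the sender before transmission:

    import hashlib
    from pathlib import Path

    def sha256_of_file(path: Path) -> str:
        # Hash the file as it actually sits on disk, not the bytes seen in transit.
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify_transfer(received: Path, sender_digest: str) -> bool:
        # End-to-end check: re-read the stored file and compare with the sender's digest.
        return sha256_of_file(received) == sender_digest

    # Hypothetical usage: the sender ships its own digest alongside the file.
    # if not verify_transfer(Path("download.iso"), digest_from_sender):
    #     ...request a retransfer...

A checksum applied by TCP or by a link layer cannot catch corruption introduced while writing to disk; only this final check at the receiving endpoint can.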

A third example (not from the original paper) is the EtherType field of Ethernet. The Ethernet standard does not attempt to interpret the 16-bit type field of an Ethernet frame. Adding special interpretation to some of those bits would reduce the total number of available EtherTypes, hurting the scalability of higher-layer protocols: all higher-layer protocols would pay a price for the benefit of just a few. Attempts to add more elaborate interpretation (e.g., the IEEE 802 SSAP/DSAP fields) have largely been ignored by network designs that follow the end-to-end principle.
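A short sketch (Python, illustrative only; the handler table and the synthetic frame are assumptions made for the example) shows the intended division of responsibility: Ethernet delivers the frame unchanged, and the receiving endpoint decides what the 16-bit EtherType means.

    import struct

    # Well-known EtherType values; registered, but not interpreted by Ethernet itself.
    ETHERTYPE_HANDLERS = {
        0x0800: lambda payload: f"IPv4 packet, {len(payload)} bytes",
        0x0806: lambda payload: f"ARP message, {len(payload)} bytes",
        0x86DD: lambda payload: f"IPv6 packet, {len(payload)} bytes",
    }

    def dispatch(frame: bytes) -> str:
        # Split off the 14-byte Ethernet header and hand the payload to whichever
        # higher-layer protocol the EtherType names.
        dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
        handler = ETHERTYPE_HANDLERS.get(ethertype)
        return handler(frame[14:]) if handler else f"unknown EtherType 0x{ethertype:04x}"

    # Synthetic frame: broadcast destination, zero source, EtherType 0x0800 (IPv4),
    # followed by a placeholder 20-byte payload.
    frame = b"\xff" * 6 + b"\x00" * 6 + struct.pack("!H", 0x0800) + b"\x45" + b"\x00" * 19
    print(dispatch(frame))   # IPv4 packet, 20 bytes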

References

  1. Saltzer, J. H., Reed, D. P., and Clark, D. D. End-to-End Arguments in System Design. Second International Conference on Distributed Computing Systems, pages 509–512, April 1981. Also in ACM Transactions on Computer Systems, 2(4), pages 277–288, 1984.
  2. Blumenthal, M. S., and Clark, D. D. Rethinking the design of the Internet: The end to end arguments vs. the brave new world. ACM Transactions on Internet Technology, 1(1), pages 70–109, 2001.
