--- 1/draft-ietf-tsvwg-byte-pkt-congest-09.txt 2013-05-24 08:14:21.369259145 +0100 +++ 2/draft-ietf-tsvwg-byte-pkt-congest-10.txt 2013-05-24 08:14:21.465261794 +0100 @@ -1,162 +1,162 @@ Transport Area Working Group B. Briscoe Internet-Draft BT Updates: 2309 (if approved) J. Manner Intended status: BCP Aalto University -Expires: May 11, 2013 November 7, 2012 +Expires: November 24, 2013 May 23, 2013 Byte and Packet Congestion Notification - draft-ietf-tsvwg-byte-pkt-congest-09 + draft-ietf-tsvwg-byte-pkt-congest-10 Abstract This document provides recommendations of best current practice for - dropping or marking packets using active queue management (AQM) such - as random early detection (RED) or pre-congestion notification (PCN). - We give three strong recommendations: (1) packet size should be taken - into account when transports read and respond to congestion - indications, (2) packet size should not be taken into account when - network equipment creates congestion signals (marking, dropping), and - therefore (3) the byte-mode packet drop variant of the RED AQM - algorithm that drops fewer small packets should not be used. This - memo updates RFC 2309 to deprecate deliberate preferential treatment - of small packets in AQM algorithms. + dropping or marking packets using any active queue management (AQM) + algorithm, such as random early detection (RED), BLUE, pre-congestion + notification (PCN), etc. We give three strong recommendations: (1) + packet size should be taken into account when transports read and + respond to congestion indications, (2) packet size should not be + taken into account when network equipment creates congestion signals + (marking, dropping), and therefore (3) in the specific case of RED, + the byte-mode packet drop variant that drops fewer small packets + should not be used. This memo updates RFC 2309 to deprecate + deliberate preferential treatment of small packets in AQM algorithms. 
Status of This Memo This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet- Drafts is at http://datatracker.ietf.org/drafts/current/. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." - This Internet-Draft will expire on May 11, 2013. + This Internet-Draft will expire on November 24, 2013. Copyright Notice - Copyright (c) 2012 IETF Trust and the persons identified as the + Copyright (c) 2013 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License. Table of Contents 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.1. Terminology and Scoping . . . . . . . . . . . . . . . . . 6 1.2. Example Comparing Packet-Mode Drop and Byte-Mode Drop . . 7 - 2. Recommendations . . . . . . . . . . . . . . . . . . . . . . . 8 + 2. Recommendations . . . . . . . . . . . . . . . . . . . . . . . 9 2.1. Recommendation on Queue Measurement . . . . . . . . . . . 9 - 2.2. Recommendation on Encoding Congestion Notification . . . . 9 - 2.3. Recommendation on Responding to Congestion . . 
. . . . . . 10 + 2.2. Recommendation on Encoding Congestion Notification . . . . 10 + 2.3. Recommendation on Responding to Congestion . . . . . . . . 11 2.4. Recommendation on Handling Congestion Indications when - Splitting or Merging Packets . . . . . . . . . . . . . . . 11 + Splitting or Merging Packets . . . . . . . . . . . . . . . 12 3. Motivating Arguments . . . . . . . . . . . . . . . . . . . . . 12 3.1. Avoiding Perverse Incentives to (Ab)use Smaller Packets . 12 - 3.2. Small != Control . . . . . . . . . . . . . . . . . . . . . 13 - 3.3. Transport-Independent Network . . . . . . . . . . . . . . 13 + 3.2. Small != Control . . . . . . . . . . . . . . . . . . . . . 14 + 3.3. Transport-Independent Network . . . . . . . . . . . . . . 14 3.4. Partial Deployment of AQM . . . . . . . . . . . . . . . . 15 - 3.5. Implementation Efficiency . . . . . . . . . . . . . . . . 16 - 4. A Survey and Critique of Past Advice . . . . . . . . . . . . . 16 + 3.5. Implementation Efficiency . . . . . . . . . . . . . . . . 17 + 4. A Survey and Critique of Past Advice . . . . . . . . . . . . . 17 4.1. Congestion Measurement Advice . . . . . . . . . . . . . . 17 - 4.1.1. Fixed Size Packet Buffers . . . . . . . . . . . . . . 17 - 4.1.2. Congestion Measurement without a Queue . . . . . . . . 18 - 4.2. Congestion Notification Advice . . . . . . . . . . . . . . 19 - 4.2.1. Network Bias when Encoding . . . . . . . . . . . . . . 19 + 4.1.1. Fixed Size Packet Buffers . . . . . . . . . . . . . . 18 + 4.1.2. Congestion Measurement without a Queue . . . . . . . . 19 + 4.2. Congestion Notification Advice . . . . . . . . . . . . . . 20 + 4.2.1. Network Bias when Encoding . . . . . . . . . . . . . . 20 4.2.2. Transport Bias when Decoding . . . . . . . . . . . . . 21 4.2.3. Making Transports Robust against Control Packet - Losses . . . . . . . . . . . . . . . . . . . . . . . . 22 + Losses . . . . . . . . . . . . . . . . . . . . . . . . 23 4.2.4. 
Congestion Notification: Summary of Conflicting Advice . . . . . . . . . . . . . . . . . . . . . . . . 23 5. Outstanding Issues and Next Steps . . . . . . . . . . . . . . 24 5.1. Bit-congestible Network . . . . . . . . . . . . . . . . . 24 - 5.2. Bit- & Packet-congestible Network . . . . . . . . . . . . 24 + 5.2. Bit- & Packet-congestible Network . . . . . . . . . . . . 25 6. Security Considerations . . . . . . . . . . . . . . . . . . . 25 - 7. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 25 + 7. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 26 8. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . 26 9. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 27 10. Comments Solicited . . . . . . . . . . . . . . . . . . . . . . 27 11. References . . . . . . . . . . . . . . . . . . . . . . . . . . 27 11.1. Normative References . . . . . . . . . . . . . . . . . . . 27 - 11.2. Informative References . . . . . . . . . . . . . . . . . . 27 + 11.2. Informative References . . . . . . . . . . . . . . . . . . 28 Appendix A. Survey of RED Implementation Status . . . . . . . . . 31 Appendix B. Sufficiency of Packet-Mode Drop . . . . . . . . . . . 32 B.1. Packet-Size (In)Dependence in Transports . . . . . . . . . 33 B.2. Bit-Congestible and Packet-Congestible Indications . . . . 36 Appendix C. Byte-mode Drop Complicates Policing Congestion - Response . . . . . . . . . . . . . . . . . . . . . . 38 - Appendix D. Changes from Previous Versions . . . . . . . . . . . 39 + Response . . . . . . . . . . . . . . . . . . . . . . 37 + Appendix D. Changes from Previous Versions . . . . . . . . . . . 38 1. Introduction This memo concerns how we should correctly scale congestion control - functions with packet size for the long term. It also recognises - that expediency may be necessary to deal with existing widely - deployed protocols that don't live up to the long term goal. + functions with respect to packet size for the long term. 
It also + recognises that expediency may be necessary to deal with existing + widely deployed protocols that don't live up to the long term goal. - When notifying congestion, the problem of how (and whether) to take + When signalling congestion, the problem of how (and whether) to take packet sizes into account has exercised the minds of researchers and practitioners for as long as active queue management (AQM) has been discussed. Indeed, one reason AQM was originally introduced was to reduce the lock-out effects that small packets can have on large packets in drop-tail queues. This memo aims to state the principles we should be using and to outline how these principles will affect future protocol design, taking into account the existing deployments we have already. The question of whether to take into account packet size arises at three stages in the congestion notification process: Measuring congestion: When a congested resource measures locally how - congested it is, should it measure its queue length in bytes or - packets? + congested it is, should it measure its queue length in time, bytes + or packets? Encoding congestion notification into the wire protocol: When a - congested network resource notifies its level of congestion, - should it drop / mark each packet dependent on the byte-size of - the particular packet in question? + congested network resource signals its level of congestion, should + it drop / mark each packet dependent on the size of the particular + packet in question? Decoding congestion notification from the wire protocol: When a transport interprets the notification in order to decide how much - to respond to congestion, should it take into account the byte- - size of each missing or marked packet? + to respond to congestion, should it take into account the size of + each missing or marked packet? 
- Consensus has emerged over the years concerning the first stage: - whether queues are measured in bytes or packets, termed byte-mode - queue measurement or packet-mode queue measurement. Section 2.1 of - this memo records this consensus in the RFC Series. In summary the - choice solely depends on whether the resource is congested by bytes - or packets. + Consensus has emerged over the years concerning the first stage: if + queues cannot be measured in time, whether they should be measured in + bytes or packets. Section 2.1 of this memo records this consensus in + the RFC Series. In summary the choice solely depends on whether the + resource is congested by bytes or packets. The controversy is mainly around the last two stages: whether to allow for the size of the specific packet notifying congestion i) when the network encodes or ii) when the transport decodes the congestion notification. Currently, the RFC series is silent on this matter other than a paper trail of advice referenced from [RFC2309], which conditionally recommends byte-mode (packet-size dependent) drop [pktByteEmail]. + Reducing drop of small packets certainly has some tempting advantages: i) it drops fewer control packets, which tend to be small and ii) it makes TCP's bit-rate less dependent on packet size. However, there are ways of addressing these issues at the transport layer, rather than reverse engineering network forwarding to fix the problems. This memo updates [RFC2309] to deprecate deliberate preferential treatment of small packets in AQM algorithms. It recommends that (1) packet size should be taken into account when transports read @@ -154,37 +154,39 @@ advantages: i) it drops fewer control packets, which tend to be small and ii) it makes TCP's bit-rate less dependent on packet size. However, there are ways of addressing these issues at the transport layer, rather than reverse engineering network forwarding to fix the problems. 
This memo updates [RFC2309] to deprecate deliberate preferential treatment of small packets in AQM algorithms. It recommends that (1) packet size should be taken into account when transports read congestion indications, (2) not when network equipment writes them. + This memo also adds to the congestion control principles enumerated + in BCP 41 [RFC2914]. - In particular this means that the byte-mode packet drop variant of - Random early Detection (RED) should not be used to drop fewer small - packets, because that creates a perverse incentive for transports to - use tiny segments, consequently also opening up a DoS vulnerability. - Fortunately all the RED implementers who responded to our admittedly - limited survey (Section 4.2.4) have not followed the earlier advice - to use byte-mode drop, so the position this memo argues for seems to - already exist in implementations. + In the particular case of Random early Detection (RED), this means + that the byte-mode packet drop variant should not be used to drop + fewer small packets, because that creates a perverse incentive for + transports to use tiny segments, consequently also opening up a DoS + vulnerability. Fortunately all the RED implementers who responded to + our admittedly limited survey (Section 4.2.4) have not followed the + earlier advice to use byte-mode drop, so the position this memo + argues for seems to already exist in implementations. However, at the transport layer, TCP congestion control is a widely deployed protocol that doesn't scale with packet size. To date this hasn't been a significant problem because most TCP implementations have been used with similar packet sizes. But, as we design new - congestion control mechanisms, the current recommendation is that we - should build in scaling with packet size rather than assuming we - should follow TCP's example. 
+ congestion control mechanisms, this memo recommends that we should + build in scaling with packet size rather than assuming we should + follow TCP's example. This memo continues as follows. First it discusses terminology and scoping. Section 2 gives the concrete formal recommendations, followed by motivating arguments in Section 3. We then critically survey the advice given previously in the RFC series and the research literature (Section 4), referring to an assessment of whether or not this advice has been followed in production networks (Appendix A). To wrap up, outstanding issues are discussed that will need resolution both to inform future protocol designs and to handle legacy (Section 5). Then security issues are collected together in @@ -195,31 +197,39 @@ This memo intentionally includes a non-negligible amount of material on the subject. For the busy reader Section 2 summarises the recommendations for the Internet community. 1.1. Terminology and Scoping The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119]. + This memo applies to the design of all AQM algorithms, for example, + Random Early Detection (RED) [RFC2309], BLUE [BLUE02], Pre-Congestion + Notification (PCN) [RFC5670], Controlled Delay (CoDel) [CoDel12] and + the Proportional Integral controller Enhanced (PIE) + [I-D.pan-tsvwg-pie]. Throughout, RED is used as a concrete example + because it is a widely known and deployed AQM algorithm. There is no + intention to imply that the advice is any less applicable to the + other algorithms, nor that RED is preferred. + Congestion Notification: Congestion notification is a changing signal that aims to communicate the probability that the network resource(s) will not be able to forward the level of traffic load offered (or that there is an impending risk that they will not be able to). 
- The `impending risk' qualifier is added, because AQM systems (e.g. - RED, PCN [RFC5670]) set a virtual limit smaller than the actual - limit to the resource, then notify when this virtual limit is - exceeded in order to avoid uncontrolled congestion of the actual - capacity. + The `impending risk' qualifier is added, because AQM systems set a + virtual limit smaller than the actual limit to the resource, then + notify when this virtual limit is exceeded in order to avoid + uncontrolled congestion of the actual capacity. Congestion notification communicates a real number bounded by the range [ 0 , 1 ]. This ties in with the most well-understood measure of congestion notification: drop probability. Explicit and Implicit Notification: The byte vs. packet dilemma concerns congestion notification irrespective of whether it is signalled implicitly by drop or using explicit congestion notification (ECN [RFC3168] or PCN [RFC5670]). Throughout this document, unless clear from the context, the term marking will be @@ -234,59 +244,63 @@ Examples of packet-congestible resources are route look-up engines and firewalls, because load depends on how many packet headers they have to process. Examples of bit-congestible resources are transmission links, radio power and most buffer memory, because the load depends on how many bits they have to transmit or store. Some machine architectures use fixed size packet buffers, so buffer memory in these cases is packet-congestible (see Section 4.1.1). - Currently a design goal of network processing equipment such as - routers and firewalls is to keep packet processing uncongested - even under worst case packet rates with runs of minimum size - packets. Therefore, packet-congestion is currently rare [RFC6077; - S.3.3], but there is no guarantee that it will not become more - common in future. + The path through a machine will typically encounter both packet- + congestible and bit-congestible resources. 
However, currently, a + design goal of network processing equipment such as routers and + firewalls is to size the packet-processing engine(s) relative to + the lines in order to keep packet processing uncongested even + under worst case packet rates with runs of minimum size packets. + Therefore, packet-congestion is currently rare [RFC6077; S.3.3], + but there is no guarantee that it will not become more common in + future. Note that information is generally processed or transmitted with a minimum granularity greater than a bit (e.g. octets). The appropriate granularity for the resource in question should be used, but for the sake of brevity we will talk in terms of bytes in this memo. Coarser Granularity: Resources may be congestible at higher levels of granularity than bits or packets, for instance stateful firewalls are flow-congestible and call-servers are session- congestible. This memo focuses on congestion of connectionless resources, but the same principles may be applicable for congestion notification protocols controlling per-flow and per- session processing or state. RED Terminology: In RED whether to use packets or bytes when measuring queues is called respectively "packet-mode queue measurement" or "byte-mode queue measurement". And whether the probability of dropping a particular packet is independent or - dependent on its byte-size is called respectively "packet-mode - drop" or "byte-mode drop". The terms byte-mode and packet-mode - should not be used without specifying whether they apply to queue - measurement or to drop. + dependent on its size is called respectively "packet-mode drop" or + "byte-mode drop". The terms byte-mode and packet-mode should not + be used without specifying whether they apply to queue measurement + or to drop. 1.2. Example Comparing Packet-Mode Drop and Byte-Mode Drop - A central question addressed by this document is whether to recommend - that AQM uses RED's packet-mode drop and to deprecate byte-mode drop. 
- Table 1 compares how packet-mode and byte-mode drop affect two flows - of different size packets. For each it gives the expected number of - packets and of bits dropped in one second. Each example flow runs at - the same bit-rate of 48Mb/s, but one is broken up into small 60 byte - packets and the other into large 1500 byte packets. + Taking RED as a well-known example algorithm, a central question + addressed by this document is whether to recommend RED's packet-mode + drop variant and to deprecate byte-mode drop. Table 1 compares how + packet-mode and byte-mode drop affect two flows of different size + packets. For each it gives the expected number of packets and of + bits dropped in one second. Each example flow runs at the same bit- + rate of 48Mb/s, but one is broken up into small 60 byte packets and + the other into large 1500 byte packets. To keep up the same bit-rate, in one second there are about 25 times more small packets because they are 25 times smaller. As can be seen from the table, the packet rate is 100,000 small packets versus 4,000 large packets per second (pps). Parameter Formula Small packets Large packets -------------------- -------------- ------------- ------------- Packet size s/8 60B 1,500B Packet size s 480b 12,000b @@ -330,76 +344,99 @@ proportionate to the size of the packet it is in. 2. Recommendations This section gives recommendations related to network equipment in Sections 2.1 and 2.2, and in Sections 2.3 and 2.4 we discuss the implications on the transport protocols. 2.1. Recommendation on Queue Measurement - Queue length is usually the most correct and simplest way to measure - congestion of a resource. To avoid the pathological effects of drop - tail, an AQM function can then be used to transform queue length into - the probability of dropping or marking a packet (e.g. RED's - piecewise linear function between thresholds). + Ideally, an AQM would measure the service time of the queue to + measure congestion of a resource. 
However service time can only be + measured as packets leave the queue, where it is not always feasible + to implement a full AQM algorithm. To predict the service time as + packets join the queue, an AQM algorithm needs to measure the length + of the queue. - If the resource is bit-congestible, the implementation SHOULD measure - the length of the queue in bytes. If the resource is packet- - congestible, the implementation SHOULD measure the length of the - queue in packets. No other choice makes sense, because the number of - packets waiting in the queue isn't relevant if the resource gets - congested by bytes and vice versa. + In this case, if the resource is bit-congestible, the AQM + implementation SHOULD measure the length of the queue in bytes and, + if the resource is packet-congestible, the implementation SHOULD + measure the length of the queue in packets. No other choice makes + sense, because the number of packets waiting in the queue isn't + relevant if the resource gets congested by bytes and vice versa. For + example, the length of the queue into a transmission line would be + measured in bytes, while the length of the queue into a firewall + would be measured in packets. - What this advice means for the case of RED: + To avoid the pathological effects of drop tail, the AQM can then + transform this service time or queue length into the probability of + dropping or marking a packet (e.g. RED's piecewise linear function + between thresholds). + + What this advice means for RED as a specific example: 1. A RED implementation SHOULD use byte mode queue measurement for measuring the congestion of bit-congestible resources and packet mode queue measurement for packet-congestible resources. 2. An implementation SHOULD NOT make it possible to configure the way a queue measures itself, because whether a queue is bit- congestible or packet-congestible is an inherent property of the queue. 
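As an informal illustration of the two points above (not part of the draft's text, and with all names invented for this sketch), a toy queue can bind its measurement unit to the kind of resource it feeds:

```python
# Illustrative sketch only: a queue that measures its own length in
# the unit matching the congested resource, per Section 2.1.

class MeasuredQueue:
    def __init__(self, bit_congestible):
        # Whether the resource is congested by bits (e.g. a
        # transmission line) or by packets (e.g. a firewall) is an
        # inherent property of the queue, so it is fixed at creation
        # and deliberately not operator-configurable.
        self.bit_congestible = bit_congestible
        self._pkts = []

    def enqueue(self, size_bytes):
        self._pkts.append(size_bytes)

    def dequeue(self):
        return self._pkts.pop(0)

    def length(self):
        """Queue length in bytes if bit-congestible, else in packets."""
        if self.bit_congestible:
            return sum(self._pkts)
        return len(self._pkts)

# A queue feeding a transmission line is measured in bytes ...
line_q = MeasuredQueue(bit_congestible=True)
for size in (60, 60, 1500):
    line_q.enqueue(size)
print(line_q.length())    # 1620 (bytes)

# ... while a queue feeding a route look-up engine counts packets.
fw_q = MeasuredQueue(bit_congestible=False)
for size in (60, 60, 1500):
    fw_q.enqueue(size)
print(fw_q.length())      # 3 (packets)
```

The measurement mode is fixed in the constructor precisely because recommendation 2 above deprecates making it configurable.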
+ Exceptions to these recommendations MAY be necessary, for instance + where a packet-congestible resource has to be configured as a proxy + bottleneck for a bit-congestible resource in an adjacent box that + does not support AQM. + The recommended approach in less straightforward scenarios, such as - fixed size buffers, and resources without a queue, is discussed in - Section 4.1. + fixed size packet buffers, resources without a queue and buffers + comprising a mix of packet and bit-congestible resources, is + discussed in Section 4.1. For instance, Section 4.1.1 explains that + the queue into a line should be measured in bytes even if the queue + consists of fixed-size packet-buffers, because the root-cause of any + congestion is bytes arriving too fast for the line--packets filling + buffers are merely a symptom of the underlying congestion of the + line. 2.2. Recommendation on Encoding Congestion Notification - When encoding congestion notification (e.g. by drop, ECN & PCN), a - network device SHOULD treat all packets equally, regardless of their - size. In other words, the probability that network equipment drops - or marks a particular packet to notify congestion SHOULD NOT depend - on the size of the packet in question. As the example in Section 1.2 - illustrates, to drop any bit with probability 0.1% it is only - necessary to drop every packet with probability 0.1% without regard - to the size of each packet. + When encoding congestion notification (e.g. by drop, ECN or PCN), the + probability that network equipment drops or marks a particular packet + to notify congestion SHOULD NOT depend on the size of the packet in + question. As the example in Section 1.2 illustrates, to drop any bit + with probability 0.1% it is only necessary to drop every packet with + probability 0.1% without regard to the size of each packet. 
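The arithmetic behind this claim can be checked with a short sketch (illustrative only; the function name is invented), reusing the 48Mb/s flows of the example in Section 1.2:

```python
# Illustrative check of Section 2.2: if every packet is dropped with
# the same probability p regardless of its size, then any individual
# bit is lost with probability p, whatever the packet size.

def expected_losses(bit_rate_bps, pkt_size_bits, p_drop):
    """Expected packets and bits dropped per second under
    packet-mode drop (drop probability independent of size)."""
    pkt_rate = bit_rate_bps / pkt_size_bits
    dropped_pkts = pkt_rate * p_drop
    dropped_bits = dropped_pkts * pkt_size_bits
    return dropped_pkts, dropped_bits

BIT_RATE = 48_000_000            # 48 Mb/s, as in Section 1.2
P = 0.001                        # 0.1% drop probability

small_pkts, small_bits = expected_losses(BIT_RATE, 60 * 8, P)
large_pkts, large_bits = expected_losses(BIT_RATE, 1500 * 8, P)

# 25x more small packets are dropped per second than large ones ...
print(round(small_pkts), round(large_pkts))    # 100 4
# ... but exactly the same number of bits: 0.1% of the 48Mb stream.
print(round(small_bits), round(large_bits))    # 48000 48000
```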
This approach ensures the network layer offers sufficient congestion information for all known and future transport protocols and also ensures no perverse incentives are created that would encourage transports to use inappropriately small packet sizes. - What this advice means for the case of RED: + What this advice means for RED as a specific example: - 1. AQM algorithms such as RED SHOULD use packet-mode drop, ie they - SHOULD NOT use byte-mode drop. The latter is more complex, it - creates the perverse incentive to fragment segments into tiny + 1. The RED AQM algorithm SHOULD NOT use byte-mode drop, i.e. it + ought to use packet-mode drop. Byte-mode drop is more complex, + it creates the perverse incentive to fragment segments into tiny pieces and it is vulnerable to floods of small packets. 2. If a vendor has implemented byte-mode drop, and an operator has - turned it on, it is RECOMMENDED to turn it off, after - establishing if there are any implications on the relative - performance of applications using different packet sizes. - RED as a whole SHOULD NOT be turned off. Without RED, a drop + turned it on, it is RECOMMENDED to switch it to packet-mode drop, + after establishing if there are any implications on the relative + performance of applications using different packet sizes. The + unlikely possibility of some application-specific legacy use of + byte-mode drop is the only reason that all the above + recommendations on encoding congestion notification are not + phrased more strongly. + + RED as a whole SHOULD NOT be switched off. Without RED, a drop tail queue biases against large packets and is vulnerable to floods of small packets. Note well that RED's byte-mode queue drop is completely orthogonal to byte-mode queue measurement and should not be confused with it. If a RED implementation has a byte-mode but does not specify what sort of byte-mode, it is most probably byte-mode queue measurement, which is fine. 
However, if in doubt, the vendor should be consulted. A survey (Appendix A) showed that there appears to be little, if any, @@ -419,69 +456,73 @@ indication on every octet of the packet, not just one indication per packet. To be clear, the above recommendation solely describes how a transport should interpret the meaning of a congestion indication. It makes no recommendation on whether a transport should act differently based on this interpretation. It merely aids interoperability between transports, if they choose to make their actions depend on the strength of congestion indications. - This definition will be useful as the the IETF transport area - continues its programme of; + This definition will be useful as the IETF transport area continues + its programme of: o updating host-based congestion control protocols to take account of packet size o making transports less sensitive to losing control packets like SYNs and pure ACKs. What this advice means for the case of TCP: 1. If two TCP flows with different packet sizes are required to run - at equal bit rates under the same path conditions, this should be + at equal bit rates under the same path conditions, this SHOULD be done by altering TCP (Section 4.2.2), not network equipment (the latter affects other transports besides TCP). 2. If it is desired to improve TCP performance by reducing the - chance that a SYN or a pure ACK will be dropped, this should be + chance that a SYN or a pure ACK will be dropped, this SHOULD be done by modifying TCP (Section 4.2.3), not network equipment. To be clear, we are not recommending at all that TCPs under equivalent conditions should aim for equal bit-rates. We are merely saying that anyone trying to do such a thing should modify their TCP algorithm, not the network. + These recommendations are phrased as 'SHOULD' rather than 'MUST', + because there may be cases where compatibility with pre-existing + versions of a transport protocol makes the recommendations + impractical. 
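To illustrate the interpretation recommended in Section 2.3 (a congestion indication on a packet counts as an indication on every octet of it), here is a hedged sketch, with invented names and no claim to represent any deployed transport's algorithm:

```python
# Illustrative sketch only: a transport decoding congestion strength
# by counting the bytes of marked packets, not the marked packets
# themselves, so the measure is independent of packet size.

def congestion_fraction(packets):
    """packets: list of (size_bytes, marked) tuples seen in one
    interval.  Returns the fraction of received bytes carrying a
    congestion indication."""
    total_bytes = sum(size for size, _ in packets)
    marked_bytes = sum(size for size, marked in packets if marked)
    return marked_bytes / total_bytes

# Two flows see the same byte-congestion level even though the
# small-packet flow sees 25x more marked packets.
small = [(60, i % 1000 == 0) for i in range(100_000)]   # 100 marks
large = [(1500, i % 1000 == 0) for i in range(4_000)]   # 4 marks
print(congestion_fraction(small), congestion_fraction(large))
# 0.001 0.001
```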
+ 2.4. Recommendation on Handling Congestion Indications when Splitting or Merging Packets Packets carrying congestion indications may be split or merged in - some circumstances (e.g. at a RTCP transcoder or during IP fragment - reassembly). Splitting and merging only make sense in the context of - ECN, not loss. + some circumstances (e.g. at an RTP/RTCP transcoder or during IP + fragment reassembly). Splitting and merging only make sense in the + context of ECN, not loss. The general rule to follow is that the number of octets in packets with congestion indications SHOULD be equivalent before and after merging or splitting. This is based on the principle used above: that an indication of congestion on a packet can be considered as an indication of congestion on each octet of the packet. The above rule is not phrased with the word "MUST" to allow the following exception. There are cases where pre-existing protocols were not designed to conserve congestion marked octets (e.g. IP fragment reassembly [RFC3168] or loss statistics in RTCP receiver - reports [RFC3550] before ECN was added - [I-D.ietf-avtcore-ecn-for-rtp]). When any such protocol is updated, - it SHOULD comply with the above rule to conserve marked octets. - However, the rule may be relaxed if it would otherwise become too - complex to interoperate with pre-existing implementations of the - protocol. + reports [RFC3550] before ECN was added [RFC6679]). When any such + protocol is updated, it SHOULD comply with the above rule to conserve + marked octets. However, the rule may be relaxed if it would + otherwise become too complex to interoperate with pre-existing + implementations of the protocol. One can think of a splitting or merging process as if all the incoming congestion-marked octets increment a counter and all the outgoing marked octets decrement the same counter. 
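A hedged sketch of this conceptual counter (illustrative only; the class name and interface are invented, and any positive balance marks the next outgoing packet in full):

```python
# Illustrative sketch of the octet-conserving counter in Section 2.4:
# incoming congestion-marked octets increment a balance; each
# outgoing packet is marked while the balance is positive, and
# marking decrements the balance by the packet's size (possibly
# driving it negative), so marked-octet totals balance over time.

class MarkCounter:
    def __init__(self):
        self.balance = 0  # congestion-marked octets owed to the output

    def packet_in(self, size_bytes, marked):
        if marked:
            self.balance += size_bytes

    def packet_out(self, size_bytes):
        """Return True if this outgoing packet should carry a mark."""
        if self.balance > 0:
            self.balance -= size_bytes
            return True
        return False

# Merging five 300B marked packets into one 1500B packet:
c = MarkCounter()
for _ in range(5):
    c.packet_in(300, marked=True)
print(c.packet_out(1500))   # True -- 1500 marked octets in, 1500 out

# Splitting one marked 1500B packet into five 300B packets:
c = MarkCounter()
c.packet_in(1500, marked=True)
print([c.packet_out(300) for _ in range(5)])  # all five marked
```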
In order to ensure that congestion indications remain timely, even the smallest positive remainder in the conceptual counter should trigger the next outgoing packet to be marked (causing the counter to go negative). 3. Motivating Arguments @@ -505,30 +546,30 @@ than smaller ones would be dangerous in both the following cases: Malicious transports: A queue that gives an advantage to small packets can be used to amplify the force of a flooding attack. By sending a flood of small packets, the attacker can get the queue to discard more traffic in large packets, allowing more attack traffic to get through to cause further damage. Such a queue allows attack traffic to have a disproportionately large effect on regular traffic without the attacker having to do much work. - Non-malicious transports: Even if a transport designer is not + Non-malicious transports: Even if an application designer is not actually malicious, if over time it is noticed that small packets tend to go faster, designers will act in their own interest and use smaller packets. Queues that give advantage to small packets - create an evolutionary pressure for transports to send at the same - bit-rate but break their data stream down into tiny segments to - reduce their drop rate. Encouraging a high volume of tiny packets - might in turn unnecessarily overload a completely unrelated part - of the system, perhaps more limited by header-processing than - bandwidth. + create an evolutionary pressure for applications or transports to + send at the same bit-rate but break their data stream down into + tiny segments to reduce their drop rate. Encouraging a high + volume of tiny packets might in turn unnecessarily overload a + completely unrelated part of the system, perhaps more limited by + header-processing than bandwidth. 
Imagine two unresponsive flows arrive at a bit-congestible transmission link each with the same bit rate, say 1Mbps, but one consists of 1500B and the other 60B packets, which are 25x smaller. Consider a scenario where gentle RED [gentle_RED] is used, along with the variant of RED we advise against, i.e. where the RED algorithm is configured to adjust the drop probability of packets in proportion to each packet's size (byte mode packet drop). In this case, RED aims to drop 25x more of the larger packets than the smaller ones. Thus, for example if RED drops 25% of the larger packets, it will aim to @@ -541,28 +582,28 @@ Note that, although the byte-mode drop variant of RED amplifies small packet attacks, drop-tail queues amplify small packet attacks even more (see Security Considerations in Section 6). Wherever possible neither should be used. 3.2. Small != Control Dropping fewer control packets considerably improves performance. It is tempting to drop small packets with lower probability in order to - improve performance, because many control packets are small (TCP SYNs - & ACKs, DNS queries & responses, SIP messages, HTTP GETs, etc). - However, we must not give control packets preference purely by virtue - of their smallness, otherwise it is too easy for any data source to - get the same preferential treatment simply by sending data in smaller - packets. Again we should not create perverse incentives to favour - small packets rather than to favour control packets, which is what we - intend. + improve performance, because many control packets tend to be smaller + (TCP SYNs & ACKs, DNS queries & responses, SIP messages, HTTP GETs, + etc). However, we must not give control packets preference purely by + virtue of their smallness, otherwise it is too easy for any data + source to get the same preferential treatment simply by sending data + in smaller packets. 
Again we should not create perverse incentives + to favour small packets rather than to favour control packets, which + is what we intend. Just because many control packets are small does not mean all small packets are control packets. So, rather than fix these problems in the network, we argue that the transport should be made more robust against losses of control packets (see 'Making Transports Robust against Control Packet Losses' in Section 4.2.3). 3.3. Transport-Independent Network @@ -649,21 +690,21 @@ argument applies solely to drop, not to ECN marking. A queue drops packets for either of two reasons: a) to signal to host congestion controls that they should reduce the load and b) because there is no buffer left to store the packets. Active queue management tries to use drops as a signal for hosts to slow down (case a) so that drop due to buffer exhaustion (case b) should not be necessary. AQM is not universally deployed in every queue in the Internet; many - cheap ethernet bridges, software firewalls, NATs on consumer devices, + cheap Ethernet bridges, software firewalls, NATs on consumer devices, etc implement simple tail-drop buffers. Even if AQM were universal, it has to be able to cope with buffer exhaustion (by switching to a behaviour like tail-drop), in order to cope with unresponsive or excessive transports. For these reasons networks will sometimes be dropping packets as a last resort (case b) rather than under AQM control (case a). When buffers are exhausted (case b), they don't naturally drop packets in proportion to their size. The network can only reduce the probability of dropping smaller packets if it has enough space to @@ -728,101 +769,96 @@ on its own size (Section 4.2). The rest of this section is structured accordingly. 4.1. Congestion Measurement Advice The choice of which metric to use to measure queue length was left open in RFC2309. 
It is now well understood that queues for bit- congestible resources should be measured in bytes, and queues for packet-congestible resources should be measured in packets + [pktByteEmail]. Congestion in some legacy bit-congestible buffers is only measured in packets not bytes. In such cases, the operator has to set the thresholds mindful of a typical mix of packets sizes. Any AQM algorithm on such a buffer will be oversensitive to high proportions - of small packets, e.g. a DoS attack, and undersensitive to high + of small packets, e.g. a DoS attack, and under-sensitive to high proportions of large packets. However, there is no need to make allowances for the possibility of such legacy in future protocol - design. This is safe because any undersensitivity during unusual + design. This is safe because any under-sensitivity during unusual traffic mixes cannot lead to congestion collapse given the buffer will eventually revert to tail drop, discarding proportionately more large packets. 4.1.1. Fixed Size Packet Buffers The question of whether to measure queues in bytes or packets seems - to be well understood. However, measuring congestion is not - straightforward when the resource is bit congestible but the queue is - packet congestible or vice versa. This section outlines the approach - to take. There is no controversy over what should be done, you just - need to be expert in probability to work it out. And, even if you - know what should be done, it's not always easy to find a practical - algorithm to implement it. + to be well understood. However, measuring congestion is confusing + when the resource is bit congestible but the queue into the resource + is packet congestible. This section outlines the approach to take. - Some, mostly older, queuing hardware sets aside fixed sized buffers - in which to store each packet in the queue. Also, with some - hardware, any fixed sized buffers not completely filled by a packet - are padded when transmitted to the wire. 
If we imagine a theoretical - forwarding system with both queuing and transmission in fixed, MTU- - sized units, it should clearly be treated as packet-congestible, - because the queue length in packets would be a good model of - congestion of the lower layer link. + line in one of two ways: - If we now imagine a hybrid forwarding system with transmission delay - largely dependent on the byte-size of packets but buffers of one MTU - per packet, it should strictly require a more complex algorithm to - determine the probability of congestion. It should be treated as two - resources in sequence, where the sum of the byte-sizes of the packets - within each packet buffer models congestion of the line while the - length of the queue in packets models congestion of the queue. Then - the probability of congesting the forwarding buffer would be a - conditional probability--conditional on the previously calculated - probability of congesting the line. + o With some hardware, any fixed sized buffers not completely filled + by a packet are padded when transmitted to the wire. This case + should clearly be treated as packet-congestible, because both + queuing and transmission are in fixed MTU-sized units. Therefore + the queue length in packets is a good model of congestion of the + link. - In systems that use fixed size buffers, it is unusual for all the - buffers used by an interface to be the same size. Typically pools of + o More commonly, hardware with fixed size packet buffers transmits + packets to line without padding. This implies a hybrid forwarding + system with transmission congestion dependent on the size of + packets but queue congestion dependent on the number of packets, + irrespective of their size.
+ + Nonetheless, there would be no queue at all unless the line had + become congested--the root-cause of any congestion is too many + bytes arriving for the line. Therefore, the AQM should measure + the queue length as the sum of all the packet sizes in bytes that + are queued up waiting to be serviced by the line, irrespective of + whether each packet is held in a fixed size buffer. + + In the (unlikely) first case where use of padding means the queue + should be measured in packets, further confusion is likely because + the fixed buffers are rarely all one size. Typically pools of different sized buffers are provided (Cisco uses the term 'buffer carving' for the process of dividing up memory into these pools [IOSArch]). Usually, if the pool of small buffers is exhausted, arriving small packets can borrow space in the pool of large buffers, - but not vice versa. However, it is easier to work out what should be - done if we temporarily set aside the possibility of such borrowing. - Then, with fixed pools of buffers for different sized packets and no - borrowing, the size of each pool and the current queue length in each - pool would both be measured in packets. So an AQM algorithm would - have to maintain the queue length for each pool, and judge whether to - drop/mark a packet of a particular size by looking at the pool for - packets of that size and using the length (in packets) of its queue. - - We now return to the issue we temporarily set aside: small packets - borrowing space in larger buffers. In this case, the only difference - is that the pools for smaller packets have a maximum queue size that - includes all the pools for larger packets. And every time a packet - takes a larger buffer, the current queue size has to be incremented - for all queues in the pools of buffers less than or equal to the - buffer size used. + but not vice versa. 
However, there is no need to consider all this + complexity, because the root-cause of any congestion is still line + overload--buffer consumption is only the symptom. Therefore, the + length of the queue should be measured as the sum of the bytes in the + queue that will be transmitted to line, including any padding. In + the (unusual) case of transmission with padding this means the sum of + the sizes of the small buffers queued plus the sum of the sizes of + the large buffers queued. We will return to borrowing of fixed sized buffers when we discuss biasing the drop/marking probability of a specific packet because of - its size in Section 4.2.1. But here we can give a at least one - simple rule for how to measure the length of queues of fixed buffers: - no matter how complicated the scheme is, ultimately any fixed buffer - system will need to measure its queue length in packets not bytes. + its size in Section 4.2.1. But here we can repeat the simple rule + for how to measure the length of queues of fixed buffers: no matter + how complicated the buffering scheme is, ultimately a transmission + line is nearly always bit-congestible so the number of bytes queued + up waiting for the line measures how congested the line is, and it is + rarely important to measure how congested the buffering system is. 4.1.2. Congestion Measurement without a Queue AQM algorithms are nearly always described assuming there is a queue for a congested resource and the algorithm can use the queue length to determine the probability that it will drop or mark each packet. - But not all congested resources lead to queues. For instance, wireless spectrum is usually regarded as bit-congestible (for a given coding scheme). But wireless link protocols do not always maintain a queue that depends on spectrum interference. 
Similarly, power limited resources are also usually bit-congestible if energy is primarily required for transmission rather than header processing, but it is rare for a link protocol to build a queue as it approaches maximum power. Nonetheless, AQM algorithms do not require a queue in order to work. @@ -857,21 +893,21 @@ by making the policing mechanism count the volume of bytes randomly dropped, not the number of packets. A few months before RFC2309 was published, an addendum was added to the above archived email referenced from the RFC, in which the final paragraph seemed to partially retract what had previously been said. It clarified that the question of whether the probability of dropping/marking a packet should depend on its size was not related to whether the resource itself was bit congestible, but a completely orthogonal question. However the only example given had the queue - measured in packets but packet drop depended on the byte-size of the + measured in packets but packet drop depended on the size of the packet in question. No example was given the other way round. In 2000, Cnodder et al [REDbyte] pointed out that there was an error in the part of the original 1993 RED algorithm that aimed to distribute drops uniformly, because it didn't correctly take into account the adjustment for packet size. They recommended an algorithm called RED_4 to fix this. But they also recommended a further change, RED_5, to adjust drop rate dependent on the square of relative packet size. This was indeed consistent with one implied motivation behind RED's byte mode drop--that we should reverse @@ -891,49 +927,40 @@ probabilities of greater than 1 (which gives a hint that there is probably a mistake in the theory somewhere). On 10-Nov-2004, this variant of byte-mode packet drop was made the default in the ns2 simulator. 
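For concreteness, the difference between the recommended packet-mode drop and the deprecated byte-mode packet drop can be sketched as follows. This is an illustrative sketch under assumed RED parameters (the thresholds, max_p, and MAX_PKT_SIZE are arbitrary here), not a production RED implementation:

```python
MAX_PKT_SIZE = 1500  # assumed maximum packet size in bytes

def red_base_prob(avg_q, min_th=5, max_th=15, max_p=0.1):
    """Basic RED: drop probability rises linearly with the averaged
    queue length between the two thresholds."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

def drop_prob(avg_q, pkt_size, byte_mode):
    """Packet-mode drop (recommended) ignores packet size; the
    deprecated byte-mode packet drop scales the probability by the
    packet's size relative to MAX_PKT_SIZE, so small packets are
    deliberately dropped less often."""
    p = red_base_prob(avg_q)
    if byte_mode:
        p *= pkt_size / MAX_PKT_SIZE
    return p
```

With byte-mode enabled, a 60-byte packet sees a drop probability 25 times lower than a 1500-byte packet at the same average queue length, which is exactly the bias towards small packets that this memo deprecates.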
It seems unlikely that byte-mode drop has ever been implemented in production networks (Appendix A), therefore any conclusions based on ns2 simulations that use RED without disabling byte-mode drop are likely to behave very differently from RED in production networks. -4.2.1.2. Packet Size Bias Regardless of RED +4.2.1.2. Packet Size Bias Regardless of AQM - The byte-mode drop variant of RED is, of course, not the only - possible bias towards small packets in queueing systems. We have - already mentioned that tail-drop queues naturally tend to lock-out - large packets once they are full. But also queues with fixed sized - buffers reduce the probability that small packets will be dropped if - (and only if) they allow small packets to borrow buffers from the - pools for larger packets. As was explained in Section 4.1.1 on fixed - size buffer carving, borrowing effectively makes the maximum queue - size for small packets greater than that for large packets, because - more buffers can be used by small packets while less will fit large - packets. + The byte-mode drop variant of RED (or a similar variant of other AQM + algorithms) is not the only possible bias towards small packets in + queueing systems. We have already mentioned that tail-drop queues + naturally tend to lock-out large packets once they are full. - In itself, the bias towards small packets caused by buffer borrowing - is perfectly correct. Lower drop probability for small packets is - legitimate in buffer borrowing schemes, because small packets - genuinely congest the machine's buffer memory less than large - packets, given they can fit in more spaces. The bias towards small - packets is not artificially added (as it is in RED's byte-mode drop - algorithm), it merely reflects the reality of the way fixed buffer - memory gets congested. Incidentally, the bias towards small packets - from buffer borrowing is nothing like as large as that of RED's byte- - mode drop. 
+ But also queues with fixed sized buffers reduce the probability that + small packets will be dropped if (and only if) they allow small + packets to borrow buffers from the pools for larger packets (see + Section 4.1.1). Borrowing effectively makes the maximum queue size + for small packets greater than that for large packets, because more + buffers can be used by small packets while fewer will fit large + packets. Incidentally, the bias towards small packets from buffer + borrowing is nothing like as large as that of RED's byte-mode drop. Nonetheless, fixed-buffer memory with tail drop is still prone to - lock-out large packets, purely because of the tail-drop aspect. So, - good AQM algorithm like RED with packet-mode drop should be used with - fixed buffer memories where possible. If RED is too complicated to + lock-out large packets, purely because of the tail-drop aspect. So, + fixed-size packet buffers should be augmented with a good AQM + algorithm and packet-mode drop. If an AQM is too complicated to implement with multiple fixed buffer pools, the minimum necessary to prevent large packet lock-out is to ensure smaller packets never use the last available buffer in any of the pools for larger packets. 4.2.2. Transport Bias when Decoding The above proposals to alter the network equipment to bias towards smaller packets have largely carried on outside the IETF process. Whereas, within the IETF, there are many different proposals to alter transport protocols to achieve the same goals, i.e. either to make @@ -1222,212 +1250,191 @@ 10. Comments Solicited Comments and questions are encouraged and very welcome. They can be addressed to the IETF Transport Area working group mailing list , and/or to the authors. 11. References 11.1. Normative References - [RFC2119] Bradner, S., "Key words for use in - RFCs to Indicate Requirement Levels", - BCP 14, RFC 2119, March 1997.
+ [RFC2119] Bradner, S., "Key words for use in RFCs to + Indicate Requirement Levels", BCP 14, RFC 2119, + March 1997. - [RFC3168] Ramakrishnan, K., Floyd, S., and D. - Black, "The Addition of Explicit - Congestion Notification (ECN) to IP", - RFC 3168, September 2001. + [RFC3168] Ramakrishnan, K., Floyd, S., and D. Black, "The + Addition of Explicit Congestion Notification + (ECN) to IP", RFC 3168, September 2001. 11.2. Informative References - [CCvarPktSize] Widmer, J., Boutremans, C., and J-Y. - Le Boudec, "Congestion Control for - Flows with Variable Packet Size", ACM - CCR 34(2) 137--151, 2004, . + [BLUE02] Feng, W-c., Shin, K., Kandlur, D., and D. Saha, + "The BLUE active queue management algorithms", + IEEE/ACM Transactions on Networking 10(4) 513-- + 528, August 2002, + . - [CHOKe_Var_Pkt] Psounis, K., Pan, R., and B. - Prabhaker, "Approximate Fair Dropping - for Variable Length Packets", IEEE - Micro 21(1):48--56, January- - February 2001, . + [CCvarPktSize] Widmer, J., Boutremans, C., and J-Y. Le Boudec, + "Congestion Control for Flows with Variable + Packet Size", ACM CCR 34(2) 137--151, 2004, + . - [DRQ] Shin, M., Chong, S., and I. Rhee, - "Dual-Resource TCP/AQM for - Processing-Constrained Networks", - IEEE/ACM Transactions on - Networking Vol 16, issue 2, - April 2008, . + [CHOKe_Var_Pkt] Psounis, K., Pan, R., and B. Prabhaker, + "Approximate Fair Dropping for Variable Length + Packets", IEEE Micro 21(1):48--56, January- + February 2001, . - [DupTCP] Wischik, D., "Short messages", Royal - Society workshop on networks: - modelling and control , - September 2007, . + [CoDel12] Nichols, K. and V. Jacobson, "Controlling Queue + Delay", ACM Queue 10(5), May 2012, + . - [ECNFixedWireless] Siris, V., "Resource Control for - Elastic Traffic in CDMA Networks", - Proc. ACM MOBICOM'02 , - September 2002, . + + [DupTCP] Wischik, D., "Short messages", Royal Society + workshop on networks: modelling and control , + September 2007, . 
+ + [ECNFixedWireless] Siris, V., "Resource Control for Elastic Traffic + in CDMA Networks", Proc. ACM MOBICOM'02 , + September 2002, . - [Evol_cc] Gibbens, R. and F. Kelly, "Resource - pricing and the evolution of - congestion control", - Automatica 35(12)1969--1985, - December 1999, . - [I-D.ietf-avtcore-ecn-for-rtp] Westerlund, M., Johansson, I., - Perkins, C., O'Hanlon, P., and K. - Carlberg, "Explicit Congestion - Notification (ECN) for RTP over UDP", - draft-ietf-avtcore-ecn-for-rtp-08 - (work in progress), May 2012. - - [I-D.ietf-conex-concepts-uses] Briscoe, B., Woundy, R., and A. - Cooper, "ConEx Concepts and Use - Cases", - (work in progress), March 2012. + [I-D.pan-tsvwg-pie] Pan, R., Natarajan, P., Piglione, C., and M. + Prabhu, "PIE: A Lightweight Control Scheme To + Address the Bufferbloat Problem", + draft-pan-tsvwg-pie-00 (work in progress), + December 2012. - [IOSArch] Bollapragada, V., White, R., and C. - Murphy, "Inside Cisco IOS Software - Architecture", Cisco Press: CCIE - Professional Development ISBN13: 978- - 1-57870-181-0, July 2000. + [IOSArch] Bollapragada, V., White, R., and C. Murphy, + "Inside Cisco IOS Software Architecture", Cisco + Press: CCIE Professional Development ISBN13: + 978-1-57870-181-0, July 2000. - [PktSizeEquCC] Vasallo, P., "Variable Packet Size - Equation-Based Congestion Control", - ICSI Technical Report tr-00-008, - 2000, . + [PktSizeEquCC] Vasallo, P., "Variable Packet Size Equation- + Based Congestion Control", ICSI Technical + Report tr-00-008, 2000, . - [RED93] Floyd, S. and V. Jacobson, "Random - Early Detection (RED) gateways for - Congestion Avoidance", IEEE/ACM - Transactions on Networking 1(4) 397-- - 413, August 1993, . + [RED93] Floyd, S. and V. Jacobson, "Random Early + Detection (RED) gateways for Congestion + Avoidance", IEEE/ACM Transactions on + Networking 1(4) 397--413, August 1993, + . - [REDbias] Eddy, W. and M. 
Allman, "A Comparison - of RED's Byte and Packet Modes", - Computer Networks 42(3) 261--280, - June 2003, . - [REDbyte] De Cnodder, S., Elloumi, O., and K. - Pauwels, "RED behavior with different - packet sizes", Proc. 5th IEEE - Symposium on Computers and - Communications (ISCC) 793--799, - July 2000, . + [REDbyte] De Cnodder, S., Elloumi, O., and K. Pauwels, + "RED behavior with different packet sizes", + Proc. 5th IEEE Symposium on Computers and + Communications (ISCC) 793--799, July 2000, + . - [RFC2309] Braden, B., Clark, D., Crowcroft, J., - Davie, B., Deering, S., Estrin, D., - Floyd, S., Jacobson, V., Minshall, - G., Partridge, C., Peterson, L., - Ramakrishnan, K., Shenker, S., - Wroclawski, J., and L. Zhang, - "Recommendations on Queue Management - and Congestion Avoidance in the + [RFC2309] Braden, B., Clark, D., Crowcroft, J., Davie, B., + Deering, S., Estrin, D., Floyd, S., Jacobson, + V., Minshall, G., Partridge, C., Peterson, L., + Ramakrishnan, K., Shenker, S., Wroclawski, J., + and L. Zhang, "Recommendations on Queue + Management and Congestion Avoidance in the Internet", RFC 2309, April 1998. - [RFC2474] Nichols, K., Blake, S., Baker, F., - and D. Black, "Definition of the - Differentiated Services Field (DS - Field) in the IPv4 and IPv6 Headers", + [RFC2474] Nichols, K., Blake, S., Baker, F., and D. Black, + "Definition of the Differentiated Services Field + (DS Field) in the IPv4 and IPv6 Headers", RFC 2474, December 1998. - [RFC3426] Floyd, S., "General Architectural and - Policy Considerations", RFC 3426, - November 2002. + [RFC2914] Floyd, S., "Congestion Control Principles", + BCP 41, RFC 2914, September 2000. - [RFC3550] Schulzrinne, H., Casner, S., - Frederick, R., and V. Jacobson, "RTP: - A Transport Protocol for Real-Time - Applications", STD 64, RFC 3550, + [RFC3426] Floyd, S., "General Architectural and Policy + Considerations", RFC 3426, November 2002. + + [RFC3550] Schulzrinne, H., Casner, S., Frederick, R., and + V. 
Jacobson, "RTP: A Transport Protocol for + Real-Time Applications", STD 64, RFC 3550, July 2003. - [RFC3714] Floyd, S. and J. Kempf, "IAB Concerns - Regarding Congestion Control for - Voice Traffic in the Internet", - RFC 3714, March 2004. + [RFC3714] Floyd, S. and J. Kempf, "IAB Concerns Regarding + Congestion Control for Voice Traffic in the + Internet", RFC 3714, March 2004. - [RFC4828] Floyd, S. and E. Kohler, "TCP - Friendly Rate Control (TFRC): The - Small-Packet (SP) Variant", RFC 4828, - April 2007. + [RFC4828] Floyd, S. and E. Kohler, "TCP Friendly Rate + Control (TFRC): The Small-Packet (SP) Variant", + RFC 4828, April 2007. - [RFC5348] Floyd, S., Handley, M., Padhye, J., - and J. Widmer, "TCP Friendly Rate - Control (TFRC): Protocol - Specification", RFC 5348, + [RFC5348] Floyd, S., Handley, M., Padhye, J., and J. + Widmer, "TCP Friendly Rate Control (TFRC): + Protocol Specification", RFC 5348, September 2008. - [RFC5562] Kuzmanovic, A., Mondal, A., Floyd, - S., and K. Ramakrishnan, "Adding - Explicit Congestion Notification - (ECN) Capability to TCP's SYN/ACK + [RFC5562] Kuzmanovic, A., Mondal, A., Floyd, S., and K. + Ramakrishnan, "Adding Explicit Congestion + Notification (ECN) Capability to TCP's SYN/ACK Packets", RFC 5562, June 2009. - [RFC5670] Eardley, P., "Metering and Marking - Behaviour of PCN-Nodes", RFC 5670, - November 2009. + [RFC5670] Eardley, P., "Metering and Marking Behaviour of + PCN-Nodes", RFC 5670, November 2009. - [RFC5681] Allman, M., Paxson, V., and E. - Blanton, "TCP Congestion Control", - RFC 5681, September 2009. + [RFC5681] Allman, M., Paxson, V., and E. Blanton, "TCP + Congestion Control", RFC 5681, September 2009. - [RFC5690] Floyd, S., Arcia, A., Ros, D., and J. - Iyengar, "Adding Acknowledgement - Congestion Control to TCP", RFC 5690, - February 2010. + [RFC5690] Floyd, S., Arcia, A., Ros, D., and J. Iyengar, + "Adding Acknowledgement Congestion Control to + TCP", RFC 5690, February 2010. 
- [RFC6077] Papadimitriou, D., Welzl, M., Scharf, - M., and B. Briscoe, "Open Research - Issues in Internet Congestion - Control", RFC 6077, February 2011. + [RFC6077] Papadimitriou, D., Welzl, M., Scharf, M., and B. + Briscoe, "Open Research Issues in Internet + Congestion Control", RFC 6077, February 2011. - [Rate_fair_Dis] Briscoe, B., "Flow Rate Fairness: - Dismantling a Religion", ACM - CCR 37(2)63--74, April 2007, . + [RFC6679] Westerlund, M., Johansson, I., Perkins, C., + O'Hanlon, P., and K. Carlberg, "Explicit + Congestion Notification (ECN) for RTP over UDP", + RFC 6679, August 2012. - [gentle_RED] Floyd, S., "Recommendation on using - the "gentle_" variant of RED", Web - page , March 2000, . + [RFC6789] Briscoe, B., Woundy, R., and A. Cooper, + "Congestion Exposure (ConEx) Concepts and Use + Cases", RFC 6789, December 2012. - [pBox] Floyd, S. and K. Fall, "Promoting the - Use of End-to-End Congestion Control - in the Internet", IEEE/ACM - Transactions on Networking 7(4) 458-- - 472, August 1999, . + [Rate_fair_Dis] Briscoe, B., "Flow Rate Fairness: Dismantling a + Religion", ACM CCR 37(2)63--74, April 2007, + . - [pktByteEmail] Floyd, S., "RED: Discussions of Byte - and Packet Modes", Web page Red Queue - Management, March 1997, . + [gentle_RED] Floyd, S., "Recommendation on using the + "gentle_" variant of RED", Web page , + March 2000, + . + + [pBox] Floyd, S. and K. Fall, "Promoting the Use of + End-to-End Congestion Control in the Internet", + IEEE/ACM Transactions on Networking 7(4) 458-- + 472, August 1999, + . + + [pktByteEmail] Floyd, S., "RED: Discussions of Byte and Packet + Modes", email , March 1997, . Appendix A. Survey of RED Implementation Status This Appendix is informative, not normative. In May 2007 a survey was conducted of 84 vendors to assess how widely drop probability based on packet size has been implemented in RED Table 3. About 19% of those surveyed replied, giving a sample size of 16. 
Although in most cases we do not have permission to identify the respondents, we can say that those that have responded include @@ -1749,43 +1755,135 @@ cannot, at least not without a lot of complexity. The early research proposals for type (i) policing at a bottleneck link [pBox] used byte-mode drop, then detected flows that contributed disproportionately to the number of packets dropped. However, with no extra complexity, later proposals used packet mode drop and looked for flows that contributed a disproportionate amount of dropped bytes [CHOKe_Var_Pkt]. Work is progressing on the congestion exposure protocol (ConEx - [I-D.ietf-conex-concepts-uses]), which enables a type (ii) edge - policer located at a user's attachment point. The idea is to be able - to take an integrated view of the effect of all a user's traffic on - any link in the internetwork. However, byte-mode drop would - effectively preclude such edge policing because of the MTU issue - above. + [RFC6789]), which enables a type (ii) edge policer located at a + user's attachment point. The idea is to be able to take an + integrated view of the effect of all a user's traffic on any link in + the internetwork. However, byte-mode drop would effectively preclude + such edge policing because of the MTU issue above. Indeed, making drop probability depend on the size of the packets that bits happen to be divided into would simply encourage the bits to be divided into smaller packets in order to confuse policing. In contrast, as long as a dropped/marked packet is taken to mean that all the bytes in the packet are dropped/marked, a policer can remain robust against bits being re-divided into different size packets or across different size flows [Rate_fair_Dis]. Appendix D. Changes from Previous Versions To be removed by the RFC Editor on publication. 
Full incremental diffs between each version are available at (courtesy of the rfcdiff tool): + From -09 to -10: Following IESG review: + + * Updates 2309: Left header unchanged reflecting eventual IESG + consensus [Sean Turner, Pete Resnick]. + + * S.1 Intro: This memo adds to the congestion control principles + enumerated in BCP 41 [Pete Resnick] + + * Abstract, S.1, S.1.1, s.1.2 Intro, Scoping and Example: Made + applicability to all AQMs clearer listing some more example + AQMs and explained that we always use RED for examples, but + this doesn't mean it's not applicable to other AQMs. [A number + of reviewers have described the draft as "about RED"] + + * S.1 & S.2.1 Queue measurement: Explained that the choice + between measuring the queue in packets or bytes is only + relevant if measuring it in time units is infeasible [So as not + to imply that we haven't noticed the advances made by PDPC & + CoDel] + + * S.1.1. Terminology: Better explained why hybrid systems + congested by both packets and bytes are often designed to be + treated as bit-congestible [Richard Barnes]. + + * S.2.1. Queue measurement advice: Added examples. Added a + counter-example to justify SHOULDs rather than MUSTs. Pointed + to S.4.1 for a list of more complicated scenarios. [Benson + Schliesser, OpsDir] + + * S2.2. Recommendation on Encoding Congestion Notification: + Removed SHOULD treat packets equally, leaving only SHOULD NOT + drop dependent on packet size, to avoid it sounding like we're + saying QoS is not allowed. Pointed to possible app-specific + legacy use of byte-mode as a counter-example that prevents us + saying MUST NOT. [Pete Resnick] + + * S.2.3. Recommendation on Responding to Congestion: capitalised + the two SHOULDs in recommendations for TCP, and gave possible + counter-examples. [noticed while dealing with Pete Resnick's + point] + + * S2.4. 
Splitting & Merging: RTCP -> RTP/RTCP [Pete McCann, Gen- + ART] + + * S.3.2 Small != Control: many control packets are small -> + ...tend to be small [Stephen Farrell] + + * S.3.1 Perverse incentives: Changed transport designers to app + developers [Stephen Farrell] + + * S.4.1.1. Fixed Size Packet Buffers: Nearly completely re- + written to simplify and to reverse the advice when the + underlying resource is bit-congestible, irrespective of whether + the buffer consists of fixed-size packet buffers. [Richard + Barnes & Benson Schliesser] + + * S.4.2.1.2. Packet Size Bias Regardless of AQM: Largely re- + written to reflect the earlier change in advice about fixed- + size packet buffers, and to primarily focus on getting rid of + tail-drop, not various nuances of tail-drop. [Richard Barnes & + Benson Schliesser] + + * Editorial corrections [Tim Bray, AppsDir, Pete McCann, Gen-ART + and others] + + * Updated refs (two I-Ds have become RFCs). [Pete McCann] + + From -08 to -09: Following WG last call: + + * S.2.1: Made RED-related queue measurement recommendations + clearer + + * S.2.3: Added to "Recommendation on Responding to Congestion" to + make it clear that we are definitely not saying transports have + to equalise bit-rates, just how to do it and not do it, if you + want to. + + * S.3: Clarified motivation sections S.3.3 "Transport-Independent + Network" and S.3.5 "Implementation Efficiency" + + * S.3.4: Completely changed motivating argument from "Scaling + Congestion Control with Packet Size" to "Partial Deployment of + AQM". + + From -07 to -08: + + * Altered abstract to say it provides best current practice and + highlight that it updates RFC2309 + + * Added null IANA section + + * Updated refs + From -06 to -07: * A mix-up with the corollaries and their naming in 2.1 to 2.3 fixed. From -05 to -06: * Primarily editorial fixes. From -04 to -05: