draft-ietf-mptcp-congestion-03.txt   draft-ietf-mptcp-congestion-04.txt 
Internet Engineering Task Force                                C. Raiciu
Internet-Draft                                                M. Handley
Intended status: Experimental                                 D. Wischik
Expires: December 17, 2011                    University College London
                                                           June 15, 2011

      Coupled Congestion Control for Multipath Transport Protocols
                     draft-ietf-mptcp-congestion-04
Abstract

Often endpoints are connected by multiple paths, but communications
are usually restricted to a single path per connection.  Resource
usage within the network would be more efficient were it possible for
these multiple paths to be used concurrently.  Multipath TCP is a
proposal to achieve multipath transport in TCP.

New congestion control algorithms are needed for multipath transport
protocols such as Multipath TCP, as single path algorithms have a
series of issues in the multipath context.  One of the prominent
problems is that running existing algorithms such as standard TCP
independently on each path would give the multipath flow more than
its fair share at a bottleneck link traversed by more than one of its
subflows.  Further, it is desirable that a source with multiple paths
available will transfer more traffic using the least congested of the
paths, hence achieving resource pooling.  This would increase the
overall efficiency of the network and also its robustness to failure.

This document presents a congestion control algorithm which couples
the congestion control algorithms running on different subflows by
linking their increase functions, and dynamically controls the
overall aggressiveness of the multipath flow.  The result is a
practical algorithm that is fair to TCP at bottlenecks while moving
traffic away from congested links.
Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups.  Note that
[skipping to change at page 2, line 13]
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.

This Internet-Draft will expire on December 17, 2011.
Copyright Notice

Copyright (c) 2011 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
[skipping to change at page 3, line 11]
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the BSD License.
Table of Contents

1.  Requirements Language . . . . . . . . . . . . . . . . . . . .  4
2.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .  4
3.  Coupled Congestion Control Algorithm  . . . . . . . . . . . .  6
4.  Implementation Considerations . . . . . . . . . . . . . . . .  7
    4.1.  Computing alpha in Practice . . . . . . . . . . . . . .  8
    4.2.  Implementation Considerations when CWND is Expressed
          in Packets  . . . . . . . . . . . . . . . . . . . . . .  9
5.  Discussion  . . . . . . . . . . . . . . . . . . . . . . . . . 10
6.  Security Considerations . . . . . . . . . . . . . . . . . . . 10
7.  Acknowledgements  . . . . . . . . . . . . . . . . . . . . . . 11
8.  IANA Considerations . . . . . . . . . . . . . . . . . . . . . 11
9.  References  . . . . . . . . . . . . . . . . . . . . . . . . . 11
    9.1.  Normative References  . . . . . . . . . . . . . . . . . 11
    9.2.  Informative References  . . . . . . . . . . . . . . . . 11
Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . . 12
1.  Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [RFC2119].
2.  Introduction

Multipath TCP (MPTCP, [I-D.ford-mptcp-multiaddressed]) is a set of
extensions to regular TCP [RFC0793] that allows one TCP connection to
be spread across multiple paths.  MPTCP distributes load through the
creation of separate "subflows" across potentially disjoint paths.
How should congestion control be performed for multipath TCP?  First,
each subflow must have its own congestion control state (i.e. cwnd)
so that capacity on that path is matched by offered load.  The
simplest way to achieve this goal is to run standard TCP congestion
control on each subflow.  However, this solution is unsatisfactory as
it gives the multipath flow an unfair share when the paths taken by
its different subflows share a common bottleneck.
Bottleneck fairness is just one requirement that multipath congestion
control should meet.  The following three goals capture the desirable
properties of a practical multipath congestion control algorithm:

o  Goal 1 (Improve Throughput) A multipath flow should perform at
   least as well as a single path flow would on the best of the paths
   available to it.
[skipping to change at page 5, line 4]
Goals 1 and 2 together ensure fairness at the bottleneck.  Goal 3
captures the concept of resource pooling [WISCHIK]: if each multipath
flow sends more data through its least congested path, the traffic in
the network will move away from congested areas.  This improves
robustness and overall throughput, among other things.  The way to
achieve resource pooling is to effectively "couple" the congestion
control loops for the different subflows.
We propose an algorithm that couples only the additive increase
function of the subflows, and uses unmodified TCP behavior in case of
a drop.  The algorithm relies on the traditional TCP mechanisms to
detect drops, to retransmit data, etc.
Detecting shared bottlenecks reliably is quite difficult, but it is
just one part of a bigger question: how much bandwidth should a
multipath user use in total, even if there is no shared bottleneck?
The congestion controller aims to set the multipath flow's aggregate
bandwidth to be the same as a regular TCP flow would get on the best
path available to the multipath flow.  To estimate the bandwidth of a
regular TCP flow, the multipath flow estimates loss rates and round
trip times and computes the target rate.  Then it adjusts the overall
aggressiveness (parameter alpha) to achieve the desired rate.
While the mechanism above always applies, its effect depends on
whether the multipath TCP flow influences or does not influence the
link loss rates (low vs. high statistical multiplexing).  If MPTCP
does not influence link loss rates, MPTCP will get the same
throughput as TCP on the best path.  In cases with low statistical
multiplexing, where the multipath flow influences the loss rates on
the path, the multipath throughput will be strictly higher than a
single TCP would get on any of the paths.  In particular, if using
two idle paths, multipath throughput will be the sum of the two
paths' throughput.
This algorithm ensures bottleneck fairness and fairness in the
broader, network sense.  We acknowledge that current TCP fairness
[skipping to change at page 5, line 47]
It is intended that the algorithm presented here can be applied to
other multipath transport protocols, such as alternative multipath
extensions to TCP, or indeed any other congestion-aware transport
protocols.  However, for the purposes of example this document will,
where appropriate, refer to the MPTCP protocol.

The design decisions and evaluation of the congestion control
algorithm are published in [NSDI].
The algorithm presented here only extends standard TCP congestion
control for multipath operation.  It is foreseeable that other
congestion controllers will be implemented for multipath transport to
achieve the bandwidth-scaling properties of the newer congestion
control algorithms for regular TCP (such as Compound TCP and Cubic).
3.  Coupled Congestion Control Algorithm

The algorithm we present only applies to the increase phase of
congestion avoidance, specifying how the window inflates upon
receiving an ack.  The slow start, fast retransmit, and fast recovery
algorithms, as well as the multiplicative decrease of the congestion
avoidance state, are the same as in standard TCP [RFC5681].
Let cwnd_i be the congestion window on subflow i.  Let tot_cwnd be
the sum of the congestion windows of all subflows in the connection.
Let p_i, rtt_i and mss_i be the loss rate, round trip time (i.e. the
smoothed round trip time estimate used by TCP) and maximum segment
size on subflow i.
We assume throughout this document that the congestion window is
maintained in bytes, unless otherwise specified.  We briefly describe
the algorithm for packet-based implementations of cwnd in
Section 4.2.
Our proposed "Linked Increases" algorithm MUST:

o  For each ack received on subflow i, increase cwnd_i by min
   (alpha*bytes_acked*mss_i/tot_cwnd , bytes_acked*mss_i/cwnd_i )
The increase formula takes the minimum between the computed increase
for the multipath subflow (first argument to min), and the increase
TCP would get in the same scenario (the second argument).  In this
way, we ensure that any multipath subflow cannot be more aggressive
[skipping to change at page 7, line 5]
the case where all the subflows have the same round trip time and
MSS.  In this case the algorithm will grow the total window by
approximately alpha*MSS per RTT.  This increase is distributed to the
individual flows according to their instantaneous window size.
Subflow i will increase by alpha*cwnd_i/tot_cwnd segments per RTT.
Note that, as in standard TCP, when tot_cwnd is large the increase
may be 0.  In this case the increase MUST be set to 1.  We discuss
how to implement this formula in practice in the next section.
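As an illustrative sketch (function and variable names are ours, not the
draft's), the per-ack rule and the set-to-1 floor above can be written as:

```python
def ack_increase(alpha, bytes_acked, mss_i, cwnd_i, tot_cwnd):
    """Per-ack increase for subflow i under "Linked Increases".

    The multipath increase (first term) is capped by the increase
    standard TCP would make in the same scenario (second term).
    All window variables are in bytes.
    """
    multipath_term = alpha * bytes_acked * mss_i / tot_cwnd
    tcp_term = bytes_acked * mss_i / cwnd_i
    increase = min(multipath_term, tcp_term)
    # As noted above: when windows are large the computed increase can
    # round down to 0, in which case it MUST be set to 1.
    return max(increase, 1)
```

With two equal subflows (tot_cwnd twice cwnd_i) and alpha = 2, the two
terms coincide, so the subflow grows exactly as a standard TCP flow would.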
We assume implementations use an approach similar to appropriate byte
counting (ABC, [RFC3465]), where the bytes_acked variable records the
number of bytes newly acknowledged.  If this is not true, bytes_acked
SHOULD be set to mss_i.
To compute tot_cwnd, it is an easy mistake to sum up cwnd_i across
all subflows: when a flow is in fast retransmit, its cwnd is
typically inflated and no longer represents the real congestion
window.  The correct behavior is to use the ssthresh value for flows
in fast retransmit when computing tot_cwnd.  To cater for connections
that are app limited, the computation should consider the minimum
between flight_size_i and cwnd_i, and flight_size_i and ssthresh_i
where appropriate.
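A sketch of this computation follows; the dictionary fields are
illustrative, since the draft does not prescribe any particular state
layout:

```python
def compute_tot_cwnd(subflows):
    """Sum the effective congestion windows across subflows.

    For a subflow in fast retransmit the (inflated) cwnd is replaced
    by ssthresh; application-limited subflows contribute no more than
    their flight size.
    """
    total = 0
    for s in subflows:
        # Use ssthresh while cwnd is inflated by fast retransmit.
        window = s["ssthresh"] if s["in_fast_retransmit"] else s["cwnd"]
        total += min(window, s["flight_size"])
    return total
```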
The total throughput of a multipath flow depends on the value of
alpha and the loss rates, maximum segment sizes and round trip times
of its paths.  Since we require that the total throughput is no worse
than the throughput a single TCP would get on the best path, it is
impossible to choose a-priori a single value of alpha that achieves
the desired throughput in every occasion.  Hence, alpha must be
computed based on the observed properties of the paths.
The formula to compute alpha is:

                          cwnd_i
                 max    ---------
                  i          2
                        rtt_i
   alpha = tot_cwnd * ----------------
                      /      cwnd_i \ 2
                      | sum  ------ |
                      \  i   rtt_i  /
[skipping to change at page 8, line 5]
alpha.
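Read as ordinary arithmetic, the formula above can be sketched in
floating point as follows (subflows given as (cwnd, rtt) pairs; this is
illustrative only, since Section 4 describes the integer variant that
real stacks need):

```python
def compute_alpha(subflows):
    """alpha = tot_cwnd * max_i(cwnd_i/rtt_i^2) / (sum_i cwnd_i/rtt_i)^2

    subflows is a list of (cwnd, rtt) pairs.  For a single subflow this
    yields alpha = 1, i.e. plain TCP behaviour.
    """
    tot_cwnd = sum(cwnd for cwnd, _ in subflows)
    best_path = max(cwnd / rtt ** 2 for cwnd, rtt in subflows)
    denominator = sum(cwnd / rtt for cwnd, rtt in subflows) ** 2
    return tot_cwnd * best_path / denominator
```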
4.  Implementation Considerations

The formula for alpha above implies that alpha is a floating point
value.  This would require performing costly floating point
operations whenever an ACK is received.  Further, in many kernels
floating point operations are disabled.  There is an easy way to
approximate the above calculations using integer arithmetic.
4.1. Computing alpha in Practice
Let alpha_scale be an integer.  When computing alpha, use alpha_scale
* tot_cwnd instead of tot_cwnd, and do all the operations in integer
arithmetic.  Then, scale down the increase per ack by alpha_scale.
The algorithm is:

o  For each ack received on subflow i, increase cwnd_i by min (
   (alpha*bytes_acked*mss_i/tot_cwnd)/alpha_scale ,
   bytes_acked*mss_i/cwnd_i )
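A sketch of the scaled computation, with alpha_scale = 512 as our
illustrative choice (the draft only requires it to be an integer), and
exact rational comparison used to select the best path so no floating
point is needed:

```python
from fractions import Fraction

ALPHA_SCALE = 512  # illustrative: a power of two, so dividing by it is cheap


def compute_alpha_scaled(subflows, alpha_scale=ALPHA_SCALE):
    """alpha with alpha_scale * tot_cwnd in place of tot_cwnd, using
    only integer arithmetic.  subflows is a list of (cwnd, rtt) pairs
    in integer units (e.g. bytes and microseconds)."""
    tot_cwnd = sum(c for c, _ in subflows)
    # Pick the subflow maximising cwnd_i / rtt_i^2, compared exactly.
    best_c, best_r = max(subflows, key=lambda s: Fraction(s[0], s[1] ** 2))
    denominator = sum(c // r for c, r in subflows) ** 2
    return (alpha_scale * tot_cwnd * best_c) // (best_r ** 2 * denominator)


def scaled_increase(alpha, bytes_acked, mss_i, cwnd_i, tot_cwnd,
                    alpha_scale=ALPHA_SCALE):
    """Per-ack increase with the alpha_scale factor divided back out."""
    return min(alpha * bytes_acked * mss_i // tot_cwnd // alpha_scale,
               bytes_acked * mss_i // cwnd_i)
```

For a single subflow, compute_alpha_scaled returns alpha_scale (i.e.
alpha = 1 once the scaling is divided out), matching plain TCP.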
[skipping to change at page 8, line 43]
It is possible to implement the algorithm by calculating tot_cwnd on
each ack; however, this would be costly, especially when the number
of subflows is large.  To avoid this overhead the implementation MAY
choose to maintain a new per-connection state variable called
tot_cwnd.  If it does so, the implementation will update the tot_cwnd
value whenever the individual subflows' windows are updated.
Updating only requires one more addition or subtraction operation
compared to the regular, per-subflow congestion control code, so its
performance impact should be minimal.
Computing alpha per ack is also costly.  We propose that alpha be a
per-connection variable, computed whenever there is a drop and once
per RTT otherwise.  More specifically, let cwnd_new_i be the new
value of the congestion window on subflow i after it is inflated or
after a drop.  Update alpha only if cwnd_i/mss_i != cwnd_new_i/mss_i.
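That test only fires when a subflow's window crosses a whole-segment
boundary; as a sketch (note the integer division):

```python
def should_update_alpha(cwnd_i, cwnd_new_i, mss_i):
    """True only when subflow i's window, in whole segments, changes."""
    return cwnd_i // mss_i != cwnd_new_i // mss_i
```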
In certain cases with small RTTs, computing alpha can still be
expensive.  We observe that if RTTs were constant, it would be
sufficient to compute alpha once per drop, as alpha does not change
between drops (the insight here is that cwnd_i/cwnd_j = constant as
long as both windows increase).  Experimental results show that even
if round trip times are not constant, using the average round trip
time per sawtooth instead of the instantaneous round trip time (i.e.
TCP's smoothed RTT estimator) gives good precision for computing
alpha.  Hence, it is possible to compute alpha only once per drop
according to the formula above, by replacing rtt_i with rtt_avg_i.
If using average round trip time, rtt_avg_i will be computed by
sampling the rtt_i whenever the window can accommodate one more
packet, i.e. when cwnd / mss < (cwnd+increase)/mss.  The samples are
averaged once per sawtooth into rtt_avg_i.  This sampling ensures
that there is no sampling bias for larger windows.
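One way to structure this sampling (our sketch; the draft leaves the
state layout to the implementation):

```python
class RttAverager:
    """Average RTT per sawtooth, sampled only when the window grows by
    a whole segment, so larger windows are not over-sampled."""

    def __init__(self):
        self.samples = []
        self.rtt_avg = None

    def on_increase(self, cwnd, increase, mss, rtt_now):
        # Sample only when the window can accommodate one more packet.
        if cwnd // mss < (cwnd + increase) // mss:
            self.samples.append(rtt_now)

    def on_sawtooth_end(self):
        # Called on a drop: fold this sawtooth's samples into rtt_avg.
        if self.samples:
            self.rtt_avg = sum(self.samples) / len(self.samples)
            self.samples = []
        return self.rtt_avg
```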
Given tot_cwnd and alpha, the congestion control algorithm is run for
each subflow independently, with similar complexity to the standard
TCP increase code [RFC5681].
4.2.  Implementation Considerations when CWND is Expressed in Packets
When the congestion control algorithm maintains cwnd in packets
rather than bytes, the algorithms above must change to take into
account path mss.
To compute the increase when an ack is received, the implementation
for multipath congestion control is a simple extension of the
standard TCP code.  In standard TCP, cwnd_cnt is an additional state
variable that tracks the number of segments acked since the last cwnd
increment; cwnd is incremented only when cwnd_cnt > cwnd; then
cwnd_cnt is set to 0.
In the multipath case, cwnd_cnt_i is maintained for each subflow as
above, and cwnd_i is increased by 1 when cwnd_cnt_i > max(alpha_scale
* tot_cwnd / alpha, cwnd_i).
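A sketch of the packet-based update (the dict fields and helper name are
illustrative), assuming alpha was computed with the alpha_scale factor of
Section 4.1:

```python
ALPHA_SCALE = 512  # illustrative scaling factor, as in Section 4.1


def on_ack(subflow, alpha, tot_cwnd, alpha_scale=ALPHA_SCALE):
    """Packet-based increase: count acked segments in cwnd_cnt and bump
    cwnd by one segment once the coupled threshold is crossed.  All
    window variables here are in packets; alpha is pre-scaled."""
    subflow["cwnd_cnt"] += 1  # one more segment acknowledged
    threshold = max(alpha_scale * tot_cwnd // alpha, subflow["cwnd"])
    if subflow["cwnd_cnt"] > threshold:
        subflow["cwnd"] += 1
        subflow["cwnd_cnt"] = 0
    return subflow
```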
When computing alpha for packet-based stacks, the errors in computing
the terms in the denominator are larger (this is because cwnd is much
smaller and rtt may be comparatively large).  Let max be the index of
[skipping to change at page 10, line 47]
When the loss rates differ, progressively more window will be
allocated to the flow with the lower loss rate.  In contrast, perfect
resource pooling requires that all the window should be allocated on
the path with the lowest loss rate.
6.  Security Considerations

None.

Detailed security analysis for the Multipath TCP protocol itself is
included in [I-D.ford-mptcp-multiaddressed] and RFC 6181 [RFC6181].
7.  Acknowledgements

We thank Christoph Paasch for his suggestions for computing alpha in
packet-based stacks.  The authors are supported by Trilogy
(http://www.trilogy-project.org), a research project (ICT-216372)
partially funded by the European Community under its Seventh
Framework Program.  The views expressed here are those of the
author(s) only.  The European Commission is not liable for any use
that may be made of the information in this document.
8.  IANA Considerations

This document does not require any action from IANA.
9.  References

9.1.  Normative References
[RFC0793]  Postel, J., "Transmission Control Protocol", STD 7,
           RFC 793, September 1981.

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC5681]  Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
           Control", RFC 5681, September 2009.
9.2.  Informative References

[I-D.ford-mptcp-multiaddressed]
           Ford, A., Raiciu, C., Handley, M., and S. Barre, "TCP
           Extensions for Multipath Operation with Multiple
           Addresses", draft-ford-mptcp-multiaddressed-01 (work in
           progress), July 2009.

[KELLY]    Kelly, F. and T. Voice, "Stability of end-to-end
           algorithms for joint routing and rate control", ACM
           SIGCOMM CCR vol. 35 num. 2, pp. 5-12, 2005,
           <http://portal.acm.org/citation.cfm?id=1064415>.

[NSDI]     Wischik, D., Raiciu, C., Greenhalgh, A., and M. Handley,
           "Design, Implementation and Evaluation of Congestion
           Control for Multipath TCP", Usenix NSDI, March 2011,
           <http://www.cs.ucl.ac.uk/staff/c.raiciu/files/mptcp-nsdi.pdf>.
[RFC3465]  Allman, M., "TCP Congestion Control with Appropriate Byte
           Counting (ABC)", RFC 3465, February 2003.

[RFC6181]  Bagnulo, M., "Threat Analysis for TCP Extensions for
           Multipath Operation with Multiple Addresses", RFC 6181,
           March 2011.
[WISCHIK]  Wischik, D., Handley, M., and M. Bagnulo Braun, "The
           Resource Pooling Principle", ACM SIGCOMM CCR vol. 38 num.
           5, pp. 47-52, October 2008,
           <http://ccr.sigcomm.org/online/files/p47-handleyA4.pdf>.
Authors' Addresses

Costin Raiciu
University College London