draft-ietf-mptcp-congestion-06.txt   draft-ietf-mptcp-congestion-07.txt 
Internet Engineering Task Force                                C. Raiciu
Internet-Draft                                University Politehnica of
Intended status: Experimental                                  Bucharest
Expires: January 30, 2012                                     M. Handley
                                                              D. Wischik
                                               University College London
                                                           July 29, 2011


      Coupled Congestion Control for Multipath Transport Protocols
                     draft-ietf-mptcp-congestion-07
Abstract

   Often endpoints are connected by multiple paths, but communications
   are usually restricted to a single path per connection.  Resource
   usage within the network would be more efficient were it possible for
   these multiple paths to be used concurrently.  Multipath TCP is a
   proposal to achieve multipath transport in TCP.

   New congestion control algorithms are needed for multipath transport
skipping to change at page 2, line 11
   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 30, 2012.
Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
skipping to change at page 3, line 17
   1.  Requirements Language . . . . . . . . . . . . . . . . . . . .  4
   2.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .  4
   3.  Coupled Congestion Control Algorithm  . . . . . . . . . . . .  6
   4.  Implementation Considerations . . . . . . . . . . . . . . . .  7
     4.1.  Computing alpha in Practice . . . . . . . . . . . . . . .  8
     4.2.  Implementation Considerations when CWND is Expressed
           in Packets  . . . . . . . . . . . . . . . . . . . . . . .  9
   5.  Discussion  . . . . . . . . . . . . . . . . . . . . . . . . . 10
   6.  Security Considerations . . . . . . . . . . . . . . . . . . . 11
   7.  Acknowledgements  . . . . . . . . . . . . . . . . . . . . . . 11
   8.  IANA Considerations . . . . . . . . . . . . . . . . . . . . . 12
   9.  References  . . . . . . . . . . . . . . . . . . . . . . . . . 12
     9.1.  Normative References  . . . . . . . . . . . . . . . . . . 12
     9.2.  Informative References  . . . . . . . . . . . . . . . . . 12
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . . 13
1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].
2.  Introduction

   Multipath TCP (MPTCP, [I-D.ietf-mptcp-multiaddressed]) is a set of
   extensions to regular TCP [RFC0793] that allows one TCP connection to
   be spread across multiple paths.  MPTCP distributes load through the
   creation of separate "subflows" across potentially disjoint paths.

   How should congestion control be performed for multipath TCP?  First,
   each subflow must have its own congestion control state (i.e. cwnd)
   so that capacity on that path is matched by offered load.  The
   simplest way to achieve this goal is to run standard TCP congestion
   control on each subflow.  However, this solution is unsatisfactory,
   as it gives the multipath flow an unfair share when
skipping to change at page 6, line 26
   time (i.e. the smoothed round trip time estimate used by TCP) and
   maximum segment size on subflow i.

   We assume throughout this document that the congestion window is
   maintained in bytes, unless otherwise specified.  We briefly describe
   the algorithm for packet-based implementations of cwnd in
   Section 4.2.
   Our proposed "Linked Increases" algorithm MUST:
   o  For each ack received on subflow i, increase cwnd_i by

            alpha * bytes_acked * mss_i   bytes_acked * mss_i
      min ( --------------------------- , ------------------- )   (1)
                     tot_cwnd                   cwnd_i

   The increase formula (1) takes the minimum between the computed
   increase for the multipath subflow (first argument to min) and the
   increase TCP would get in the same scenario (the second argument).
   In this way, we ensure that any multipath subflow cannot be more
   aggressive than a TCP flow in the same circumstances, hence achieving
   goal 2 (do no harm).
"alpha" is a parameter of the algorithm that describes the "alpha" is a parameter of the algorithm that describes the
aggresiveness of the multipath flow. To meet Goal 1 (improve aggresiveness of the multipath flow. To meet Goal 1 (improve
throughput), the value of alpha is chosen such that the aggregate throughput), the value of alpha is chosen such that the aggregate
throughput of the multipath flow is equal to the throughput a TCP throughput of the multipath flow is equal to the throughput a TCP
flow would get if it ran on the best path. flow would get if it ran on the best path.
   To get an intuition of what the algorithm is trying to do, let's take
   the case where all the subflows have the same round trip time and
   MSS.  In this case the algorithm will grow the total window by
   approximately alpha*MSS per RTT.  This increase is distributed to the
   individual flows according to their instantaneous window size.
   Subflow i will increase by alpha*cwnd_i/tot_cwnd segments per RTT.

   Note that, as in standard TCP, when tot_cwnd is large the increase
   may be 0.  In this case the increase MUST be set to 1.  We discuss
   how to implement this formula in practice in the next section.
   We assume implementations use an approach similar to appropriate byte
   counting (ABC, [RFC3465]), where the bytes_acked variable records the
   number of bytes newly acknowledged.  If this is not the case,
   bytes_acked SHOULD be set to mss_i.
   To compute tot_cwnd, it is an easy mistake to sum up cwnd_i across
   all subflows: when a flow is in fast retransmit, its cwnd is
   typically inflated and no longer represents the real congestion
   window.  The correct behavior is to use the ssthresh value for flows
   in fast retransmit when computing tot_cwnd.  To cater for connections
   that are app-limited, the computation should consider the minimum
   between flight_size_i and cwnd_i, and flight_size_i and ssthresh_i,
   where appropriate.
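   The tot_cwnd computation described above can be sketched as follows.
   The field names (cwnd, ssthresh, flight_size, in_fast_retransmit)
   are our own illustrative bookkeeping, not names from the draft.

```python
def compute_tot_cwnd(subflows):
    """Sum the effective windows across subflows, per the rules above.

    Each subflow is a dict with hypothetical keys cwnd, ssthresh,
    flight_size (all in bytes) and in_fast_retransmit.  Flows in fast
    retransmit contribute ssthresh instead of the inflated cwnd;
    app-limited flows contribute at most their flight size.
    """
    total = 0
    for f in subflows:
        window = f["ssthresh"] if f["in_fast_retransmit"] else f["cwnd"]
        total += min(window, f["flight_size"])
    return total
```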
skipping to change at page 7, line 35
   impossible to choose a priori a single value of alpha that achieves
   the desired throughput in every occasion.  Hence, alpha must be
   computed based on the observed properties of the paths.
   The formula to compute alpha is:

                            cwnd_i
                      max --------
                       i        2
                           rtt_i
   alpha = tot_cwnd * ----------------           (2)
                      /      cwnd_i  \ 2
                      | sum  ------- |
                      \  i    rtt_i  /

   The formula (2) is derived by equalizing the rate of the multipath
   flow with the rate of a TCP running on the best path, and solving for
   alpha.
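   As a floating-point reference, formula (2) can be computed as in the
   sketch below (illustrative names; real stacks use the integer
   approximation of Section 4.1).

```python
def compute_alpha(subflows):
    """alpha per formula (2).

    subflows is a list of (cwnd, rtt) pairs, cwnd in bytes and rtt in
    seconds.  Hypothetical helper, not from the draft.
    """
    tot_cwnd = sum(cwnd for cwnd, _ in subflows)
    # Numerator: the best path's cwnd_i / rtt_i^2 term.
    best_term = max(cwnd / (rtt * rtt) for cwnd, rtt in subflows)
    # Denominator: (sum of cwnd_i / rtt_i) squared.
    denom = sum(cwnd / rtt for cwnd, rtt in subflows) ** 2
    return tot_cwnd * best_term / denom
```

   A useful sanity check: for a single subflow the expression reduces
   to 1, i.e. the increase rule degenerates to standard TCP behaviour.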
4.  Implementation Considerations
   Equation (2) implies that alpha is a floating point value.  This
   would require performing costly floating point operations whenever an
   ACK is received.  Further, in many kernels floating point operations
   are disabled.  There is an easy way to approximate the above
   calculations using integer arithmetic.
4.1.  Computing alpha in Practice

   Let alpha_scale be an integer.  When computing alpha, use alpha_scale
   * tot_cwnd instead of tot_cwnd, and do all the operations in integer
   arithmetic.
   Then, scale down the increase per ack by alpha_scale.  The resulting
   algorithm is a simple change from Equation (1):

   o  For each ack received on subflow i, increase cwnd_i by

            alpha * bytes_acked * mss_i   bytes_acked * mss_i
      min ( --------------------------- , ------------------- )   (3)
              alpha_scale * tot_cwnd           cwnd_i

   The alpha_scale parameter denotes the precision we want for computing
   alpha.  Observe that the errors in computing the numerator or the
   denominator in the formula for alpha are quite small, as the cwnd in
   bytes is typically much larger than the RTT (measured in ms).
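   An integer-only sketch of the scaled update (3), assuming alpha has
   already been computed pre-multiplied by alpha_scale as described
   above (names are illustrative):

```python
ALPHA_SCALE = 512  # a small power of two, as suggested below

def linked_increase_scaled(cwnd_i, tot_cwnd, alpha_scaled, bytes_acked, mss_i):
    """Per-ack increase per formula (3), using only integer arithmetic.

    alpha_scaled is alpha * ALPHA_SCALE, recomputed once per RTT or per
    drop rather than on every ack.
    """
    # Scaled coupled increase; // keeps everything in integers.
    multipath_inc = (alpha_scaled * bytes_acked * mss_i) // (ALPHA_SCALE * tot_cwnd)
    # Standard TCP increase on this path.
    tcp_inc = (bytes_acked * mss_i) // cwnd_i
    return cwnd_i + min(multipath_inc, tcp_inc)
```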
   With these changes, all the operations can be done using integer
   arithmetic.  We propose alpha_scale be a small power of two, to allow
   using faster shift operations instead of multiplication and division.
   Our experiments show that using alpha_scale=512 works well in a wide
   range of scenarios.  Increasing alpha_scale increases precision, but
   also increases the risk of overflow when computing alpha.  Using
   64-bit operations would solve this issue.  Another option is to
   dynamically adjust alpha_scale when computing alpha; in this way we
   avoid overflow and obtain maximum precision.
skipping to change at page 8, line 50
   tot_cwnd.  If it does so, the implementation will update the
   tot_cwnd value whenever the individual subflows' windows are updated.
   Updating only requires one more addition or subtraction operation
   compared to the regular, per-subflow congestion control code, so its
   performance impact should be minimal.
   Computing alpha per ack is also costly.  We propose alpha to be a
   per-connection variable, computed whenever there is a drop and once
   per RTT otherwise.  More specifically, let cwnd_new be the new value
   of the congestion window after it is inflated or after a drop.
   Update alpha only if the quotient of cwnd_i/mss_i differs from the
   quotient of cwnd_new_i/mss_i.
   In certain cases with small RTTs, computing alpha can still be
   expensive.  We observe that if RTTs were constant, it is sufficient
   to compute alpha once per drop, as alpha does not change between
   drops (the insight here is that cwnd_i/cwnd_j = constant as long as
   both windows increase).  Experimental results show that even if round
   trip times are not constant, using the average round trip time per
   sawtooth instead of the instantaneous round trip time (i.e.  TCP's
   smoothed RTT estimator) gives good precision for computing alpha.
   Hence, it is possible to compute alpha only once per drop using a
   modified version of equation (2) where rtt_i is replaced with
   rtt_avg_i.
   If using the average round trip time, rtt_avg_i will be computed by
   sampling rtt_i whenever the window can accommodate one more packet,
   i.e. when cwnd / mss < (cwnd+increase)/mss.  The samples are averaged
   once per sawtooth into rtt_avg_i.  This sampling ensures that there
   is no sampling bias for larger windows.
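   The sampling rule above can be sketched as follows (the bookkeeping
   names are ours):

```python
def should_sample_rtt(cwnd, increase, mss):
    """True exactly when the window grows to fit one more packet.

    With integer division, cwnd // mss changes only at the points where
    an extra segment fits, which is the condition
    cwnd/mss < (cwnd+increase)/mss from the text.  Sampling only at
    these points avoids taking more samples merely because the window
    is larger.
    """
    return cwnd // mss < (cwnd + increase) // mss

def rtt_avg_per_sawtooth(samples):
    """Average the collected RTT samples once per sawtooth (at a drop)."""
    return sum(samples) / len(samples)
```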
   Given tot_cwnd and alpha, the congestion control algorithm is run for
   each subflow independently, with similar complexity to the standard
   TCP increase code [RFC5681].
skipping to change at page 9, line 49
   In the multipath case, cwnd_cnt_i is maintained for each subflow as
   above, and cwnd_i is increased by 1 when cwnd_cnt_i > max(alpha_scale
   * tot_cwnd / alpha, cwnd_i).
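   A sketch of this packet-based update follows.  cwnd_cnt_i is the
   per-subflow ack counter from the single-path description elided
   above; resetting it after an increase is a common implementation
   choice that we assume here, and the names are illustrative.

```python
ALPHA_SCALE = 512

def on_ack_packet(cwnd_i, cwnd_cnt_i, tot_cwnd, alpha_scaled):
    """Packet-based increase: grow cwnd_i by one packet when the ack
    counter exceeds max(alpha_scale * tot_cwnd / alpha, cwnd_i).

    alpha_scaled is the integer alpha of Section 4.1 (already scaled by
    ALPHA_SCALE), so alpha_scale * tot_cwnd / alpha reduces to the
    integer expression below.  Returns updated (cwnd_i, cwnd_cnt_i).
    """
    cwnd_cnt_i += 1
    if cwnd_cnt_i > max(ALPHA_SCALE * tot_cwnd // alpha_scaled, cwnd_i):
        cwnd_i += 1
        cwnd_cnt_i = 0  # reset after an increase (assumed behaviour)
    return cwnd_i, cwnd_cnt_i
```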
   When computing alpha for packet-based stacks, the errors in computing
   the terms in the denominator are larger (this is because cwnd is much
   smaller and rtt may be comparatively large).  Let max be the index of
   the subflow used in the numerator.  To reduce errors, it is easiest
   to move rtt_max (once calculated) from the numerator to the
   denominator, changing equation (2) to obtain the equivalent formula
   below.
                                           cwnd_max
   alpha = alpha_scale * tot_cwnd * -----------------------   (4)
                                    / rtt_max * cwnd_i \ 2
                                    | sum ------------- |
                                    \  i      rtt_i     /
   Note that the calculation of alpha does not take into account path
   mss, and is the same for stacks that keep cwnd in bytes or packets.
   With this formula, the algorithm for computing alpha will match the
   rate of TCP on the best path in B/s for byte-oriented stacks, and in
   packets/s in packet-based stacks.  In practice, mss rarely changes
   between paths, so this shouldn't be a problem.
   However, it is simple to derive formulae allowing packet-based stacks
   to achieve byte-rate fairness (and vice versa) if needed.  In
   particular, for packet-based stacks wanting byte-rate fairness,
   equation (4) above changes as follows: cwnd_max is replaced by
   cwnd_max * mss_max * mss_max, while cwnd_i is replaced with cwnd_i *
   mss_i.
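   Formula (4) can be sketched in integer arithmetic as follows.  RTTs
   are taken in some fixed integer unit (e.g. milliseconds) and all
   names are illustrative.

```python
ALPHA_SCALE = 512

def compute_alpha_packets(subflows):
    """Scaled alpha for packet-based stacks, per formula (4).

    subflows is a list of (cwnd_in_packets, rtt) pairs with integer
    RTTs.  rtt_max belongs to the subflow maximizing cwnd_i/rtt_i^2
    (the numerator of formula (2)); folding it into the denominator
    keeps each rtt_max * cwnd_i // rtt_i term well above 1, limiting
    rounding error.
    """
    tot_cwnd = sum(cwnd for cwnd, _ in subflows)
    cwnd_max, rtt_max = max(subflows, key=lambda s: s[0] / (s[1] * s[1]))
    denom = sum(rtt_max * cwnd // rtt for cwnd, rtt in subflows) ** 2
    return ALPHA_SCALE * tot_cwnd * cwnd_max // denom
```

   As a check, a single subflow yields alpha_scale itself (alpha = 1),
   and two identical subflows yield alpha_scale/2 (alpha = 0.5),
   matching the byte-based formula (2).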
5.  Discussion

   The algorithm we've presented fully achieves Goals 1 and 2, but does
   not achieve full resource pooling (Goal 3).  Resource pooling
   requires that no traffic should be transferred on links with higher
   loss rates.  To achieve perfect resource pooling, one must couple
   both the increase and decrease of congestion windows across subflows,
   as in [KELLY].
skipping to change at page 11, line 14
   constant, for all i.  Thus, when the loss rates of the subflows are
   equal, each subflow will get an equal window, removing flappiness.
   When the loss rates differ, progressively more window will be
   allocated to the flow with the lower loss rate.  In contrast, perfect
   resource pooling requires that all the window should be allocated on
   the path with the lowest loss rate.  Further details can be found in
   [NSDI].
6.  Security Considerations

   One security concern relates to what we call the traffic-shifting
   attack: on-path attackers can drop packets belonging to a multipath
   subflow, which in turn makes the path seem congested and will force
   the sender's congestion controller to avoid that path and push more
   data over the alternate subflows.

   The attacker's goal is to create congestion on the corresponding
   alternative paths.  This behaviour is entirely feasible, but will
   only have minor effects: by design, the coupled congestion controller
   is less (or similarly) aggressive on any of its paths than a single
   TCP flow.  Thus, the biggest effect this attack can have is to make a
   multipath subflow as aggressive as a single TCP flow.

   Another effect of the traffic-shifting attack is that the new path
   can monitor all the traffic, whereas before it could only see a
   subset of the traffic.  We believe that if privacy is needed,
   splitting traffic across multiple paths with MPTCP is not the right
   solution in the first place; end-to-end encryption should be used
   instead.

   Besides the traffic-shifting attack mentioned above, the coupled
   congestion control algorithm defined in this draft adds no other
   security considerations to those found in
   [I-D.ietf-mptcp-multiaddressed] and [RFC6181].  Detailed security
   analysis for the Multipath TCP protocol itself is included in
   [I-D.ietf-mptcp-multiaddressed] and [RFC6181].
7.  Acknowledgements

   We thank Christoph Paasch for his suggestions for computing alpha in
   packet-based stacks.  The authors are supported by Trilogy
   (http://www.trilogy-project.org), a research project (ICT-216372)
   partially funded by the European Community under its Seventh
   Framework Program.  The views expressed here are those of the
   author(s) only.  The European Commission is not liable for any use
   that may be made of the information in this document.
skipping to change at page 12, line 24
              RFC 793, September 1981.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC5681]  Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
              Control", RFC 5681, September 2009.
9.2.  Informative References
   [I-D.ietf-mptcp-multiaddressed]
              Ford, A., Raiciu, C., Handley, M., and O. Bonaventure,
              "TCP Extensions for Multipath Operation with Multiple
              Addresses", draft-ietf-mptcp-multiaddressed-04 (work in
              progress), July 2011.

   [KELLY]    Kelly, F. and T. Voice, "Stability of end-to-end
              algorithms for joint routing and rate control", ACM
              SIGCOMM CCR vol. 35 num. 2, pp. 5-12, 2005,
              <http://portal.acm.org/citation.cfm?id=1064415>.

   [NSDI]     Wischik, D., Raiciu, C., Greenhalgh, A., and M. Handley,
              "Design, Implementation and Evaluation of Congestion
              Control for Multipath TCP", Usenix NSDI, March 2011,
              <http://www.cs.ucl.ac.uk/staff/c.raiciu/files/
              mptcp-nsdi.pdf>.
 End of changes. 24 change blocks. 
59 lines changed or deleted 89 lines changed or added

This html diff was produced by rfcdiff 1.41. The latest version is available from http://tools.ietf.org/tools/rfcdiff/