IP Performance Working Group                                  M. Mathis
Internet-Draft                                              Google, Inc
Intended status: Experimental                                 A. Morton
Expires: September 10, 2015                                   AT&T Labs
                                                           March 9, 2015

                 Model Based Bulk Performance Metrics
              draft-ietf-ippm-model-based-metrics-04.txt

Abstract

We introduce a new class of model based metrics designed to determine
if an end-to-end Internet path can meet predefined bulk transport
performance targets by applying a suite of IP diagnostic tests to
successive subpaths.  The subpath-at-a-time tests can be robustly
applied to key infrastructure, such as interconnects, to accurately
detect if any part of the infrastructure will prevent the full
end-to-end paths traversing it from meeting the specified target
performance.

The diagnostic tests consist of precomputed traffic patterns and
statistical criteria for evaluating packet delivery.  The traffic
patterns are precomputed to mimic TCP or another transport protocol
over a long path, but are constructed in such a way that they are
independent of the actual details of the subpath under test, the end
systems, or the applications.  Likewise the success criteria depend
on the packet delivery statistics of the subpath, as evaluated
against a protocol model applied to the target performance, and not
on the details of the subpath, end systems or applications.  This
makes the measurements open loop, eliminating most of the
difficulties encountered by traditional bulk transport metrics.

Model based metrics exhibit several important new properties not
present in other Bulk Capacity Metrics, including the ability to
reason about concatenated or overlapping subpaths.  The results are
vantage independent, which is critical for supporting independent
validation of test results from multiple Measurement Points.

This document does not define diagnostic tests directly, but provides
a framework for designing suites of diagnostic tests that are
tailored to confirming that infrastructure can meet the target
performance.

Status of this Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on September 10, 2015.

Copyright Notice

Copyright (c) 2015 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.

Table of Contents

1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  5
  1.1.  TODO . . . . . . . . . . . . . . . . . . . . . . . . . . .  7
2.  Terminology  . . . . . . . . . . . . . . . . . . . . . . . . .  7
3.  New requirements relative to RFC 2330  . . . . . . . . . . . . 11
4.  Background . . . . . . . . . . . . . . . . . . . . . . . . . . 12
  4.1.  TCP properties . . . . . . . . . . . . . . . . . . . . . . 13
  4.2.  Diagnostic Approach  . . . . . . . . . . . . . . . . . . . 14
5.  Common Models and Parameters . . . . . . . . . . . . . . . . . 15
  5.1.  Target End-to-end parameters . . . . . . . . . . . . . . . 16
  5.2.  Common Model Calculations  . . . . . . . . . . . . . . . . 16
  5.3.  Parameter Derating . . . . . . . . . . . . . . . . . . . . 17
6.  Common testing procedures  . . . . . . . . . . . . . . . . . . 18
  6.1.  Traffic generating techniques  . . . . . . . . . . . . . . 18
    6.1.1.  Paced transmission . . . . . . . . . . . . . . . . . . 18
    6.1.2.  Constant window pseudo CBR . . . . . . . . . . . . . . 19
    6.1.3.  Scanned window pseudo CBR  . . . . . . . . . . . . . . 19
    6.1.4.  Concurrent or channelized testing  . . . . . . . . . . 20
  6.2.  Interpreting the Results . . . . . . . . . . . . . . . . . 21
    6.2.1.  Test outcomes  . . . . . . . . . . . . . . . . . . . . 21
    6.2.2.  Statistical criteria for estimating run_length . . . . 22
    6.2.3.  Reordering Tolerance . . . . . . . . . . . . . . . . . 24
  6.3.  Test Preconditions . . . . . . . . . . . . . . . . . . . . 25
7.  Diagnostic Tests . . . . . . . . . . . . . . . . . . . . . . . 25
  7.1.  Basic Data Rate and Delivery Statistics Tests  . . . . . . 26
    7.1.1.  Delivery Statistics at Paced Full Data Rate  . . . . . 26
    7.1.2.  Delivery Statistics at Full Data Windowed Rate . . . . 27
    7.1.3.  Background Delivery Statistics Tests . . . . . . . . . 27
  7.2.  Standing Queue Tests . . . . . . . . . . . . . . . . . . . 27
    7.2.1.  Congestion Avoidance . . . . . . . . . . . . . . . . . 29
    7.2.2.  Bufferbloat  . . . . . . . . . . . . . . . . . . . . . 29
    7.2.3.  Non excessive loss . . . . . . . . . . . . . . . . . . 30
    7.2.4.  Duplex Self Interference . . . . . . . . . . . . . . . 30
  7.3.  Slowstart tests  . . . . . . . . . . . . . . . . . . . . . 30
    7.3.1.  Full Window slowstart test . . . . . . . . . . . . . . 31
    7.3.2.  Slowstart AQM test . . . . . . . . . . . . . . . . . . 31
  7.4.  Sender Rate Burst tests  . . . . . . . . . . . . . . . . . 31
  7.5.  Combined and Implicit Tests  . . . . . . . . . . . . . . . 32
    7.5.1.  Sustained Bursts Test  . . . . . . . . . . . . . . . . 32
    7.5.2.  Streaming Media  . . . . . . . . . . . . . . . . . . . 33
8.  An Example . . . . . . . . . . . . . . . . . . . . . . . . . . 34
9.  Validation . . . . . . . . . . . . . . . . . . . . . . . . . . 36
10. Security Considerations  . . . . . . . . . . . . . . . . . . . 37
11. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 37
12. IANA Considerations  . . . . . . . . . . . . . . . . . . . . . 38
13. References . . . . . . . . . . . . . . . . . . . . . . . . . . 38
  13.1.  Normative References  . . . . . . . . . . . . . . . . . . 38
  13.2.  Informative References  . . . . . . . . . . . . . . . . . 38
Appendix A.  Model Derivations . . . . . . . . . . . . . . . . . . 40
  A.1.  Queueless Reno . . . . . . . . . . . . . . . . . . . . . . 41
  A.2.  CUBIC  . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Appendix B.  Complex Queueing  . . . . . . . . . . . . . . . . . . 42
Appendix C.  Version Control . . . . . . . . . . . . . . . . . . . 43
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 43

1.  Introduction

Bulk performance metrics evaluate an Internet path's ability to carry
bulk data.  Model based bulk performance metrics rely on mathematical
TCP models to design a targeted diagnostic suite (TDS) of IP
performance tests which can be applied independently to each subpath

skipping to change at page 5, line 51

deterministically in ways that minimize the extent to which test
methodology, measurement points, measurement vantage or path
partitioning affect the details of the measurement traffic.

Mathematical models are also used to compute the bounds on the packet
delivery statistics for acceptable IP performance.  Since these
statistics, such as packet loss, are typically aggregated from all
subpaths of the end-to-end path, the end-to-end statistical bounds
need to be apportioned as a separate bound for each subpath.  Note
that links that are expected to be bottlenecks are also expected to
contribute a larger fraction of the total packet loss and/or delay.
In compensation, other links have to be constrained to contribute
less packet loss and delay.  The criterion for passing each test of a
TDS is an apportioned share of the total bound determined by the
mathematical model from the end-to-end target performance.

In addition to passing or failing, a test can be deemed to be
inconclusive for a number of reasons, including: the precomputed
traffic pattern was not accurately generated; the measurement results
were not statistically significant; or other reasons such as failing
to meet required test preconditions.

This document describes a framework for deriving traffic patterns and
delivery statistics for model based metrics.  It does not fully
specify any measurement techniques.  Important details such as packet

skipping to change at page 6, line 51

intercarrier exchanges have sufficient performance and capacity to
deliver HD video between ISPs.

There exists a small risk that a model based metric itself might
yield a false pass result, in the sense that every subpath of an
end-to-end path passes every IP diagnostic test and yet a real
application fails to attain the performance target over the
end-to-end path.  If this happens, then the validation procedure
described in Section 9 needs to be used to prove and potentially
revise the models.

Future documents may define model based metrics for other traffic
classes and application types, such as real time streaming media.

1.1.  TODO

This section is to be removed prior to publication.

Please send comments about this draft to ippm@ietf.org.  See
http://goo.gl/02tkD for more information including: interim drafts,
an up to date todo list and information on contributing.

2.  Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in [RFC2119].

For terminology about paths, etc., see [RFC2330] and [RFC7398].

[data] sender:  Host sending data and receiving ACKs.
[data] receiver:  Host receiving data and sending ACKs.
subpath:  A portion of the full path.  Note that there is no
   requirement that subpaths be non-overlapping.
Measurement Point:  Measurement points as described in [RFC7398].
test path:  A path between two measurement points that includes a
   subpath of the end-to-end path under test, and could include
   infrastructure between the measurement points and the subpath.
[Dominant] Bottleneck:  The Bottleneck that generally dominates
   traffic statistics for the entire path.  It typically determines a
   flow's self clock timing, packet loss and ECN marking rate.  See
   Section 4.1.
front path:  The subpath from the data sender to the dominant
   bottleneck.
back path:  The subpath from the dominant bottleneck to the receiver.
return path:  The path taken by the ACKs from the data receiver to
   the data sender.
cross traffic:  Other, potentially interfering, traffic competing for
   network resources (bandwidth and/or queue capacity).

Properties determined by the end-to-end path and application.  They
are described in more detail in Section 5.1.

Application Data Rate:  General term for the data rate as seen by the
   application above the transport layer.  This is the payload data
   rate, and excludes transport and lower level headers (TCP/IP or
   other protocols) as well as retransmissions and other data that
   does not contribute to the total quantity of data delivered to
   the application.
Link Data Rate:  General term for the data rate as seen by the link
   or lower layers.  The link data rate includes transport and IP
   headers, retransmissions and other transport layer overhead.  This
   document is agnostic as to whether the link data rate includes or
   excludes framing, MAC, or other lower layer overheads, except that
   they must be treated uniformly.
end-to-end target parameters:  Application or transport performance
   goals for the end-to-end path.  They include the target data rate,
   RTT and MTU described below.
Target Data Rate:  The application data rate, typically the ultimate
   user's performance goal.
Target RTT (Round Trip Time):  The baseline (minimum) RTT of the
   longest end-to-end path over which the application expects to be

skipping to change at page 9, line 8

   of each MTU not available for carrying application payload.
   Without loss of generality this is assumed to be the size for
   returning acknowledgements (ACKs).  For TCP, the Maximum Segment
   Size (MSS) is the Target MTU minus the header_overhead.

Basic parameters common to models and subpath tests.  They are
described in more detail in Section 5.2.  Note that these are mixed
between application transport performance (excludes headers) and link
IP performance (includes headers).

pipe size:  A general term for the number of packets needed in flight
   (the window size) to exactly fill some network path or subpath.
   This is the window size which normally marks the onset of
   queueing.
target_pipe_size:  The number of packets in flight (the window size)
   needed to exactly meet the target rate, with a single stream and
   no cross traffic for the specified application target data rate,
   RTT, and MTU.  It is the amount of circulating data required to
   meet the target data rate, and implies the scale of the bursts
   that the network might experience.
run length:  A general term for the observed, measured, or specified
   number of packets that are (to be) delivered between losses or ECN
   marks.  Nominally one over the loss or ECN marking probability, if
   losses and marks are independently and identically distributed.
target_run_length:  An estimate of the minimum number of good packets
   that must be delivered between losses or ECN marks in order to
   attain the target_data_rate over a path with the specified
   target_RTT and target_MTU, as computed by a mathematical model of
   TCP congestion control.  A reference calculation is shown in
   Section 5.2 and alternatives in Appendix A.

Ancillary parameters used for some tests:

derating:  Under some conditions the standard models are too
   conservative.  The modeling framework permits some latitude in
   relaxing or "derating" some test parameters, as described in
   Section 5.3, in exchange for more stringent TDS validation
   procedures, described in Section 9.
subpath_data_rate:  The maximum IP data rate supported by a subpath.
   This typically includes TCP/IP overhead, including headers,
   retransmits, etc.
test_path_RTT:  The RTT between two measurement points using
   appropriate data and ACK packet sizes.
test_path_pipe:  The amount of data necessary to fill a test path.
   Nominally the test path RTT times the subpath_data_rate (which
   should be part of the end-to-end subpath).
test_window:  The window necessary to meet the target_rate over a
   subpath.  Typically test_window=target_data_rate*test_RTT/
   (target_MTU - header_overhead).  (See the sketch below.)

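The following is a minimal sketch, in Python, of the test path
calculations above.  The numeric values (10 Mb/s, 30 ms, 1500 byte
MTU, 52 bytes of headers) are hypothetical examples, not values drawn
from this document, and expressing test_path_pipe in target_MTU sized
packets is an assumption of the sketch.

   import math

   def test_window(target_data_rate, test_rtt, target_mtu,
                   header_overhead):
       # test_window = target_data_rate * test_RTT
       #               / (target_MTU - header_overhead), in packets.
       payload_bits = (target_mtu - header_overhead) * 8
       return math.ceil(target_data_rate * test_rtt / payload_bits)

   def test_path_pipe(subpath_data_rate, test_path_rtt, target_mtu):
       # Nominally test_path_RTT * subpath_data_rate, expressed here
       # in packets of size target_MTU.
       return math.ceil(subpath_data_rate * test_path_rtt
                        / (target_mtu * 8))

   # Hypothetical example: 10 Mb/s target over a 30 ms test path.
   print(test_window(10e6, 0.030, 1500, 52))   # -> 26 packets
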
Tests can be classified into groups according to their applicability.

Capacity tests:  determine if a network subpath has sufficient
   capacity to deliver the target performance.  As long as the test
   traffic is within the proper envelope for the target end-to-end
   performance, the average packet losses or ECN marks must be below
   the threshold computed by the model.  As such, capacity tests
   reflect parameters that can transition from passing to failing as
   a consequence of cross traffic, additional presented load or the
   actions of other network users.  By definition, capacity tests
   also consume significant network resources (data capacity and/or
   buffer space), and the test schedules must be balanced by their
   cost.
Monitoring tests:  are designed to capture the most important aspects
   of a capacity test, but without presenting excessive ongoing load
   themselves.  As such they may miss some details of the network's
   performance, but can serve as a useful reduced-cost proxy for a
   capacity test.
Engineering tests:  evaluate how network algorithms (such as AQM and
   channel allocation) interact with TCP-style self clocked protocols
   and adaptive congestion control based on packet loss and ECN
   marks.  These tests are likely to have complicated interactions
   with cross traffic and under some conditions can be inversely
   sensitive to load.  For example a test to verify that an AQM
   algorithm causes ECN marks or packet drops early enough to limit
   queue occupancy may experience a false pass result in the presence
   of cross traffic.  It is important that engineering tests be
   performed under a wide range of conditions, including both in situ
   and bench testing, and over a wide variety of load conditions.
   Ongoing monitoring is less likely to be useful for engineering
   tests, although sparse in situ testing might be appropriate.

General Terminology:

Targeted Diagnostic Test (TDS):  A set of IP Diagnostics designed to
   determine if a subpath can sustain flows at a specific
   target_data_rate over a path that has a target_RTT using
   target_MTU sized packets.
Fully Specified Targeted Diagnostic Test:  A TDS together with
   additional specifications such as "type-p", etc., which are out of
   scope for this document, but need to be drawn from other standards
   documents.
apportioned:  To divide and allocate, as in budgeting packet loss
   rates across multiple subpaths so that they accumulate to less
   than a specified end-to-end loss rate.
open loop:  A control theory term used to describe a class of
   techniques where systems that naturally exhibit circular
   dependencies can be analyzed by suppressing some of the
   dependencies, such that the resulting dependency graph is acyclic.
Bulk performance metrics:  Bulk performance metrics evaluate an
   Internet path's ability to carry bulk data, such as transporting
   large files, streaming (non-real time) video, and at some scales,
   web images and content.  (For very fast networks, web performance
   is dominated by pure RTT effects.)  The metrics presented in this
   document reflect the evolution of [RFC3148].
traffic patterns:  The temporal patterns or statistics of traffic
   generated by applications over transport protocols such as TCP.
   There are several mechanisms that cause bursts at various time
   scales.  Our goal here is to mimic the range of common patterns
   (burst sizes and rates, etc.), without tying our applicability to
   specific applications, implementations or technologies, which are
   sure to become stale.
Delivery Statistics:  Raw or summary statistics about packet delivery
   properties of the IP layer including packet losses, ECN marks,
   reordering, or any other properties that may be germane to
   transport performance.
IP performance tests:  Measurements or diagnostic tests to determine
   delivery statistics.

3.  New requirements relative to RFC 2330

Model Based Metrics are designed to fulfill some additional
requirements that were not recognized at the time RFC 2330 was
written [RFC2330].  These missing requirements may have significantly
contributed to policy difficulties in the IP measurement space.  Some
additional requirements are:

o  IP metrics must be actionable by the ISP - they have to be
   interpreted in terms of behaviors or properties at the IP or lower
   layers, that an ISP can test, repair and verify.
o  Metrics should be spatially composable, such that measures of
   concatenated paths should be predictable from subpaths.  Ideally
   they should also be differentiable: the metrics of a subpath
   should be deducible from end-to-end measurements of the path and
   measurements of the remaining subpaths.
o  Metrics must be vantage point invariant over a significant range
   of measurement point choices, including off path measurement
   points.  The only requirements on MP selection should be that the
   portion of the test path between the MP and the part under test is
   effectively ideal, or is non-ideal in ways that can be calibrated
   out of the measurements, and that the test RTT between the MPs is
   below some reasonable bound.
o  Metrics must be repeatable by multiple parties with no specialized
   access to MPs or diagnostic infrastructure.  It must be possible
   for different parties to make the same measurement and observe the
   same results.  In particular it is specifically important that
   both a consumer (or their delegate) and ISP be able to perform the
   same measurement and get the same result.  Note that vantage
   independence is key to this requirement.

NB: All of the metric requirements in RFC 2330 should be reviewed and
potentially revised. If such a document is opened soon enough, this
entire section should be dropped.

4.  Background

At the time the IPPM WG was chartered, sound Bulk Transport Capacity
measurement was known to be way beyond our capabilities.  In
hindsight it is now clear why it is such a hard problem:

o  TCP is a control system with circular dependencies - everything
   affects performance, including components that are explicitly not
   part of the test.
o  Congestion control is an equilibrium process, such that transport
   protocols change the network (raise loss probability and/or RTT)
   to conform to their behavior.
o  TCP's ability to compensate for network flaws is directly
   proportional to the number of roundtrips per second (i.e.
   inversely proportional to the RTT).  As a consequence a flawed
   link may pass a short RTT local test even though it fails when the
   path is extended by a perfect network to some larger RTT.
o  TCP has a meta Heisenberg problem - Measurement and cross traffic
   interact in unknown and ill defined ways.  The situation is
   actually worse than the traditional physics problem where you can
   at least estimate bounds on the relative momentum of the
   measurement and measured particles.  For network measurement you
   can not in general determine the relative "elasticity" of the
   measurement traffic and cross traffic, so you can not even gauge
   the relative magnitude of their effects on each other.

These properties are a consequence of the equilibrium behavior
intrinsic to how all throughput optimizing protocols interact with
the Internet.  The protocols rely on control systems based on
multiple network estimators to regulate the quantity of data traffic
sent into the network.  The data traffic in turn alters the network
and the properties observed by the estimators, such that there are
circular dependencies between every component and every property.
Since some of these properties are non-linear, the entire system is
nonlinear, and any change anywhere causes difficult to predict
changes in every parameter.

Model Based Metrics overcome these problems by forcing the
measurement system to be open loop: the delivery statistics (akin to
the network estimators) do not affect the traffic or traffic patterns
(bursts), which are computed on the basis of the target performance.
In order for a network to pass, the resulting delivery statistics and
corresponding network estimators have to be such that they would not
cause the control systems to slow the traffic below the target rate.

4.1.  TCP properties

TCP and SCTP are self clocked protocols.  The dominant steady state
behavior is to have an approximately fixed quantity of data and
acknowledgements (ACKs) circulating in the network.  The receiver
reports arriving data by returning ACKs to the data sender, and the
data sender typically responds by sending exactly the same quantity
of data back into the network.  The total quantity of data plus the
data represented by ACKs circulating in the network is referred to as
the

skipping to change at page 14, line 18

To verify that a path can meet a performance target, it is necessary
to independently confirm that the path can tolerate bursts in the
dimensions that can be caused by these mechanisms.  Three cases are
likely to be sufficient:

o  Slowstart bursts sufficient to get connections started properly.
o  Frequent sender interface rate bursts that are small enough that
   they can be assumed not to significantly affect delivery
   statistics.  (Implicitly derated by selecting the burst size.)
o  Infrequent sender interface rate full target_pipe_size bursts that
   do affect the delivery statistics.  (Target_run_length may be
   derated.)

4.2.  Diagnostic Approach

The MBM approach is to open loop TCP by precomputing traffic patterns
that are typically generated by TCP operating at the given target
parameters, and evaluating delivery statistics (packet loss, ECN
marks and delay).  In this approach the measurement software
explicitly controls the data rate, transmission pattern or cwnd
(TCP's primary congestion control state variables) to create

skipping to change at page 15, line 9

In typical networks, the dominant bottleneck contributes the majority
of the packet loss and ECN marks.  Often the rest of the path makes
an insignificant contribution to these properties.  A TDS should
apportion the end-to-end budget for the specified parameters
(primarily packet loss and ECN marks) to each subpath or group of
subpaths.  For example the dominant bottleneck may be permitted to
contribute 90% of the loss budget, while the rest of the path is only
permitted to contribute 10%.

A TDS or FSTDS MUST apportion all relevant packet delivery statistics
between successive subpaths, such that the spatial composition of the
apportioned metrics will yield end-to-end statistics which are within
the bounds determined by the models, as illustrated in the sketch
below.

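The following sketch illustrates apportionment and spatial
composition.  It is illustrative only: the independent-loss model,
the 90%/10% split and the 1-in-10000 end-to-end budget are assumed
for the example and are not normative.

   # Apportion an end-to-end loss budget across subpaths, then check
   # the spatial composition.  Assumes losses on different subpaths
   # are independent, so the end-to-end delivery probability is the
   # product of the per-subpath delivery probabilities.

   def apportion(end_to_end_loss_rate, shares):
       # shares: fraction of the budget per subpath, summing to 1.0,
       # e.g. [0.9, 0.1] for a dominant bottleneck plus the rest.
       assert abs(sum(shares) - 1.0) < 1e-9
       return [end_to_end_loss_rate * s for s in shares]

   def composed_loss_rate(subpath_loss_rates):
       delivered = 1.0
       for p in subpath_loss_rates:
           delivered *= 1.0 - p
       return 1.0 - delivered

   # Hypothetical example: 1-in-10000 end-to-end budget, split 90/10.
   per_subpath = apportion(1e-4, [0.9, 0.1])    # [9e-05, 1e-05]
   assert composed_loss_rate(per_subpath) <= 1e-4
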
A network is expected to be able to sustain a Bulk TCP flow of a
given data rate, MTU and RTT when all of the following conditions are
met:

1.  The raw link rate is higher than the target data rate.  See
    Section 7.1 or any number of data rate tests outside of MBM.
2.  The observed packet delivery statistics are better than required
    by a suitable TCP performance model (e.g. fewer losses or ECN
    marks).  See Section 7.1 or any number of low rate packet loss
    tests outside of MBM.
3.  There is sufficient buffering at the dominant bottleneck to
    absorb a slowstart rate burst large enough to get the flow out of
    slowstart at a suitable window size.  See Section 7.3.
4.  There is sufficient buffering in the front path to absorb and
    smooth sender interface rate bursts at all scales that are likely
    to be generated by the application, any channel arbitration in
    the ACK path or any other mechanisms.  See Section 7.4.
5.  When there is a standing queue at a bottleneck for a shared media
    subpath (e.g. half duplex), there are suitable bounds on how the
    data and ACKs interact, for example due to the channel
    arbitration mechanism.  See Section 7.2.4.
6.  When there is a slowly rising standing queue at the bottleneck
    the onset of packet loss has to be at an appropriate point (time
    or queue depth) and progressive.  See Section 7.2.

Note that conditions 1 through 4 require load tests for confirmation,
and thus need to be monitored on an ongoing basis.  Conditions 5 and
6 require engineering tests.  They won't generally fail due to load,
but may fail in the field due to configuration errors, etc. and
should be spot checked.

We are developing a tool that can perform many of the tests described
here [MBMSource].

5.  Common Models and Parameters

5.1.  Target End-to-end parameters

The target end-to-end parameters are the target data rate, target RTT
and target MTU as defined in Section 2.  These parameters are
determined by the needs of the application or the ultimate end user
and the end-to-end Internet path over which the application is
expected to operate.  The target parameters are in units that make
sense to upper layers: payload bytes delivered to the application,
above TCP.  They exclude overheads associated with TCP and IP
headers, retransmits and other protocols (e.g. DNS).

skipping to change at page 16, line 23

Other end-to-end parameters defined in Section 2 include the
effective bottleneck data rate, the sender interface data rate and
the TCP/IP header sizes (overhead).

The target data rate must be smaller than all link data rates by
enough headroom to carry the transport protocol overhead, explicitly
including retransmissions and an allowance for fluctuations in the
actual data rate, needed to meet the specified average rate.
Specifying a target rate with insufficient headroom is likely to
result in brittle measurements having little predictive value.

Note that the target parameters can be specified for a hypothetical
path, for example to construct a TDS designed for bench testing in
the absence of a real application, or for a real physical test, for
in situ testing of production infrastructure.

The number of concurrent connections is explicitly not a parameter to
this model.  If a subpath requires multiple connections in order to
meet the specified performance, that must be stated explicitly and

skipping to change at page 16, line 45

5.2.  Common Model Calculations

The end-to-end target parameters are used to derive the
target_pipe_size and the reference target_run_length.

The target_pipe_size is the average window size in packets needed to
meet the target rate, for the specified target RTT and MTU.  It is
given by:

   target_pipe_size = ceiling( target_rate * target_RTT
                               / ( target_MTU - header_overhead ) )

Target_run_length is an estimate of the minimum required number of
unmarked packets that must be delivered between losses or ECN marks,
as computed by a mathematical model of TCP congestion control.  The
derivation here follows [MSMO97], and by design is quite
conservative.  The alternate models described in Appendix A generally
yield smaller run_lengths (higher acceptable loss or ECN marking
rates), but may not apply in all situations.  A FSTDS that uses an
alternate model MUST compare it to the reference target_run_length
computed here.

Reference target_run_length is derived as follows: assume the Reference target_run_length is derived as follows: assume the
subpath_data_rate is infinitesimally larger than the target_data_rate subpath_data_rate is infinitesimally larger than the target_data_rate
plus the required header_overhead. Then target_pipe_size also plus the required header_overhead. Then target_pipe_size also
predicts the onset of queueing. A larger window will cause a predicts the onset of queueing. A larger window will cause a
standing queue at the bottleneck. standing queue at the bottleneck.
Assume the transport protocol is using standard Reno style Additive
Increase, Multiplicative Decrease congestion control [RFC5681] (but
not Appropriate Byte Counting [RFC3465]) and the receiver is using
standard delayed ACKs.  Reno increases the window by one packet every
pipe_size worth of ACKs.  With delayed ACKs this takes 2 Round Trip
Times per increase.  To exactly fill the pipe, losses must be no
closer together than the point where the peak of the AIMD sawtooth
reaches exactly twice the target_pipe_size; otherwise the
multiplicative window reduction triggered by the loss would cause the
network to be underfilled.
Following [MSMO97] the number of packets between losses must be the
area under the AIMD sawtooth.  They must be no more frequent than
every 1 in ((3/2)*target_pipe_size)*(2*target_pipe_size) packets,
which simplifies to:

   target_run_length = 3*(target_pipe_size^2)
Note that this calculation is very conservative and is based on a
number of assumptions that may not apply.  Appendix A discusses these
assumptions and provides some alternative models.  If a different
model is used, a fully specified TDS or FSTDS MUST document the
actual method for computing target_run_length and the ratio between
the alternate target_run_length and the reference target_run_length
calculated above, along with a discussion of the rationale for the
underlying assumptions.
These two parameters, target_pipe_size and target_run_length,
directly imply most of the individual parameters for the tests in
Section 7.
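
For illustration, the two model calculations can be written as a
short Python sketch.  This is not part of the specification; the
example parameter values (1 Gb/s target rate, 100 ms target RTT,
1500 byte MTU, 64 bytes of header overhead) are assumptions chosen
only to show the arithmetic:

   import math

   def model_parameters(target_rate, target_RTT, target_MTU,
                        header_overhead):
       # target_rate is in bytes/second, target_RTT in seconds,
       # target_MTU and header_overhead in bytes.
       payload = target_MTU - header_overhead
       target_pipe_size = math.ceil(target_rate * target_RTT / payload)
       # Reference run length per the [MSMO97] derivation above.
       target_run_length = 3 * target_pipe_size ** 2
       return target_pipe_size, target_run_length

   pipe, run = model_parameters(125000000, 0.100, 1500, 64)
   # pipe == 8705 packets; run == 227331075 packets, i.e. less than
   # one loss or ECN mark per roughly 2.3e8 delivered packets.

Note how quickly target_run_length grows: it scales with the square
of target_pipe_size, which is why high rate, long RTT paths require
such long intervals between losses.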
5.3. Parameter Derating

Since some aspects of the models are very conservative, the MBM
framework permits some latitude in derating test parameters.  Rather
than trying to formalize more complicated models we permit some test
parameters to be relaxed as long as they meet some additional
procedural constraints:
o  The TDS or FSTDS MUST document and justify the actual method used
   to compute the derated metric parameters.
o  The validation procedures described in Section 9 must be used to
   demonstrate the feasibility of meeting the performance targets
   with infrastructure that infinitesimally passes the derated tests.
o  The validation process itself must be documented in such a way
   that other researchers can duplicate the validation experiments.
Except as noted, all tests below assume no derating.  Tests where
there is not currently a well established model for the required
6. Common testing procedures

6.1. Traffic generating techniques

6.1.1. Paced transmission
Paced (burst) transmissions: send bursts of data on a timer to meet a
particular target rate and pattern.  In all cases the specified data
rate can either be the application or link rates.  Header overheads
must be included in the calculations as appropriate.
Headway: Time interval between packets or bursts, specified from the
   start of one to the start of the next.  e.g. If packets are sent
   with a 1 ms headway, there will be exactly 1000 packets per
   second.
Paced single packets: Send individual packets at the specified rate
   or headway.
Burst: Send sender interface rate bursts on a timer.  Specify any 3
   of: average rate, packet size, burst size (number of packets) and
   burst headway (burst start to start).  These bursts are typically
   sent as back-to-back packets at the tester's interface rate.
Slowstart bursts: Send 4 packet sender interface rate bursts at an
   average data rate equal to twice the effective bottleneck link
   rate (but not more than the sender interface rate).  This
   corresponds to the average rate during a TCP slowstart when
   Appropriate Byte Counting [RFC3465] is present or delayed ack is
   disabled.  Note that if the effective bottleneck link rate is more
   than half of the sender interface rate, slowstart rate bursts
   become sender interface rate bursts.
Repeated Slowstart bursts: Slowstart bursts are typically part of a
   larger scale pattern of repeated bursts, such as sending
   target_pipe_size packets as slowstart bursts on a target_RTT
   headway (burst start to burst start).  Such a stream has three
   different average rates, depending on the averaging interval.  At
   the finest time scale the average rate is the same as the sender
   interface rate, at a medium scale the average rate is twice the
   effective bottleneck link rate and at the longest time scales the
   average rate is equal to the target data rate.
Note that in conventional measurement theory, exponential
distributions are often used to eliminate many sorts of correlations.
For the procedures above, the correlations are created by the network
elements and accurately reflect their behavior.  At some point in the
future, it will be desirable to introduce noise sources into the
above pacing models, but they are not warranted at this time.
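
As an illustration of the pacing definitions, the following Python
sketch computes the headways implied by a repeated slowstart burst
pattern.  It is not part of the specification; the bottleneck rate,
packet size, pipe size and RTT below are assumed example values:

   def burst_headway(burst_size, packet_size, average_rate):
       # Burst start-to-start interval needed to achieve a given
       # average rate (sizes in bytes, rate in bytes/second).
       return burst_size * packet_size / average_rate

   # Slowstart bursts: 4 packet bursts at twice the effective
   # bottleneck rate, assumed here to be 10 Mb/s (1.25e6 B/s).
   ss_headway = burst_headway(4, 1500, 2 * 1.25e6)  # 0.0024 s

   # Repeated slowstart bursts: target_pipe_size (assume 20)
   # packets per target_RTT (assume 50 ms) gives 5 four-packet
   # bursts per RTT, and a long term average rate of
   # 20 * 1500 / 0.050 = 600000 bytes/second, the target rate.
   burst_offsets = [n * ss_headway for n in range(5)]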
6.1.2. Constant window pseudo CBR

Implement pseudo constant bit rate by running a standard protocol
such as TCP with a fixed window size, such that it is self clocked.
Data packets arriving at the receiver trigger acknowledgements (ACKs)
which travel back to the sender where they trigger additional
transmissions.  The window size is computed from the target_data_rate
and the actual RTT of the test path.  The rate is only maintained on
average over each RTT, and is subject to limitations of the transport
protocol.
Since the window size is constrained to be an integer number of
packets, for small RTTs or low data rates there may not be
sufficiently precise control over the data rate.  Rounding the window
size up (the default) is likely to result in data rates that are
higher than the target rate, but reducing the window by one packet
may result in data rates that are too small.  Also cross traffic
potentially raises the RTT, implicitly reducing the rate.  Cross
traffic that raises the RTT nearly always makes the test more
strenuous.  A FSTDS specifying a constant window CBR test MUST
explicitly indicate under what conditions errors in the data rate
cause tests to be inconclusive.  See the discussion of test outcomes
in Section 6.2.1.
Since constant window pseudo CBR testing is sensitive to RTT
fluctuations it cannot accurately control the data rate in
environments with fluctuating delays.
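
A minimal sketch of the window selection and the rate granularity
problem described above (assumed example values: a 500000 byte/second
target rate over a 10 ms RTT path, 1500 byte MTU, 64 bytes of header
overhead):

   import math

   def cbr_window(target_data_rate, rtt, mtu, header_overhead):
       # Window (in whole packets) needed to meet the target rate,
       # and the data rate that window actually produces.
       payload = mtu - header_overhead
       window = math.ceil(target_data_rate * rtt / payload)
       achieved = window * payload / rtt
       return window, achieved

   w, rate = cbr_window(500000, 0.010, 1500, 64)
   # w == 4 packets and rate == 574400 bytes/second, about 15%
   # above the 500000 bytes/second target, while a window of 3
   # packets would fall about 14% below it: the rate can only be
   # controlled to a granularity of one packet per RTT.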
6.1.3. Scanned window pseudo CBR

Scanned window pseudo CBR is similar to the constant window CBR
described above, except the window is scanned across a range of sizes
designed to include two key events, the onset of queueing and the
onset of packet loss or ECN marks.  The window is scanned by
incrementing it by one packet every 2*target_pipe_size delivered
packets.  This mimics the additive increase phase of standard TCP
congestion avoidance when delayed ACKs are in effect.  It normally
separates the window increases by approximately twice the
target_RTT.
There are two ways to implement this test: one built by applying a
window clamp to standard congestion control in a standard protocol
such as TCP and the other built by stiffening a non-standard
transport protocol.  When standard congestion control is in effect,
any losses or ECN marks cause the transport to revert to a window
smaller than the clamp such that the scanning clamp loses control of
the window size.  The NPAD pathdiag tool is an example of this class
of algorithms [Pathdiag].
Alternatively a non-standard congestion control algorithm can respond
to losses by transmitting extra data, such that it maintains the
specified window size independent of losses or ECN marks.  Such a
stiffened transport explicitly violates mandatory Internet congestion
control [RFC5681] and is not suitable for in situ testing.  It is
only appropriate for engineering testing under laboratory conditions.
The Windowed Ping tool implements such a test [WPING].  The tool
described in the paper has been updated [mpingSource].
The test procedures in Section 7.2 describe how to partition the
scans into regions and how to interpret the results.
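
The scan logic itself is simple.  The sketch below is illustrative
only; deliver_round() is an assumed callback that sends the given
number of packets with the window clamped to the given size and
returns the delivered and lost packet counts:

   def scan_window(start_window, max_window, target_pipe_size,
                   deliver_round):
       # Raise the clamp by one packet every 2*target_pipe_size
       # delivered packets, mimicking additive increase with
       # delayed ACKs, and record the outcome at each window.
       results = []
       window = start_window
       while window <= max_window:
           delivered, lost = deliver_round(window,
                                           2 * target_pipe_size)
           results.append((window, delivered, lost))
           window += 1
       return results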
6.1.4. Concurrent or channelized testing

The procedures described in this document are only directly
applicable to single stream performance measurement, e.g. one TCP
connection.  In an ideal world, we would disallow all performance
claims based on multiple concurrent streams, but this is not
practical due to at least two different issues.  First, many very
high rate link technologies are channelized and pin individual flows
to specific channels to minimize reordering or other problems and
second, TCP itself has scaling limits.  Although the former problem
might be overcome through different design decisions, the latter
problem is more deeply rooted.
All congestion control algorithms that are philosophically aligned
with the standard [RFC5681] (e.g. claim some level of TCP
friendliness) have scaling limits, in the sense that as a long fast
network (LFN) with a fixed RTT and MTU gets faster, these congestion
control algorithms get less accurate and as a consequence have
difficulty filling the network [CCscaling].  These properties are a
consequence of the original Reno AIMD congestion control design and
the requirement in [RFC5681] that all transport protocols have
uniform response to congestion.
There are a number of reasons to want to specify performance in terms
of multiple concurrent flows, however this approach is not
recommended for data rates below several megabits per second, which
can be attained with run lengths under 10000 packets.  Since the
required run length goes as the square of the data rate, at higher
rates the run lengths can be unreasonably large, and multiple
connections might be the only feasible approach.
preconditions for the test.
For example consider a test that implements Constant Window Pseudo
CBR (Section 6.1.2) by adding rate controls and detailed traffic
instrumentation to TCP (e.g. [RFC4898]).  TCP includes built in
control systems which might interfere with the sending data rate.  If
such a test meets the required delivery statistics (e.g. run length)
while failing to attain the specified data rate it must be treated as
an inconclusive result, because we cannot a priori determine if the
reduced data rate was caused by a TCP problem or a network problem,
or if the reduced data rate had a material effect on the observed
delivery statistics.
Note that for load tests, if the observed delivery statistics fail to
meet the targets, the test can be considered to have failed because
it doesn't really matter that the test didn't attain the required
data rate.
The really important new properties of MBM, such as vantage
independence, are a direct consequence of opening the control loops
in the protocols, such that the test traffic does not depend on
network conditions or traffic received.  Any mechanism that
introduces feedback between the path measurements and the traffic
generation is at risk of introducing nonlinearities that spoil these
properties.  Any exceptional event that indicates that such feedback
has happened should cause the test to be considered inconclusive.
One way to view inconclusive tests is that they reflect situations
where a test outcome is ambiguous between limitations of the network
and some unknown limitation of the diagnostic test itself, which may
have been caused by some uncontrolled feedback from the network.
Note that procedures that attempt to sweep the target parameter space
to find the limits on some parameter such as target_data_rate are at
risk of breaking the location independent properties of Model Based
Metrics, if the boundary between passing and inconclusive is at all
sensitive to RTT.
One of the goals for evolving TDS designs will be to keep sharpening
the distinction between inconclusive, passing and failing tests.  The
criteria for passing, failing and inconclusive tests MUST be
explicitly stated for every test in the TDS or FSTDS.
One of the goals of evolving the testing process, procedures, tools
and measurement point selection should be to minimize the number of
inconclusive tests.
It may be useful to keep raw data delivery statistics for deeper
study of the behavior of the network path and to measure the tools
themselves.  Raw delivery statistics can help to drive tool
evolution.  Under some conditions it might be possible to reevaluate
the raw data for satisfying alternate performance targets.  However
it is important to guard against sampling bias and other implicit
feedback which can cause false results and exhibit measurement point
vantage sensitivity.
6.2.2. Statistical criteria for estimating run_length

When evaluating the observed run_length, we need to determine
appropriate packet stream sizes and acceptable error levels for
efficient measurement.  In practice, can we compare the empirically
estimated packet loss and ECN marking probabilities with the targets
as the sample size grows?  How large a sample is needed to say that
the measurements of packet transfer indicate a particular run length
is present?
The generalized measurement can be described as recursive testing:
send packets (individually or in patterns) and observe the packet
delivery performance (loss ratio or other metric, any marking we
define).
As each packet is sent and measured, we have an ongoing estimate of
the performance in terms of the ratio of packet loss or ECN mark to
total packets (i.e. an empirical probability).  We continue to send
until conditions support a conclusion or a maximum sending limit has
been reached.
The Sequential Probability Ratio Test also starts with a pair of
hypotheses specified as above:

   H0: p0 = one defect in target_run_length

   H1: p1 = one defect in target_run_length/4

As packets are sent and measurements collected, the tester evaluates
the cumulative defect count against two boundaries representing H0
Acceptance or Rejection (and acceptance of H1):
   Acceptance line: Xa = -h1 + s*n

   Rejection line: Xr = h2 + s*n

where n increases linearly for each packet sent and

   h1 = { log((1-alpha)/beta) }/k

   h2 = { log((1-beta)/alpha) }/k

   k = log{ (p1(1-p0)) / (p0(1-p1)) }

   s = [ log{ (1-p0)/(1-p1) } ]/k

for p0 and p1 as defined in the null and alternative Hypotheses
statements above, and alpha and beta as the Type I and Type II
errors.
The SPRT specifies simple stopping rules:

o  Xa < defect_count(n) < Xr: continue testing

o  defect_count(n) <= Xa: Accept H0

o  defect_count(n) >= Xr: Accept H1
The calculations above are implemented in the R-tool for Statistical
Analysis [Rtool], in the add-on package for Cross-Validation via
Sequential Testing (CVST) [CVST].
Using the equations above, we can calculate the minimum number of
packets (n) needed to accept H0 when x defects are observed.  For
example, when x = 0:

   Xa = 0 = -h1 + s*n

   and n = h1 / s
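
These calculations are straightforward to implement.  The following
Python sketch is illustrative only; the alpha and beta values are
assumptions (5% Type I and Type II errors), not requirements:

   import math

   def sprt_boundaries(target_run_length, alpha=0.05, beta=0.05):
       p0 = 1.0 / target_run_length   # H0 defect probability
       p1 = 4.0 / target_run_length   # H1: target_run_length/4
       k = math.log((p1 * (1 - p0)) / (p0 * (1 - p1)))
       h1 = math.log((1 - alpha) / beta) / k
       h2 = math.log((1 - beta) / alpha) / k
       s = math.log((1 - p0) / (1 - p1)) / k
       return h1, h2, s

   def sprt_decision(defect_count, n, h1, h2, s):
       # Apply the stopping rules after n packets.
       if defect_count <= -h1 + s * n:
           return "accept H0"   # run length is acceptable
       if defect_count >= h2 + s * n:
           return "accept H1"   # run length is too short
       return "continue"

   h1, h2, s = sprt_boundaries(10000)
   n_min = h1 / s   # about 9800 packets to accept H0 with zero
                    # observed defects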
6.2.2.1. Alternate criteria for measuring run_length

An alternate calculation, contributed by Alex Gilgur (Google).

The probability of failure within an interval whose length is
target_run_length is given by an exponential distribution with rate =
1 / target_run_length (a memoryless process).  The implication of
this is that the failure probability will be different depending on
the total count of packets that have been through the pipe, the
formula being:

   P(t1 < T < t2) = R(t1) - R(t2)

where

   T  = number of packets at which a failure will occur with
        probability P
   t  = number of packets:
        t1 = number of packets (e.g., when the failure last occurred)
        t2 = t1 + target_run_length
   R  = the survival function of the exponential distribution:
        R(t1) = exp(-t1/target_run_length)
        R(t2) = exp(-t2/target_run_length)
The algorithm, rendered here as runnable Python (packet_responses is
assumed to be an iterable of per-packet indications, with "ACK"
marking a delivered packet):

   import math

   def check_run_length(packet_responses, target_run_length):
       packet_counter = 0
       failed_packet_counter = 0
       for response in packet_responses:
           packet_counter += 1
           if response != "ACK":
               # The packet failed.
               failed_packet_counter += 1
               P_fail_observed = failed_packet_counter / packet_counter
               upper_bound = packet_counter + target_run_length / 2
               lower_bound = packet_counter - target_run_length / 2
               R1 = math.exp(-upper_bound / target_run_length)
               R0 = math.exp(-max(0, lower_bound) / target_run_length)
               # P(t1 < T < t2) = R(t1) - R(t2) per the formula
               # above; R0 >= R1, so the probability is positive.
               P_fail_predicted = R0 - R1
               # Compare P_fail_observed vs. P_fail_predicted.
               yield P_fail_observed, P_fail_predicted
This algorithm allows accurate comparison of the observed failure
probability with the corresponding values predicted based on a fixed
target_failure_rate, which is equal to 1.0 / target_run_length.
6.2.3. Reordering Tolerance

All tests must be instrumented for packet level reordering [RFC4737].
However, there is no consensus for how much reordering should be
acceptable.  Over the last two decades the general trend has been to
make protocols and applications more tolerant to reordering (see for
example [RFC4015]), in response to the gradual increase in reordering
in the network.  This increase has been due to the deployment of
technologies such as multi threaded routing lookups and Equal Cost
MultiPath (ECMP) routing.  These techniques increase parallelism in
the network and are critical to enabling overall Internet growth to
exceed Moore's Law.
Note that transport retransmission strategies can trade off
reordering tolerance vs how quickly they can repair losses vs
overhead from spurious retransmissions.  In advance of new
retransmission strategies we propose the following strawman:
Transport protocols should be able to adapt to reordering as long as
the reordering extent is no more than the maximum of one quarter
window or 1 ms, whichever is larger.  Within this limit on reorder
extent, there should be no bound on reordering density.
By implication, reordering which is less than these bounds should not
be treated as a network impairment.  However [RFC4737] still applies:
reordering should be instrumented and the maximum reordering that can
be properly characterized by the test (e.g. bound on history buffers)
should be recorded with the measurement results.
Reordering tolerance and diagnostic limitations, such as history
buffer size, MUST be specified in a FSTDS.
6.3. Test Preconditions

Many tests have preconditions which are required to assure their
validity.  For example the presence or non-presence of cross traffic
on specific subpaths, or appropriate preloading to put reactive
network elements into the proper states [RFC7312].  If preconditions
are not properly satisfied for some reason, the tests should be
considered to be inconclusive.  In general it is useful to preserve
diagnostic information about why the preconditions were not met, and
any test data that was collected even if it is not useful for the
intended test.  Such diagnostic information and partial test data may
be useful for improving the test in the future.
It is important to preserve the record that a test was scheduled,
because otherwise precondition enforcement mechanisms can introduce
sampling bias.  For example, canceling tests due to cross traffic on
subscriber access links might introduce sampling bias into tests of
the rest of the network by reducing the number of tests during peak
network load.
Test preconditions and failure actions MUST be specified in a FSTDS.
7. Diagnostic Tests

The diagnostic tests below are organized by traffic pattern: basic
data rate and delivery statistics, standing queues, slowstart bursts,
and sender rate bursts.  We also introduce some combined tests which
are more efficient when networks are expected to pass, but conflate
diagnostic signatures when they fail.

There are a number of test details which are not fully defined here.
changes in the observed subpath run length without disrupting users.
It should be used in conjunction with one of the above full rate
tests because it does not confirm that the subpath can support the
raw data rate.

RFC 6673 [RFC6673] is appropriate for measuring background delivery
statistics.
7.2. Standing Queue Tests

These engineering tests confirm that the bottleneck is well behaved
across the onset of packet loss, which typically follows after the
onset of queueing.  Well behaved generally means lossless for
transient queues, but once the queue has been sustained for a
sufficient period of time (or reaches a sufficient queue depth) there
should be a small number of losses to signal to the transport
protocol that it should reduce its window.  Losses that are too early
can prevent the transport from averaging at the target_data_rate.
Losses that are too late indicate that the queue might be subject to
bufferbloat [wikiBloat] and inflict excess queuing delays on all
flows sharing the bottleneck queue.  Excess losses (more than half of
the window) at the onset of congestion make loss recovery problematic
for the transport protocol.  Non-linear, erratic or excessive RTT
increases suggest poor interactions between the channel acquisition
algorithms and the transport self clock.  All of the tests in this
section use the same basic scanning algorithm, described here, but
score the link on the basis of how well it avoids each of these
problems.
For some technologies the data might not be subject to increasing
delays, in which case the data rate will vary with the window size
all the way up to the onset of load induced losses or ECN marks.  For
these technologies, the discussion of queueing does not apply, but it
is still required that the onset of losses or ECN marks be at an
appropriate point and progressive.
Use the procedure in Section 6.1.3 to sweep the window across the
onset of queueing and the onset of loss.  The tests below all assume
that the scan emulates standard additive increase and delayed ACK by
incrementing the window by one packet for every 2*target_pipe_size
packets delivered.  A scan can typically be divided into three
regions: below the onset of queueing, a standing queue, and at or
beyond the onset of loss.
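
A sketch of how a completed scan might be partitioned (illustrative
only; scan is an assumed list of per-window records produced by the
procedure in Section 6.1.3):

   def onset_of_queueing(scan):
       # The onset of queueing is taken to be the window with the
       # maximum network power, the ratio of data rate to RTT
       # (see Section 7.2.1).
       best = max(scan, key=lambda r: r["data_rate"] / r["rtt"])
       return best["window"]

   def onset_of_loss(scan):
       # The first window at which losses (or ECN marks) appear.
       for r in scan:
           if r["losses"] > 0:
               return r["window"]
       return None  # the scan never reached the onset of loss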
stiffened transport protocols case (with non-standard, aggressive
congestion control algorithms) the details of periodic losses will be
dominated by how the window increase function responds to loss.
7.2.1. Congestion Avoidance

A link passes the congestion avoidance standing queue test if more
than target_run_length packets are delivered between the onset of
queueing (as determined by the window with the maximum network power)
and the first loss or ECN mark.  If this test is implemented using a
standard congestion control algorithm with a clamp, it can be
performed in situ in the production Internet as a capacity test.  For
an example of such a test see [Pathdiag].
For technologies that do not have conventional queues, use the
test_window in place of the onset of queueing.  i.e. A link passes
the congestion avoidance standing queue test if more than
target_run_length packets are delivered between the start of the scan
at test_window and the first loss or ECN mark.
7.2.2. Bufferbloat

This test confirms that there is some mechanism to limit buffer
occupancy (e.g. that prevents bufferbloat).  Note that this is not
strictly a requirement for single stream bulk performance, however if
there is no mechanism to limit buffer queue occupancy then a single
stream with sufficient data to deliver is likely to cause the
problems described in [RFC2309], [I-D.ietf-aqm-recommendation] and
[wikiBloat].  This may cause only minor symptoms for the dominant
flow, but has the potential to make the link unusable for other flows
and applications.
Pass if the onset of loss occurs before a standing queue has
introduced more delay than twice the target_RTT, or other well
defined and specified limit.  Note that there is not yet a model for
how much standing queue is acceptable.  The factor of two chosen here
reflects a rule of thumb.  In conjunction with the previous test,
this test implies that the first loss should occur at a queueing
delay which is between one and two times the target_RTT.
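
The pass criterion amounts to a one-line check.  A hypothetical
sketch, where the queueing delay is the RTT increase over the
baseline RTT measured below the onset of queueing:

   def passes_bufferbloat(rtt_at_first_loss, base_rtt, target_RTT,
                          limit_factor=2.0):
       # limit_factor defaults to the rule-of-thumb factor of two;
       # a TDS may specify some other well defined limit.
       queue_delay = rtt_at_first_loss - base_rtt
       return queue_delay <= limit_factor * target_RTT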
Specified RTT limits that are larger than twice the target_RTT must
This test confirms that the onset of loss is not excessive.  Pass if
losses are equal to or less than the increase in the cross traffic
plus the test traffic window increase on the previous RTT.  This
could be restated as non-decreasing link throughput at the onset of
loss, which is easy to meet as long as discarding packets is not more
expensive than delivering them.  (Note when there is a transient drop
in link throughput, outside of a standing queue test, a link that
passes other queue tests in this document will have sufficient queue
space to hold one RTT worth of data).
Note that conventional Internet traffic policers will not pass this
test, which is correct. TCP often fails to come into equilibrium at
more than a small fraction of the available capacity, if the capacity
is enforced by a policer. [Citation Pending].
7.2.4. Duplex Self Interference

This engineering test confirms a bound on the interactions between
the forward data path and the ACK return path.

Some historical half duplex technologies had the property that each
direction held the channel until it completely drained its queue.
When a self clocked transport protocol, such as TCP, has data and
ACKs passing in opposite directions through such a link, the behavior
often reverts to stop-and-wait.  Each additional packet added to the
window raises the observed RTT by two forward path packet times, once
as it passes through the data path, and once for the additional delay
incurred by the ACK waiting on the return path.
The duplex self interference test fails if the RTT rises by more than
some fixed bound above the expected queueing time computed from the
excess window divided by the link data rate.  This bound must be
smaller than target_RTT/2 to avoid reverting to stop and wait
behavior (i.e. packets have to be released at least twice per RTT).
7.3. Slowstart tests

These tests mimic slowstart: data is sent at twice the effective
bottleneck rate to exercise the queue at the dominant bottleneck.

In general they are deemed inconclusive if the elapsed time to send
the data burst is not less than half of the time to receive the ACKs.
(i.e. sending data too fast is ok, but sending it slower than twice
the actual bottleneck rate as indicated by the ACKs is deemed
equal to the target_data_rate.
7.3.1. Full Window slowstart test

This is a capacity test to confirm that slowstart is not likely to
exit prematurely.  Send slowstart bursts that are target_pipe_size
total packets.
Accumulate packet delivery statistics as described in Section 6.2.2
to score the outcome.  Pass if it is statistically significant that
the observed number of good packets delivered between losses or ECN
marks is larger than the target_run_length.  Fail if it is
statistically significant that the observed interval between losses
or ECN marks is smaller than the target_run_length.
Note that these are the same parameters as the Sender Full Window
burst test, except the burst rate is at slowstart rate, rather than
sender interface rate.
7.3.2. Slowstart AQM test

Do a continuous slowstart (send data continuously at slowstart_rate),
until the first loss, stop, allow the network to drain and repeat,
gathering statistics on the last packet delivered before the loss,
This is an engineering test: It would be best performed on a
quiescent network or testbed, since cross traffic has the potential
to change the results.
7.4. Sender Rate Burst tests

These tests determine how well the network can deliver bursts sent at
the sender's interface rate.  Note that this test most heavily
exercises the front path, and is likely to include infrastructure
that may be out of scope for an access ISP, even though the bursts
might be caused by ACK compression, thinning or channel arbitration
in the access ISP.  See Appendix B.
Also, there are several details that are not precisely defined.  For
starters there is not a standard server interface rate.  1 Gb/s and
10 Gb/s are very common today, but higher rates will become cost
effective and can be expected to be dominant some time in the future.
Current standards permit TCP to send full window bursts following an
application pause.  (Congestion Window Validation [RFC2861] is not
required, but even if it was, it does not take effect until an
application pause is longer than an RTO.)  Since full window bursts
larger network bursts, which increase the stress on network buffer
memory.

There is not yet a theory to unify these costs or to provide a
framework for trying to optimize global efficiency.  We do not yet
have a model for how much the network should tolerate server rate
bursts.  Some bursts must be tolerated by the network, but it is
probably unreasonable to expect the network to be able to efficiently
deliver all data as a series of bursts.
For this reason, this is the only test for which we encourage
derating.  A TDS could include a table of pairs of derating
parameters: what burst size to use as a fraction of the
target_pipe_size, and how much each burst size is permitted to reduce
the run length, relative to the target_run_length.
7.5. Combined and Implicit Tests

Combined tests efficiently confirm multiple network properties in a
single test, possibly as a side effect of normal content delivery.
They require less measurement traffic than other testing strategies
at the cost of conflating diagnostic signatures when they fail.
These are by far the most efficient for monitoring networks that are
nominally expected to pass all tests.
7.5.1. Sustained Bursts Test

The sustained burst test implements a combined worst case version of
all of the load tests above.  It is simply:

Send target_pipe_size bursts of packets at server interface rate with
target_RTT headway (burst start to burst start).  Verify that the
observed delivery statistics meet the target_run_length.

Key observations:
o  The subpath under test is expected to go idle for some fraction of
   the time: (subpath_data_rate-target_rate)/subpath_data_rate.
   Failing to do so indicates a problem with the procedure and an
   inconclusive test result.
o  The burst sensitivity can be derated by sending smaller bursts
   more frequently.  E.g. send target_pipe_size*derate packet bursts
   every target_RTT*derate.
o  When not derated, this test is the most strenuous load test.
o  A link that passes this test is likely to be able to sustain
   higher rates (close to subpath_data_rate) for paths with RTTs
   significantly smaller than the target_RTT.
o  This test can be implemented with instrumented TCP [RFC4898],
   using a specialized measurement application at one end [MBMSource]
   and a minimal service at the other end [RFC0863] [RFC0864].
o  This test is efficient to implement, since it does not require
   per-packet timers, and can make use of TSO in modern NIC hardware.
o  This test by itself is not sufficient: the standing window
   engineering tests are also needed to ensure that the link is well
   behaved at and beyond the onset of congestion.
o  Assuming the link passes relevant standing window engineering
   tests (particularly that it has a progressive onset of loss at an
   appropriate queue depth), a passing sustained burst test is
   (believed to be) sufficient to verify that the subpath will not
   impair streams running at the target performance under all
   conditions.  Proving this statement will be the subject of ongoing
   research.
Note that this test is clearly independent of the subpath RTT, or
other details of the measurement infrastructure, as long as the
measurement infrastructure can accurately and reliably deliver the
required bursts to the subpath under test.
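The following is a minimal sketch of the test loop described above,
written in Python.  It is illustrative only: send_burst() and
delivered_packets() are hypothetical primitives, and a production
implementation would rely on instrumented TCP [RFC4898] or a
dedicated packet generator rather than application-level timing.

   import time

   def sustained_burst_test(target_pipe_size, target_rtt,
                            target_run_length, duration,
                            send_burst, delivered_packets):
       # Send target_pipe_size packet bursts with target_RTT headway
       # (burst start to burst start) for the duration of the test.
       sent = 0
       end = time.monotonic() + duration
       while time.monotonic() < end:
           start = time.monotonic()
           send_burst(target_pipe_size)  # full burst at interface rate
           sent += target_pipe_size
           # Sleep out the remainder of one target_RTT of headway.
           time.sleep(max(0.0, target_rtt - (time.monotonic() - start)))
       # Evaluate the delivery statistics against target_run_length.
       losses = sent - delivered_packets()
       if losses == 0:
           return True   # no losses observed: run length unbounded
       return sent / losses >= target_run_length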
7.5.2.  Streaming Media
Model Based Metrics can be implicitly implemented as a side effect of
serving any non-throughput maximizing traffic, such as streaming
media, with some additional controls and instrumentation in the
servers.  The essential requirement is that the traffic be
constrained such that even with arbitrary application pauses, bursts
and data rate fluctuations, the traffic stays within the envelope
defined by the individual tests described above.
If the application's serving_data_rate is less than or equal to the
target_data_rate and the serving_RTT (the RTT between the sender and
client) is less than the target_RTT, this constraint is most easily
implemented by clamping the transport window size to be no larger
than:
   serving_window_clamp=target_data_rate*serving_RTT/
                        (target_MTU-header_overhead)
Under the above constraints the serving_window_clamp will limit both
the serving data rate and burst sizes to be no larger than the
procedures in Section 7.1.2 and Section 7.4 or Section 7.5.1 call
for.  Since the serving RTT is smaller than the target_RTT, the worst
case bursts that might be generated under these conditions will be
smaller than called for by Section 7.4, and the sender rate burst
sizes are implicitly derated by the serving_window_clamp divided by
the target_pipe_size at the very least.  (Depending on the
application behavior, the data traffic might be significantly
smoother than specified by any of the burst tests.)
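As an illustration, the following Python lines compute the clamp for
a hypothetical server; the 20 ms serving_RTT is an assumed value,
chosen to be smaller than the 50 ms target_RTT used in Section 8.

   target_data_rate = 2.5e6    # bits/s
   serving_rtt = 0.020         # s, assumed sender-to-client RTT
   target_mtu = 1500           # bytes
   header_overhead = 64        # bytes

   payload_bits = (target_mtu - header_overhead) * 8
   serving_window_clamp = target_data_rate * serving_rtt / payload_bits
   # ~4.35, so clamp the window to 4 packets; relative to a
   # target_pipe_size of 11 this implicitly derates the sender rate
   # bursts by roughly 4/11.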
Note that it is important that the target_data_rate be above the
actual average rate needed by the application so it can recover after
transient pauses caused by congestion or the application itself.
In an alternative implementation the data rate and bursts might be
explicitly controlled by a host shaper or pacing at the sender.  This
would provide better control over transmissions but it is
substantially more complicated to implement and would be likely to
have a higher CPU overhead.
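A minimal sketch of this pacing alternative is shown below, assuming
a hypothetical send_packet() primitive; a production shaper would
normally live in the host's queueing discipline or NIC rather than in
the application.

   import time

   def paced_send(packets, serving_data_rate, packet_size_bits,
                  send_packet):
       # Space transmissions one packet_time apart so the stream never
       # exceeds serving_data_rate and never emits multi-packet bursts.
       packet_time = packet_size_bits / serving_data_rate
       next_send = time.monotonic()
       for packet in packets:
           now = time.monotonic()
           if now < next_send:
               time.sleep(next_send - now)
           send_packet(packet)
           next_send = max(now, next_send) + packet_time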
Note that these techniques can be applied to any content delivery
that can be subjected to a reduced data rate in order to inhibit TCP
equilibrium behavior.
8.  An Example
In this section we illustrate a TDS designed to confirm that an
access ISP can reliably deliver HD video from multiple content
providers to all of their customers.  With modern codecs, minimal HD
video (720p) generally fits in 2.5 Mb/s.  Due to their geographical
size, network topology and modem designs, the ISP determines that
most content is within a 50 ms RTT of their users.  (This is
sufficient to cover continental Europe or either US coast from a
single serving site.)
2.5 Mb/s over a 50 ms path
   +----------------------+-------+---------+
   | End to End Parameter | value | units   |
   +----------------------+-------+---------+
   | target_rate          | 2.5   | Mb/s    |
   | target_RTT           | 50    | ms      |
   | target_MTU           | 1500  | bytes   |
   | header_overhead      | 64    | bytes   |
   | target_pipe_size     | 11    | packets |
loss rate across subpaths.  For example 50% of the losses might be
allocated to the access or last mile link to the user, 40% to the
interconnects with other ISPs and 1% to each internal hop (assuming
no more than 10 internal hops).  Then all of the subpaths can be
tested independently, and the spatial composition of passing subpaths
would be expected to be within the end-to-end loss budget.
Testing interconnects has generally been problematic: conventional
performance tests run between Measurement Points adjacent to either
side of the interconnect are not generally useful.  Unconstrained TCP
tests, such as iperf [iperf], are usually overly aggressive because
the RTT is so small (often less than 1 ms).  With such short RTTs
these tools are likely to report inflated numbers because they can
tolerate very high loss rates and can push other cross traffic off of
the network.  As a consequence they are useless for predicting actual
user performance, and may themselves be quite disruptive.  Model
Based Metrics solve this problem.  The same test pattern as used on
other links can be applied to the interconnect.  For our example,
when apportioned 40% of the losses, 11 packet bursts sent every 50 ms
should have fewer than one loss per 82 bursts (902 packets).
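The arithmetic behind these numbers can be restated in a few lines of
Python.  The run length formula below assumes the Section 5.2
reference model (target_run_length = 3*target_pipe_size^2, consistent
with the 44% ratio noted in Appendix A.1); the code merely reproduces
the example and is not normative.

   from math import ceil, floor

   target_rate = 2.5e6                     # bits/s
   target_rtt = 0.050                      # s
   target_mtu, header_overhead = 1500, 64  # bytes

   payload_bits = (target_mtu - header_overhead) * 8
   target_pipe_size = ceil(target_rate * target_rtt / payload_bits)
   target_run_length = 3 * target_pipe_size ** 2
   print(target_pipe_size, target_run_length)     # 11 packets, 363

   # Interconnect apportioned 40% of the end-to-end loss budget:
   subpath_run_length = target_run_length / 0.40  # 907.5 packets
   bursts = floor(subpath_run_length / target_pipe_size)
   print(bursts, bursts * target_pipe_size)       # 82 bursts, 902 packets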
9.  Validation
Since some aspects of the models are likely to be too conservative,
Section 5.2 permits alternate protocol models and Section 5.3 permits
test parameter derating.  If either of these techniques is used, we
require demonstrations that such a TDS can robustly detect links that
will prevent authentic applications using state-of-the-art protocol
implementations from meeting the specified performance targets.  This
correctness criterion is potentially difficult to prove, because it
To the extent that a TDS is used to inform public dialog it should be
fully publicly documented, including the details of the tests, what
assumptions were used and how it was derived.  All of the details of
the validation experiment should also be published with sufficient
detail for the experiments to be replicated by other researchers.
All components should either be open source or fully described
proprietary implementations that are available to the research
community.
10.  Security Considerations
Measurement is often used to inform business and policy decisions,
and as a consequence is potentially subject to manipulation for
illicit gains. Model Based Metrics are expected to be a huge step
forward because equivalent measurements can be performed from
multiple vantage points, such that performance claims can be
independently validated by multiple parties.
Much of the acrimony in the Net Neutrality debate is due to the
historical lack of any effective vantage independent tools to
characterize network performance.  Traditional methods for measuring
bulk transport capacity are sensitive to RTT and as a consequence
often yield very different results local to an ISP and end-to-end.
Neither the ISP nor customer can repeat the other's measurements,
leading to high levels of distrust and acrimony.  Model Based Metrics
are expected to greatly improve this situation.
This document only describes a framework for designing a Fully
Specified Targeted Diagnostic Suite (FSTDS).  Each FSTDS MUST include
its own security section.
11. Acknowledgements
Ganga Maguluri suggested the statistical test for measuring loss
probability in the target run length.  Alex Gilgur for helping with
the statistics.
Meredith Whittaker for improving the clarity of the communications.
This work was inspired by Measurement Lab: open tools running on an
open platform, using open tools to collect open data.  See
http://www.measurementlab.net/
12.  IANA Considerations
This document has no actions for IANA.
13. References
13.1. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119, March 1997.
13.2. Informative References
[RFC0863]  Postel, J., "Discard Protocol", STD 21, RFC 863, May 1983.
[RFC0864]  Postel, J., "Character Generator Protocol", STD 22,
           RFC 864, May 1983.
[RFC2309]  Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering,
           S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G.,
           Partridge, C., Peterson, L., Ramakrishnan, K., Shenker,
           S., Wroclawski, J., and L. Zhang, "Recommendations on
           Queue Management and Congestion Avoidance in the
           Internet", RFC 2309, April 1998.
[RFC5835]  Morton, A. and S. Van den Berghe, "Framework for Metric
           Composition", RFC 5835, April 2010.
[RFC6049]  Morton, A. and E. Stephan, "Spatial Composition of
           Metrics", RFC 6049, January 2011.
[RFC6673]  Morton, A., "Round-Trip Packet Loss Metrics", RFC 6673,
           August 2012.
[RFC7312]  Fabini, J. and A. Morton, "Advanced Stream and Sampling
           Framework for IP Performance Metrics (IPPM)", RFC 7312,
           August 2014.
[RFC7398]  Bagnulo, M., Burbridge, T., Crawford, S., Eardley, P., and
           A. Morton, "A Reference Path and Measurement Points for
           Large-Scale Measurement of Broadband Performance",
           RFC 7398, February 2015.
[I-D.ietf-aqm-recommendation]
Baker, F. and G. Fairhurst, "IETF Recommendations
Regarding Active Queue Management",
draft-ietf-aqm-recommendation-11 (work in progress),
February 2015.
[MSMO97]   Mathis, M., Semke, J., Mahdavi, J., and T. Ott, "The
           Macroscopic Behavior of the TCP Congestion Avoidance
           Algorithm", Computer Communications Review volume 27,
           number 3, July 1997.
[WPING]    Mathis, M., "Windowed Ping: An IP Level Performance
           Diagnostic", INET 94, June 1994.
[mpingSource]
           Fan, X., Mathis, M., and D. Hamon, "Git Repository for
           mping: An IP Level Performance Diagnostic", Sept 2013,
           <https://github.com/m-lab/mping>.
[MBMSource]
           Hamon, D., Stuart, S., and H. Chen, "Git Repository for
           Model Based Metrics", Sept 2013,
           <https://github.com/m-lab/MBM>.
[Pathdiag]
           Mathis, M., Heffner, J., O'Neil, P., and P. Siemsen,
           "Pathdiag: Automated TCP Diagnosis", Passive and Active
           Measurement, June 2008.
[iperf]    Wikipedia Contributors, "iPerf", Wikipedia, The Free
           Encyclopedia, cited March 2015, <http://en.wikipedia.org/
           w/index.php?title=Iperf&oldid=649720021>.
[StatQC]   Montgomery, D., "Introduction to Statistical Quality
           Control - 2nd ed.", ISBN 0-471-51988-X, 1990.
[Rtool]    R Development Core Team, "R: A language and environment
           for statistical computing", R Foundation for Statistical
           Computing, Vienna, Austria, ISBN 3-900051-07-0,
           <http://www.R-project.org/>, 2011.
[CVST]     Krueger, T. and M. Braun, "R package: Fast Cross-
           Validation via Sequential Testing", version 0.1, 11 2012.
[CUBIC] Ha, S., Rhee, I., and L. Xu, "CUBIC: a new TCP-friendly
high-speed TCP variant", SIGOPS Oper. Syst. Rev. 42, 5,
July 2008.
[LMCUBIC]  Ledesma Goyzueta, R. and Y. Chen, "A Deterministic Loss
           Model Based Analysis of CUBIC", IEEE International
           Conference on Computing, Networking and Communications
           (ICNC), E-ISBN: 978-1-4673-5286-4, January 2013.
[AFD]      Pan, R., Breslau, L., Prabhakar, B., and S. Shenker,
           "Approximate fairness through differential dropping",
           SIGCOMM Comput. Commun. Rev. 33, 2, April 2003.
[wikiBloat]
           Wikipedia, "Bufferbloat", <http://en.wikipedia.org/w/
           index.php?title=Bufferbloat&oldid=608805474>, March 2015.
[CCscaling]
           Paganini, F., Doyle, J., and S. Low, "Scalable laws for
           stable network congestion control", Proceedings of
           Conference on Decision and Control,
           <http://www.ee.ucla.edu/~paganini>, December 2001.
Appendix A.  Model Derivations
The reference target_run_length described in Section 5.2 is based on
very conservative assumptions: that all window above target_pipe_size
contributes to a standing queue that raises the RTT, and that classic
Reno congestion control with delayed ACKs is in effect.  In this
section we provide two alternative calculations using different
assumptions.
It may seem out of place to allow such latitude in a measurement
standard, but this section provides offsetting requirements.
The estimates provided by these models make the most sense if network
performance is viewed logarithmically.  In the operational Internet,
data rates span more than 8 orders of magnitude, RTT spans more than
3 orders of magnitude, and loss probability spans at least 8 orders
of magnitude.  When viewed logarithmically (as in decibels), these
correspond to 80 dB of dynamic range.  On an 80 dB scale, a 3 dB
error is less than 4% of the scale, even though it might represent a
factor of 2 in the untransformed parameter.
allocation.  Choosing a target_run_length that is substantially
smaller than the reference target_run_length specified in Section 5.2
strengthens the argument that it may be appropriate to abandon "TCP
friendliness" as the Internet fairness model.  This gives developers
incentive and permission to develop even more aggressive applications
and protocols, for example by increasing the number of connections
that they open concurrently.
A.1.  Queueless Reno
In Section 5.2 it was assumed that the link rate matches the target
rate plus overhead, such that the excess window needed for the AIMD
sawtooth causes a fluctuating queue at the bottleneck.

An alternate situation would be a bottleneck where there is no
significant queue and losses are caused by some mechanism that does
not involve extra delay, for example by the use of a virtual queue as
in Approximate Fair Dropping [AFD].  A flow controlled by such a
bottleneck would have a constant RTT and a data rate that fluctuates
in a sawtooth due to AIMD congestion control.  Assume the losses are
being controlled to make the average data rate meet some goal which
is equal to or greater than the target_rate.  The necessary run
length can be computed as follows:
For some value of Wmin, the window will sweep from Wmin packets to
2*Wmin packets in 2*Wmin RTT (due to delayed ACK). Unlike the
queueing case where Wmin = target_pipe_size, we want the average of
Wmin and 2*Wmin to be the target_pipe_size, so the average rate is
the target rate.  Thus we want Wmin = (2/3)*target_pipe_size.

Between losses each sawtooth delivers (1/2)(Wmin+2*Wmin)(2*Wmin)
packets in 2*Wmin round trip times.
Substituting these together we get:

   target_run_length = (4/3)(target_pipe_size^2)
Note that this is 44% of the reference_run_length computed earlier.
This makes sense because under the assumptions in Section 5.2 the
AIMD sawtooth caused a queue at the bottleneck, which raised the
effective RTT by 50%.
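The derivation reduces to a short computation.  The comparison below
again assumes the Section 5.2 reference model of
3*(target_pipe_size^2) for the reference run length, an assumption
consistent with the 44% figure above.

   def queueless_reno_run_length(target_pipe_size):
       # Wmin = (2/3)*target_pipe_size; each sawtooth delivers
       # (1/2)(Wmin + 2*Wmin)(2*Wmin) = 3*Wmin^2 packets per loss.
       w_min = (2.0 / 3.0) * target_pipe_size
       return 3 * w_min ** 2     # = (4/3)*target_pipe_size^2

   tps = 11                      # from the example in Section 8
   reference_run_length = 3 * tps ** 2
   print(queueless_reno_run_length(tps) / reference_run_length)  # ~0.44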
A.2. CUBIC
CUBIC has three operating regions. The model for the expected value
of window size derived in [LMCUBIC] assumes operation in the
"concave" region only, which is a non-TCP friendly region for long-
lived flows. The authors make the following assumptions: packet loss
probability, p, is independent and periodic, losses occur one at a
time, and they are true losses due to tail drop or corruption. This
definition of p aligns very well with our definition of
target_run_length and the requirement for progressive loss (AQM).
Although CUBIC window increase depends on continuous time, the
authors transform the time to reach the maximum Window size in terms
of RTT and a parameter for the multiplicative rate decrease on
observing loss, beta (whose default value is 0.2 in CUBIC). The
expected value of Window size, E[W], is also dependent on C, a
parameter of CUBIC that determines its window-growth aggressiveness
(values from 0.01 to 4).
   E[W] = ( C*(RTT/p)^3 * ((4-beta)/beta) )^(1/4)
and, further assuming Poisson arrival, the mean throughput, x, is
x = E[W]/RTT
We note that under these conditions (deterministic single losses),
the value of E[W] is always greater than 0.8 of the maximum window
size ~= reference_run_length.
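For concreteness, the model can be evaluated directly; the parameter
values below (C=0.4, a 50 ms RTT and p=1e-4) are illustrative
assumptions only, not recommendations.

   def cubic_expected_window(c, rtt, p, beta=0.2):
       # E[W] = ( C*(RTT/p)^3 * ((4-beta)/beta) )^(1/4), per the
       # model above.
       return (c * (rtt / p) ** 3 * ((4 - beta) / beta)) ** 0.25

   e_w = cubic_expected_window(c=0.4, rtt=0.050, p=1e-4)
   x = e_w / 0.050    # mean throughput in packets per second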
Appendix B.  Complex Queueing
For many network technologies simple queueing models do not apply:
the network schedules, thins or otherwise alters the timing of ACKs
and data, generally to raise the efficiency of the channel allocation
when confronted with relatively widely spaced small ACKs.  These
efficiency strategies are ubiquitous for half duplex, wireless and
broadcast media.
Altering the ACK stream generally has two consequences: it raises the
effective bottleneck data rate, making slowstart burst at higher
rates (possibly as high as the sender's interface rate) and it
effectively raises the RTT by the average time that the ACKs and data
were delayed.  The first effect can be partially mitigated by
reclocking ACKs once they are beyond the bottleneck on the return
path to the sender, however this further raises the effective RTT.
The most extreme example of this sort of behavior would be a half
duplex channel that is not released as long as the endpoint currently
holding the channel has more traffic (data or ACKs) to send.  Such
environments cause self clocked protocols under full load to revert
to extremely inefficient stop and wait behavior, where they send an
entire window of data as a single burst on the forward path, followed
by the entire window of ACKs on the return path.  It is important to
note that due to self clocking, ill conceived channel allocation
mechanisms can increase the stress on upstream links in a long path:
they cause larger and faster bursts.
If a particular end-to-end path contains a link or device that alters
the ACK stream, then the entire path from the sender up to the
bottleneck must be tested at the burst parameters implied by the ACK
scheduling algorithm.  The most important parameter is the Effective
Bottleneck Data Rate, which is the average rate at which the ACKs
advance snd.una.  Note that thinning the ACKs (relying on the
cumulative nature of seg.ack to permit discarding some ACKs) implies
an effectively infinite bottleneck data rate.
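As a sketch, the Effective Bottleneck Data Rate could be estimated
from a sender-side trace of (timestamp, snd.una) samples.  The trace
format here is an assumption made for illustration; this document
does not specify one.

   def effective_bottleneck_data_rate(trace):
       # trace: list of (time in seconds, snd.una in bytes) samples.
       # Returns the average rate, in bits/s, at which ACKs advance
       # snd.una over the sampled interval.
       (t0, una0), (t1, una1) = trace[0], trace[-1]
       return (una1 - una0) * 8 / (t1 - t0)

   samples = [(0.000, 0), (0.050, 16500), (0.100, 33000)]
   print(effective_bottleneck_data_rate(samples))   # 2640000.0 bits/s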
Holding data or ACKs for channel allocation or other reasons (such as
forward error correction) always raises the effective RTT relative to
the minimum delay for the path.  Therefore it may be necessary to
replace target_RTT in the calculation in Section 5.2 by an
effective_RTT, which includes the target_RTT plus a term to account
for the extra delays introduced by these mechanisms.
Appendix C.  Version Control
This section to be removed prior to publication.

Formatted: Mon Mar 9 14:37:24 PDT 2015
Authors' Addresses

Matt Mathis
Google, Inc
1600 Amphitheater Parkway
Mountain View, California 94043
USA

Email: mattmathis@google.com