IP Performance Working Group                                   M. Mathis
Internet-Draft                                               Google, Inc
Intended status: Experimental                                  A. Morton
Expires: January 1, 2018                                       AT&T Labs
                                                           June 30, 2017

           Model Based Metrics for Bulk Transport Capacity
               draft-ietf-ippm-model-based-metrics-11.txt
Abstract

We introduce a new class of Model Based Metrics designed to assess if a complete Internet path can be expected to meet a predefined Target Transport Performance by applying a suite of IP diagnostic tests to successive subpaths. The subpath-at-a-time tests can be robustly applied to critical infrastructure, such as network interconnections or even individual devices, to accurately detect if any part of the infrastructure will prevent paths traversing it from meeting the Target Transport Performance.

Model Based Metrics rely on peer-reviewed mathematical models to specify a Targeted Suite of IP Diagnostic tests, designed to assess whether common transport protocols can be expected to meet a predetermined Target Transport Performance over an Internet path.

For Bulk Transport Capacity, the IP diagnostics are built using test streams that mimic TCP over the complete path and statistical criteria for evaluating the packet transfer statistics of those streams. The temporal structure of the test stream (bursts, etc.) mimics TCP or another transport protocol carrying bulk data over a long path. However, the test streams are constructed to be independent of the details of the subpath under test, end systems, or applications. Likewise, the success criteria evaluate the packet transfer statistics of the subpath against criteria determined by protocol performance models applied to the Target Transport Performance of the complete path. The success criteria also do not depend on the details of the subpath, end systems, or application.

Model Based Metrics exhibit several important new properties not present in other Bulk Transport Capacity Metrics, including the ability to reason about concatenated or overlapping subpaths. The results are vantage independent, which is critical for supporting independent validation of tests by comparing results from multiple measurement points.

This document provides a framework for designing suites of IP diagnostic tests that are tailored to confirming that infrastructure can meet the predetermined Target Transport Performance. It does not fully specify the IP diagnostic tests needed to assure any specific target performance.
Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on January 1, 2018.
Copyright Notice

Copyright (c) 2017 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
Table of Contents

1. Introduction
   1.1. Version Control
2. Overview
3. Terminology
4. Background
   4.1. TCP properties
   4.2. Diagnostic Approach
   4.3. New requirements relative to RFC 2330
5. Common Models and Parameters
   5.1. Target End-to-end parameters
   5.2. Common Model Calculations
   5.3. Parameter Derating
   5.4. Test Preconditions
6. Generating test streams
   6.1. Mimicking slowstart
   6.2. Constant window pseudo CBR
   6.3. Scanned window pseudo CBR
   6.4. Concurrent or channelized testing
7. Interpreting the Results
   7.1. Test outcomes
   7.2. Statistical criteria for estimating run_length
   7.3. Reordering Tolerance
8. IP Diagnostic Tests
   8.1. Basic Data Rate and Packet Transfer Tests
      8.1.1. Delivery Statistics at Paced Full Data Rate
      8.1.2. Delivery Statistics at Full Data Windowed Rate
      8.1.3. Background Packet Transfer Statistics Tests
   8.2. Standing Queue Tests
      8.2.1. Congestion Avoidance
      8.2.2. Bufferbloat
      8.2.3. Non excessive loss
      8.2.4. Duplex Self Interference
   8.3. Slowstart tests
      8.3.1. Full Window slowstart test
      8.3.2. Slowstart AQM test
   8.4. Sender Rate Burst tests
   8.5. Combined and Implicit Tests
      8.5.1. Sustained Bursts Test
      8.5.2. Passive Measurements
9. An Example
   9.1. Observations about applicability
10. Validation
11. Security Considerations
12. Acknowledgments
13. IANA Considerations
14. References
   14.1. Normative References
   14.2. Informative References
Appendix A. Model Derivations
   A.1. Queueless Reno
Appendix B. The effects of ACK scheduling
Appendix C. Version Control
Authors' Addresses
1. Introduction

Model Based Metrics (MBM) rely on peer-reviewed mathematical models to specify a Targeted Suite of IP Diagnostic tests, designed to assess whether common transport protocols can be expected to meet a predetermined Target Transport Performance over an Internet path. This note describes the modeling framework to derive the test parameters for assessing an Internet path's ability to support a predetermined Bulk Transport Capacity.
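As a concrete sketch of how test parameters are derived, the following follows the draft's common model calculations (Section 5.2); the function names, the header_overhead value, and the example inputs are ours, chosen for illustration only:

```python
import math

def target_window_size(target_rate_bps, target_rtt_s,
                       target_mtu=1500, header_overhead=64):
    # Average number of packets in flight required to sustain the
    # Target Data Rate at the Target RTT: ceiling of
    # rate * RTT / (usable payload per packet).
    payload_bits = (target_mtu - header_overhead) * 8
    return math.ceil(target_rate_bps * target_rtt_s / payload_bits)

def target_run_length(window_size):
    # Reference model: mean packets delivered between losses needed
    # for Reno-style congestion control to sustain the target window.
    return 3 * window_size ** 2

# Example: 10 Mb/s Target Data Rate over a 100 ms Target RTT.
w = target_window_size(10e6, 0.1)
print(w, target_run_length(w))   # prints: 88 23232
```

Every IP diagnostic test in a suite is then parameterized from these two derived quantities rather than from properties of any individual subpath.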
skipping to change at page 4, line 46
IP Diagnostic Suite (TIDS) of IP tests, solves some intrinsic problems with using TCP or other throughput maximizing protocols for measurement. In particular, all throughput maximizing protocols (and TCP congestion control in particular) cause some level of congestion in order to detect when they have reached the available capacity limitation of the network. This self-inflicted congestion obscures the network properties of interest and introduces non-linear dynamic equilibrium behaviors that make any resulting measurements useless as metrics because they have no predictive value for conditions or paths different than that of the measurement itself. In order to prevent these effects it is necessary to avoid the effects of TCP congestion control in the measurement method. These issues are discussed at length in Section 4. Readers who are unfamiliar with basic properties of TCP and TCP-like congestion control may find it easier to start at Section 4 or Section 4.1.
A Targeted IP Diagnostic Suite does not have such difficulties. IP diagnostics can be constructed such that they make strong statistical statements about path properties that are independent of the measurement details, such as vantage and choice of measurement points.
1.1. Version Control

RFC Editor: Please remove this entire subsection prior to publication.

RFC Editor: The reference to draft-ietf-tcpm-rack is to attribute an idea. This document should not block waiting for the completion of that one.

Please send comments about this draft to ippm@ietf.org. See http://goo.gl/02tkD for more information including: interim drafts, an up to date todo list and information on contributing.

Formatted: Thu Jun 29 19:08:08 PDT 2017

Changes since -10 draft:

o  A few more nits from various sources.
o  (From IETF LC review comments.)
o  David Mandelberg: design metrics to prevent DDOS.
o  From Robert Sparks:
   *  Remove all legacy 2119 language.
   *  Fixed Xr notation inconsistency.
   *  Adjusted abstract: tests are only partially specified.
   *  Avoid rather than suppress the effects of congestion control.
   *  Removed the unnecessary, excessively abstract and unclear
      thought about IP vs TCP measurements.
   *  Changed "thwarted" to "not fulfilled".
   *  Qualified language about burst models.
   *  Replaced "infinitesimal" with other language.
   *  Added citations for the reordering strawman.
   *  Pointed out that pseudo CBR tests depend on self clock.
   *  Fixed some run-on sentences.
o  Updated language to reflect RFC 7567, AQM recommendations.
o  Suggestion from Merry Mou (MIT).

Changes since -09 draft:

o  Five last minute editing nits.

Changes since -08 draft:

o  Language, spelling and usage nits.
o  Expanded the abstract to describe the models.
o  Removed superfluous standards-like language.
skipping to change at page 9, line 39
   ----V----------------------------------V---            |
       |                                  |               |
       V                                  V               V
   fail/inconclusive          pass/fail/inconclusive
   (traffic generation status)     (test result)

                   Overall Modeling Framework

                           Figure 1
Mathematical TCP models are used to determine Traffic parameters and subsequently to design traffic patterns that mimic TCP or other transport protocols delivering bulk data and operating at the Target Data Rate, MTU and RTT over a full range of conditions, including flows that are bursty at multiple time scales. The traffic patterns are generated based on the three Target parameters of the complete path and independent of the properties of individual subpaths, using the techniques described in Section 6. As much as possible the test streams are generated deterministically (precomputed) to minimize the extent to which test methodology, measurement points, measurement vantage or path partitioning affect the details of the measurement traffic.
Section 7 describes packet transfer statistics and methods to test them against the statistical criteria provided by the mathematical models. Since the statistical criteria typically apply to the complete path (a composition of subpaths) [RFC6049], in situ testing requires that the end-to-end statistical criteria be apportioned as separate criteria for each subpath. Subpaths that are expected to be bottlenecks would then be permitted to contribute a larger fraction of the end-to-end packet loss budget. In compensation, subpaths that are not expected to exhibit bottlenecks must be constrained to contribute less packet loss. Thus the statistical criteria for each subpath in each test of a TIDS are an apportioned share of the end-to-end statistical criteria for the complete path, which was determined by the mathematical model.
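The apportionment step can be sketched as follows. This is a minimal illustration: the proportional-weight policy and all subpath names and values are our assumptions, not something mandated by the framework.

```python
def apportion_loss_budget(e2e_loss_rate, weights):
    # Split an end-to-end packet loss budget across subpaths.
    # For small loss rates the end-to-end rate composes roughly
    # additively across subpaths (cf. RFC 6049), so each subpath
    # receives a share proportional to its weight; expected
    # bottlenecks get larger weights and hence a larger share.
    total = sum(weights.values())
    return {name: e2e_loss_rate * w / total
            for name, w in weights.items()}

# End-to-end budget of one loss per 23232 packets, with the access
# subpath expected to be the bottleneck (illustrative weights).
budget = apportion_loss_budget(1 / 23232,
                               {"access": 8, "metro": 1, "backbone": 1})
```

Each subpath's test would then use its own entry of `budget` as the loss criterion, while the shares still sum to the end-to-end budget.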
Section 8 describes the suite of individual tests needed to verify all of the required IP delivery properties. A subpath passes if and only if all of the individual IP diagnostic tests pass. Any subpath that fails any test indicates that some users are likely to fail to attain their Target Transport Performance under some conditions. In addition to passing or failing, a test can be deemed to be inconclusive for a number of reasons including: the precomputed traffic pattern was not accurately generated; the measurement results were not statistically significant; and others such as failing to
skipping to change at page 10, line 43
can be used to address difficult measurement situations, such as confirming that inter-carrier exchanges have sufficient performance and capacity to deliver HD video between ISPs.

Since there is some uncertainty in the modeling process, Section 10 describes a validation procedure to diagnose and minimize false positive and false negative results.
3. Terminology

Terms containing underscores (rather than spaces) appear in equations and typically have algorithmic definitions.

General Terminology:

Target: A general term for any parameter specified by or derived from the user's application or transport performance requirements.

Target Transport Performance: Application or transport performance target values for the complete path. For Bulk Transport Capacity defined in this note, the Target Transport Performance includes the
skipping to change at page 18, line 23
These properties are a consequence of the dynamic equilibrium behavior intrinsic to how all throughput maximizing protocols interact with the Internet. These protocols rely on control systems based on estimated network metrics to regulate the quantity of data to send into the network. The packet sending characteristics in turn alter the network properties estimated by the control system metrics, such that there are circular dependencies between every transmission characteristic and every estimated metric. Since some of these dependencies are nonlinear, the entire system is nonlinear, and any change anywhere causes a difficult to predict response in network metrics. As a consequence Bulk Transport Capacity metrics have not fulfilled the analytic framework envisioned in [RFC2330].

Model Based Metrics overcome these problems by making the measurement system open loop: the packet transfer statistics (akin to the network estimators) do not affect the traffic or traffic patterns (bursts), which are computed on the basis of the Target Transport Performance. A path or subpath meeting the Target Transport Performance requirements would exhibit packet transfer statistics and estimated metrics that would not cause the control system to slow the traffic below the Target Data Rate.
skipping to change at page 20, line 9
fraction of an RTT, many TCP implementations catch up to their earlier window size by sending a burst of data at the full sender interface rate. To fill a network with a realistic application, the network has to be able to tolerate sender interface rate bursts large enough to restore the prior window following application pauses.

Although the sender interface rate bursts are typically smaller than the last burst of a slowstart, they are at a higher IP rate, so they potentially exercise queues at arbitrary points along the front path from the data sender up to and including the queue at the dominant bottleneck. It is known that these bursts can hurt network performance, especially in conjunction with other queue pressure; however, we are not aware of any models for how frequent sender rate bursts the network should be able to tolerate at various burst sizes.
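The queue pressure created by such a burst can be estimated with a back-of-the-envelope calculation. This is a sketch only; all rates and sizes below are illustrative, not values prescribed by this framework.

```python
def burst_backlog(burst_packets, sender_rate_bps, bottleneck_rate_bps):
    # Peak queue backlog (in packets) that a burst sent at full
    # sender interface rate leaves at the dominant bottleneck:
    # packets arrive at the sender rate while the bottleneck drains
    # at its own, slower rate.
    assert sender_rate_bps > bottleneck_rate_bps
    return burst_packets * (1 - bottleneck_rate_bps / sender_rate_bps)

# A 64-packet burst from a 10 Gb/s sender interface into a 1 Gb/s
# bottleneck leaves roughly 58 packets queued at the bottleneck.
print(burst_backlog(64, 10e9, 1e9))
```

The faster the sender interface relative to the bottleneck, the closer the backlog approaches the full burst size, which is why interface rate bursts can stress queues far more than their byte count alone suggests.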
In conclusion, to verify that a path can meet a Target Transport Performance, it is necessary to independently confirm that the path can tolerate bursts at the scales that can be caused by the above mechanisms. Three cases are believed to be sufficient:

o  Two level slowstart bursts sufficient to get connections started
   properly.
o  Ubiquitous sender interface rate bursts caused by efficiency
   algorithms. We assume 4 packet bursts to be the most common case,
Since some aspects of the models are very conservative, the MBM
framework permits some latitude in derating test parameters. Rather
than trying to formalize more complicated models we permit some test
parameters to be relaxed as long as they meet some additional
procedural constraints:

o The FS-TIDS must document and justify the actual method used to
compute the derated metric parameters.
o The validation procedures described in Section 10 must be used to
demonstrate the feasibility of meeting the Target Transport
Performance with infrastructure that just barely passes the
derated tests.
o The validation process for a FS-TIDS itself must be documented in
such a way that other researchers can duplicate the validation
experiments.

Except as noted, all tests below assume no derating. Tests where
there is not currently a well established model for the required
parameters explicitly include derating as a way to indicate
flexibility in the parameters.
6.2. Constant window pseudo CBR

Implement pseudo constant bit rate by running a standard self clocked
protocol such as TCP with a fixed window size. If that window size
is test_window, the data rate will be slightly above the target_rate.
Since the test_window is constrained to be an integer number of
packets, for small RTTs or low data rates there may not be
sufficiently precise control over the data rate. Rounding the
test_window up (as defined above) is likely to result in data rates
that are higher than the target rate, but reducing the window by one
packet may result in data rates that are too small. Also cross
traffic potentially raises the RTT, implicitly reducing the rate.
Cross traffic that raises the RTT nearly always makes the test more
strenuous (more demanding for the network path).

Note that Constant window pseudo CBR (and Scanned window pseudo CBR
in the next section) both rely on a self clock which is at least
partially derived from the properties of the subnet under test. This
introduces the possibility that the subnet under test exhibits
behaviors such as extreme RTT fluctuations that prevent these
algorithms from accurately controlling data rates.

A FS-TIDS specifying a constant window CBR test must explicitly
indicate under what conditions errors in the data rate cause tests to
be inconclusive. Conventional paced measurement traffic may be more
appropriate for these environments.
6.3. Scanned window pseudo CBR

Scanned window pseudo CBR is similar to the constant window CBR
described above, except the window is scanned across a range of sizes
designed to include two key events, the onset of queuing and the
onset of packet loss or ECN CE marks. The window is scanned by
incrementing it by one packet every 2*target_window_size delivered
packets. This mimics the additive increase phase of standard Reno
TCP congestion avoidance when delayed ACKs are in effect. Normally
There are a number of reasons to want to specify performance in terms
of multiple concurrent flows, however this approach is not
recommended for data rates below several megabits per second, which
can be attained with run lengths under 10000 packets on many paths.
Since the required run length goes as the square of the data rate, at
higher rates the run lengths can be unreasonably large, and multiple
flows might be the only feasible approach.
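The quadratic scaling can be illustrated with the Reno-derived
reference model used earlier in this document, target_run_length =
3*(target_window_size)^2. The 50 ms RTT and 1500 byte MTU here are
illustrative assumptions.

```python
import math

def target_window_size(rate_bps, rtt_s, mtu_bytes=1500):
    # Window (in packets) needed to sustain rate_bps over rtt_s.
    return math.ceil(rate_bps * rtt_s / (mtu_bytes * 8))

def target_run_length(rate_bps, rtt_s, mtu_bytes=1500):
    # Reference model from earlier in this document:
    # target_run_length = 3 * (target_window_size)^2.
    w = target_window_size(rate_bps, rtt_s, mtu_bytes)
    return 3 * w * w

# At a fixed RTT the window grows linearly with the rate, so the
# required run length grows roughly as the square of the rate.
low = target_run_length(2.5e6, 0.05)   # 363 packets
high = target_run_length(10e6, 0.05)   # roughly 15x larger for 4x the rate
```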
If multiple flows are deemed necessary to meet aggregate performance
targets then this must be stated in both the design of the TIDS and
in any claims about network performance. The IP diagnostic tests
must be performed concurrently with the specified number of
connections. For the tests that use bursty test streams, the bursts
should be synchronized across streams unless there is a priori
knowledge that the applications have some explicit mechanism to
stagger their own bursts. In the absence of an explicit mechanism
to stagger bursts many network and application artifacts will
sometimes implicitly synchronize bursts. A test that does not
control burst synchronization may be prone to false pass results for
some applications.
7. Interpreting the Results
may have been caused by some uncontrolled feedback from the network.

Note that procedures that attempt to search the target parameter
space to find the limits on some parameter such as target_data_rate
are at risk of breaking the location independent properties of Model
Based Metrics, if any part of the boundary between passing and
inconclusive or failing results is sensitive to RTT (which is
normally the case). For example the maximum data rate for a marginal
link (e.g. exhibiting excess errors) is likely to be sensitive to
the test_path_RTT. The maximum observed data rate over the test path
has very little value for predicting the maximum rate over a
different path.
One of the goals for evolving TIDS designs will be to keep sharpening
the distinction between inconclusive, passing and failing tests. The
criteria for passing, failing and inconclusive tests must be
explicitly stated for every test in the TIDS or FS-TIDS.

One of the goals of evolving the testing process, procedures, tools
and measurement point selection should be to minimize the number of
inconclusive tests.
It may be useful to keep raw packet transfer statistics and ancillary
metrics [RFC3148] for deeper study of the behavior of the network
path and to measure the tools themselves. Raw packet transfer
statistics can help to drive tool evolution. Under some conditions
and we can stop sending packets if measurements support rejecting H0
with the specified Type II error = beta (= 0.05 for example), thus
preferring the alternate hypothesis H1.

H0 and H1 constitute the Success and Failure outcomes described
elsewhere in the memo, and while the ongoing measurements do not
support either hypothesis the current status of measurements is
inconclusive.
The problem above is formulated to match the Sequential Probability
Ratio Test (SPRT) [Wald45] and [Montgomery90]. Note that as
originally framed the events under consideration were all
manufacturing defects. In networking, ECN CE marks and lost packets
are not defects but signals, indicating that the transport protocol
should slow down.
The Sequential Probability Ratio Test also starts with a pair of
hypotheses specified as above:

H0: p0 = one defect in target_run_length
H1: p1 = one defect in target_run_length/4
As packets are sent and measurements collected, the tester evaluates
the cumulative defect count against two boundaries representing H0
Acceptance or Rejection (and acceptance of H1):

Acceptance line: Xa = -h1 + s*n
Rejection line: Xr = h2 + s*n

where n increases linearly for each packet sent and

h1 = { log((1-alpha)/beta) }/k
h2 = { log((1-beta)/alpha) }/k
k = log{ (p1(1-p0)) / (p0(1-p1)) }
s = [ log{ (1-p0)/(1-p1) } ]/k

for p0 and p1 as defined in the null and alternative Hypotheses
statements above, and alpha and beta as the Type I and Type II
errors.

The SPRT specifies simple stopping rules:

o Xa < defect_count(n) < Xr: continue testing
o defect_count(n) <= Xa: Accept H0
o defect_count(n) >= Xr: Accept H1
The calculations above are implemented in the R-tool for Statistical
Analysis [Rtool], in the add-on package for Cross-Validation via
Sequential Testing (CVST) [CVST].

Using the equations above, we can calculate the minimum number of
packets (n) needed to accept H0 when x defects are observed. For
example, when x = 0:

Xa = 0 = -h1 + s*n
and n = h1 / s
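The boundary computations above translate directly into code. This
sketch uses the document's example of target_run_length = 363 with
alpha = beta = 0.05; the helper names are our own.

```python
from math import ceil, log

def sprt_parameters(p0, p1, alpha=0.05, beta=0.05):
    # h1, h2 and s for the acceptance/rejection lines defined above.
    k = log((p1 * (1 - p0)) / (p0 * (1 - p1)))
    h1 = log((1 - alpha) / beta) / k
    h2 = log((1 - beta) / alpha) / k
    s = log((1 - p0) / (1 - p1)) / k
    return h1, h2, s

def sprt_status(defect_count, n, h1, h2, s):
    # Apply the stopping rules to the cumulative defect count after
    # n packets have been sent.
    if defect_count <= -h1 + s * n:
        return "accept H0"   # run length is long enough: pass
    if defect_count >= h2 + s * n:
        return "accept H1"   # too many defects: fail
    return "continue"        # measurements still inconclusive

# H0: one defect in 363 packets; H1: one defect in 363/4 packets.
h1, h2, s = sprt_parameters(p0=1 / 363, p1=4 / 363)
n_min = ceil(h1 / s)   # packets needed to accept H0 when x = 0
```

Note how quickly the rejection boundary can be crossed: a handful of
defects early in the test is enough to accept H1.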
Note that the derivations in [Wald45] and [Montgomery90] differ.
Montgomery's simplified derivation of SPRT may assume a Bernoulli
process, where the packet loss probabilities are independent and
identically distributed, making the SPRT more accessible. Wald's
seminal paper showed that this assumption is not necessary. It helps
to remember that the goal of SPRT is not to estimate the value of the
packet loss rate, but only whether or not the packet loss ratio is
likely low enough (when we accept the H0 null hypothesis), yielding
success; or too high (when we accept the H1 alternate hypothesis),
yielding failure.
7.3. Reordering Tolerance
All tests must be instrumented for packet level reordering [RFC4737].
However, there is no consensus for how much reordering should be
acceptable. Over the last two decades the general trend has been to
make protocols and applications more tolerant to reordering (see for
example [RFC4015]), in response to the gradual increase in reordering
in the network. This increase has been due to the deployment of
technologies such as multithreaded routing lookups and Equal Cost
MultiPath (ECMP) routing. These techniques increase parallelism in
the network and are critical to enabling overall Internet growth to
exceed Moore's Law.
Note that transport retransmission strategies can trade off
reordering tolerance vs how quickly they can repair losses vs
overhead from spurious retransmissions. In advance of new
retransmission strategies we propose the following strawman:
Transport protocols should be able to adapt to reordering as long as
the reordering extent is not more than the maximum of one quarter
window or 1 ms, whichever is larger. (These values come from
experience prototyping Early Retransmit [RFC5827] and related
algorithms. They agree with the values being proposed for "RACK: a
time-based fast loss detection algorithm" [I-D.ietf-tcpm-rack].)
Within this limit on reorder extent, there should be no bound on
reordering density.
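One way to express the strawman limit numerically is to convert the
1 ms term into packets at the target rate. The function name, the
MTU, and this particular conversion are our own assumptions, not part
of the strawman itself.

```python
def reordering_extent_limit(target_window_size, target_rate_bps,
                            mtu_bytes=1500):
    # Strawman limit on reordering extent, in packets: the larger of
    # one quarter of the window or the packets sent in 1 ms.
    quarter_window = target_window_size / 4.0
    pkts_per_ms = target_rate_bps * 1e-3 / (mtu_bytes * 8)
    return max(quarter_window, pkts_per_ms)

# For the 2.5 Mb/s example the quarter window term dominates; on a
# short fat path the 1 ms term dominates instead.
slow = reordering_extent_limit(11, 2.5e6)   # 2.75 packets
fast = reordering_extent_limit(8, 10e9)     # hundreds of packets
```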
By implication, reordering which is less than these bounds should not
be treated as a network impairment. However [RFC4737] still applies:
reordering should be instrumented and the maximum reordering that can
be properly characterized by the test (because of the bound on
history buffers) should be recorded with the measurement results.
Reordering tolerance and diagnostic limitations, such as the size of
the history buffer used to diagnose packets that are way out-of-
order, must be specified in a FS-TIDS.
8. IP Diagnostic Tests
The IP diagnostic tests below are organized by traffic pattern: basic
data rate and packet transfer statistics, standing queues, slowstart
bursts, and sender rate bursts. We also introduce some combined
tests which are more efficient when networks are expected to pass,
but conflate diagnostic signatures when they fail.

There are a number of test details which are not fully defined here.
transfer statistics.

8.2. Standing Queue Tests
These engineering tests confirm that the bottleneck is well behaved
across the onset of packet loss, which typically follows after the
onset of queuing. Well behaved generally means lossless for
transient queues, but once the queue has been sustained for a
sufficient period of time (or reaches a sufficient queue depth) there
should be a small number of losses or ECN CE marks to signal to the
transport protocol that it should reduce its window or data rate.
Losses that are too early can prevent the transport from averaging at
the target_data_rate. Losses that are too late indicate that the
queue might not have an appropriate AQM [RFC7567] and is, as a
consequence, subject to bufferbloat [wikiBloat]. Queues without AQM
have the potential to inflict excess delays on all flows sharing the
bottleneck. Excess losses (more than half of the window) at the
onset of loss make loss recovery problematic for the transport
protocol. Non-linear, erratic or excessive RTT increases suggest
poor interactions between the channel acquisition algorithms and the
transport self clock. All of the tests in this section use the same
basic scanning algorithm, described here, but score the link or
subpath on the basis of how well it avoids each of these problems.
Some network technologies rely on virtual queues or other techniques
to meter traffic without adding any queuing delay, in which case the
data rate will vary with the window size all the way up to the onset
of load induced packet loss or ECN CE marks. For these technologies,
the discussion of queuing in Section 6.3 does not apply, but it is
still necessary to confirm that the onset of losses or ECN CE marks
be at an appropriate point and progressive. If the network
bottleneck does not introduce significant queuing delay, modify the
procedure described in Section 6.3 to start the scan at a window
equal to or slightly smaller than the test_window.
Use the procedure in Section 6.3 to sweep the window across the onset
of queuing and the onset of loss. The tests below all assume that
the scan emulates standard additive increase and delayed ACK by
incrementing the window by one packet for every 2*target_window_size
packets delivered. A scan can typically be divided into three
regions: below the onset of queuing, a standing queue, and at or
beyond the onset of loss.
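The three regions can be sketched against an idealized bottleneck
with a fixed pipe size (BDP) and queue limit, both in packets. This
classification is our own simplification of the scan described above.

```python
def scan_region(window_pkts, pipe_size_pkts, queue_limit_pkts):
    # Classify one step of the window scan against an idealized
    # bottleneck: the pipe fills first, then the queue, then losses.
    if window_pkts <= pipe_size_pkts:
        return "below onset of queuing"   # RTT constant, rate grows
    if window_pkts <= pipe_size_pkts + queue_limit_pkts:
        return "standing queue"           # rate constant, RTT grows
    return "at or beyond onset of loss"   # queue overflows

# A scan over a 10 packet pipe with an 8 packet queue:
regions = [scan_region(w, 10, 8) for w in range(1, 25)]
```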
Below the onset of queuing the RTT is typically fairly constant, and
intended. TCP often stumbles badly if more than a small fraction of
the packets are dropped in one RTT. Many TCP implementations will
require a timeout and slowstart to recover their self clock. Even if
they can recover from the massive losses the sudden change in
available capacity at the bottleneck wastes serving and front path
capacity until TCP can adapt to the new rate [Policing].
8.2.4. Duplex Self Interference
This engineering test confirms a bound on the interactions between
the forward data path and the ACK return path when they share a half
duplex link.

Some historical half duplex technologies had the property that each
direction held the channel until it completely drained its queue.
When a self clocked transport protocol, such as TCP, has data and
ACKs passing in opposite directions through such a link, the behavior
often reverts to stop-and-wait. Each additional packet added to the
window raises the observed RTT by two packet times, once as the
additional packet passes through the data path, and once for the
additional delay incurred by the ACK waiting on the return path.
The duplex self interference test fails if the RTT rises by more than
a fixed bound above the expected queuing time computed from the
excess window divided by the subpath IP Capacity. This bound must be
smaller than target_RTT/2 to avoid reverting to stop and wait
behavior. (e.g. Data packets and ACKs both have to be released at
least twice per RTT.)
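The pass criterion above can be sketched as follows. The parameter
names and numeric values are our own; the bound itself is a TIDS
choice, constrained to be below target_RTT/2.

```python
def duplex_test_passes(rtt_rise_s, excess_window_pkts,
                       subpath_capacity_bps, bound_s, target_rtt_s,
                       mtu_bytes=1500):
    # The chosen bound must stay below target_RTT/2, otherwise the
    # link could still revert to stop-and-wait behavior.
    assert bound_s < target_rtt_s / 2
    # Expected queuing time: excess window divided by IP capacity.
    expected_queue_s = (excess_window_pkts * mtu_bytes * 8
                        / subpath_capacity_bps)
    return rtt_rise_s <= expected_queue_s + bound_s

# 5 excess packets at 10 Mb/s imply 6 ms of expected queuing; a
# 12 ms RTT rise is within a 10 ms bound, a 50 ms rise is not.
ok = duplex_test_passes(0.012, 5, 10e6, 0.010, 0.050)
bad = duplex_test_passes(0.050, 5, 10e6, 0.010, 0.050)
```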
8.3. Slowstart tests
interface rate bursts have a cost to the network that has to be
balanced against other costs in the servers themselves. For example
TCP Segmentation Offload (TSO) reduces server CPU in exchange for
larger network bursts, which increase the stress on network buffer
memory. Some newer TCP implementations can pace traffic at scale
[TSO_pacing][TSO_fq_pacing]. It remains to be determined if and how
quickly these changes will be deployed.

There is not yet theory to unify these costs or to provide a
framework for trying to optimize global efficiency. We do not yet
have a model for the extent to which server rate bursts should be
tolerated by the network. Some bursts must be tolerated by the
network, but it is probably unreasonable to expect the network to be
able to efficiently deliver all data as a series of bursts.
For this reason, this is the only test for which we encourage
derating. A TIDS could include a table of pairs of derating
parameters: burst sizes and how much each burst size is permitted to
reduce the run length, relative to the target_run_length.
8.5. Combined and Implicit Tests
protocol implementations from meeting the specified Target Transport
Performance. This correctness criterion is potentially difficult to
prove, because it implicitly requires validating a TIDS against all
possible paths and subpaths. The procedures described here are still
experimental.
We suggest two approaches, both of which should be applied: first,
publish a fully open description of the TIDS, including what
assumptions were used and how it was derived, such that the research
community can evaluate the design decisions, test them and comment on
their applicability; and second, demonstrate that applications do
meet the Target Transport Performance when running over a network
testbed which has the tightest possible constraints that still allow
the tests in the TIDS to pass.

This procedure resembles an epsilon-delta proof in calculus.
Construct a test network such that all of the individual tests of the
TIDS pass by only small (infinitesimal) margins, and demonstrate that
a variety of authentic applications running over real TCP
implementations (or other protocols as appropriate) meet the Target
Transport Performance over such a network. The workloads should
include multiple types of streaming media and transaction oriented
short flows (e.g. synthetic web traffic).
For example, for the HD streaming video TIDS described in Section 9,
the IP capacity should be exactly the header_overhead above 2.5 Mb/s,
the per packet random background loss ratio should be 1/363, for a
run length of 363 packets, the bottleneck queue should be 11 packets
and the front path should have just enough buffering to withstand 11
packet interface rate bursts. We want every one of the TIDS tests to
fail if we slightly increase the relevant test parameter, so for
example sending a 12 packet burst should cause excess (possibly
deterministic) packet drops at the dominant queue at the bottleneck.
This network has the tightest possible constraints that can be
expected to pass the TIDS, yet it should be possible for a real
application using a stock TCP implementation in the vendor's default
configuration to attain 2.5 Mb/s over a 50 ms path.
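The example parameters above follow from the reference model defined
earlier in this document: the window needed to sustain the target
rate over the target RTT, and the run length required by the TCP
loss model. A short sketch of the arithmetic (the function name is
illustrative; a 1500 byte MTU is assumed, as in this example):

```python
import math

def tids_parameters(target_rate_bps, target_rtt_s, mtu_bytes=1500):
    """Derive example TIDS parameters from a Target Transport Performance,
    following the reference model described earlier in this document."""
    # Window (in packets) needed to sustain target_rate over target_RTT.
    target_window_size = math.ceil(
        target_rate_bps * target_rtt_s / (mtu_bytes * 8))
    # Mean packets between losses required by the TCP congestion model.
    target_run_length = 3 * target_window_size ** 2
    return target_window_size, target_run_length

# HD streaming video example: 2.5 Mb/s over a 50 ms path.
window, run_length = tids_parameters(2.5e6, 0.050)
print(window, run_length)  # 11 363
```

This reproduces the 11 packet bottleneck queue and the 1/363 loss
ratio (run length of 363 packets) quoted above.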
The most difficult part of setting up such a testbed is arranging for
it to have the tightest possible constraints that still allow it to
pass the individual tests. Two approaches are suggested:
constraining (configuring) the network devices not to use all
available resources (e.g. by limiting available buffer space or data
rate); and pre-loading subpaths with cross traffic. Note that it is
important that a single tightly constrained environment just barely
passes all tests, otherwise there is a chance that TCP can exploit
extra latitude in some parameters (such as data rate) to partially
compensate for constraints in other parameters (queue space, or
vice-versa).
To the extent that a TIDS is used to inform public dialog it should
be fully publicly documented, including the details of the tests,
what assumptions were used and how it was derived. All of the
details of the validation experiment should also be published with
sufficient detail for the experiments to be replicated by other
researchers. All components should either be open source or fully
described proprietary implementations that are available to the
research community.
Much of the acrimony in the Net Neutrality debate is due to the
historical lack of any effective vantage independent tools to
characterize network performance. Traditional methods for measuring
Bulk Transport Capacity are sensitive to RTT and as a consequence
often yield very different results when run local to an ISP or
interconnect and when run over a customer's complete path. Neither
the ISP nor the customer can repeat the other's measurements, leading
to high levels of distrust and acrimony. Model Based Metrics are
expected to greatly improve this situation.
Note that in situ measurements sometimes require sending synthetic
measurement traffic between arbitrary locations in the network, and
as such are potentially attractive platforms for launching DDOS
attacks. All active measurement tools and protocols must be designed
to minimize the opportunities for these misuses. See the discussion
in section 7 of [RFC7594].
This document only describes a framework for designing Fully
Specified Targeted IP Diagnostic Suites. Each FS-TIDS must include
its own security section.
12. Acknowledgments
Ganga Maguluri suggested the statistical test for measuring loss
probability in the target run length. Alex Gilgur and Merry Mou for
helping with the statistics.
Meredith Whittaker for improving the clarity of the communications.

Ruediger Geib provided feedback which greatly improved the document.
This work was inspired by Measurement Lab: open tools running on an
open platform, using open tools to collect open data. See
http://www.measurementlab.net/
13. IANA Considerations

This document has no actions for IANA.

14. References
14.1. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119, March 1997.
14.2. Informative References
[RFC0863] Postel, J., "Discard Protocol", STD 21, RFC 863, May 1983.

[RFC0864] Postel, J., "Character Generator Protocol", STD 22,
RFC 864, May 1983.

[RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
"Framework for IP Performance Metrics", RFC 2330, May
1998.
[RFC2861] Handley, M., Padhye, J., and S. Floyd, "TCP Congestion
Window Validation", RFC 2861, June 2000.
[RFC4898] Mathis, M., Heffner, J., and R. Raghunarayan, "TCP
Extended Statistics MIB", RFC 4898, May 2007.

[RFC5136] Chimento, P. and J. Ishac, "Defining Network Capacity",
RFC 5136, February 2008.

[RFC5681] Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
Control", RFC 5681, September 2009.
[RFC5827] Allman, M., Avrachenkov, K., Ayesta, U., Blanton, J., and
P. Hurtig, "Early Retransmit for TCP and Stream Control
Transmission Protocol (SCTP)", RFC 5827,
DOI 10.17487/RFC5827, May 2010,
<http://www.rfc-editor.org/info/rfc5827>.
[RFC5835] Morton, A. and S. Van den Berghe, "Framework for Metric
Composition", RFC 5835, April 2010.

[RFC6049] Morton, A. and E. Stephan, "Spatial Composition of
Metrics", RFC 6049, January 2011.

[RFC6576] Geib, R., Ed., Morton, A., Fardid, R., and A. Steinmitz,
"IP Performance Metrics (IPPM) Standard Advancement
Testing", BCP 176, RFC 6576, DOI 10.17487/RFC6576, March
2012, <http://www.rfc-editor.org/info/rfc6576>.
[RFC7398] Bagnulo, M., Burbridge, T., Crawford, S., Eardley, P., and
A. Morton, "A Reference Path and Measurement Points for
Large-Scale Measurement of Broadband Performance",
RFC 7398, February 2015.

[RFC7567] Baker, F., Ed. and G. Fairhurst, Ed., "IETF
Recommendations Regarding Active Queue Management",
BCP 197, RFC 7567, DOI 10.17487/RFC7567, July 2015,
<http://www.rfc-editor.org/info/rfc7567>.
[RFC7594] Eardley, P., Morton, A., Bagnulo, M., Burbridge, T.,
Aitken, P., and A. Akhter, "A Framework for Large-Scale
Measurement of Broadband Performance (LMAP)", RFC 7594,
DOI 10.17487/RFC7594, September 2015,
<http://www.rfc-editor.org/info/rfc7594>.
[RFC7661] Fairhurst, G., Sathiaseelan, A., and R. Secchi, "Updating
TCP to Support Rate-Limited Traffic", RFC 7661,
DOI 10.17487/RFC7661, October 2015,
<http://www.rfc-editor.org/info/rfc7661>.

[RFC7680] Almes, G., Kalidindi, S., Zekauskas, M., and A. Morton,
Ed., "A One-Way Loss Metric for IP Performance Metrics
(IPPM)", STD 82, RFC 7680, DOI 10.17487/RFC7680, January
2016, <http://www.rfc-editor.org/info/rfc7680>.

[RFC7799] Morton, A., "Active and Passive Metrics and Methods (with
Hybrid Types In-Between)", RFC 7799, DOI 10.17487/RFC7799,
May 2016, <http://www.rfc-editor.org/info/rfc7799>.
[I-D.ietf-tcpm-rack]
Cheng, Y., Cardwell, N., and N. Dukkipati, "RACK: a time-
based fast loss detection algorithm for TCP", draft-ietf-
tcpm-rack-02 (work in progress), March 2017.
[MSMO97] Mathis, M., Semke, J., Mahdavi, J., and T. Ott, "The
Macroscopic Behavior of the TCP Congestion Avoidance
Algorithm", Computer Communications Review, volume 27,
number 3, July 1997.
[WPING] Mathis, M., "Windowed Ping: An IP Level Performance
Diagnostic", INET 94, June 1994.
[mpingSource]
Fan, X., Mathis, M., and D. Hamon, "Git Repository for
[Pathdiag]
Mathis, M., Heffner, J., O'Neil, P., and P. Siemsen,
"Pathdiag: Automated TCP Diagnosis", Passive and Active
Measurement, June 2008.
[iPerf] Wikipedia Contributors, "iPerf", Wikipedia, The Free
Encyclopedia, cited March 2015,
<http://en.wikipedia.org/w/
index.php?title=Iperf&oldid=649720021>.
[Wald45] Wald, A., "Sequential Tests of Statistical Hypotheses",
The Annals of Mathematical Statistics, Vol. 16, No. 2,
pp. 117-186, Institute of Mathematical Statistics,
<http://www.jstor.org/stable/2235829>, June 1945.

[Montgomery90]
Montgomery, D., "Introduction to Statistical Quality
Control - 2nd ed.", ISBN 0-471-51988-X, 1990.
[Rtool] R Development Core Team, "R: A language and environment
for statistical computing. R Foundation for Statistical
Computing, Vienna, Austria. ISBN 3-900051-07-0, URL
http://www.R-project.org/", 2011.
[CVST] Krueger, T. and M. Braun, "R package: Fast Cross-
Validation via Sequential Testing", version 0.1,
November 2012.