IP Performance Working Group                                   M. Mathis
Internet-Draft                                               Google, Inc
Intended status: Experimental                                  A. Morton
Expires: January 9, 2017                                       AT&T Labs
                                                             July 8, 2016

           Model Based Metrics for Bulk Transport Capacity
              draft-ietf-ippm-model-based-metrics-08.txt
Abstract
   We introduce a new class of Model Based Metrics designed to assess
   if a complete Internet path can be expected to meet a predefined
   Target Transport Performance by applying a suite of IP diagnostic
   tests to successive subpaths.  The subpath-at-a-time tests can be
   robustly applied to key infrastructure, such as interconnects or
   even individual devices, to accurately detect if any part of the
   infrastructure will prevent paths traversing it from meeting the
   Target Transport Performance.
   Model Based Metrics exhibit several important new properties not
   present in other Bulk Transport Capacity Metrics, including the
   ability to reason about concatenated or overlapping subpaths.  The
   results are vantage independent, which is critical for supporting
   independent validation of tests by comparing results from multiple
   measurement points.
   This document does not define the IP diagnostic tests, but provides
   a framework for designing suites of IP diagnostic tests that are
   tailored to confirming that infrastructure can meet the
   predetermined Target Transport Performance.
Status of This Memo
   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   This Internet-Draft will expire on January 9, 2017.
Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Table of Contents
   1.  Introduction
     1.1.  Version Control
   2.  Overview
   3.  Terminology
   4.  Background
     4.1.  TCP properties
     4.2.  Diagnostic Approach
     4.3.  New requirements relative to RFC 2330
   5.  Common Models and Parameters
     5.1.  Target End-to-end parameters
     5.2.  Common Model Calculations
     5.3.  Parameter Derating
     5.4.  Test Preconditions
   6.  Generating test streams
     6.1.  Mimicking slowstart
     6.2.  Constant window pseudo CBR
     6.3.  Scanned window pseudo CBR
     6.4.  Concurrent or channelized testing
   7.  Interpreting the Results
     7.1.  Test outcomes
     7.2.  Statistical criteria for estimating run_length
     7.3.  Reordering Tolerance
   8.  IP Diagnostic Tests
     8.1.  Basic Data Rate and Packet Transfer Tests
       8.1.1.  Delivery Statistics at Paced Full Data Rate
       8.1.2.  Delivery Statistics at Full Data Windowed Rate
       8.1.3.  Background Packet Transfer Statistics Tests
     8.2.  Standing Queue Tests
       8.2.1.  Congestion Avoidance
       8.2.2.  Bufferbloat
       8.2.3.  Non excessive loss
       8.2.4.  Duplex Self Interference
     8.3.  Slowstart tests
       8.3.1.  Full Window slowstart test
       8.3.2.  Slowstart AQM test
     8.4.  Sender Rate Burst tests
     8.5.  Combined and Implicit Tests
       8.5.1.  Sustained Bursts Test
       8.5.2.  Streaming Media
   9.  An Example
     9.1.  Observations about applicability
   10.  Validation
   11.  Security Considerations
   12.  Acknowledgements
   13.  IANA Considerations
   14.  References
     14.1.  Normative References
     14.2.  Informative References
   Appendix A.  Model Derivations
     A.1.  Queueless Reno
   Appendix B.  The effects of ACK scheduling
   Appendix C.  Version Control
   Authors' Addresses
1.  Introduction
   Model Based Metrics (MBM) rely on peer-reviewed mathematical models
   to specify a Targeted Suite of IP Diagnostic tests, designed to
   assess whether common transport protocols can be expected to meet a
   predetermined Target Transport Performance over an Internet path.
   This note describes the modeling framework to derive the test
   parameters for assessing an Internet path's ability to support a
   predetermined Bulk Transport Capacity.
   Each test in the Targeted IP Diagnostic Suite (TIDS) measures some
   aspect of IP packet transfer needed to meet the Target Transport
   Performance.  For Bulk Transport Capacity the TIDS includes IP
   diagnostic tests to verify that there is: sufficient IP capacity
   (data rate); sufficient queue space at bottlenecks to absorb and
   deliver typical transport bursts; a background packet loss ratio
   low enough not to interfere with congestion control; and the other
   properties described below.  Unlike typical IPPM metrics, which
   yield measures of network properties, Model Based Metrics nominally
   yield pass/fail evaluations of the ability of standard transport
   protocols to meet the specific performance objective over some
   network path.
   In most cases, the IP diagnostic tests can be implemented by
   combining existing IPPM metrics with additional controls for
   generating test streams having a specified temporal structure
   (bursts or standing queues caused by constant bit rate streams,
   etc.) and statistical criteria for evaluating packet transfer.  The
   temporal structure of the test streams mimics transport protocol
   behavior over the complete path; the statistical criteria model the
   transport protocol's response to less than ideal IP packet
   transfer.
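   As an illustration of such a statistical criterion, the sketch
   below (Python; the helper name is hypothetical) checks an observed
   loss count against a model-derived bound of one loss per
   target_run_length packets.  A real TIDS would apply a proper
   statistical test to bound false pass and fail rates (see
   Section 7); this simplified version ignores statistical
   significance:

      def passes_loss_criterion(packets_sent, losses,
                                target_run_length):
          """Naive pass/fail check of a packet loss criterion.

          Passes when the observed loss ratio is no worse than the
          model bound of one loss per target_run_length packets.
          """
          return losses <= packets_sent / target_run_length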
   This note addresses Bulk Transport Capacity.  It describes an
   alternative to the approach presented in "A Framework for Defining
   Empirical Bulk Transfer Capacity Metrics" [RFC3148].  In the
   future, other Model Based Metrics may cover other applications and
   transports, such as VoIP over UDP and RTP, and new transport
   protocols.
   The MBM approach, mapping Target Transport Performance to a
   Targeted IP Diagnostic Suite (TIDS) of IP tests, solves some
   intrinsic problems with using TCP or other throughput maximizing
   protocols for measurement.  In particular all throughput maximizing
   protocols (and TCP congestion control in particular) cause some
   level of congestion in order to detect when they have reached the
   available capacity limitation of the network.  This self inflicted
   congestion obscures the network properties of interest and
   introduces non-linear dynamic equilibrium behaviors that make any
   resulting measurements useless as metrics because they have no
   predictive value for conditions or paths different than that of the
   measurement itself.  In order to prevent these effects it is
   necessary to suppress the effects of TCP congestion control in the
   measurement method.  These issues are discussed at length in
   Section 4.
   A Targeted IP Diagnostic Suite does not have such difficulties.  IP
   diagnostics can be constructed such that they make strong
   statistical statements about path properties that are independent
   of the measurement details, such as vantage and choice of
   measurement points.  Model Based Metrics are designed to bridge the
   gap between empirical IP measurements and expected TCP performance
   for multiple standardized versions of TCP.
1.1.  Version Control
   RFC Editor: Please remove this entire subsection prior to
   publication.

   Please send comments about this draft to ippm@ietf.org.  See
   http://goo.gl/02tkD for more information including: interim drafts,
   an up to date todo list and information on contributing.

   Formatted: Fri Jul 8 16:00:10 PDT 2016
   Changes since -07 draft:

   o  Sharpened the use of "statistical criteria".
   o  Sharpened the definition of test_window, and removed related
      redundant text in several places.
   o  Clarified "equilibrium" as "dynamic equilibrium, similar to
      processes observed in chemistry".
   o  Properly explained "Heisenberg" as "observer effect".
   o  Added the observation from RFC 6576 that HW and SW congestion
      control implementations do not generally give the same results.
   o  Noted that IP and application metrics differ as to how overhead
      is handled.  MBM is explicit about how it handles overhead.
   o  Clarified the language and added a new reference about the
      problems caused by token bucket policers.
   o  Added a subsection to the example that comments on some of the
      issues that need to be mentioned in a future usage or
      applicability doc.
   o  Updated ippm-2680-bis to RFC7680.
   o  Many terminology, punctuation and spelling nits.
   Changes since -06 draft:

   o  More language nits:
      *  "Targeted IP Diagnostic Suite (TIDS)" replaces "Targeted
         Diagnostic Suite (TDS)".
      *  "implied bottleneck IP capacity" replaces "implied bottleneck
         IP rate".
      *  Updated to ECN CE Marks.
      *  Added "specified temporal structure".
      *  "test stream" replaces "test traffic".
      *  "packet transfer" replaces "packet delivery".
      *  Reworked discussion of slowstart, bursts and pacing.
      *  RFC 7567 replaces RFC 2309.
      *  end-to-end path -> complete path
      *  [end-to-end][target] performance -> Target Transport
         Performance
      *  load test -> capacity test
2.  Overview
   This document describes a modeling framework for deriving a
   Targeted IP Diagnostic Suite from a predetermined Target Transport
   Performance.  It is not a complete specification, and relies on
   other standards documents to define important details such as
   packet Type-P selection, sampling techniques, vantage selection,
   etc.  We imagine Fully Specified - Targeted IP Diagnostic Suites
   (FS-TIDS), that define all of these details.  We use Targeted IP
   Diagnostic Suite (TIDS) to refer to the subset of such a
   specification that is in scope for this document.  This terminology
   is defined in Section 3.
   Section 4 describes some key aspects of TCP behavior and what they
   imply about the requirements for IP packet transfer.  Most of the
   IP diagnostic tests needed to confirm that the path meets these
   properties can be built on existing IPPM metrics, with the addition
   of statistical criteria for evaluating packet transfer and, in a
   few cases, new mechanisms to implement the required temporal
   structure.  (One group of tests, the standing queue tests described
   in Section 8.2, don't correspond to existing IPPM metrics, but
   suitable metrics can be patterned after the existing definitions.)
   Figure 1 shows the MBM modeling and measurement framework.  The
   Target Transport Performance, at the top of the figure, is
   determined by the needs of the user or application, outside the
   scope of this document.  For Bulk Transport Capacity, the main
   performance parameter of interest is the Target Data Rate.
   However, since TCP's ability to compensate for less than ideal
   network conditions is fundamentally affected by the Round Trip Time
   (RTT) and the Maximum Transmission Unit (MTU) of the complete path,
   these parameters must also be specified in advance based on
   knowledge about the intended application setting.  They may reflect
   a specific application over a real path through the Internet or an
   idealized application and hypothetical path representing a typical
   user community.  Section 5 describes the common parameters and
   models derived from the Target Transport Performance.
                  Target Transport Performance
        (Target Data Rate, Target RTT and Target MTU)
                             |
                     ________V_________
                     |  mathematical  |
                     |     models     |
                     |                |
                     ------------------
   Traffic parameters |            | Statistical criteria
                      |            |
          _______V____________V____Targeted_______
          |       |   * * *    |   Diagnostic Suite |
     _____|_______V____________V________________   |
   __|____________V____________V______________  |  |
   |           IP diagnostic tests            |  |  |
   |               |           |              |  |  |
   |  _____________V__      __V____________   |  |  |
   |  |   traffic    |      |   Delivery  |   |  |  |
   |  |   pattern    |      |  Evaluation |   |  |  |
   |  |  generation  |      |             |   |  |  |
   |  -------v--------      ------^-------    |  |  |
   |    |    v   test stream via  ^    |      |  |  |--
   |    |  -->======================>--|      |  |
   |    |      subpath under test      |      |-
   ----V----------------------------------V---  |
       |                                  |     |
       V                                  V     V
  fail/inconclusive            pass/fail/inconclusive

                  Overall Modeling Framework

                          Figure 1
   The mathematical models are used to determine Traffic parameters
   and subsequently to design traffic patterns that mimic TCP or other
   transport protocols delivering bulk data and operating at the
   Target Data Rate, MTU and RTT over a full range of conditions,
   including flows that are bursty at multiple time scales.  The
   traffic patterns are generated based on the three Target parameters
   of the complete path and independent of the properties of
   individual subpaths, using the techniques described in Section 6.
   As much as possible the test stream is generated deterministically
   (precomputed) to minimize the extent to which test methodology,
   measurement points, measurement vantage or path partitioning affect
   the details of the measurement traffic.
   Section 7 describes packet transfer statistics and methods to test
   them against the statistical criteria provided by the mathematical
   models.  Since the statistical criteria are typically for the
   complete path (a composition of subpaths) [RFC6049], in situ
   testing requires that the end-to-end statistical criteria be
   apportioned as separate criteria for each subpath.  Subpaths that
   are expected to be bottlenecks would then be permitted to
   contribute a larger fraction of the end-to-end packet loss budget.
   In compensation, non-bottlenecked subpaths have to be constrained
   to contribute less packet loss.  Thus the statistical criteria for
   each subpath in each test of a TIDS are an apportioned share of the
   end-to-end statistical criteria for the complete path, as
   determined by the mathematical model.
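   For small loss ratios, per-subpath loss ratios compose
   approximately additively [RFC6049], so one plausible apportionment
   scheme is to allocate weighted shares of the end-to-end loss
   budget.  The sketch below (Python; the helper, weights and subpath
   names are purely illustrative and not part of any TIDS
   specification) shows such a scheme:

      def apportion_loss_budget(end_to_end_loss_ratio, weights):
          """Split an end-to-end packet loss budget across subpaths.

          'weights' maps each subpath to its share of the budget; an
          expected bottleneck gets a larger weight.  The per-subpath
          ratios sum to the end-to-end ratio.
          """
          total = sum(weights.values())
          return {subpath: end_to_end_loss_ratio * w / total
                  for subpath, w in weights.items()}

      # Example: a 0.01% end-to-end loss budget, with the access
      # subpath (the expected bottleneck) permitted half the losses.
      budget = apportion_loss_budget(
          1e-4, {"access": 2, "metro": 1, "backbone": 1})
      # {'access': 5e-05, 'metro': 2.5e-05, 'backbone': 2.5e-05}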
   Section 8 describes the suite of individual tests needed to verify
   all of the required IP delivery properties.  A subpath passes if
   and only if all of the individual IP diagnostic tests pass.  Any
   subpath that fails any test indicates that some users are likely to
   fail to attain their Target Transport Performance under some
   conditions.  In addition to passing or failing, a test can be
   deemed to be inconclusive for a number of reasons including: the
   precomputed traffic pattern was not accurately generated; the
   measurement results were not statistically significant; and others
   such as failing to meet some required test preconditions.  If all
   tests pass but some are inconclusive, then the entire suite is
   deemed to be inconclusive.
   In Section 9 we present an example TIDS that might be
   representative of HD video, and illustrate how Model Based Metrics
   can be used to address difficult measurement situations, such as
   confirming that inter-carrier exchanges have sufficient performance
   and capacity to deliver HD video between ISPs.
   Since there is some uncertainty in the modeling process, Section 10
   describes a validation procedure to diagnose and minimize false
   positive and false negative results.
3.  Terminology
   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in [RFC2119].
   Note that terms containing underscores (rather than spaces) appear
   in equations in the modeling sections.  In some cases both forms
   are used for aesthetic reasons; they do not have different
   meanings.
   General Terminology:
   Target:  A general term for any parameter specified by or derived
      from the user's application or transport performance
      requirements.
   Target Transport Performance:  Application or transport performance
      target values for the complete path.  For Bulk Transport
      Capacity defined in this note the Target Transport Performance
      includes the Target Data Rate, Target RTT and Target MTU as
      described below.
   Target Data Rate:  The specified application data rate required for
      an application's proper operation.  Conventional BTC metrics are
      focused on the Target Data Rate; however, these metrics had
      little or no predictive value because they do not consider the
      effects of the other two parameters of the Target Transport
      Performance: the RTT and MTU of the complete path.
   Target RTT (Round Trip Time):  The specified baseline (minimum) RTT
      of the longest complete path over which the user expects to be
      able to meet the target performance.  TCP and other transport
      protocols' ability to compensate for path problems is generally
      proportional to the number of round trips per second.  The
      Target RTT determines both key parameters of the traffic
      patterns (e.g. burst sizes) and the thresholds on acceptable IP
      packet transfer statistics.  The Target RTT must be specified
      considering appropriate packet sizes: MTU sized packets on the
      forward path, ACK sized packets (typically header_overhead) on
      the return path.  Note that the Target RTT is specified and not
      measured; MBM measurements derived for a given target_RTT will
      be applicable to any path with a smaller RTT.
   Target MTU (Maximum Transmission Unit):  The specified maximum MTU
      supported by the complete path over which the application
      expects to meet the target performance.  Assume 1500 Byte MTU
      unless otherwise specified.  If some subpath has a smaller MTU,
      then it becomes the Target MTU for the complete path, and all
      model calculations and subpath tests must use the same smaller
      MTU.
   Targeted IP Diagnostic Suite (TIDS):  A set of IP diagnostic tests
      designed to determine if an otherwise ideal complete path
      containing the subpath under test can sustain flows at a
      specific target_data_rate using target_MTU sized packets when
      the RTT of the complete path is target_RTT.
   Fully Specified Targeted IP Diagnostic Suite (FS-TIDS):  A TIDS
      together with additional specifications such as "type-p", which
      are out of scope for this document, but need to be drawn from
      other standards documents.
   Bulk Transport Capacity:  Bulk Transport Capacity Metrics evaluate
      an Internet path's ability to carry bulk data, such as large
      files, streaming (non-real time) video, and under some
      conditions, web images and other content.  Prior efforts to
      define BTC metrics have been based on [RFC3148], which predates
      our understanding of TCP and the requirements described in
      Section 4.
   IP diagnostic tests:  Measurements or diagnostics to determine if
      packet transfer statistics meet some precomputed target.
   traffic patterns:  The temporal patterns or burstiness of traffic
      generated by applications over transport protocols such as TCP.
      There are several mechanisms that cause bursts at various time
      scales as described in Section 4.1.  Our goal here is to mimic
      the range of common patterns (burst sizes and rates, etc),
      without tying our applicability to specific applications,
      implementations or technologies, which are sure to become stale.
   packet transfer statistics:  Raw, detailed or summary statistics
      about packet transfer properties of the IP layer including
      packet losses, ECN Congestion Experienced (CE) marks,
      reordering, or any other properties that may be germane to
      transport performance.
   packet loss ratio:  As defined in [RFC7680].
   apportioned:  To divide and allocate, for example budgeting packet
      loss across multiple subpaths such that the losses will
      accumulate to less than a specified end-to-end loss ratio.
      Apportioning metrics is essentially the inverse of the process
      described in [RFC5835].
   open loop:  A control theory term used to describe a class of
      techniques where systems that naturally exhibit circular
      dependencies can be analyzed by suppressing some of the
      dependencies, such that the resulting dependency graph is
      acyclic.
      If the return path is not altering the ACK stream, then the
      implied bottleneck IP capacity will be the same as the
      bottleneck IP capacity.  See Section 4.1 and Appendix B for more
      details.
   sender interface rate:  The IP rate which corresponds to the IP
      capacity of the data sender's interface.  Due to sender
      efficiency algorithms, including technologies such as TCP
      segmentation offload (TSO), nearly all modern servers deliver
      data in bursts at full interface link rate.  Today 1 or 10 Gb/s
      are typical.
   Header_overhead:  The IP and TCP header sizes, which are the
      portion of each MTU not available for carrying application
      payload.  Without loss of generality this is assumed to be the
      size for returning acknowledgments (ACKs).  For TCP, the Maximum
      Segment Size (MSS) is the Target MTU minus the header_overhead.
   Basic parameters common to models and subpath tests are defined
   here and described in more detail in Section 5.2.  Note that these
   are mixed between application transport performance (which excludes
   headers) and IP performance (which includes TCP headers and
   retransmissions as part of the IP payload).
   Window [size]:  The total quantity of data plus the data
      represented by ACKs circulating in the network is referred to as
      the window.
   run length:  A general term for the number of consecutive error
      free packets between losses or ECN Congestion Experienced (CE)
      marks.  Nominally one over the sum of the loss and ECN CE
      marking probabilities, if they are independently and identically
      distributed.
   target_run_length:  The target_run_length is an estimate of the
      minimum number of non-congestion marked packets needed between
      losses or ECN Congestion Experienced (CE) marks necessary to
      attain the target_data_rate over a path with the specified
      target_RTT and target_MTU, as computed by a mathematical model
      of TCP congestion control.  A reference calculation is shown in
      Section 5.2 and alternatives in Appendix A.
   reference target_run_length:  target_run_length computed precisely
      by the method in Section 5.2.  This is likely to be slightly
      more conservative than required by modern TCP implementations.
   Ancillary parameters used for some tests:

   derating:  Under some conditions the standard models are too
      conservative.  The modeling framework permits some latitude in
      relaxing or "derating" some test parameters as described in
      Section 5.3, in exchange for more stringent TIDS validation
      procedures, described in Section 10.
   subpath_IP_capacity:  The IP capacity of a specific subpath.
   test path:  A subpath of a complete path under test.
   test_path_RTT:  The RTT observed between two measurement points
      using packet sizes that are consistent with the transport
      protocol.  This is generally MTU sized packets of the forward
      path, header_overhead sized packets on the return path.
   test_path_pipe:  The pipe size of a test path.  Nominally the test
      path RTT times the test path IP_capacity.
   test_window:  The smallest window sufficient to meet or exceed the
      target_rate when operating with a pure self clock over a test
      path.  The test_window is typically given by
      ceiling(target_data_rate*test_path_RTT/(target_MTU-
      header_overhead)), but see the discussion in Appendix B about
      the effects of channel scheduling on RTT.  On some test paths
      the test_window may need to be adjusted slightly to compensate
      for the RTT being inflated by the devices that schedule packets.
      (These calculations are sketched in code following this list.)
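   The sketch below (Python, illustrative only) shows how the window
   and run length parameters above follow from the three target
   parameters.  It assumes the reference model of Section 5.2, in
   which target_run_length is 3*(target_window_size^2); the function
   name and the 64 Byte header_overhead default are assumptions of
   this example, not normative values:

      import math

      def derived_parameters(target_data_rate, target_RTT, target_MTU,
                             header_overhead=64, test_path_RTT=None):
          """Common model parameters per the Section 5.2 reference
          model.  Rates are in bytes/second, times in seconds, and
          sizes in bytes."""
          mss = target_MTU - header_overhead  # Maximum Segment Size
          # Average window (in packets) needed for the target rate.
          target_window_size = math.ceil(
              target_data_rate * target_RTT / mss)
          # Packets between losses/CE marks (Reno reference model).
          target_run_length = 3 * target_window_size ** 2
          # Smallest self-clocked window meeting the target rate over
          # the test path (may need adjustment, see Appendix B).
          if test_path_RTT is None:
              test_path_RTT = target_RTT
          test_window = math.ceil(
              target_data_rate * test_path_RTT / mss)
          return target_window_size, target_run_length, test_window

      # Example: 10 Mb/s over a 50 ms path with a 1500 Byte MTU.
      w, rl, tw = derived_parameters(10e6 / 8, 0.05, 1500)
      # w == 44 packets, rl == 5808 packets, tw == 44 packets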
   The terminology below is used to define temporal patterns for test
   streams.  These patterns are designed to mimic TCP behavior, as
   described in Section 4.1.
   packet headway:  Time interval between packets, specified from the
      start of one to the start of the next.  e.g., if packets are
      sent with a 1 ms headway, there will be exactly 1000 packets per
      second.
   burst headway:  Time interval between bursts, specified from the
      start of the first packet of one burst to the start of the first
      packet of the next burst.  e.g., if 4 packet bursts are sent
      with a 1 ms burst headway, there will be exactly 4000 packets
      per second (see the sketch at the end of this list).
   paced single packets:  Send individual packets at the specified
      rate or packet headway.
      the same size (nominally target_window_size but other sizes
      might be specified), and the ECN CE marks and lost packets are
      counted.
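   To make the headway definitions concrete, the sketch below
   (illustrative Python, not a normative traffic generator) computes
   deterministic send times for a paced burst pattern; with 4 packet
   bursts on a 1 ms burst headway it reproduces the 4000 packets per
   second example given above:

      def burst_schedule(num_bursts, burst_size, burst_headway,
                         packet_headway=0.0):
          """Send times (in seconds) for paced bursts.

          Bursts start every burst_headway seconds; within a burst,
          packets are packet_headway seconds apart (0 means
          back-to-back at the sender interface rate).
          """
          return [b * burst_headway + p * packet_headway
                  for b in range(num_bursts)
                  for p in range(burst_size)]

      # Example: three 4 packet bursts, 1 ms burst headway,
      # back-to-back packets within each burst.
      times = burst_schedule(3, 4, 1e-3)
      # [0.0, 0.0, 0.0, 0.0, 0.001, 0.001, 0.001, 0.001, 0.002, ...]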
   The tests described in this note can be grouped according to their
   applicability.
   Capacity tests:  Capacity tests determine if a network subpath has
      sufficient capacity to deliver the Target Transport Performance.
      As long as the test stream is within the proper envelope for the
      Target Transport Performance, the average packet losses or ECN
      Congestion Experienced (CE) marks must be below the statistical
      criteria computed by the model.  As such, capacity tests reflect
      parameters that can transition from passing to failing as a
      consequence of cross traffic, additional presented load or the
      actions of other network users.  By definition, capacity tests
      also consume significant network resources (data capacity and/or
      queue buffer space), and the test schedules must be balanced by
      their cost.
   Monitoring tests:  Monitoring tests are designed to capture the
      most important aspects of a capacity test, but without
      presenting excessive ongoing load themselves.  As such they may
      miss some details of the network's performance, but can serve as
      a useful reduced-cost proxy for a capacity test, for example to
      support ongoing monitoring.
   Engineering tests:  Engineering tests evaluate how network
      algorithms (such as AQM and channel allocation) interact with
      TCP-style self clocked protocols and adaptive congestion control
      based on packet loss and ECN Congestion Experienced (CE) marks.
      These tests are
      likely to have complicated interactions with cross traffic and,
      under some conditions, can be inversely sensitive to load.  For
      example, a test to verify that an AQM algorithm causes ECN CE
      marks or packet drops early enough to limit queue occupancy may
      experience a false pass result in the presence of cross traffic.
      It is important that engineering tests be performed under a wide
      range of conditions, including both in situ and bench testing,
      and over a wide variety of load conditions.  Ongoing monitoring
      is less likely to be useful for engineering tests, although
      sparse in situ testing might be appropriate.
4.  Background
   At the time the IPPM WG was chartered, sound Bulk Transport
   Capacity (BTC) measurement was known to be well beyond our
   capabilities.  Even at the time that Framework for Empirical BTC
   Metrics [RFC3148] was written we knew that we didn't fully
   understand the problem.  Now, by hindsight we understand why
   assessing BTC is such a hard problem:
   o  TCP is a control system with circular dependencies - everything
      affects performance, including components that are explicitly
      not part of the test (for example, the host processing power is
      not in-scope of path performance tests).
   o  Congestion control is a dynamic equilibrium process, similar to
      processes observed in chemistry and other fields.  The network
      and transport protocols find an operating point which balances
      between opposing forces: the transport protocol pushing harder
      (raising the data rate and/or window) while the network pushes
      back (raising the packet loss ratio, RTT and/or ECN CE marks).
      By design TCP congestion control keeps raising the data rate
      until the network gives some indication that its capacity has
      been exceeded by dropping packets or ECN CE marks.  If a TCP
      sender accurately fills a path to its IP capacity (e.g. the
      bottleneck is 100% utilized), then packet losses and ECN CE
      marks are mostly determined by the TCP sender and how
      aggressively it seeks additional capacity, and not by the
      network itself, since the network must send exactly the signals
      that TCP needs to set its rate.
   o  TCP's ability to compensate for network impairments (such as
      loss, delay and delay variation, outside of those caused by TCP
      itself) is directly proportional to the number of send-ACK round
      trip exchanges per second (i.e. inversely proportional to the
      RTT).  As a consequence an impaired subpath may pass a short RTT
      local test even though it fails when the subpath is extended by
      an effectively perfect network to some larger RTT.
   o  TCP has an extreme form of the Observer Effect (colloquially
      known as the Heisenberg effect).  Measurement and cross traffic
      interact in unknown and ill defined ways.  The situation is
      actually worse than the traditional physics problem where you
      can at least estimate bounds on the relative momentum of the
      measurement and measured particles.  For network measurement you
      cannot in general determine even the order of magnitude of the
      effect.  It is possible to construct measurement scenarios where
      the measurement traffic starves real user traffic, yielding an
      overly inflated measurement.  The inverse is also possible: the
      user traffic can fill the network, such that the measurement
      traffic detects only minimal available capacity.  You cannot in
      general determine which scenario might be in effect, so you
      cannot gauge the relative magnitude of the uncertainty
      introduced by interactions with other network traffic.
   o  It is difficult, if not impossible, for two independent
      implementations (HW or SW) of TCP congestion control to produce
      equivalent performance results [RFC6576] under the same network
      conditions, as an outcome of the other properties listed.
These properties are a consequence of the dynamic equilibrium
behavior intrinsic to how all throughput maximizing protocols
interact with the Internet.  These protocols rely on control systems
based on estimated network metrics to regulate the quantity of data
sent into the network.  The packet sending characteristics in turn
alter the network properties estimated by the control system metrics,
such that there are circular dependencies between every transmission
characteristic and every estimated metric.  Since some of these
dependencies are nonlinear, the entire system is nonlinear, and any
change causes a response in packet sending characteristics or
estimated network metrics that is difficult to predict.
Model Based Metrics overcome these problems by making the measurement
system open loop: the packet transfer statistics (akin to the network
estimators) do not affect the traffic or traffic patterns (bursts),
which are computed on the basis of the Target Transport Performance.
A path or subpath meeting the Target Transport Performance
requirements would exhibit packet transfer statistics and estimated
metrics that would not cause the control system to slow the traffic
below the Target Data Rate.
4.1. TCP properties

TCP and SCTP are self clocked protocols that carry the vast majority
of all Internet data.  Their dominant behavior is to have an
approximately fixed quantity of data and acknowledgments (ACKs)
circulating in the network.  The data receiver reports arriving data
by returning ACKs to the data sender; the data sender typically
responds by sending exactly the same quantity of data back into the
network.  The total quantity of data plus the data represented by
ACKs circulating in the network is referred to as the window.  The
mandatory congestion control algorithms incrementally adjust the
window by sending slightly more or less data in response to each ACK.
The fundamentally important property of this system is that it is
self clocked: the data transmissions are a reflection of the ACKs
that were delivered by the network; the ACKs are a reflection of the
data arriving from the network.
A number of protocol features cause bursts of data, even in idealized
networks that can be modeled as simple queuing systems.
During slowstart the IP rate is doubled on each RTT by sending twice
as much data as was delivered to the receiver during the prior RTT.
Each returning ACK causes the sender to transmit twice the data the
ACK reported arriving at the receiver.  For slowstart to be able to
fill the pipe, the network must be able to tolerate slowstart bursts
up to the full pipe size inflated by the anticipated window reduction
on the first loss or ECN CE mark.  For example, with classic Reno
congestion control, an optimal slowstart has to end with a burst that
is twice the bottleneck rate for one RTT in duration.  This burst
causes a queue which is equal to the pipe size (i.e. the window is
twice the pipe size) so when the window is halved in response to the
first packet loss, the new window will be the pipe size.
Note that if the bottleneck IP rate is less than half of the capacity
of the front path (which is almost always the case), the slowstart
bursts will not by themselves cause significant queues anywhere else
along the front path; they primarily exercise the queue at the
dominant bottleneck.
Several common efficiency algorithms also cause bursts.  The self
clock is typically applied to groups of packets: the receiver's
delayed ACK algorithm generally sends only one ACK per two data
segments.  Furthermore, modern senders use TCP segmentation offload
(TSO) to reduce CPU overhead.  The sender's software stack builds
super-sized TCP segments that the TSO hardware splits into MTU sized
segments on the wire.  The net effect of TSO, delayed ACK and other
efficiency algorithms is to send bursts of segments at full sender
interface rate.
Note that these efficiency algorithms are almost always in effect,
including during slowstart, such that slowstart typically has a two
level burst structure.  Section 6.1 describes slowstart in more
detail.
Additional sources of bursts include TCP's initial window [RFC6928],
Although the sender interface rate bursts are typically smaller than
the last burst of a slowstart, they are at a higher IP rate so they
potentially exercise queues at arbitrary points along the front path
from the data sender up to and including the queue at the dominant
bottleneck.  There is no model for how frequent or what sizes of
sender rate bursts the network should tolerate.
In conclusion, to verify that a path can meet a Target Transport
Performance, it is necessary to independently confirm that the path
can tolerate bursts at the scales that can be caused by the above
mechanisms.  Three cases are believed to be sufficient:

o  Two level slowstart bursts sufficient to get connections started
   properly.

o  Ubiquitous sender interface rate bursts caused by efficiency
   algorithms.  We assume 4 packet bursts to be the most common case,
   since it matches the effects of delayed ACK during slowstart.
   These bursts should be assumed not to significantly affect packet
   transfer statistics.

o  Infrequent sender interface rate bursts that are full
all of these scales then it has sufficient buffering at all potential
bottlenecks to tolerate any of the bursts that are likely introduced
by TCP or other transport protocols.
4.2. Diagnostic Approach

A complete path of a given RTT and MTU, which are equal to or smaller
than the Target RTT and equal to or larger than the Target MTU
respectively, is expected to be able to attain a specified Bulk
Transport Capacity when all of the following conditions are met:
1.  The IP capacity is above the Target Data Rate by sufficient
    margin to cover all TCP/IP overheads.  This can be confirmed by
    the tests described in Section 8.1 or any number of IP capacity
    tests adapted to implement MBM.

2.  The observed packet transfer statistics are better than required
    by a suitable TCP performance model (e.g. fewer packet losses or
    ECN CE marks).  See Section 8.1 or any number of low rate packet
    loss tests outside of MBM.

3.  There is sufficient buffering at the dominant bottleneck to
    absorb a slowstart burst large enough to get the flow out of
    slowstart at a suitable window size.  See Section 8.3.

4.  There is sufficient buffering in the front path to absorb and
    smooth sender interface rate bursts at all scales that are likely
    to be generated by the application, any channel arbitration in
    the ACK path or any other mechanisms.  See Section 8.4.

5.  When there is a slowly rising standing queue at the bottleneck
   points.  The only requirements on MP selection should be that the
   RTT between the MPs is below some reasonable bound, and that the
   effects of the "test leads" connecting MPs to the subpath under
   test can be calibrated out of the measurements.  The latter might
   be accomplished if the test leads are effectively ideal or their
   properties can be deduced from the measurements between the MPs.
   While many of the tests require that the test leads have at least
   as much IP capacity as the subpath under test, some do not, for
   example the Background Packet Transfer Tests described in
   Section 8.1.3.
o  Metric measurements should be repeatable by multiple parties with
   no specialized access to MPs or diagnostic infrastructure.  It
   should be possible for different parties to make the same
   measurement and observe the same results.  In particular it is
   specifically important that both a consumer (or their delegate)
   and ISP be able to perform the same measurement and get the same
   result.  Note that vantage independence is key to meeting this
   requirement.
5. Common Models and Parameters

5.1. Target End-to-end parameters

The target end-to-end parameters are the Target Data Rate, Target RTT
and Target MTU as defined in Section 3.  These parameters are
determined by the needs of the application or the ultimate end user
and the complete Internet path over which the application is expected
to operate.  The target parameters are in units that make sense to
upper layers: payload bytes delivered to the application, above TCP.
They exclude overheads associated with TCP and IP headers,
retransmits and other protocols (e.g. DNS).  Note that IP-based
network services include TCP headers and retransmissions as part of
delivered payload, and this difference is recognized in calculations
below (header_overhead).
Other end-to-end parameters defined in Section 3 include the
effective bottleneck data rate, the sender interface data rate and
the TCP and IP header sizes.
The target_data_rate must be smaller than all subpath IP capacities
by enough headroom to carry the transport protocol overhead,
explicitly including retransmissions and an allowance for
fluctuations in TCP's actual data rate.  Specifying a
target_data_rate with insufficient headroom is likely to result in
experiments.

Except as noted, all tests below assume no derating.  Tests where
there is not currently a well established model for the required
parameters explicitly include derating as a way to indicate
flexibility in the parameters.
5.4. Test Preconditions

Many tests have preconditions which are required to assure their
validity.  Examples include: the presence or non-presence of cross
traffic on specific subpaths; negotiating ECN; and an appropriate
preamble packet stream prior to testing to put reactive network
elements into the proper states [RFC7312].  If preconditions are not
properly satisfied for some reason, the tests should be considered to
be inconclusive.  In general it is useful to preserve diagnostic
information as to why the preconditions were not met, and any test
data that was collected even if it is not useful for the intended
test.  Such diagnostic information and partial test data may be
useful for improving the test in the future.
It is important to preserve the record that a test was scheduled,
because otherwise precondition enforcement mechanisms can introduce
sampling bias.  For example, canceling tests due to cross traffic on
subscriber access links might introduce sampling bias in tests of the
rest of the network by reducing the number of tests during peak
network load.
Test preconditions and failure actions MUST be specified in a
FS-TIDS.
Many important properties of Model Based Metrics, such as vantage
independence, are a consequence of using test streams that have
temporal structures that mimic TCP or other transport protocols
running over a complete path.  As described in Section 4.1, self
clocked protocols naturally have burst structures related to the RTT
and pipe size of the complete path.  These bursts naturally get
larger (contain more packets) as either the Target RTT or Target Data
Rate get larger, or the Target MTU gets smaller.  An implication of
these relationships is that test streams generated by running self
clocked protocols over short subpaths may not adequately exercise the
queuing at any bottleneck to determine if the subpath can support the
full Target Transport Performance over the complete path.

Failing to authentically mimic TCP's temporal structure is part of
the reason why simple performance tools such as iPerf, netperf, nc,
etc have the reputation of yielding false pass results over short
test paths, even when some subpath has a flaw.
The definitions in Section 3 are sufficient for most test streams.
We describe the slowstart and standing queue test streams in more
detail.
In conventional measurement practice stochastic processes are used to
eliminate many unintended correlations and sample biases.  However
MBM tests are designed to explicitly mimic temporal correlations
caused by network or protocol elements themselves and are intended to
accurately reflect implementation behavior.  Some portions of the
system, such as traffic arrival (test scheduling), are naturally
stochastic.  Other details, such as protocol processing times, are
technically non-deterministic and might be modeled stochastically,
but are only a tiny part of the overall behavior which is dominated
by implementation specific deterministic effects.  Furthermore, it is
known that sampling bias is a real problem for some protocol
implementations.  For example TCP's RTT estimator, used to determine
the Retransmit Time Out (RTO), can be fooled by periodic cross
traffic or start-stop applications.

At some point in the future it may make sense to introduce fine
grained noise sources into the models used for generating test
streams, but they are not warranted at this time.
6.1. Mimicking slowstart
the average IP rate is the same as the sender interface rate; at a
medium timescale (a few packet times at the dominant bottleneck) the
peak of the average IP rate is twice the implied bottleneck IP
capacity; and at time scales longer than the target_RTT and when the
burst size is equal to the target_window_size the average rate is
equal to the target_data_rate.  This pattern corresponds to repeating
the last RTT of TCP slowstart when delayed ACK and sender side byte
counting are present but without the limits specified in Appropriate
Byte Counting [RFC3465].
   time ==> ( - equals one packet)

   Packet stream:

   ----  ----  ----  ----  ----  ----  ----  ...

   |<>| sender interface rate bursts (typically 3 or 4 packets)
   |<===>| burst headway (determined by ACK headway)
   |<========================>| slowstart burst size (from the window)
   |<==============================================>| slowstart headway
   \____________ _____________/                  \______ __ ...
                V                                    V
       One slowstart burst              Repeated slowstart bursts

                Multiple levels of Slowstart Bursts

                             Figure 2
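The pattern in Figure 2 is fully determined by the target parameters.
The following non-normative sketch (Python; all names and the 4
packet burst size are assumptions consistent with the text above)
generates the packet send times:

   # Two level slowstart burst schedule: bursts of `burst` packets at
   # the sender interface rate, spaced so the medium timescale rate
   # is twice the implied bottleneck IP capacity, with slowstart
   # bursts spaced so the long term average is the target_data_rate.
   def slowstart_schedule(target_data_rate, bottleneck_ip_capacity,
                          interface_rate, target_window_size,
                          mtu=1500, repeats=2, burst=4):
       pkt_bits = mtu * 8
       tx_time = pkt_bits / interface_rate
       burst_headway = burst * pkt_bits / (2 * bottleneck_ip_capacity)
       ss_headway = target_window_size * pkt_bits / target_data_rate
       times = []
       for r in range(repeats):
           base = r * ss_headway
           for b in range(target_window_size // burst):
               for p in range(burst):
                   times.append(base + b * burst_headway + p * tx_time)
       return times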
6.2. Constant window pseudo CBR

Implement pseudo constant bit rate by running a standard self clocked
protocol such as TCP with a fixed window size.  If that window size
is test_window, the data rate will be slightly above the target_rate.

Since the test_window is constrained to be an integer number of
packets, for small RTTs or low data rates there may not be
sufficiently precise control over the data rate.  Rounding the
test_window up (the default) is likely to result in data rates that
are higher than the target rate, but reducing the window by one
packet may result in data rates that are too small.  Also cross
traffic potentially raises the RTT, implicitly reducing the rate.
Cross traffic that raises the RTT nearly always makes the test more
strenuous.  A FS-TIDS specifying a constant window CBR test MUST
explicitly indicate under what conditions errors in the data rate
cause tests to be inconclusive.

Since constant window pseudo CBR testing is sensitive to RTT
fluctuations it will be less accurate at controlling the data rate in
environments with fluctuating delays.  Conventional paced measurement
traffic may be more appropriate for these environments.
6.3. Scanned window pseudo CBR

Scanned window pseudo CBR is similar to the constant window CBR
described above, except the window is scanned across a range of sizes
designed to include two key events: the onset of queuing and the
onset of packet loss or ECN CE marks.  The window is scanned by
incrementing it by one packet every 2*target_window_size delivered
packets.  This mimics the additive increase phase of standard Reno
TCP congestion avoidance when delayed ACKs are in effect.  Normally
the window increases are separated by intervals slightly longer than
twice the target_RTT.
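The scan schedule itself is simple; as a non-normative sketch
(Python, names assumed):

   # Window size as a function of cumulative delivered packets:
   # grow by one packet per 2*target_window_size delivered packets.
   def scanned_window(delivered, initial_window, target_window_size):
       return initial_window + delivered // (2 * target_window_size)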
There are two ways to implement this test: one built by applying a
window clamp to standard congestion control in a standard protocol
such as TCP and the other built by stiffening a non-standard
the flow to channel mapping to minimize packet reordering within
flows.  Second, TCP itself has scaling limits.  Although the former
problem might be overcome through different design decisions, the
latter problem is more deeply rooted.
All congestion control algorithms that are philosophically aligned
with the standard [RFC5681] (e.g. claim some level of TCP
compatibility, friendliness or fairness) have scaling limits, in the
sense that as a long fast network (LFN) with a fixed RTT and MTU gets
faster, these congestion control algorithms get less accurate and as
a consequence have difficulty filling the network [CCscaling].  These
properties are a consequence of the original Reno AIMD congestion
control design and the requirement in [RFC5681] that all transport
protocols have similar responses to congestion.
There are a number of reasons to want to specify performance in terms
of multiple concurrent flows, however this approach is not
recommended for data rates below several megabits per second, which
can be attained with run lengths under 10000 packets on many paths.
Since the required run length goes as the square of the data rate, at
higher rates the run lengths can be unreasonably large, and multiple
flows might be the only feasible approach.
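To see why, note that n equal flows each carry 1/n of the aggregate
rate, so each needs a window n times smaller; since the required run
length goes as the square of the rate, the per-flow requirement falls
by roughly a factor of n squared.  A non-normative sketch (Python,
names assumed):

   # Per-flow run length requirement when the Target Data Rate is
   # split across n equal flows, assuming run length scales as the
   # square of the per-flow data rate (as stated above).
   def per_flow_run_length(target_run_length, n_flows):
       return target_run_length / (n_flows ** 2)

   # Example: a single flow requirement of 1e6 packets becomes
   # 62500 packets per flow with 4 concurrent flows.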
If multiple flows are deemed necessary to meet aggregate performance
targets then this MUST be stated both in the design of the TIDS and
in any claims about network performance.  The IP diagnostic tests
MUST be performed concurrently with the specified number of
connections.  For the tests that use bursty test streams, the bursts
should be synchronized across streams.
7. Interpreting the Results

7.1. Test outcomes

To perform an exhaustive test of a complete network path, each test
of the TIDS is applied to each subpath of the complete path.  If any
subpath fails any test then a standard transport protocol running
over the complete path can also be expected to fail to attain the
statistics meet the statistical criteria for failing (accepting
hypothesis H1 in Section 7.2), the test can be considered to have
failed because it doesn't really matter that the test didn't attain
the required data rate.
The really important new properties of MBM, such as vantage
independence, are a direct consequence of opening the control loops
in the protocols, such that the test stream does not depend on
network conditions or IP packets received.  Any mechanism that
introduces feedback between the path's measurements and the test
stream generation is at risk of introducing non-linearities that
spoil these properties.  Any exceptional event that indicates that
such feedback has happened should cause the test to be considered
inconclusive.
One way to view inconclusive tests is that they reflect situations
where a test outcome is ambiguous between limitations of the network
and some unknown limitation of the IP diagnostic test itself, which
may have been caused by some uncontrolled feedback from the network.
Note that procedures that attempt to search the target parameter
space to find the limits on some parameter such as target_data_rate
are at risk of breaking the location independent properties of Model
Based Metrics, if any part of the boundary between passing and
inconclusive or failing results is sensitive to RTT (which is
normally the case).  For example the maximum data rate for a
marginal link (e.g. one exhibiting excess errors) is likely to be
sensitive to the test path RTT.  The maximum observed data rate over
the test path has very little predictive value for the maximum rate
over a different path.
One of the goals for evolving TIDS designs will be to keep sharpening
the distinction between inconclusive, passing and failing tests.  The
criteria for passing, failing and inconclusive tests MUST be
explicitly stated for every test in the TIDS or FS-TIDS.
One of the goals of evolving the testing process, procedures, tools
and measurement point selection should be to minimize the number of
inconclusive tests.
It may be useful to keep raw packet transfer statistics and ancillary
metrics [RFC3148] for deeper study of the behavior of the network
path and to measure the tools themselves.  Raw packet transfer
statistics can help to drive tool evolution.  Under some conditions
it might be possible to re-evaluate the raw data for satisfying
alternate Target Transport Performance.  However it is important to
guard against sampling bias and other implicit feedback which can
cause false results and exhibit measurement point vantage
sensitivity.  Simply applying different delivery criteria based on a
different Target Transport Performance is insufficient if the test
traffic patterns (bursts, etc.) do not match the alternate Target
Transport Performance.
7.2. Statistical criteria for estimating run_length

When evaluating the observed run_length, we need to determine
appropriate packet stream sizes and acceptable error levels for
efficient measurement.  In practice, can we compare the empirically
estimated packet loss and ECN Congestion Experienced (CE) marking
ratios with the targets as the sample size grows?  How large a sample
is needed to say that the measurements of packet transfer indicate a
Ratio Test (SPRT) [StatQC].  Note that as originally framed the
events under consideration were all manufacturing defects.  In
networking, ECN CE marks and lost packets are not defects but
signals, indicating that the transport protocol should slow down.

The Sequential Probability Ratio Test also starts with a pair of
hypotheses specified as above:

   H0: p0 = one defect in target_run_length
   H1: p1 = one defect in target_run_length/4

As packets are sent and measurements collected, the tester evaluates
the cumulative defect count against two boundaries representing H0
Acceptance or Rejection (and acceptance of H1):
   Acceptance line: Xa = -h1 + s*n
   Rejection line:  Xr =  h2 + s*n

where n increases linearly for each packet sent and

   h1 = { log((1-alpha)/beta) }/k
   h2 = { log((1-beta)/alpha) }/k
   k  = log{ (p1(1-p0)) / (p0(1-p1)) }
   s  = [ log{ (1-p0)/(1-p1) } ]/k

for p0 and p1 as defined in the null and alternative Hypotheses
statements above, and alpha and beta as the Type I and Type II
errors.
The SPRT specifies simple stopping rules:

o  Xa < defect_count(n) < Xr: continue testing
o  defect_count(n) <= Xa: Accept H0
o  defect_count(n) >= Xr: Accept H1
The calculations above are implemented in the R-tool for Statistical
Analysis [Rtool], in the add-on package for Cross-Validation via
Sequential Testing (CVST) [CVST].

Using the equations above, we can calculate the minimum number of
packets (n) needed to accept H0 when x defects are observed.  For
example, when x = 0:

   Xa = 0 = -h1 + s*n
   and n = h1 / s
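The boundary computations above translate directly into code.  The
following non-normative sketch (Python; the alpha and beta values are
example assumptions, not requirements of this document) computes the
SPRT parameters and applies the stopping rules:

   from math import log

   def sprt_parameters(target_run_length, alpha=0.05, beta=0.05):
       p0 = 1.0 / target_run_length  # H0: one defect per run length
       p1 = 4.0 / target_run_length  # H1: four times the defect rate
       k = log((p1 * (1 - p0)) / (p0 * (1 - p1)))
       h1 = log((1 - alpha) / beta) / k
       h2 = log((1 - beta) / alpha) / k
       s = log((1 - p0) / (1 - p1)) / k
       return h1, h2, s

   def sprt_decide(n, defect_count, h1, h2, s):
       if defect_count <= -h1 + s * n:   # acceptance line Xa
           return "accept H0 (pass)"
       if defect_count >= h2 + s * n:    # rejection line Xr
           return "accept H1 (fail)"
       return "continue testing"

   # With zero defects, H0 is accepted once n reaches h1 / s packets.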
7.3. Reordering Tolerance

All tests MUST be instrumented for packet level reordering [RFC4737].
However, there is no consensus for how much reordering should be
acceptable.  Over the last two decades the general trend has been to
make protocols and applications more tolerant to reordering (see for
example [RFC4015]), in response to the gradual increase in reordering
in the network.  This increase has been due to the deployment of
technologies such as multi threaded routing lookups and Equal Cost
MultiPath (ECMP) routing.  These techniques increase parallelism in
the network and are critical to enabling overall Internet growth to
exceed Moore's Law.
overhead from spurious retransmissions.  In advance of new
retransmission strategies we propose the following strawman:
Transport protocols should be able to adapt to reordering as long as
the reordering extent is not more than the maximum of one quarter
window or 1 mS, whichever is larger.  Within this limit on reorder
extent, there should be no bound on reordering density.
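As a non-normative sketch of this criterion (Python, names assumed;
note that the strawman mixes a packet count and a time bound):

   # Reordering is tolerable when its extent is within the larger of
   # one quarter of the target window or 1 millisecond.
   def reordering_tolerable(extent_packets, extent_seconds,
                            target_window_size):
       return (extent_packets <= target_window_size / 4
               or extent_seconds <= 0.001)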
By implication, reordering which is less than these bounds should not
be treated as a network impairment.  However [RFC4737] still applies:
reordering should be instrumented and the maximum reordering that can
be properly characterized by the test (because of the bound on
history buffers) should be recorded with the measurement results.

Reordering tolerance and diagnostic limitations, such as the size of
the history buffer used to diagnose packets that are way out-of-
order, MUST be specified in a FS-TIDS.
8. IP Diagnostic Tests

The IP diagnostic tests below are organized by traffic pattern: basic
data rate and packet transfer statistics, standing queues, slowstart
bursts, and sender rate bursts.  We also introduce some combined
tests which are more efficient when networks are expected to pass,
but conflate diagnostic signatures when they fail.
There are a number of test details which are not fully defined here.
They must be fully specified in a FS-TIDS.  From a standardization
perspective, this lack of specificity will weaken this version of
Model Based Metrics, however it is anticipated that this weakness
will be more than offset by the extent to which MBM suppresses the
problems caused by using transport protocols for measurement, e.g.
non-specific MBM metrics are likely to have better repeatability than
many existing BTC like metrics.  Once we have good field experience,
the missing details can be fully specified.
8.1. Basic Data Rate and Packet Transfer Tests

We propose several versions of the basic data rate and packet
transfer statistics test.  All measure the number of packets
delivered between losses or ECN Congestion Experienced (CE) marks,
using a data stream that is rate controlled at approximately the
target_data_rate.
The tests below differ in how the data rate is controlled.  The data
can be paced on a timer, or window controlled (and self clocked).
The first two tests implicitly confirm that sub_path has sufficient
raw capacity to carry the target_data_rate.  They are recommended for
relatively infrequent testing, such as an installation or periodic
auditing process.  The third, background packet transfer statistics,
is a low rate test designed for ongoing monitoring for changes in
subpath quality.
All rely on the data receiver accumulating packet transfer statistics
as described in Section 7.2 to score the outcome:

Pass: it is statistically significant that the observed interval
between losses or ECN CE marks is larger than the target_run_length.

Fail: it is statistically significant that the observed interval
between losses or ECN CE marks is smaller than the target_run_length.
A test is considered to be inconclusive if it failed to generate the
data rate as specified below, failed to meet the qualifications
defined in Section 5.4, or if neither run length statistical
hypothesis was confirmed in the allotted test duration.
8.1.1. Delivery Statistics at Paced Full Data Rate

Confirm that the observed run length is at least the
target_run_length while relying on a timer to send data at the
target_rate using the procedure described in Section 6.1 with a
burst size of 1 (single packets) or 2 (packet pairs).
can not be accurately controlled for any reason.

RFC 6673 [RFC6673] is appropriate for measuring packet transfer
statistics at full data rate.
8.1.2. Delivery Statistics at Full Data Windowed Rate 8.1.2. Delivery Statistics at Full Data Windowed Rate
Confirm that the observed run length is at least the
target_run_length while sending at an average rate approximately
equal to the target_data_rate, by controlling (or clamping) the
window size of a conventional transport protocol to test_window.

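As a minimal sketch (assuming test_window =
target_data_rate*test_path_RTT/target_MTU, with our own explicit
unit handling and rounding), the clamp value might be computed as
follows:

   import math

   def test_window(target_data_rate_bps, test_path_rtt_s,
                   target_mtu_bytes):
       # test_window = target_data_rate * test_path_RTT / target_MTU,
       # rounded up to whole packets (the rounding is our choice).
       bits_per_rtt = target_data_rate_bps * test_path_rtt_s
       return math.ceil(bits_per_rtt / (8 * target_mtu_bytes))

   # With the Section 9 example parameters and a 50 ms test path:
   # test_window(2.5e6, 0.050, 1500) -> 11 packets
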
Since losses and ECN CE marks cause transport protocols to reduce
their data rates, this test is expected to be less precise about
controlling its data rate.  It should not be considered inconclusive
as long as at least some of the round trips reached the full
target_data_rate without incurring losses or ECN CE marks.  To pass
this test the network MUST deliver target_window_size packets in
target_RTT time without any losses or ECN CE marks at least once per
two target_window_size round trips, in addition to meeting the run
length statistical test.

RFC 6673 [RFC6673] is appropriate for measuring background packet
transfer statistics.

8.2.  Standing Queue Tests

These engineering tests confirm that the bottleneck is well behaved
across the onset of packet loss, which typically follows after the
onset of queueing.  Well behaved generally means lossless for
transient queues, but once the queue has been sustained for a
sufficient period of time (or reaches a sufficient queue depth)
there should be a small number of losses or ECN CE marks to signal
to the transport protocol that it should reduce its window.  Losses
that are too early can prevent the transport from averaging at the
target_data_rate.  Losses that are too late indicate that the queue
might be subject to bufferbloat [wikiBloat] and inflict excess
queuing delays on all flows sharing the bottleneck queue.  Excess
losses (more than half of the window) at the onset of congestion
make loss recovery problematic for the transport protocol.
Non-linear, erratic or excessive RTT increases suggest poor
interactions between the channel acquisition algorithms and the
transport self clock.  All of the tests in this section use the same
basic scanning algorithm, described here, but score the link or
subpath on the basis of how well it avoids each of these problems.

For some technologies the data might not be subject to increasing
delays, in which case the data rate will vary with the window size
all the way up to the onset of load induced packet loss or ECN CE
marks.  For these technologies, the discussion of queueing does not
apply, but it is still required that the onset of losses or ECN CE
marks be at an appropriate point and progressive.  Start the scan at
a window equal to or slightly below the test_window.

Use the procedure in Section 6.3 to sweep the window across the
onset of queueing and the onset of loss.  The tests below all assume
that the scan emulates standard additive increase and delayed ACK by
incrementing the window by one packet for every
2*target_window_size packets delivered.  A scan can typically be
divided into three regions: below the onset of queueing, a standing
queue, and at or beyond the onset of loss.

Below the onset of queueing the RTT is typically fairly constant,
and the data rate varies in proportion to the window size.  Once the
data rate reaches the subpath IP rate, the data rate becomes fairly
constant, and the RTT increases in proportion to the increase in
window size.  The precise transition across the start of queueing
can be identified by the maximum network power, defined to be the
ratio of the data rate over the RTT.  The network power can be
computed at each window size, and the window with the maximum is
taken as the start of the queueing region.

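For example, given (window, data_rate, RTT) samples from such a
scan, the onset of queueing might be located as sketched below (the
tuple layout is our assumption):

   def onset_of_queueing(samples):
       # Return the window with the maximum network power
       # (data rate / RTT), taken as the start of the queueing region.
       return max(samples, key=lambda s: s[1] / s[2])[0]
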
If there is random background loss (e.g. bit errors, etc), precise
determination of the onset of queue induced packet loss may require
multiple scans.  Above the onset of queuing loss, all transport
protocols are expected to experience periodic losses determined by
the interaction between the congestion control and AQM algorithms.
For standard congestion control algorithms the periodic losses are
likely to be relatively widely spaced and the details are typically
dominated by the behavior of the transport protocol itself.  For the
stiffened transport protocols case (with non-standard, aggressive
congestion control algorithms) the details of periodic losses will
be dominated by how the window increase function responds to loss.

8.2.1.  Congestion Avoidance

A subpath passes the congestion avoidance standing queue test if
more than target_run_length packets are delivered between the onset
of queueing (as determined by the window with the maximum network
power as described above) and the first loss or ECN CE mark.  If
this test is implemented using a standard congestion control
algorithm with a clamp, it can be performed in situ in the
production internet as a capacity test.  For an example of such a
test see [Pathdiag].

For technologies that do not have conventional queues, use the
test_window in place of the onset of queueing.  That is, a subpath
passes the congestion avoidance standing queue test if more than
target_run_length packets are delivered between the start of the
scan at test_window and the first loss or ECN CE mark.

8.2.2.  Bufferbloat

This test confirms that there is some mechanism to limit buffer
occupancy (e.g. that prevents bufferbloat).  Note that this is not
strictly a requirement for single stream bulk transport capacity,
however if there is no mechanism to limit buffer queue occupancy
then

how much standing queue is acceptable.  The factor of two chosen
here reflects a rule of thumb.  In conjunction with the previous
test, this test implies that the first loss should occur at a
queueing delay which is between one and two times the target_RTT.

Specified RTT limits that are larger than twice the target_RTT must
be fully justified in the FSTIDS.

8.2.3.  Non excessive loss

This test confirms that the onset of loss is not excessive.  Pass if
losses are equal to or less than the increase in the cross traffic
plus the test stream window increase since the previous RTT.  This
could be restated as non-decreasing total throughput of the subpath
at the onset of loss.  (Note that when there is a transient drop in
subpath throughput and there is not already a standing queue, a
subpath that passes other queue tests in this document will have
sufficient queue space to hold one full RTT worth of data).

Note that token bucket policers will not pass this test, which is as
intended.  TCP often stumbles badly if more than a small fraction of
the packets are dropped in one RTT.  Many TCP implementations will
require a timeout and slowstart to recover their self clock.  Even
if they can recover from the massive losses, the sudden change in
available capacity at the bottleneck wastes serving and front path
capacity until TCP can adapt to the new rate [Policing].

8.2.4.  Duplex Self Interference

This engineering test confirms a bound on the interactions between
the forward data path and the ACK return path.

Some historical half duplex technologies had the property that each
direction held the channel until it completely drained its queue.
When a self clocked transport protocol, such as TCP, has data and
ACKs passing in opposite directions through such a link, the
behavior

total packets.

Accumulate packet transfer statistics as described in Section 7.2 to
score the outcome.  Pass if it is statistically significant that the
observed number of good packets delivered between losses or ECN CE
marks is larger than the target_run_length.  Fail if it is
statistically significant that the observed interval between losses
or ECN CE marks is smaller than the target_run_length.

It is deemed inconclusive if the elapsed time to send the data burst
is not less than half of the time to receive the ACKs.  (i.e.  It is
acceptable to send data too fast, but sending it slower than twice
the actual bottleneck rate as indicated by the ACKs is deemed
inconclusive).  The headway for the slowstart bursts should be the
target_RTT.

Note that these are the same parameters as the Sender Full Window
burst test, except the burst rate is at the slowstart rate, rather
than the sender interface rate.

8.3.2.  Slowstart AQM test

Do a continuous slowstart (send data continuously at twice the
implied IP bottleneck capacity), until the first loss, stop, allow
the network to drain and repeat, gathering statistics on how many
packets were delivered before the loss, the pattern of losses,
maximum observed RTT and window size.  Justify the results.  There
is not currently sufficient theory justifying requiring any
particular result, however design decisions that affect the outcome
of these tests also affect how the network balances between long and
short flows (the "mice vs elephants" problem).  The queue sojourn
time for the first packet delivered after the first loss should be
at least one half of the target_RTT.

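A hedged sketch of the measurement loop follows; run_slowstart() and
drain() are hypothetical stand-ins for the sender's primitives.

   def slowstart_aqm_scan(repetitions, run_slowstart, drain):
       # Each run sends continuously at twice the implied bottleneck
       # IP capacity until the first loss; drain() lets the network
       # go idle before the next run.  run_slowstart() is assumed to
       # return (packets_before_loss, loss_pattern, max_rtt,
       # max_window).
       stats = []
       for _ in range(repetitions):
           stats.append(run_slowstart())
           drain()
       return stats
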
This is an engineering test: It should be performed on a quiescent
network or testbed, since cross traffic has the potential to change
the results in ill defined ways.

8.4.  Sender Rate Burst tests

These tests determine how well the network can deliver bursts sent
at the sender's interface rate.  Note that this test most heavily
exercises the front path, and is likely to include infrastructure
that may be out of scope for an access ISP, even though the bursts
might be caused by ACK compression, thinning or channel arbitration
in the access ISP.  See Appendix B.

Also, there are several details that are not precisely defined.  For
starters there is not a standard server interface rate.  1 Gb/s and
10 Gb/s are common today, but higher rates will become cost
effective and can be expected to be dominant some time in the
future.

Current standards permit TCP to send full window bursts following an
application pause.  (Congestion Window Validation [RFC2861]
[RFC7661] is not required, but even if it was, it does not take
effect until an application pause is longer than an RTO.)  Since
full window bursts are consistent with standard behavior, it is
desirable that the network be able to deliver such bursts, otherwise
application pauses will cause unwarranted losses.  Note that the
AIMD sawtooth requires a peak window that is twice
target_window_size, so the worst case burst may be
2*target_window_size.

It is also understood in the application and serving community that
interface rate bursts have a cost to the network that has to be
balanced against other costs in the servers themselves.  For example
TCP Segmentation Offload (TSO) reduces server CPU in exchange for
larger network bursts, which increase the stress on network buffer
memory.  Some newer TCP implementations can pace traffic at scale
[TSO_pacing] [TSO_fq_pacing].  It remains to be determined if and
how quickly these changes will be deployed.

at the cost of conflating diagnostic signatures when they fail.
These are by far the most efficient for monitoring networks that are
nominally expected to pass all tests.

8.5.1.  Sustained Bursts Test

The sustained burst test implements a combined worst case version of
all of the capacity tests above.  It is simply:

   Send target_window_size bursts of packets at server interface
   rate with target_RTT burst headway (burst start to next burst
   start).  Verify that the observed packet transfer statistics meet
   the target_run_length.

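A minimal sender-side sketch of this schedule follows; send_burst()
is a hypothetical stand-in for an interface rate burst primitive,
and the receiver scores the accumulated statistics separately.

   import time

   def sustained_burst_test(duration_s, target_window_size,
                            target_rtt_s, send_burst):
       # Send target_window_size packet bursts at interface rate with
       # target_RTT headway, measured burst start to next burst start.
       for _ in range(int(duration_s / target_rtt_s)):
           start = time.monotonic()
           send_burst(target_window_size)   # back-to-back packets
           headway = target_rtt_s - (time.monotonic() - start)
           if headway > 0:
               time.sleep(headway)          # wait out the headway
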
Key observations:

o  The subpath under test is expected to go idle for some fraction
   of the time, determined by the difference between the time to
   drain the queue at the subpath IP capacity, and the target_RTT.
   If the queue does not drain completely it may be an indication
   that the subpath has insufficient IP capacity or that there is
   some other problem with the test (e.g. inconclusive).

o  The burst sensitivity can be derated by sending smaller bursts
   more frequently.  E.g. send target_window_size*derate packet
   bursts every target_RTT*derate, where "derate" is less than one.

o  When not derated, this test is the most strenuous capacity test.

o  A subpath that passes this test is likely to be able to sustain
   higher rates (close to subpath_IP_capacity) for paths with RTTs
   significantly smaller than the target_RTT.

o  This test can be implemented with instrumented TCP [RFC4898],
   using a specialized measurement application at one end
   [MBMSource] and a minimal service at the other end [RFC0863]
   [RFC0864].

o  This test is efficient to implement, since it does not require
   per-packet timers, and can make use of TSO in modern NIC
   hardware.

o  If a subpath is known to pass the Standing Queue engineering
   tests
   other details of the measurement infrastructure, as long as the
   measurement infrastructure can accurately and reliably deliver
   the required bursts to the subpath under test.

8.5.2.  Streaming Media

Model Based Metrics can be implicitly implemented as a side effect
of any non-throughput maximizing application, such as streaming
media, with some additional controls and instrumentation in the
servers.  The essential requirement is that the data rate be
constrained such that even with arbitrary application pauses and
bursts, the data rate and burst sizes stay within the envelope
defined by the individual tests described above.

If the application's serving data rate can be constrained to be less
than or equal to the target_data_rate and the serving_RTT (the RTT
between the sender and client) is less than the target_RTT, this
constraint is most easily implemented by clamping the transport
window size to serving_window_clamp, set to the test_window,
computed for the actual serving path.

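A minimal sketch of the clamp computation, applying the test_window
formula to the serving path (the unit handling and rounding are
ours):

   import math

   def serving_window_clamp(target_data_rate_bps, serving_rtt_s,
                            target_mtu_bytes, header_overhead_bytes):
       # serving_window_clamp = target_data_rate * serving_RTT /
       # (target_MTU - header_overhead), in whole packets.
       payload_bits = 8 * (target_mtu_bytes - header_overhead_bytes)
       return math.ceil(target_data_rate_bps * serving_rtt_s /
                        payload_bits)

   # E.g. a 2.5 Mb/s target over a 30 ms serving path:
   # serving_window_clamp(2.5e6, 0.030, 1500, 64) -> 7 packets
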
Under the above constraints the serving_window_clamp will limit both
the serving data rate and burst sizes to be no larger than specified
by the procedures in Section 8.1.2 and Section 8.4 or Section 8.5.1.
Since the serving RTT is smaller than the target_RTT, the worst case
bursts that might be generated under these conditions will be
smaller than called for by Section 8.4 and the sender rate burst
sizes are implicitly derated by the serving_window_clamp divided by
the target_window_size at the very least.  (Depending on the
application behavior, the data might be significantly smoother than
specified by any of the burst tests.)

In an alternative implementation the data rate and bursts might be
explicitly controlled by a programmable traffic shaper or pacing at
the sender.  This would provide better control over transmissions
but is more complicated to implement, although the required
technology is available [TSO_pacing] [TSO_fq_pacing].

Note that these techniques can be applied to any content delivery
that can be constrained to a reduced data rate in order to inhibit
TCP equilibrium behavior.

9.  An Example

In this section we illustrate a TIDS designed to confirm that an
access ISP can reliably deliver HD video from multiple content
providers to all of their customers.  With modern codecs, minimal HD
video (720p) generally fits in 2.5 Mb/s.  Due to their geographical
size, network topology and modem characteristics the ISP determines
that most content is within a 50 mS RTT of their users.  (This
example RTT is sufficient to cover the propagation delay to
continental Europe or either US coast with low delay modems, or
somewhat smaller geographical regions if the modems require
additional delay to implement advanced compression and error
recovery.)

              2.5 Mb/s over a 50 ms path

              +----------------------+-------+---------+
              | End-to-End Parameter | value | units   |
              +----------------------+-------+---------+
              | target_rate          | 2.5   | Mb/s    |
              | target_RTT           | 50    | ms      |
              | target_MTU           | 1500  | bytes   |
              | header_overhead      | 64    | bytes   |
              | target_window_size   | 11    | packets |
              | target_run_length    | 363   | packets |
              +----------------------+-------+---------+

                             Table 1

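The derived rows of Table 1 follow mechanically from the end-to-end
parameters.  The sketch below assumes the reference model
target_run_length = 3*(target_window_size^2) (consistent with the
44% ratio noted in Appendix A); the unit handling is ours.

   import math

   target_rate = 2.5e6       # b/s
   target_rtt = 0.050        # seconds
   target_mtu = 1500         # bytes
   header_overhead = 64      # bytes

   payload = target_mtu - header_overhead            # 1436 bytes
   target_window_size = math.ceil(
       target_rate * target_rtt / (8 * payload))     # -> 11 packets
   target_run_length = 3 * target_window_size ** 2   # -> 363 packets
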
Table 1 shows the default TCP model with no derating, and as such is
quite conservative.  The simplest TIDS would be to use the sustained
burst test, described in Section 8.5.1.  Such a test would send 11
packet bursts every 50mS, and confirm that there was no more than

Since this number represents the entire end-to-end loss budget,
independent subpath tests could be implemented by apportioning the
packet loss ratio across subpaths.  For example 50% of the losses
might be allocated to the access or last mile link to the user, 40%
to the interconnects with other ISPs and 1% to each internal hop
(assuming no more than 10 internal hops).  Then all of the subpaths
can be tested independently, and the spatial composition of passing
subpaths would be expected to be within the end-to-end loss budget.

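For example, the apportioned run lengths can be computed directly
(the subpath names and ceiling rounding below are ours):

   import math

   target_run_length = 363                  # packets, from Table 1
   budget = {"access": 0.50, "interconnect": 0.40,
             "internal_hop": 0.01}

   # A subpath allocated fraction f of the end-to-end loss budget
   # must exhibit a run length of at least target_run_length / f.
   subpath_run_lengths = {name: math.ceil(target_run_length / f)
                          for name, f in budget.items()}
   # -> access: 726, interconnect: 908, internal_hop: 36300 packets
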
9.1. Observations about applicability

Guidance on deploying and using MBM belongs in a future document.
However this example illustrates some of the issues that may need to
be considered.

Note that another ISP, with different geographical coverage,
topology or modem technology may need to assume a different
target_RTT, and as a consequence different target_window_size and
target_run_length, even for the same target_data_rate.  One of the
implications of this is that infrastructure shared by multiple ISPs,
such as inter-exchange points (IXPs) and other interconnects may
need to be evaluated on the basis of the most stringent
target_window_size and target_run_length of any participating ISP.
One way to do this might be to choose target parameters for
evaluating such shared infrastructure on the basis of a hypothetical
reference path that does not necessarily match any actual paths.

Testing interconnects has generally been problematic: conventional
performance tests run between measurement points adjacent to either
side of the interconnect are not generally useful.  Unconstrained
TCP tests, such as iPerf [iPerf], are usually overly aggressive due
to the small RTT (often less than 1 mS).  With a short RTT these
tools are likely to report inflated data rates because on a short
RTT these tools can tolerate very high packet loss ratios and can
push other cross traffic off of the network.  As a consequence these
measurements are useless for predicting actual user performance over
longer paths, and may themselves be quite disruptive.  Model Based
Metrics solves this problem.  The interconnect can be evaluated with
the same TIDS as other subpaths.  Continuing our example, if the
interconnect is apportioned 40% of the losses, 11 packet bursts sent
every 50mS should have fewer than one loss per 82 bursts (902
packets).

10.  Validation

Since some aspects of the models are likely to be too conservative,
Section 5.2 permits alternate protocol models and Section 5.3
permits test parameter derating.  If either of these techniques is
used, we require demonstrations that such a TIDS can robustly detect
subpaths that will prevent authentic applications using
state-of-the-art protocol implementations from meeting the specified
Target Transport Performance.  This correctness criterion is
potentially difficult to prove, because it implicitly requires
validating a TIDS against all possible paths and subpaths.  The
procedures described here are still experimental.

We suggest two approaches, both of which should be applied: first,
publish a fully open description of the TIDS, including what
assumptions were used and how it was derived, such that the research
community can evaluate the design decisions, test them and comment
on their applicability; and second, demonstrate that applications
running over an infinitesimally passing testbed do meet the
performance targets.

An infinitesimally passing testbed resembles an epsilon-delta proof
in

and the front path should have just enough buffering to withstand 11
packet interface rate bursts.  We want every one of the TIDS tests
to fail if we slightly increase the relevant test parameter, so for
example sending a 12 packet burst should cause excess (possibly
deterministic) packet drops at the dominant queue at the bottleneck.
On this infinitesimally passing network it should be possible for a
real application using a stock TCP implementation in the vendor's
default configuration to attain 2.5 Mb/s over a 50 mS path.

The most difficult part of setting up such a testbed is arranging
for it to infinitesimally pass the individual tests.  Two approaches
are suggested: constraining the network devices not to use all
available resources (e.g. by limiting available buffer space or data
rate); and pre-loading subpaths with cross traffic.  Note that it is
important that a single environment be constructed which
infinitesimally passes all tests at the same time, otherwise there
is a chance that TCP can exploit extra latitude in some parameters
(such as data rate) to partially compensate for constraints in other
parameters (queue space, or vice-versa).

To the extent that a TIDS is used to inform public dialog it should
be fully publicly documented, including the details of the tests,
what assumptions were used and how it was derived.  All of the
details of the validation experiment should also be published with
sufficient detail for the experiments to be replicated by other
researchers.  All components should either be open source or fully
described proprietary implementations that are available to the
research community.

Based Metrics are expected to be a huge step forward because
equivalent measurements can be performed from multiple vantage
points, such that performance claims can be independently validated
by multiple parties.

Much of the acrimony in the Net Neutrality debate is due to the
historical lack of any effective vantage independent tools to
characterize network performance.  Traditional methods for measuring
Bulk Transport Capacity are sensitive to RTT and as a consequence
often yield very different results when run local to an ISP or
interconnect and when run over a customer's complete path.  Neither
the ISP nor customer can repeat the other's measurements, leading to
high levels of distrust and acrimony.  Model Based Metrics are
expected to greatly improve this situation.

This document only describes a framework for designing a Fully
Specified Targeted IP Diagnostic Suite.  Each FS-TIDS MUST include
its own security section.

12.  Acknowledgements

Ganga Maguluri suggested the statistical test for measuring loss
probability in the target run length.  Alex Gilgur helped with the
statistics.

Meredith Whittaker improved the clarity of the communications.

Ruediger Geib provided feedback which greatly improved the document.

Requirement Levels", BCP 14, RFC 2119, March 1997. Requirement Levels", BCP 14, RFC 2119, March 1997.
14.2.  Informative References

[RFC0863]  Postel, J., "Discard Protocol", STD 21, RFC 863, May
           1983.

[RFC0864]  Postel, J., "Character Generator Protocol", STD 22,
           RFC 864, May 1983.

[RFC2330]  Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
           "Framework for IP Performance Metrics", RFC 2330, May
           1998.

[RFC2861]  Handley, M., Padhye, J., and S. Floyd, "TCP Congestion
           Window Validation", RFC 2861, June 2000.

[RFC3148]  Mathis, M. and M. Allman, "A Framework for Defining
           Empirical Bulk Transfer Capacity Metrics", RFC 3148, July
           2001.

[RFC3465]  Allman, M., "TCP Congestion Control with Appropriate Byte
           Counting (ABC)", RFC 3465, February 2003.

[RFC4015]  Ludwig, R. and A. Gurtov, "The Eifel Response Algorithm
           for TCP", RFC 4015, February 2005.

[RFC4737]  Morton, A., Ciavattone, L., Ramachandran, G., Shalunov,
           S., and J. Perser, "Packet Reordering Metrics", RFC 4737,
           November 2006.

[RFC4898]  Mathis, M., Heffner, J., and R. Raghunarayan, "TCP
           Extended Statistics MIB", RFC 4898, May 2007.

[RFC5681]  Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
           Control", RFC 5681, September 2009.

[RFC5835]  Morton, A. and S. Van den Berghe, "Framework for Metric
           Composition", RFC 5835, April 2010.

[RFC6049]  Morton, A. and E. Stephan, "Spatial Composition of
           Metrics", RFC 6049, January 2011.

[RFC6673]  Morton, A., "Round-Trip Packet Loss Metrics", RFC 6673,
           August 2012.

[RFC6928]  Chu, J., Dukkipati, N., Cheng, Y., and M. Mathis,
           "Increasing TCP's Initial Window", RFC 6928,
           DOI 10.17487/RFC6928, April 2013,
           <http://www.rfc-editor.org/info/rfc6928>.

[RFC7312]  Fabini, J. and A. Morton, "Advanced Stream and Sampling
           Framework for IP Performance Metrics (IPPM)", RFC 7312,
           August 2014.

[RFC7398]  Bagnulo, M., Burbridge, T., Crawford, S., Eardley, P.,
           and A. Morton, "A Reference Path and Measurement Points
           for Large-Scale Measurement of Broadband Performance",
           RFC 7398, February 2015.

[RFC7567]  Baker, F., Ed. and G. Fairhurst, Ed., "IETF
           Recommendations Regarding Active Queue Management",
           BCP 197, RFC 7567, DOI 10.17487/RFC7567, July 2015,
           <http://www.rfc-editor.org/info/rfc7567>.

[RFC7661]  Fairhurst, G., Sathiaseelan, A., and R. Secchi, "Updating
           TCP to Support Rate-Limited Traffic", RFC 7661,
           DOI 10.17487/RFC7661, October 2015,
           <http://www.rfc-editor.org/info/rfc7661>.

[RFC7680]  Almes, G., Kalidindi, S., Zekauskas, M., and A. Morton,
           Ed., "A One-Way Loss Metric for IP Performance Metrics
           (IPPM)", STD 82, RFC 7680, DOI 10.17487/RFC7680, January
           2016, <http://www.rfc-editor.org/info/rfc7680>.

[MSMO97]   Mathis, M., Semke, J., Mahdavi, J., and T. Ott, "The
           Macroscopic Behavior of the TCP Congestion Avoidance
           Algorithm", Computer Communications Review volume 27,
           number 3, July 1997.

[WPING]    Mathis, M., "Windowed Ping: An IP Level Performance
           Diagnostic", INET 94, June 1994.

[mpingSource]
           Fan, X., Mathis, M., and D. Hamon, "Git Repository for
           mping: An IP Level Performance Diagnostic", Sept 2013,
           <https://github.com/m-lab/mping>.

[MBMSource]
           Hamon, D., Stuart, S., and H. Chen, "Git Repository for
           Model Based Metrics", Sept 2013,
           <https://github.com/m-lab/MBM>.

[Pathdiag]
           Mathis, M., Heffner, J., O'Neil, P., and P. Siemsen,
           "Pathdiag: Automated TCP Diagnosis", Passive and Active
           Measurement, June 2008.

[iPerf]    Wikipedia Contributors, "iPerf", Wikipedia, The Free
           Encyclopedia, cited March 2015,
           <http://en.wikipedia.org/w/
           index.php?title=Iperf&oldid=649720021>.

[StatQC]   Montgomery, D., "Introduction to Statistical Quality
           Control - 2nd ed.", ISBN 0-471-51988-X, 1990.

[Rtool]    R Development Core Team, "R: A language and environment
           for statistical computing. R Foundation for Statistical
           Computing, Vienna, Austria. ISBN 3-900051-07-0, URL
           http://www.R-project.org/", 2011.

[CVST]     Krueger, T. and M. Braun, "R package: Fast Cross-
           Validation via Sequential Testing", version 0.1, 11 2012.

[AFD]      Pan, R., Breslau, L., Prabhakar, B., and S. Shenker,
           "Approximate fairness through differential dropping",
           SIGCOMM Comput. Commun. Rev. 33, 2, April 2003.

[wikiBloat]
           Wikipedia, "Bufferbloat",
           <http://en.wikipedia.org/w/
           index.php?title=Bufferbloat&oldid=608805474>, March 2015.

[CCscaling]
           Fernando, F., Doyle, J., and S. Steven, "Scalable laws
           for stable network congestion control", Proceedings of
           Conference on Decision and Control,
           http://www.ee.ucla.edu/~paganini, December 2001.

[TSO_pacing]
           Corbet, J., "TSO sizing and the FQ scheduler", LWN.net,
           https://lwn.net/Articles/564978/, Aug 2013.

[TSO_fq_pacing]
           Dumazet, E. and Y. Chen, "TSO, fair queuing, pacing:
           three's a charm", Proceedings of IETF 88, TCPM WG,
           https://www.ietf.org/proceedings/88/slides/
           slides-88-tcpm-9.pdf, Nov 2013.

[Policing]
           Flach, T., Papageorge, P., Terzis, A., Pedrosa, L.,
           Cheng, Y., Karim, T., Katz-Bassett, E., and R. Govindan,
           "An Internet-Wide Analysis of Traffic Policing",
           ACM SIGCOMM, August 2016.

Appendix A.  Model Derivations

The reference target_run_length described in Section 5.2 is based on
very conservative assumptions: that all window above
target_window_size contributes to a standing queue that raises the
RTT, and that classic Reno congestion control with delayed ACKs are
in effect.  In this section we provide two alternative calculations
using different assumptions.

   target_run_length = (4/3)(target_window_size^2)

Note that this is 44% of the reference_run_length computed earlier.
This makes sense because under the assumptions in Section 5.2 the
AIMD sawtooth caused a queue at the bottleneck, which raised the
effective RTT by 50%.

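As a quick numerical check (using the target_window_size of 11 from
the example in Section 9; the ratio is independent of the window):

   reference_run_length = 3 * 11 ** 2            # Section 5.2: 363
   alternate_run_length = (4.0 / 3.0) * 11 ** 2  # this appendix: ~161.3
   ratio = alternate_run_length / reference_run_length  # 0.444..., ~44%
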
Appendix B.  The effects of ACK scheduling

For many network technologies simple queuing models don't apply: the
network schedules, thins or otherwise alters the timing of ACKs and
data, generally to raise the efficiency of the channel allocation
algorithms when confronted with relatively widely spaced small ACKs.
These efficiency strategies are ubiquitous for half duplex, wireless
and broadcast media.

Altering the ACK stream by holding or thinning ACKs typically has
two consequences: it raises the implied bottleneck IP capacity,
making the fine grained slowstart bursts either faster or larger and
it raises the effective RTT by the average time that the ACKs and
data are delayed.  The first effect can be partially mitigated by
re-clocking ACKs once they are beyond the bottleneck on the return
path to the sender, however this further raises the effective RTT.

The most extreme example of this sort of behavior would be a half
duplex channel that is not released as long as the endpoint currently
holding the channel has more traffic (data or ACKs) to send.  Such
environments cause self clocked protocols under full load to revert
to extremely inefficient stop and wait behavior.  The channel
constrains the protocol to send an entire window of data as a single
contiguous burst on the forward path, followed by the entire window
of ACKs on the return path.
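
As an illustration, the hypothetical timing model below (ours, not a
test defined by this framework) approximates the achievable rate on
such a channel by charging each window cycle for the data burst, the
returning ACK burst, and two channel turnarounds.

   # Hypothetical stop and wait model for a half duplex channel that
   # is only released when the holder runs out of traffic to send.

   def stop_and_wait_rate(window_pkts, mss=1500, ack_bytes=40,
                          channel_bps=10e6, turnaround_s=0.002):
       data_s = window_pkts * mss * 8 / channel_bps       # data burst
       ack_s = window_pkts * ack_bytes * 8 / channel_bps  # ACK burst
       cycle_s = data_s + ack_s + 2 * turnaround_s        # full cycle
       return window_pkts * mss * 8 / cycle_s             # bits/second

   print(stop_and_wait_rate(window_pkts=20) / 1e6)  # ~8.4 Mbit/s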
If a particular return path contains a subpath or device that alters
the timing of the ACK stream, then the entire front path from the
sender up to the bottleneck must be tested at the burst parameters
implied by the ACK scheduling algorithm.  The most important
parameter is the Implied Bottleneck IP Capacity, which is the average
rate at which the ACKs advance snd.una.  Note that thinning the ACK
stream (relying on the cumulative nature of seg.ack to permit
discarding some ACKs) causes most TCP implementations to send
interface rate bursts to offset the longer times between ACKs in
order to maintain the average data rate.
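
The sketch below (the trace format is our assumption) estimates the
Implied Bottleneck IP Capacity from a timestamped record of
cumulative acknowledgment values, i.e. the average rate at which
snd.una advances:

   # Estimate Implied Bottleneck IP Capacity from an ACK trace.
   # ack_trace: list of (timestamp_seconds, cumulative_ack_bytes)
   # pairs, sorted by time.

   def implied_bottleneck_ip_capacity(ack_trace):
       (t0, a0), (t1, a1) = ack_trace[0], ack_trace[-1]
       return (a1 - a0) * 8 / (t1 - t0)  # average rate, bits/second

   # Thinned ACKs yield the same average rate, but each ACK then
   # triggers a larger (interface rate) burst from the sender.
   trace = [(0.00, 0), (0.01, 12000), (0.02, 24000), (0.03, 36000)]
   print(implied_bottleneck_ip_capacity(trace) / 1e6)  # 9.6 Mbit/s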
Note that due to ubiquitous self clocking in Internet protocols, ill
conceived channel allocation mechanisms are likely to increase the
queuing stress on the front path because they cause larger full
sender rate data bursts.
Holding data or ACKs for channel allocation or other reasons (such as
forward error correction) always raises the effective RTT relative to
the minimum delay for the path.  Therefore it may be necessary to
replace target_RTT in the calculation in Section 5.2 by an
effective_RTT, which includes the target_RTT plus a term to account
for the extra delays introduced by these mechanisms.
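
A minimal sketch of this substitution (the parameter values and the
simplified window formula are ours, after the style of the Section
5.2 derivation):

   import math

   # effective_RTT = target_RTT plus the average extra delay from
   # ACK/data holding (channel allocation, FEC, and similar).
   def effective_rtt(target_rtt_s, hold_delay_s):
       return target_rtt_s + hold_delay_s

   # Simplified window derivation, assuming 1500 byte packets.
   def target_window_size(target_rate_bps, rtt_s, mss=1500):
       return math.ceil(target_rate_bps * rtt_s / (mss * 8))

   # 10 Mbit/s target over a 50 ms path with 20 ms of holding delay:
   W = target_window_size(10e6, effective_rtt(0.050, 0.020))
   print(W, 3 * W ** 2)  # window and the corresponding run length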
Appendix C.  Version Control

This section to be removed prior to publication.

Formatted: Thu Apr 7 18:12:37 PDT 2016
Authors' Addresses

Matt Mathis
Google, Inc
1600 Amphitheater Parkway
Mountain View, California 94043
USA

Email: mattmathis@google.com

Al Morton
AT&T Labs
200 Laurel Avenue South
Middletown, NJ 07748
USA

Phone: +1 732 420 1571
Email: acmorton@att.com