IP Performance Working Group                                  M. Mathis
Internet-Draft                                               Google, Inc
Intended status: Experimental                                  A. Morton
Expires: April 24, 2014                                        AT&T Labs
                                                        October 21, 2013

                 Model Based Bulk Performance Metrics
              draft-ietf-ippm-model-based-metrics-01.txt
Abstract

   We introduce a new class of model based metrics designed to
   determine if a long network path can meet predefined end-to-end
   application performance targets by applying a suite of IP diagnostic
   tests to successive subpaths.  The subpath at a time tests are
   designed to exclude all known conditions which might prevent the
   full end-to-end path from meeting the user's target application
   performance.
   This approach makes it possible to determine the IP performance
   requirements needed to support the desired end-to-end TCP
   performance.  The IP metrics are based on traffic patterns that
   mimic TCP or other transport protocol but are precomputed
   independently of the actual behavior of the transport protocol over
   the subpath under test.  This makes the measurements open loop,
   eliminating nearly all of the difficulties encountered by
   traditional bulk transport metrics, which fundamentally depend on
   congestion control equilibrium behavior.
   A natural consequence of this methodology is verifiable network
   measurement: measurements from any given vantage point can be
   verified by repeating them from other vantage points.
Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."
   This Internet-Draft will expire on April 24, 2014.
Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Table of Contents

   1.  Introduction
     1.1.  TODO
   2.  Terminology
   3.  New requirements relative to RFC 2330
   4.  Background
     4.1.  TCP properties
   5.  Common Models and Parameters
     5.1.  Target End-to-end parameters
     5.2.  Common Model Calculations
     5.3.  Parameter Derating
   6.  Common testing procedures
     6.1.  Traffic generating techniques
       6.1.1.  Paced transmission
       6.1.2.  Constant window pseudo CBR
       6.1.3.  Scanned window pseudo CBR
       6.1.4.  Concurrent or channelized testing
       6.1.5.  Intermittent Testing
       6.1.6.  Intermittent Scatter Testing
     6.2.  Interpreting the Results
       6.2.1.  Test outcomes
       6.2.2.  Statistical criteria for measuring run_length
       6.2.3.  Reordering Tolerance
     6.3.  Test Qualifications
       6.3.1.  Verify the Traffic Generation Accuracy
       6.3.2.  Verify the absence of cross traffic
       6.3.3.  Additional test preconditions
   7.  Diagnostic Tests
     7.1.  Basic Data Rate and Run Length Tests
       7.1.1.  Run Length at Paced Full Data Rate
       7.1.2.  Run Length at Full Data Windowed Rate
       7.1.3.  Background Run Length Tests
     7.2.  Standing Queue tests
       7.2.1.  Congestion Avoidance
       7.2.2.  Bufferbloat
       7.2.3.  Non excessive loss
       7.2.4.  Duplex Self Interference
     7.3.  Slowstart tests
       7.3.1.  Full Window slowstart test
       7.3.2.  Slowstart AQM test
     7.4.  Sender Rate Burst tests
     7.5.  Combined Tests
       7.5.1.  Sustained burst test
       7.5.2.  Live Streaming Media
   8.  Examples
     8.1.  Near serving HD streaming video
     8.2.  Far serving SD streaming video
     8.3.  Bulk delivery of remote scientific data
   9.  Validation
   10.  Acknowledgements
   11.  Informative References
   Appendix A.  Model Derivations
     A.1.  Aggregate Reno
     A.2.  CUBIC
   Appendix B.  Version Control
   Authors' Addresses
1.  Introduction

   Model based bulk performance metrics evaluate an Internet path's
   ability to carry bulk data.  TCP models are used to design a
   targeted diagnostic suite (TDS) of IP performance tests which can be
   applied independently to each subpath of the full end-to-end path.
   A targeted diagnostic suite is constructed such that independent
   tests of the subpaths will accurately predict if the full end-to-end
   path can deliver bulk data at the specified performance target,
   independent of the measurement vantage points or other details of
   the test procedures used to measure each subpath.
   Each test in the TDS consists of a precomputed traffic pattern and
   statistical criteria for evaluating packet delivery.
   TCP models are used to design traffic patterns that mimic TCP or
   other bulk transport protocol operating at the target performance
   and RTT over a full range of conditions, including flows that are
   bursty at multiple time scales.  The traffic patterns are computed
   in advance based on the properties of the full end-to-end path and
   independent of the properties of individual subpaths.  As much as
   possible the traffic is generated deterministically in ways that
   minimize the extent to which test methodology, measurement points,
   measurement vantage or path partitioning affect the details of the
   traffic.
   Models are also used to compute the bounds on the packet delivery
   statistics for acceptable IP performance.  The criteria for passing
   each test are determined from the end-to-end target performance and
   are independent of the subpath under test.  In addition to passing
   or failing, a test can be inconclusive if the precomputed traffic
   pattern was not authentically generated, test preconditions were not
   met or the measurement results were not statistically significant.
   TCP's ability to compensate for less than ideal network conditions
   is fundamentally affected by the RTT and MTU of the end-to-end
   Internet path that it traverses.  The end-to-end path determines
   fixed bounds on these parameters.  The target values for these three
   parameters, Data Rate, RTT and MTU, are determined by the
   application, its intended use and the physical infrastructure over
   which it is intended to traverse.  These parameters are used to
   inform the models used to design the TDS.
   This document describes a framework for deriving the traffic and
   delivery statistics for model based metrics.  It does not fully
   specify any measurement techniques.  Important details such as
   packet type-p selection, sampling techniques, vantage selection, etc
   are out of scope for this document.  We imagine Fully Specified
   Targeted Diagnostic Suites (FSTDS) that fully define all of these
   details.  We use TDS to refer to the subset of such a specification
   that is in scope for this document.  A TDS includes specifications
   for the traffic and delivery statistics for the diagnostic tests
   themselves, documentation of the models and any assumptions or
   derating used to derive the test parameters, and a description of
   the test setup used to calibrate the models, as described in later
   sections.
   Section 2 defines terminology used throughout this document.

   It has been difficult to develop BTC metrics due to some overlooked
   requirements described in Section 3 and some intrinsic problems with
   using protocols for measurement, described in Section 4.
   In Section 5 we describe the models and common parameters used to
   derive the targeted diagnostic suite.  In Section 6 we describe
   common testing procedures.  Each subpath is evaluated using a suite
   of far simpler and more predictable diagnostic tests described in
   Section 7.  In Section 8 we present three example TDS: one that
   might be representative of HD video, when served fairly close to the
   user, a second that might be representative of standard video,
   served from a greater distance, and a third that might be
   representative of a network designed to support high performance
   bulk download.
   There exists a small risk that a model based metric itself might
   yield a false pass result, in the sense that every subpath of an
   end-to-end path passes every IP diagnostic test and yet a real
   application fails to attain the performance target over the end-to-
   end path.  If this happens, then the validation procedure described
   in Section 9 needs to be used to prove and potentially revise the
   models.

   Future documents will define model based metrics for other traffic
   classes and application types, such as real time streaming media.
1.1.  TODO

   Please send comments on this draft to ippm@ietf.org.  See
   http://goo.gl/02tkD for more information including: interim drafts,
   an up to date todo list and information on contributing.
2.  Terminology
   Terminology about paths, etc.  See [RFC2330] and
   [I-D.morton-ippm-lmap-path].

   [data] sender  Host sending data and receiving ACKs, typically via
      TCP.

   [data] receiver  Host receiving data and sending ACKs, typically via
      TCP.

   subpath  A portion of the full path.  Note that there is no
      requirement that subpaths be non-overlapping.

   Measurement Point  Measurement points as described in
      [I-D.morton-ippm-lmap-path].

   test path  A path between two measurement points that includes a
      subpath of the end-to-end path under test, plus possibly
      additional infrastructure between the measurement points and the
      subpath.

   [Dominant] Bottleneck  The Bottleneck that determines a flow's self
      clock.  It generally determines the traffic statistics for the
      entire path.  See Section 4.1.

   front path  The subpath from the data sender to the dominant
      bottleneck.

   back path  The subpath from the dominant bottleneck to the receiver.

   return path  The path taken by the ACKs from the data receiver to
      the data sender.

   cross traffic  Other, potentially interfering, traffic competing for
      resources (network and/or queue capacity).
   Properties determined by the end-to-end path and application.  They
   are described in more detail in Section 5.1.

   Application Data Rate  General term for the data rate as seen by the
      application above the transport layer.  This is the payload data
      rate, and excludes TCP/IP (or other protocol) headers and
      retransmits.

   Link Data Rate  General term for the data rate as seen by the link
      or lower layers.  It includes transport and IP headers,
      retransmits and other transport layer overhead.  This document is
      agnostic as to whether the link data rate includes or excludes
      framing, MAC or other lower layer overheads, except that they
      must be treated uniformly.

   end-to-end target parameters:  Application or transport performance
      goals for the end-to-end path.  They include the target data
      rate, RTT and MTU described below.

   Target Data Rate:  The application or ultimate user's performance
      goal.  When converted to link data rate, it must be slightly
      smaller than the actual link data rate, otherwise there is no
      margin for compensating for RTT or other path properties.  These
      tests will be excessively brittle if the target data rate does
      not include any built in headroom.

   Target RTT (Round Trip Time):  The baseline (minimum) RTT of the
      longest end-to-end path over which the application expects to
      meet the target performance.  This must be specified considering
      authentic packet sizes: MTU sized packets on the forward path,
      header_overhead sized packets on the return (ACK) path.

   Target MTU (Maximum Transmission Unit):  The maximum MTU supported
      by the end-to-end path over which the application expects to meet
      the target performance.  Assume 1500 Bytes per packet unless
      otherwise specified.  If some subpath forces a smaller MTU, then
      it becomes the target MTU, and all model calculations and subpath
      tests must use the same smaller MTU.

   Effective Bottleneck Data Rate:  This is the bottleneck data rate
      that might be inferred from the ACK stream, by looking at how
      much data the ACK stream reports was delivered per unit time.
      See Section 4.1 for more details.

   [sender] [interface] rate:  The burst data rate, constrained by the
      data sender's interfaces.  Today 1 or 10 Gb/s are typical.

   Header overhead:  The IP and TCP header sizes, which are the portion
      of each MTU not available for carrying application payload.
      Without loss of generality this is assumed to be the size for
      returning acknowledgements (ACKs).  For TCP, the Maximum Segment
      Size (MSS) is the Target MTU minus the header overhead.
   Basic parameters common to models and subpath tests.  They are
   described in more detail in Section 5.2.

   pipe size  A general term for the number of packets needed in flight
      (the window size) to exactly fill some network path or subpath.
      This is the window size at which queueing normally begins.

   target_pipe_size:  The number of packets in flight (the window size)
      needed to exactly meet the target rate, with a single stream and
      no cross traffic for the specified target data rate, RTT and MTU.

   run length  A general term for the observed, measured or specified
      number of packets that are (to be) delivered between losses or
      ECN marks.  Nominally one over the loss or ECN marking
      probability.

   target_run_length  Required run length computed from the target data
      rate, RTT and MTU.
   Ancillary parameters used for some tests:

   derating:  Under some conditions the standard models are too
      conservative.  The modeling framework permits some latitude in
      relaxing or derating some test parameters as described in
      Section 5.3, in exchange for more stringent TDS validation
      procedures, described in Section 9.

   subpath_data_rate  The maximum IP data rate supported by a subpath.
      This typically includes TCP/IP overhead, including headers,
      retransmits, etc.

   test_path_RTT  The RTT (using appropriate packet sizes) between two
      measurement points.

   test_path_pipe  The amount of data necessary to fill a test path.
      Nominally the test path RTT times the subpath_data_rate (which
      should be part of the end-to-end subpath).

   test_window  The window necessary to meet the target_rate over a
      subpath.  Typically test_window=target_data_rate*test_RTT/
      target_MTU.
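   As a rough illustration of these ancillary parameters, the following
   Python sketch computes test_window and test_path_pipe for a
   hypothetical test path.  The numeric values and unit conversions are
   assumptions chosen only for the example; they are not part of any
   specified test.

      # Illustrative only: ancillary test path parameters for an
      # assumed subpath.  All numbers are examples, not recommendations.

      target_data_rate = 5e6     # bits/s of application payload (assumed)
      test_path_RTT = 0.030      # seconds between measurement points (assumed)
      target_MTU = 1500          # bytes
      subpath_data_rate = 10e6   # bits/s supported by the subpath (assumed)

      # test_window = target_data_rate * test_RTT / target_MTU, in packets,
      # per the definition above (MTU converted from bytes to bits).
      test_window = target_data_rate * test_path_RTT / (target_MTU * 8)

      # test_path_pipe: data needed to fill the test path, nominally the
      # test path RTT times the subpath_data_rate (here in bytes).
      test_path_pipe = test_path_RTT * subpath_data_rate / 8

      print("test_window ~= %.1f packets" % test_window)
      print("test_path_pipe ~= %.0f bytes" % test_path_pipe)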
   Tests can be classified into groups according to their
   applicability.

   Capacity tests  determine if a network subpath has sufficient
      capacity to deliver the target performance.  As long as the test
      traffic is within the proper envelope for the target end-to-end
      performance, the average packet losses or ECN marks must be below
      the threshold computed by the model.  As such, they reflect
      parameters that can transition from passing to failing as a
      consequence of additional presented load or the actions of other
      network users.  By definition, capacity tests also consume
      significant network resources (data capacity and/or buffer
      space), and the test schedules must be balanced by their cost.

   Monitoring tests  are designed to capture the most important aspects
      of a capacity test, but without causing unreasonable ongoing load
      themselves.  As such they may miss some details of the network
      performance, but can serve as a useful reduced cost proxy for a
      capacity test.

   Engineering tests  evaluate how network algorithms (such as AQM and
      channel allocation) interact with TCP style self clocked
      protocols and adaptive congestion control based on packet loss
      and ECN marks.  These tests are likely to have complicated
      interactions with other traffic and under some conditions can be
      inversely sensitive to load.  For example a test to verify that
      an AQM algorithm causes ECN marks or packet drops early enough to
      limit queue occupancy may experience false pass results in the
      presence of bursty cross traffic.  It is important that
      engineering tests be performed under a wide range of conditions,
      including both in situ and bench testing, and over a wide variety
      of load conditions.  Ongoing monitoring is less likely to be
      useful for engineering tests, although sparse in situ testing
      might be appropriate.
3.  New requirements relative to RFC 2330

   [Move this entire section to a future paper]

   Model Based Metrics are designed to fulfill some additional
   requirements that were not recognized at the time RFC 2330 [RFC2330]
   was written.  These missing requirements may have significantly
   contributed to policy difficulties in the IP measurement space.
   Some additional requirements are:
   o  Metrics must be actionable by the ISP - they have to be
      interpreted in terms of behaviors or properties at the IP or
      lower layers, that an ISP can test, repair and verify.
   o  Metrics must be vantage point invariant over a significant range
      of measurement point choices (e.g., measurement points as
      described in [I-D.morton-ippm-lmap-path]), including off path
      measurement points.  The only requirements on MP selection should
      be that the portion of the path that is not under test is
      effectively ideal (or is non ideal in calibratable ways) and the
      RTT between MPs is below some reasonable bound.
   o  Metrics must be repeatable by multiple parties.  It must be
      possible for different parties to make the same measurement and
      observe the same results.  In particular it is specifically
      important that both a consumer (or their delegate) and ISP be
      able to perform the same measurement and get the same result.
   NB: All of the metric requirements in RFC 2330 should be reviewed
   and potentially revised.  If such a document is opened soon enough,
   this entire section should be dropped.
4.  Background

   [Move to a future paper, abridge here]
   At the time the IPPM WG was chartered, sound Bulk Transport Capacity
   measurement was known to be beyond our capabilities.  By hindsight
   it is now clear why it is such a hard problem:
   o  TCP is a control system with circular dependencies - everything
      affects performance, including components that are explicitly not
      part of the test.
   o  Congestion control is an equilibrium process, transport protocols
      change the network (raise loss probability and/or RTT) to conform
      to their behavior.
   o  TCP's ability to compensate for network flaws is directly
      [...]
   interact in unknown and ill defined ways.  The situation is actually
   worse than the traditional physics problem where you can at least
   estimate the relative momentum of the measurement and measured
   particles.  For network measurement you can not in general determine
   the relative "elasticity" of the measurement traffic and cross
   traffic, so you can not even gauge the relative magnitude of their
   effects on each other.
   The MBM approach is to "open loop" TCP by precomputing traffic
   patterns that are typically generated by TCP operating at the given
   target parameters, and evaluating delivery statistics (losses, ECN
   marks and delay).  In this approach the measurement software
   explicitly controls the data rate, transmission pattern or cwnd
   (TCP's primary congestion control state variables) to create
   repeatable traffic patterns that mimic TCP behavior but are
   independent of the actual network behavior of the subpath under
   test.  These patterns are manipulated to probe the network to verify
   that it can deliver all of the traffic patterns that a transport
   protocol is likely to generate under normal operation at the target
   rate and RTT.
   Models are used to determine the actual test parameters (burst size,
   loss rate, etc) from the target parameters.  The basic method is to
   use models to estimate specific network properties required to
   sustain a given transport flow (or set of flows), and using a suite
   of metrics to confirm that the network meets the required
   properties.

   A network is expected to be able to sustain a Bulk TCP flow of a
   given data rate, MTU and RTT when the following conditions are met:
   o  The raw link rate is higher than the target data rate.
   o  The raw packet run length is larger than required by a suitable
      TCP performance model.
   o  There is sufficient buffering at the dominant bottleneck to
      absorb a slowstart rate burst large enough to get the flow out of
      slowstart at a suitable window size.
   o  There is sufficient buffering in the front path to absorb and
      smooth sender interface rate bursts at all scales that are likely
      to be generated by the application, any channel arbitration in
      the ACK path or other mechanisms.
   o  When there is a standing queue at a bottleneck for a shared media
      subpath, there are suitable bounds on how the data and ACKs
      interact, for example due to the channel arbitration mechanism.
   o  When there is a slowly rising standing queue at the bottleneck
      the onset of packet loss has to be at an appropriate point (time
      or queue depth) and progressive.
   The tests to verify these conditions are described in Section 7.
   A singleton [RFC2330] measurement is a pass/fail evaluation of a
   given path or subpath at a given performance.  Note that
   measurements to confirm that a link passes at one particular
   performance might not be useful to predict if the link will pass at
   a different performance.

   A TDS does have several valuable properties, such as natural ways to
   define several different composition metrics [RFC5835].
   [Add text on algebra on metrics (A-Frame from [RFC2330]) and
   tomography.]  The Spatial Composition of fundamental IPPM metrics
   has been studied and standardized.  For example, the algebra to
   combine empirical assessments of loss ratio to estimate complete
   path performance is described in section 5.1.5. of [RFC6049].  We
   intend to use this and other composition metrics as necessary.
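   As a rough illustration of the kind of composition algebra referred
   to above, the following Python sketch combines per-subpath empirical
   loss ratios into an estimate for the complete path, in the spirit of
   section 5.1.5 of [RFC6049].  The subpath loss ratios are invented
   values used only for the example.

      # Illustrative sketch of Spatial Composition of loss ratios: the
      # complete path loss ratio is estimated from independently
      # measured subpath loss ratios, assuming independence.
      # The values below are invented for illustration.

      subpath_loss_ratios = [0.0001, 0.00005, 0.0002]

      # Probability that a packet survives every subpath.
      delivered = 1.0
      for loss in subpath_loss_ratios:
          delivered *= (1.0 - loss)

      composed_loss_ratio = 1.0 - delivered
      print("estimated end-to-end loss ratio: %.6f" % composed_loss_ratio)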
   We are developing a tool that can perform many of the tests
   described here [MBMSource].
4.1.  TCP properties

   [Move this entire section to a future paper]
   TCP and SCTP are self clocked protocols.  The dominant steady state
   behavior is to have an approximately fixed quantity of data and
   acknowledgements (ACKs) circulating in the network.  The receiver
   reports arriving data by returning ACKs to the data sender, the data
   sender most frequently responds by sending exactly the same quantity
   of data back into the network.  The quantity of data plus the data
   represented by ACKs circulating in the network is referred to as the
   window.  The mandatory congestion control algorithms incrementally
   adjust the window by sending slightly more or less data in response
   to each ACK.  The fundamentally important property of this system is
   that it is entirely self clocked: the data transmissions are a
   reflection of the ACKs that were delivered by the network, the ACKs
   are a reflection of the data arriving from the network.
   A number of phenomena can cause bursts of data, even in idealized
   networks that are modeled as simple queueing systems.
   During slowstart the data rate is doubled on each RTT by sending
   twice as much data as was delivered to the receiver on the prior
   RTT.  For slowstart to be able to fill such a network the network
   must be able to tolerate slowstart bursts up to the full pipe size
   inflated by the anticipated window reduction on the first loss or
   ECN mark.  For example, with classic Reno congestion control, an
   optimal slowstart has to end with a burst that is twice the
   bottleneck rate for exactly one RTT in duration.  This burst causes
   a queue which is exactly equal to the pipe size (the window is
   exactly twice the pipe size) so when the window is halved, the new
   window will be exactly the pipe size.
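   To make that geometry concrete, the following Python sketch computes
   the size of the final slowstart burst and the resulting peak queue
   for a hypothetical path.  The bottleneck rate, RTT and MTU are
   assumed values chosen only for illustration; this is not part of any
   specified test.

      # Illustrative only: final slowstart burst under classic Reno,
      # where the last burst runs at twice the bottleneck rate for one
      # RTT, so the induced queue roughly equals the pipe size.

      bottleneck_rate = 10e6   # bits/s (assumed)
      rtt = 0.050              # seconds (assumed)
      mtu = 1500               # bytes (assumed)

      pipe_size = bottleneck_rate * rtt / (mtu * 8)  # packets to fill the path
      final_slowstart_burst = 2 * pipe_size          # window at the end of slowstart
      peak_queue = final_slowstart_burst - pipe_size

      print("pipe_size ~= %.1f packets" % pipe_size)
      print("final slowstart burst ~= %.1f packets" % final_slowstart_burst)
      print("peak queue ~= %.1f packets" % peak_queue)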
   Another source of bursts is application pauses.  If the application
   pauses (stops reading or writing data) for some fraction of one RTT,
   state-of-the-art TCP "catches up" to the earlier window size by
   sending a burst of data at the full sender interface rate.  To fill
   such a network with a realistic application, the network has to be
   able to tolerate interface rate bursts from the data sender large
   enough to cover application pauses.
   Note that if the bottleneck data rate is significantly slower than
   the rest of the path, the slowstart bursts will not cause
   significant queues anywhere else along the path; they primarily
   exercise the queue at the dominant bottleneck.  Furthermore,
   although the interface rate bursts caused by the application are
   likely to be smaller than the last burst of a slowstart, they are at
   a higher rate so they can exercise queues at arbitrary points along
   the "front path" from the data sender up to and including the queue
   at the bottleneck.
   For many network technologies a simple queueing model does not
   apply: the network schedules, thins or otherwise alters the timing
   of ACKs and data, generally to raise the efficiency of the channel
   allocation process when confronted with relatively widely spaced
   small ACKs.  These efficiency strategies are ubiquitous for half
   duplex, wireless or broadcast media.
   Altering the ACK stream generally has two consequences: raising the
   effective bottleneck data rate, making slowstart burst at higher
   rates (possibly as high as the sender's interface rate), and
   effectively raising the RTT by the time that the ACKs were
   postponed.  The first effect can be partially mitigated by
   reclocking ACKs once they are beyond the bottleneck on the return
   path to the sender, however this further raises the effective RTT.
   The most extreme example of this class of behaviors is a half duplex
   channel that is never released until the current end point has no
   pending traffic.  Such environments cause self clocked protocols to
   revert to extremely inefficient stop and wait behavior, where they
   send an entire window of data as a single burst, followed by the
   entire window of ACKs on the return path.
   If a particular end-to-end path contains a link or device that
   alters the ACK stream, then the entire path from the sender up to
   the bottleneck must be tested at the burst parameters implied by the
   ACK scheduling algorithm.  The most important parameter is the
   Effective Bottleneck Data Rate, which is the average rate at which
   the ACKs advance snd.una.  Note that thinning the ACKs (relying on
   the cumulative nature of seg.ack to permit discarding some ACKs)
   implies an effectively infinite bottleneck data rate.
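   The following Python sketch shows one way the Effective Bottleneck
   Data Rate might be estimated from a captured ACK stream, as the
   average rate at which the cumulative ACKs advance snd.una.  The
   input representation (a list of timestamp and seg.ack pairs) and the
   sample values are assumptions made only for illustration.

      # Illustrative only: Effective Bottleneck Data Rate estimated as
      # the average rate at which cumulative ACKs advance snd.una.

      def effective_bottleneck_rate(acks):
          """acks: list of (time_seconds, seg_ack) tuples in arrival order."""
          if len(acks) < 2:
              return None
          first_time, first_ack = acks[0]
          last_time, last_ack = acks[-1]
          bytes_acked = last_ack - first_ack   # how far snd.una advanced
          elapsed = last_time - first_time
          return 8.0 * bytes_acked / elapsed   # bits per second

      # Invented example: one 1448 byte segment ACKed every 1.2 ms.
      sample = [(i * 0.0012, i * 1448) for i in range(100)]
      print("~%.0f bits/s" % effective_bottleneck_rate(sample))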
   To verify that a path can meet the performance target, it is
   necessary to independently confirm that the entire path can tolerate
   bursts in the dimensions that are likely to be induced by the
   application and any data or ACK scheduling anywhere in the path.
   Two common cases are the most important: slowstart bursts at twice
   the effective bottleneck data rate; and somewhat smaller sender
   interface rate bursts.
   The slowstart rate bursts must be at least as large as
   target_pipe_size packets and should be twice as large (so the peak
   queue occupancy at the dominant bottleneck would be approximately
   target_pipe_size).

   There is no general model for how well the network needs to tolerate
   sender interface rate bursts.  All existing TCP implementations send
   full sized full rate bursts under some typically uncommon
   conditions, such as application pauses that approximately match the
   RTT, or when ACKs are lost or thinned.  Strawman: partial window
   bursts (some fraction of target_pipe_size) should be tolerated
   without significantly raising the loss probability.  Full
   target_pipe_size bursts may slightly increase the loss probability.
   Interface rate bursts as large as twice target_pipe_size should not
   cause deterministic packet drops.
5.  Common Models and Parameters

5.1.  Target End-to-end parameters
   The target end to end parameters are the target data rate, target
   RTT and target MTU as defined in Section 2.  These parameters are
   determined by the needs of the application or the ultimate end user
   and the end-to-end Internet path over which the application is
   expected to operate.  The target parameters are in units that make
   sense to the upper layer: payload bytes delivered to the
   application, above TCP.  They exclude overheads associated with TCP
   and IP headers, retransmits and other protocols (e.g. DNS).  In
   addition, other end-to-end parameters include the effective
   bottleneck data rate, the sender interface data rate and the TCP/IP
   header sizes (overhead).
   Note that the target parameters can be specified for a hypothetical
   path, for example to construct a TDS designed for bench testing in
   the absence of a real application, or for a real physical test, for
   in situ testing of production infrastructure.
   The number of concurrent connections is explicitly not a parameter
   to this model [unlike earlier drafts].  If a subpath requires
   multiple connections in order to meet the specified performance,
   that must be stated explicitly and the procedure described in
   Section 6.1.4 applies.
5.2. Common Model Calculations
The most important derived parameter is target_pipe_size (in
packets), which is the window size: the number of packets needed to
exactly meet the target rate, with no cross traffic, for the
specified target RTT and MTU.  It is given by:

target_pipe_size = target_rate * target_RTT / ( target_MTU -
header_overhead )

If the transport protocol (e.g. TCP) average window size is smaller
than this, it will not meet the target rate.
The reference target_run_length is a very conservative model for the
minimum required spacing between losses or ECN marks.  The reference
target_run_length can be derived as follows: assume the
subpath_data_rate is infinitesimally larger than the target_data_rate
plus the required header overheads.  Then target_pipe_size also
predicts the onset of queueing.  If the transport protocol (e.g.
TCP) has a window size that is larger than the target_pipe_size, the
excess packets will raise the RTT, typically by forming a standing
queue at the bottleneck.

Assume the transport protocol is using standard Reno style Additive
Increase, Multiplicative Decrease congestion control [RFC5681] and
the receiver is using standard delayed ACKs.  With delayed ACKs there
must be 2*target_pipe_size roundtrips between losses.  Otherwise the
multiplicative window reduction triggered by a loss would cause the
network to be underfilled.  We derive the number of packets between
losses from the area under the AIMD sawtooth following [MSMO97]:
losses must be no more frequent than every 1 in
(3/2)*target_pipe_size*(2*target_pipe_size) packets.  This simplifies
to:

target_run_length = 3*(target_pipe_size^2)

Note that this calculation is very conservative and is based on a
number of assumptions that may not apply.  Appendix A discusses these
assumptions and provides some alternative models.  If a less
conservative model is used, a fully specified TDS or FSTDS MUST
document the actual method for computing target_run_length along with
the rationale for the underlying assumptions and the ratio of the
chosen target_run_length to the reference target_run_length
calculated above.
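As a purely illustrative (non-normative) sketch, the following Python
fragment computes target_pipe_size and the reference target_run_length
from a hypothetical set of target parameters; the function name and
example values are assumptions made for this example only.

   # Illustrative only: derive target_pipe_size and target_run_length
   # from hypothetical end-to-end target parameters.
   def derived_parameters(target_rate_bps, target_rtt_s, target_mtu,
                          header_overhead):
       payload = target_mtu - header_overhead   # payload bytes per packet
       packets_per_second = target_rate_bps / 8.0 / payload
       target_pipe_size = packets_per_second * target_rtt_s
       # Reno AIMD with delayed ACKs: one loss per
       # (3/2)*target_pipe_size*(2*target_pipe_size) packets.
       target_run_length = 3 * target_pipe_size ** 2
       return target_pipe_size, target_run_length

   # Example: 10 Mb/s over a 100 ms path, 1500 byte MTU, 52 bytes of
   # headers yields roughly 86 packets of window and roughly 22,000
   # packets between losses.
   print(derived_parameters(10e6, 0.100, 1500, 52))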
These two parameters, target_pipe_size and target_run_length,
directly imply most of the individual parameters for the tests below.
Target_pipe_size is the window size, the amount of circulating data
required to meet the target data rate, and implies the scale of the
bursts that the network might experience.  Target_run_length is the
amount of data required between losses or ECN marks for standard
congestion control.
The individual parameters for each diagnostic test are described
below.  In a few cases there are not yet well established models for
what constitutes correct network operation.  In some of these cases
the problems might be partially mitigated by future improvements to
TCP implementations.
5.3. Parameter Derating
Since some aspects of the models are very conservative, this
framework permits some latitude in derating test parameters.  For
example classical performance models suggest that in order to be sure
that a single TCP stream can fill a link, it needs to have a full
bandwidth-delay-product worth of buffering at the bottleneck
[QueueSize].  In real networks with real applications this is often
overly conservative.  Rather than trying to formalize more
complicated models we permit some test parameters to be relaxed as
long as they meet some additional procedural constraints:

o  The TDS or FSTDS MUST document and justify the actual method used
   to compute the derated metric parameters.
o  The validation procedures described in Section 9 must be used to
   demonstrate the feasibility of meeting the performance targets
   with infrastructure that infinitesimally passes the derated tests.
o  The validation process itself must be documented in such a way
   that other researchers can duplicate the validation experiments.

Except as noted, all tests below assume no derating.  Tests where
there is not currently a well established model for the required
parameters include derating as a way to indicate flexibility in the
parameters.
6. Common testing procedures

6.1. Traffic generating techniques

6.1.1. Paced transmission
Paced (burst) transmissions: send bursts of data on a timer to meet a
particular target rate and pattern.  In all cases the specified data
rate can be either the application or the link rate.  Header
overheads must be included in the calculations as appropriate.

Paced single packets: Send individual packets at the specified rate
or headway.
Burst: Send sender interface rate bursts on a timer.  Specify any 3
   of: average rate, packet size, burst size (number of packets) and
   burst headway (burst start to start).  These bursts are typically
   sent as back-to-back packets at the tester's interface rate.
Slowstart bursts: Send 4 packet sender interface rate bursts at an
   average data rate equal to twice the effective bottleneck link
   rate (but not more than the sender interface rate).  This
   corresponds to the average rate during a TCP slowstart when
   Appropriate Byte Counting [ABC] is present or delayed ack is
   disabled.
Repeated Slowstart bursts: Slowstart bursts are typically part of a
   larger scale pattern of repeated bursts, such as sending
   target_pipe_size packets as slowstart bursts on a target_RTT
   headway (burst start to burst start).  Such a stream has three
   different average rates, depending on the averaging time scale.
   At the finest time scale the average rate is the same as the
   sender interface rate, at a medium scale the average rate is twice
   the effective bottleneck link rate and at the longest time scales
   the average rate is the target data rate.

Note that if the effective bottleneck link rate is more than half of
the sender interface rate, slowstart bursts become sender interface
rate bursts.
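As an illustration only, the following sketch derives the burst
headway from three of the four burst parameters named above; the
helper name and the example values are assumptions for this sketch,
not part of the specification.

   # Illustrative only: given average rate, packet size and burst size,
   # compute the burst headway (burst start to burst start) in seconds.
   def burst_headway(average_rate_bps, packet_size_bytes,
                     burst_size_packets):
       bits_per_burst = burst_size_packets * packet_size_bytes * 8
       return bits_per_burst / average_rate_bps

   # Example: 4-packet slowstart bursts of 1500 byte packets averaging
   # 20 Mb/s start every 2.4 ms; within each burst the packets are sent
   # back-to-back at the sender interface rate.
   print(burst_headway(20e6, 1500, 4))   # 0.0024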
6.1.2. Constant window pseudo CBR
Implement pseudo constant bit rate by running a standard protocol
such as TCP with a fixed bound on the window size.  The rate is only
maintained on average over each RTT, and is subject to the
limitations of the transport protocol.

The bound on the window size is computed from the target_data_rate
and the actual RTT of the test path.

If the transport protocol fails to maintain the test rate within the
prescribed data rates, the test MUST NOT be considered passing.  If
there is a signature of a network problem (e.g. the run length is too
small) then the test can be considered to fail.  Since packet losses
and ECN marks are required to reduce the data rate for standard
transport protocols, the test specification must include suitable
allowances in the prescribed data rates.  If there is not a
sufficient signature of a network problem, then failing to make the
prescribed data rate must be considered inconclusive.  Otherwise
there are some cases where tester failures might cause false negative
test results.
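A minimal sketch of how such a window bound might be computed,
assuming header overhead is excluded from the payload as in the
target_pipe_size calculation; this is illustrative only and not part
of the test specification.

   import math

   # Illustrative only: fixed window (in packets) that sustains
   # target_data_rate over the measured RTT of the test path.
   def window_clamp(target_data_rate_bps, test_rtt_s, target_mtu,
                    header_overhead):
       payload_bits = (target_mtu - header_overhead) * 8
       return math.ceil(target_data_rate_bps * test_rtt_s / payload_bits)

   print(window_clamp(10e6, 0.050, 1500, 52))   # 44 packets for a 50 ms path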
6.1.3. Scanned window pseudo CBR

Same as the above, except the window is scanned across a range of
sizes designed to include two key events: the onset of queueing and
the onset of packet loss or ECN marks.  The window is scanned by
incrementing it by one packet for every 2*target_pipe_size delivered
packets.  This mimics the additive increase phase of standard
congestion avoidance and normally separates the window increases by
approximately twice the target_RTT.

There are two versions of this test: one built by applying a window
clamp to standard congestion control and one built by stiffening a
non-standard transport protocol.  When standard congestion control is
in effect, any losses or ECN marks cause the transport to revert to a
window smaller than the clamp, such that the scanning clamp loses
control of the window size.  The NPAD pathdiag tool is an example of
this class of algorithms [Pathdiag].

Alternatively a non-standard congestion control algorithm can respond
to losses by transmitting extra data, such that it attempts to
maintain the specified window size independent of losses or ECN
marks.  Such a stiffened transport explicitly violates mandatory
Internet congestion control and is not suitable for in situ testing.
It is only appropriate for engineering testing under laboratory
conditions.  The Windowed Ping tool implemented such a test [WPING].
This tool has been updated and is under test [mpingSource].

The test procedures in Section 7.2 describe how to partition the
scans into regions and how to interpret the results.
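The scanning schedule itself is simple enough to sketch; the
following illustrative fragment (with assumed parameter names and an
arbitrary stop condition) yields the sequence of window clamps, each
held until 2*target_pipe_size packets have been delivered.

   # Illustrative only: generate the window clamps for one scan.  Each
   # clamp is held until 2*target_pipe_size packets are delivered, then
   # the clamp is raised by one packet (additive increase with delayed
   # ACKs).
   def scan_windows(start_window, target_pipe_size, max_window):
       window = start_window
       while window <= max_window:
           yield window, 2 * target_pipe_size
           window += 1

   for clamp, packets_to_deliver in scan_windows(10, 86, 172):
       pass   # drive the transport at this clamp for packets_to_deliver packets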
6.1.4. Concurrent or channelized testing

The procedures described in this document are only directly
applicable to single stream performance measurement, e.g. one TCP
connection.  In an ideal world, we would disallow all performance
claims based on multiple concurrent streams, but this is not
practical due to at least two different issues.  First, many very
high rate link technologies are channelized, and pin individual flows
to specific channels to minimize reordering or to solve other
problems, and second, TCP itself has scaling limits.  Although the
former problem might be overcome through different design decisions,
the latter problem is more deeply rooted.

All standard [RFC5681] and de facto standard [CUBIC] congestion
control algorithms have scaling limits, in the sense that as a
network over a fixed RTT and MTU gets faster, all congestion control
algorithms get less accurate.  In general their noise immunity drops
(a single packet drop should have less effect as individual packets
become smaller relative to the window size) and the control frequency
of the AIMD sawtooth also drops, meaning that as TCP uses more total
capacity it gets less information about the state of the network and
other traffic.  These properties are a direct consequence of the
original Reno design and are implicitly required by the requirement
that all transport protocols be "TCP friendly" [Guidelines].  There
are a number of reasons to want to specify performance in terms of
multiple concurrent flows, although doing so also has a number of
downsides.

The use of multiple connections in the Internet has been very
controversial since the beginning of the World-Wide-Web [first
complaint].  Modern browsers open many connections [BScope].  Experts
associated with the IETF transport area have frequently spoken
against this practice [long list].  It is not inappropriate to assume
some small number of concurrent connections (e.g. 4 or 6), to
compensate for limitations in TCP.  However, choosing too large a
number is at risk of being interpreted as a signal by the web browser
community that this practice has been embraced by the Internet
service provider community.  It may not be desirable to send such a
signal.

Note that the current proposal for httpbis [SPDY] is specifically
designed to work best with a single TCP connection per client-server
pair, because it uses adaptive compression, which requires sending
separate compression dictionaries per connection.  As long as TCP can
use IW10 and some of the transport parameters can be cached, multiple
connections provide a negative gain, due to the replicated
compression overhead.

Specifying the use of multiple connections is not recommended for
data rates below several Mb/s, which can be attained with run lengths
under 10000 packets.  Since the run length goes as the square of the
data rate, at higher rates (see Section 8.3) the run lengths can be
unfeasibly large, and multiple connections might be the only feasible
approach.
6.1.5. Intermittent Testing

Any test which does not depend on queueing (e.g. the CBR tests) or
which experiences periodic zero outstanding data during normal
operation (e.g. between bursts for the various burst tests), can be
formulated as an intermittent test.

Intermittent testing can be used for ongoing monitoring for changes
in subpath quality with minimal disruption to users.  It should be
used in conjunction with the full rate test because this method
assesses an average_run_length over a long time interval w.r.t. user
sessions.  It may false fail due to other legitimate congestion
causing traffic or may false pass changes in underlying link
properties (e.g. a modem retraining to an out of contract lower
rate).

[Need text about bias (false pass) in the shadow of loss caused by
excessive bursts]
6.1.6. Intermittent Scatter Testing
Intermittent scatter testing: when testing the network path to or
from an ISP subscriber aggregation point (CMTS, DSLAM, etc),
intermittent tests can be spread across a pool of users such that no
one user experiences the full impact of the testing, even though the
traffic to or from the ISP subscriber aggregation point is sustained
at full rate.
6.2. Interpreting the Results

6.2.1. Test outcomes
A singleton is a pass/fail measurement of a subpath.  If any subpath
fails any test then the end-to-end path is also expected to fail to
attain the target performance under some conditions.

In addition we use the "inconclusive" outcome to indicate that a test
failed to attain the required test conditions.  A test is
inconclusive if the precomputed traffic pattern was not authentically
generated, test preconditions were not met, or the measurement
results were not statistically significant.

This is important to the extent that the diagnostic tests use
protocols which themselves include built in control systems which
might interfere with some aspect of the test.  For example consider a
test that is implemented by adding rate controls and loss
instrumentation to TCP: meeting the run length specification while
failing to attain the specified data rate must be treated as an
inconclusive result, because we can not a priori determine whether
the reduced data rate was caused by a TCP problem or a network
problem, or whether the reduced data rate had a material effect on
the run length measurement.  (Note that if the measured run length
was too small, the test can be considered to have failed, because it
doesn't really matter that the test didn't attain the required data
rate.)

The vantage independence properties of Model Based Metrics depend on
the accuracy of the distinction between conclusive (pass or fail) and
inconclusive tests.  One way to view inconclusive tests is that they
reflect situations where the signature is ambiguous between problems
with the subpath and problems with the diagnostic test itself.  One
of the goals for evolving diagnostic test designs will be to keep
sharpening this distinction.

One of the goals of evolving the testing process, procedures and
measurement point selection should be to minimize the number of
inconclusive tests.

Note that procedures that attempt to sweep the target parameter space
to find the bounds on some parameter (for example to find the highest
data rate for a subpath) are likely to break the location independent
properties of Model Based Metrics, because the boundary between
passing and inconclusive is very likely to be RTT sensitive: TCP's
ability to compensate for problems scales with the number of round
trips per second.
6.2.2. Statistical criteria for measuring run_length
When evaluating the observed run_length, we need to determine
appropriate packet stream sizes and acceptable error levels for
efficient measurement.  In practice, can we compare the empirically
estimated loss probabilities with the targets as the sample size
grows?  How large a sample is needed to say that the measurements of
packet transfer indicate a particular run length is present?

The generalized measurement can be described as recursive testing:
send packets (individually or in patterns) and observe the packet
transfer performance (loss ratio or other metric, any defect we
define).

As each packet is sent and measured, we have an ongoing estimate of
the performance in terms of the ratio of defects to total packets
(an empirical probability).  We continue to send until conditions
support a conclusion or a maximum sending limit has been reached.
We have a target_defect_probability of 1 defect per
target_run_length, where a "defect" is defined as a lost packet, a
packet with an ECN mark, or other impairment.  This constitutes the
null Hypothesis:

H0: no more than one defect in target_run_length =
3*(target_pipe_size)^2 packets

and we can stop sending packets if ongoing measurements support
accepting H0 with the specified Type I error = alpha (= 0.05 for
example).

We also have an alternative Hypothesis to evaluate: performance is
significantly lower than the target_defect_probability.  Based on
analysis of typical values and practical limits on measurement
duration, we choose four times the H0 probability:

H1: one or more defects in (target_run_length/4) packets

and we can stop sending packets if measurements support rejecting H0
with the specified Type II error = beta (= 0.05 for example), thus
preferring the alternate hypothesis H1.

H0 and H1 constitute the Success and Failure outcomes described
elsewhere in this memo; while the ongoing measurements do not support
either hypothesis the current status of measurements is inconclusive.

The problem above is formulated to match the Sequential Probability
Ratio Test (SPRT) [StatQC], which also starts with a pair of
hypotheses specified as above:

H0: p0 = one defect in target_run_length
H1: p1 = one defect in target_run_length/4

As packets are sent and measurements collected, the tester evaluates
the cumulative defect count against two boundaries representing H0
Acceptance or Rejection (and acceptance of H1):

Acceptance line: Xa = -h1 + s*n
Rejection line:  Xr = h2 + s*n

where n increases linearly for each packet sent and

h1 = { log((1-alpha)/beta) }/k
h2 = { log((1-beta)/alpha) }/k
k  = log{ (p1(1-p0)) / (p0(1-p1)) }
s  = [ log{ (1-p0)/(1-p1) } ]/k

for p0 and p1 as defined in the null and alternative Hypothesis
statements above, and alpha and beta as the Type I and Type II
errors.

The SPRT specifies simple stopping rules:

o  Xa < defect_count(n) < Xr: continue testing
o  defect_count(n) <= Xa: Accept H0
o  defect_count(n) >= Xr: Accept H1

The calculations above are implemented in the R-tool for Statistical
Analysis, in the add-on package for Cross-Validation via Sequential
Testing (CVST) [http://www.r-project.org/] [Rtool] [CVST].
Using the equations above, we can calculate the minimum number of
packets (n) needed to accept H0 when x defects are observed.  For
example, when x = 0:

Xa = 0 = -h1 + s*n

and n = h1 / s
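The boundary and stopping rule calculations above are straightforward
to implement; the following Python sketch is an illustration of the
arithmetic (using natural logarithms), not a substitute for the CVST
package cited above, and the example target_run_length is arbitrary.

   from math import log

   # Illustrative only: SPRT boundaries for H0 (one defect per
   # target_run_length) vs H1 (one defect per target_run_length/4).
   def sprt_bounds(target_run_length, alpha=0.05, beta=0.05):
       p0 = 1.0 / target_run_length
       p1 = 4.0 / target_run_length
       k = log((p1 * (1 - p0)) / (p0 * (1 - p1)))
       h1 = log((1 - alpha) / beta) / k
       h2 = log((1 - beta) / alpha) / k
       s = log((1 - p0) / (1 - p1)) / k
       return h1, h2, s

   def sprt_decision(defects, n, h1, h2, s):
       if defects <= -h1 + s * n:     # at or below the acceptance line Xa
           return "accept H0 (pass)"
       if defects >= h2 + s * n:      # at or above the rejection line Xr
           return "accept H1 (fail)"
       return "continue testing"

   h1, h2, s = sprt_bounds(target_run_length=22357)
   print(h1 / s)                       # minimum n to accept H0 with x = 0
   print(sprt_decision(0, 25000, h1, h2, s))   # "accept H0 (pass)"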
6.2.3. Reordering Tolerance
All tests must be instrumented for reordering [RFC4737].

NB: there is no global consensus for how much reordering tolerance is
appropriate or reasonable.  ("None" is absolutely unreasonable.)

Section 5 of [RFC4737] proposed a metric that may be sufficient to
designate isolated reordered packets as effectively lost, because
TCP's retransmission response would be the same.

[As a strawman, we propose the following:] TCP should be able to
adapt to reordering as long as the reordering extent is no more than
the maximum of one half window or 1 ms, whichever is larger.  Note
that there is a fundamental tradeoff between tolerance to reordering
and how quickly algorithms such as fast retransmit can repair losses.
Within this limit on reorder extent, there should be no bound on
reordering density.

NB: Traditional TCP implementations were not compatible with this
metric; newer implementations still need to be evaluated.

Parameters:
Reordering displacement: the maximum of one half of target_pipe_size
or 1 ms.
6.3. Test Qualifications

This entire section might be summarized as "needs to be specified in
a FSTDS".

Things to monitor before, during and after a test.
6.3.1. Verify the Traffic Generation Accuracy

[Excess detail for this doc.  To be summarized]

For most tests, failing to accurately generate the test traffic
indicates an inconclusive test, since it has to be presumed that the
error in traffic generation might have affected the test outcome.  To
the extent that the network itself had an effect on the traffic
generation (e.g. in the standing queue tests), the possibility exists
that allowing too large an error margin in the traffic generation
might introduce feedback loops that compromise the vantage
independence properties of these tests.

Parameters:
Maximum Data Rate Error  The permitted amount that the test traffic
   can differ from that specified for the current test.  This is a
   symmetrical bound.
Maximum Data Rate Overage  The permitted amount that the test traffic
   can be above that specified for the current test.
Maximum Data Rate Underage  The permitted amount that the test
   traffic can be below that specified for the current test.
6.3.2. Verify the absence of cross traffic

[Excess detail for this doc.  To be summarized]

The proper treatment of cross traffic is different for different
subpaths.  In general when testing infrastructure which is associated
with only one subscriber, the test should be treated as inconclusive
if that subscriber is active on the network.  However, for shared
infrastructure, the question at hand is likely to be whether the
provider has sufficient total capacity.  In such cases the presence
of cross traffic due to other subscribers is explicitly part of the
network conditions and its effects are explicitly part of the test.

@@@@ Need to distinguish between ISP managed sharing and unmanaged
sharing, e.g. WiFi.

Note that canceling tests due to load on subscriber lines may
introduce sampling errors for testing other parts of the
infrastructure.  For this reason tests that are scheduled but not run
due to load should be treated as a special case of "inconclusive".

Use passive packet or SNMP monitoring to verify that the traffic
volume on the subpath agrees with the traffic generated by a test.
Ideally this should be performed before, during and after each test.

The goal is to provide quality assurance on the overall measurement
process, and specifically to detect the following measurement
failure: a user observes unexpectedly poor application performance,
while the ISP observes that the access link is running at the rated
capacity; both fail to observe that the user's computer has been
infected by a virus which is spewing traffic as fast as it can.

Parameters:

Maximum Cross Traffic Data Rate  The amount of excess traffic
   permitted.  Note that this will be different for different tests.

One possible method is an adaptation of: www-didc.lbl.gov/papers/
SCNM-PAM03.pdf  D. Agarwal et al., "An Infrastructure for Passive
Network Monitoring of Application Data Streams".  Use the same
technique as that paper to trigger the capture of SNMP statistics for
the link.
6.3.3. Additional test preconditions

[Excess detail for this doc.  To be summarized]

Send pre-load traffic as needed to activate radios with a sleep mode,
or other "reactive network" elements (term defined in
[draft-morton-ippm-2330-update-01]).

Use the procedure above to confirm that the pre-test background
traffic is low enough.
7. Diagnostic Tests

The diagnostic tests are organized by which properties are being
tested: run length, standing queues, slowstart bursts, sender rate
bursts, and combined tests.  The combined tests reduce overhead at
the expense of conflating the signatures of multiple failures.

7.1. Basic Data Rate and Run Length Tests

We propose several versions of the basic data rate and run length
test.  All measure the number of packets delivered between losses or
ECN marks, using a data stream that is rate controlled at or below
the target_data_rate.

The tests below differ in how the data rate is controlled.  The data
can be paced on a timer, or window controlled at the full target data
rate.  The first two tests implicitly confirm that the sub_path has
sufficient raw capacity to carry the target_data_rate.  They are
recommended for relatively infrequent testing, such as an
installation or auditing process.  The third, background run length,
is a low rate test designed for ongoing monitoring for changes in
subpath quality.

All rely on the receiver accumulating packet delivery statistics as
described in Section 6.2.2 to score the outcome:

Pass: it is statistically significant that the observed run length is
larger than the target_run_length.

Fail: it is statistically significant that the observed run length is
smaller than the target_run_length.

A test is considered to be inconclusive if it failed to meet the data
rate as specified below, failed to meet the qualifications defined in
Section 6.3, or neither run length statistical hypothesis was
confirmed in the allotted test duration.
7.1.1. Run Length at Paced Full Data Rate

Confirm that the observed run length is at least the
target_run_length while relying on a timer to send data at the
target_rate, using the procedure described in Section 6.1.1 with a
burst size of 1 (single packets).

The test is considered to be inconclusive if the packet transmission
can not be accurately controlled for any reason.

7.1.2. Run Length at Full Data Windowed Rate

Confirm that the observed run length is at least the
target_run_length while sending at an average rate equal to the
target_data_rate, by controlling (or clamping) the window size of a
conventional transport protocol to a fixed value computed from the
properties of the test path, typically
test_window=target_data_rate*test_RTT/target_MTU.

Since losses and ECN marks generally cause transport protocols to at
least temporarily reduce their data rates, this test is expected to
be less precise about controlling its data rate.  It should not be
considered inconclusive as long as at least some of the round trips
reached the full target_data_rate without incurring losses.  To pass
this test the network MUST deliver target_pipe_size packets in
target_RTT time without any losses or ECN marks at least once per two
target_pipe_size round trips, in addition to meeting the run length
statistical test.
7.1.3. Background Run Length Tests

The background run length test is a low rate version of the target
rate test above, designed for ongoing lightweight monitoring for
changes in the observed subpath run length without disrupting users.
It should be used in conjunction with one of the above full rate
tests because it does not confirm that the subpath can support the
raw data rate.

Existing loss metrics such as [RFC6673] might be appropriate for
measuring background run length.
7.2. Standing Queue tests

These tests confirm that the bottleneck is well behaved across the
onset of packet loss, which typically follows after the onset of
queueing.  Well behaved generally means lossless for transient
queues, but once the queue has been sustained for a sufficient period
of time (or reaches a sufficient queue depth) there should be a small
number of losses to signal to the transport protocol that it should
reduce its window.  Losses that are too early can prevent the
transport from averaging at the target_data_rate.  Losses that are
too late indicate that the queue might be subject to bufferbloat
[Bufferbloat] and inflict excess queuing delays on all flows sharing
the bottleneck.  Excess losses make loss recovery problematic for the
transport protocol.  Non-linear or erratic RTT fluctuations suggest
poor interactions between the channel acquisition systems and the
transport self clock.  All of the tests in this section use the same
basic scanning algorithm but score the link on the basis of how well
it avoids each of these problems.

For some technologies the data might not be subject to increasing
delays, in which case the data rate will vary with the window size
all the way up to the onset of losses or ECN marks.  For these
technologies, the discussion of queueing does not apply, but it is
still required that the onset of losses (or ECN marks) be at an
appropriate point and progressive.

Use the procedure in Section 6.1.3 to sweep the window across the
onset of queueing and the onset of loss.  The tests below all assume
that the scan emulates standard additive increase and delayed ACK by
incrementing the window by one packet for every 2*target_pipe_size
packets delivered.  A scan can be divided into three regions: below
the onset of queueing, a standing queue, and at or beyond the onset
of loss.

Below the onset of queueing the RTT is typically fairly constant, and
the data rate varies in proportion to the window size.  Once the data
rate reaches the link rate, the data rate becomes fairly constant,
and the RTT increases in proportion to the window size.  The precise
transition from one region to the other can be identified by the
maximum network power, defined to be the ratio of the data rate over
the RTT [POWER].

For technologies that do not have conventional queues, start the scan
at a window equal to the test_window, i.e. starting at the target
rate, instead of the power point.

If there is random background loss (e.g. bit errors, etc.), precise
determination of the onset of packet loss may require multiple scans.
Above the onset of loss, all transport protocols are expected to
experience periodic losses.  For the stiffened transport case these
will be determined by the AQM algorithm in the network or by the
details of how the window increase function responds to loss.  For
the standard transport case the details of the periodic losses are
typically dominated by the behavior of the transport protocol itself.
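The following illustrative sketch (with an assumed sample format and
synthetic numbers) locates the power point from per-window scan
results, where network power is simply the data rate divided by the
RTT.

   # Illustrative only: find the window with maximum network power
   # (data rate / RTT), which marks the transition to a standing queue.
   def power_point(samples):
       # samples: iterable of (window, data_rate_bps, rtt_seconds)
       return max(samples, key=lambda s: s[1] / s[2])[0]

   # Synthetic example: the rate saturates at 10 Mb/s near window 80,
   # after which the RTT grows with the standing queue.
   samples = [(w, min(w, 80) * 1.25e5, 0.100 + max(0, w - 80) * 0.0012)
              for w in range(10, 160)]
   print(power_point(samples))   # 80 in this synthetic example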
7.2.1. Congestion Avoidance

A link passes the congestion avoidance standing queue test if more
than target_run_length packets are delivered between the power point
(or test_window) and the first loss or ECN mark.  If this test is
implemented using a standard congestion control algorithm with a
clamp, it can be used in situ in the production Internet as a
capacity test.  For an example of such a test see [NPAD].
7.2.2. Bufferbloat

This test confirms that there is some mechanism to limit buffer
occupancy (e.g. to prevent bufferbloat).  Note that this is not
strictly a requirement for single stream bulk performance, however if
there is no mechanism to limit buffer occupancy then a single stream
with sufficient data to deliver is likely to cause the problems
described in [RFC2309] and [Bufferbloat].  This may cause only minor
symptoms for the dominant flow, but has the potential to make the
link unusable for all other flows and applications.

Pass if the onset of loss occurs before a standing queue has
introduced more delay than twice the target_RTT, or another well
defined limit.  Note that there is not yet a model for how much
standing queue is acceptable.  The factor of two chosen here reflects
a rule of thumb.  Note that in conjunction with the previous test,
this test implies that the first loss should occur at a queueing
delay which is between one and two times the target_RTT.
7.2.3. Non excessive loss

This test confirms that the onset of loss is not excessive.  Pass if
losses are bounded by the fluctuations in the cross traffic, such
that transient load (bursts) does not cause dips in aggregate raw
throughput; e.g. pass as long as the losses are no more bursty than
would be expected from a simple drop tail queue.  Although this test
could be made more precise, it is really included here for pedantic
completeness.

7.2.4. Duplex Self Interference

This engineering test confirms a bound on the interactions between
the forward data path and the ACK return path.  Fail if the RTT rises
by more than some fixed bound above the expected queueing time
computed from the excess window divided by the link data rate.
[This needs further testing.]
7.3. Slowstart tests 7.3. Slowstart tests
These tests mimic slowstart: data is sent at twice the effective
bottleneck rate to exercise the queue at the dominant bottleneck.

They are deemed inconclusive if the elapsed time to send the data
burst is not less than half of the time to receive the ACKs (i.e.
sending data too fast is ok, but sending it slower than twice the
actual bottleneck rate as indicated by the ACKs is deemed
inconclusive).  Space the bursts such that the average data rate is
equal to the target_data_rate.
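The burst timing can be made concrete with a short sketch
(illustrative names only, assuming rates in bits per second): bursts
are sent at twice the effective bottleneck rate and spaced so that
the long-run average equals target_data_rate.

   def slowstart_burst_schedule(burst_bytes, bottleneck_rate_bps,
                                target_rate_bps):
       # Bursts are sent at twice the effective bottleneck rate; the
       # headway between burst starts makes the average rate equal to
       # the target_data_rate.
       send_rate = 2 * bottleneck_rate_bps
       burst_duration = burst_bytes * 8 / send_rate
       headway = burst_bytes * 8 / target_rate_bps
       return send_rate, burst_duration, headway

   def inconclusive(send_time, ack_elapsed_time):
       # Sending too fast is ok; taking half or more of the ACK elapsed
       # time means the data went slower than twice the bottleneck rate.
       return not (send_time < ack_elapsed_time / 2)

   # Example: 22 packet (1500 byte) bursts against a 5 Mb/s bottleneck
   rate, dur, gap = slowstart_burst_schedule(22 * 1500, 5e6, 5e6)
   print(rate, round(dur, 4), round(gap, 4))  # 10 Mb/s, ~26 ms, every ~53 ms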
7.3.1. Full Window slowstart test
This is a capacity test to confirm that slowstart is not likely to
exit prematurely.  Send slowstart bursts that are target_pipe_size
total packets.  Accumulate packet delivery statistics as described
in Section 6.2.2 to score the outcome.  Pass if it is statistically
significant that the observed run length is larger than the
target_run_length.  Fail if it is statistically significant that the
observed run length is smaller than the target_run_length.

Note that these are the same parameters as the Sender Full Window
burst test, except the burst rate is at slowstart rate, rather than
sender interface rate.
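As a rough stand-in for the Section 6.2.2 statistics (which are not
reproduced here), the sketch below scores an observation with a
simple fixed-sample binomial test; this is an assumption for
illustration, and the actual procedure may be sequential and differ
in detail.

   from math import comb

   def binom_tail_le(n, k, p):
       # P[X <= k] for X ~ Binomial(n, p); fine for small k.
       return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

   def score_run_length(packets_sent, losses, target_run_length, alpha=0.05):
       # Crude substitute for the draft's statistics: pass if losses are
       # significantly fewer than one per target_run_length, fail if
       # significantly more, otherwise keep accumulating data.
       p0 = 1.0 / target_run_length
       p_this_few = binom_tail_le(packets_sent, losses, p0)
       p_this_many = 1.0 - (binom_tail_le(packets_sent, losses - 1, p0)
                            if losses > 0 else 0.0)
       if p_this_few <= alpha:
           return "pass"
       if p_this_many <= alpha:
           return "fail"
       return "inconclusive"

   print(score_run_length(20000, 2, 1452))    # "pass": ~14 losses expected
   print(score_run_length(20000, 30, 1452))   # "fail"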
7.3.2. Slowstart AQM test
Do a continuous slowstart (send data continuously at slowstart_rate)
until the first loss, stop, allow the network to drain, and repeat,
gathering statistics on the last packet delivered before the loss,
the loss pattern, maximum RTT and window size.  Justify the results.
There is not currently sufficient theory to justify requiring any
particular result; however, design decisions that affect the outcome
of this test also affect how the network balances between long and
short flows (the "mice and elephants" problem).
This is an engineering test: It would be best performed on a
quiescent network or testbed, since cross traffic has the potential
to change the results.
7.4. Sender Rate Burst tests
These tests determine how well the network can deliver bursts sent
at the sender's interface rate.  Note that this test most heavily
exercises the front path, and is likely to include infrastructure
nominally out of scope.

Also, there are several details that are not precisely defined.  For
starters there is not a standard server interface rate.  1 Gb/s is
very common today, but higher rates (e.g. 10 Gb/s) are becoming cost
effective and can be expected to be dominant some time in the
future.

Current standards permit TCP to send full window bursts following an
application pause.  Congestion Window Validation [RFC 2861] is not
required, but even if it was, it does not take effect until an
application pause is longer than an RTO.  Since this is standard
behavior, it is desirable that the network be able to deliver it,
otherwise application pauses will cause unwarranted losses.

It is also understood in the application and serving community that
interface rate bursts have a cost to the network that has to be
balanced against other costs in the servers themselves.  For example
TCP Segmentation Offload [TSO] reduces server CPU in exchange for
larger network bursts, which increase the stress on network buffer
memory.

There is not yet theory to unify these costs or to provide a
framework for trying to optimize global efficiency.  We do not yet
have a model for how much the network should tolerate server rate
bursts.  Some bursts must be tolerated by the network, but it is
probably unreasonable to expect the network to efficiently deliver
all data as a series of bursts.

For this reason, this is the only test for which we explicitly
encourage derating.  A TDS should include a table of pairs of
derating parameters: what burst size to use as a fraction of the
target_pipe_size, and how much each burst size is permitted to
reduce the run length, relative to the target_run_length.  @@@@
Needs more work and experimentation.
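Such a derating table could take the form sketched below; the
specific pairs shown are hypothetical placeholders (not values from
this document), included only to illustrate the shape of the data.

   # Hypothetical derating table for sender rate bursts: burst size as
   # a fraction of target_pipe_size, paired with the fraction of
   # target_run_length that bursts of that size must still meet.
   # Illustrative values only.
   sender_burst_derating = [
       (1.00, 0.50),   # full window bursts need only half the run length
       (0.50, 0.75),
       (0.25, 1.00),   # small bursts must meet the full target_run_length
   ]

   def derated_requirements(target_pipe_size, target_run_length):
       # Expand the table into absolute burst sizes (packets) and the
       # run length each burst size is required to meet.
       return [(round(f * target_pipe_size), r * target_run_length)
               for f, r in sender_burst_derating]

   print(derated_requirements(22, 1452))
   # [(22, 726.0), (11, 1089.0), (6, 1452.0)] for the Section 8.1 example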
7.5. Combined Tests
These tests are more efficient from a deployment/operational
perspective, but may not be possible to diagnose if they fail.
7.5.1. Sustained burst test
Send target_pipe_size*derate sender interface rate bursts every
target_RTT*derate, for derate between 0 and 1.  Verify that the
observed run length meets target_run_length.  Key observations:

o  This test is subpath RTT invariant, as long as the tester can
   generate the required pattern.
o  The subpath under test is expected to go idle for some fraction
   of the time: (subpath_data_rate-target_rate)/subpath_data_rate.
   Failing to do so suggests a problem with the procedure.
o  This test is more strenuous than the slowstart tests: they are
   not needed if the link passes this test with derate=1.
o  A link that passes this test is likely to be able to sustain
   higher rates (close to subpath_data_rate) for paths with RTTs
   smaller than the target_RTT.  Offsetting this performance
   underestimation is part of the rationale behind permitting
   derating in general.
o  This test can be implemented with standard instrumented TCP
   [RFC 4898], using a specialized measurement application at one
   end and a minimal service at the other end [RFC 863, RFC 864].
   It may require tweaks to the TCP implementation.
o  This test is efficient to implement, since it does not require
   per-packet timers, and can make use of TSO in modern NIC
   hardware.
o  This test is not totally sufficient: the standing window
   engineering tests are also needed to be sure that the link is
   well behaved at and beyond the onset of congestion.
o  This one test can be proven to be the one capacity test to
   supplant them all.
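The sending pattern itself is simple to state in code; the sketch
below (illustrative names only) also computes the idle fraction
mentioned in the list above.

   def sustained_burst_schedule(target_pipe_size, target_rtt, derate=1.0):
       # target_pipe_size*derate packets back-to-back at the sender
       # interface rate, every target_RTT*derate seconds.
       burst_packets = max(1, round(target_pipe_size * derate))
       headway = target_rtt * derate
       return burst_packets, headway

   def expected_idle_fraction(subpath_data_rate, target_rate):
       # Fraction of the time the subpath under test should be idle.
       return (subpath_data_rate - target_rate) / subpath_data_rate

   print(sustained_burst_schedule(22, 0.050))             # (22, 0.05)
   print(round(expected_idle_fraction(100e6, 5e6), 2))    # 0.95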
7.5.2. Live Streaming Media
Model Based Metrics can be implemented as a side effect of serving
any non-throughput maximizing traffic, such as streaming media, by
applying some additional controls to the traffic.  The essential
requirement is that the traffic be constrained such that even with
arbitrary application pauses, bursts and data rate fluctuations the
traffic stays within the envelope determined by all of the
individual tests described above, for a specific TDS.

If the serving RTT is less than the target_RTT, this constraint is
most easily implemented by clamping the transport window size to
test_window=target_data_rate*serving_RTT/target_MTU.  This
test_window size will limit both the serving data rate and burst
sizes to be no larger than called for by the procedures in
Section 7.1.2 and Section 7.4, assuming burst size derating equal to
the serving_RTT divided by the target_RTT.

Note that if the application tolerates fluctuations in its actual
data rate (say by use of a playout buffer) it is important that the
target_data_rate be above the actual average rate needed by the
application so it can recover after transient pauses caused by
congestion or the application itself.  Since the serving RTT is
smaller than the target_RTT, the worst case bursts that might be
generated under these conditions are smaller than called for by
Section 7.4.
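The clamp is a one-line calculation; the sketch below assumes
target_data_rate is in bits per second and target_MTU in bytes
(hence the factor of 8), which is our reading of the formula, and
also reports the implied burst size derating of
serving_RTT/target_RTT.

   def streaming_clamp(target_data_rate_bps, serving_rtt, target_rtt,
                       target_mtu=1500):
       # test_window = target_data_rate * serving_RTT / target_MTU, in
       # packets, assuming the rate is in bits/s and the MTU in bytes.
       test_window = target_data_rate_bps * serving_rtt / (8 * target_mtu)
       burst_derating = serving_rtt / target_rtt
       return test_window, burst_derating

   # Example: 5 Mb/s target rate served from 10 ms away, target_RTT 50 ms
   w, d = streaming_clamp(5e6, 0.010, 0.050)
   print(round(w, 1), d)   # about 4.2 packets, burst derating 0.2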
8. Examples

In this section we present a TDS for each of a few example
performance specifications: 5 Mb/s over a 50 ms path, 1 Mb/s over a
100 ms path, and 100 Mb/s over a 200 ms path.

8.1. Near serving HD streaming video

Today the best quality HD video requires slightly less than 5 Mb/s
[HDvideo].  Since it is desirable to serve such content locally, we
assume that the content will be within 50 mS, which is enough to
cover continental Europe or either US coast.
5 Mb/s over a 50 ms path

   +----------------------+-------+---------+
   | End to End Parameter | Value | units   |
   +----------------------+-------+---------+
   | target_rate          | 5     | Mb/s    |
   | target_RTT           | 50    | ms      |
   | target_MTU           | 1500  | bytes   |
   | target_pipe_size     | 22    | packets |
   | target_run_length    | 1452  | packets |
   +----------------------+-------+---------+

                 Table 1
This example uses the most conservative TCP model and no derating.
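The pipe size and run length values in this and the following tables
can be reproduced with a short calculation: the pipe size is the
window needed to sustain target_rate over target_RTT, and the run
length follows the conservative 3*(target_pipe_size)^2 model.  The
64 bytes of assumed per-packet header overhead below is our
assumption, chosen so that the computed pipe sizes match the tables;
the exact accounting is defined earlier in the document.

   from math import ceil

   def tds_parameters(target_rate_bps, target_rtt_s, target_mtu=1500,
                      header_overhead=64):
       # header_overhead is an assumption chosen to reproduce Tables 1-3.
       payload = target_mtu - header_overhead
       target_pipe_size = ceil(target_rate_bps * target_rtt_s / (8 * payload))
       target_run_length = 3 * target_pipe_size ** 2   # conservative model
       return target_pipe_size, target_run_length

   print(tds_parameters(5e6, 0.050))     # (22, 1452)       -> Table 1
   print(tds_parameters(1e6, 0.100))     # (9, 243)         -> Table 2
   print(tds_parameters(100e6, 0.200))   # (1741, 9093243)  -> Table 3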
8.2. Far serving SD streaming video
Standard Quality video typically fits in 1 Mb/s [SDvideo].  This can
be reasonably delivered via longer paths with larger RTTs.  We
assume 100 mS.
1 Mb/s over a 100 ms path

   +----------------------+-------+---------+
   | End to End Parameter | Value | units   |
   +----------------------+-------+---------+
   | target_rate          | 1     | Mb/s    |
   | target_RTT           | 100   | ms      |
   | target_MTU           | 1500  | bytes   |
   | target_pipe_size     | 9     | packets |
   | target_run_length    | 243   | packets |
   +----------------------+-------+---------+

                 Table 2
This example uses the most conservative TCP model and no derating.
8.3. Bulk delivery of remote scientific data
This example corresponds to 100 Mb/s bulk scientific data over a
moderately long RTT. Note that the target_run_length is infeasible
for most networks.
100 Mb/s over a 200 ms path
   +----------------------+---------+---------+
   | End to End Parameter | Value   | units   |
   +----------------------+---------+---------+
   | target_rate          | 100     | Mb/s    |
   | target_RTT           | 200     | ms      |
   | target_MTU           | 1500    | bytes   |
   | target_pipe_size     | 1741    | packets |
   | target_run_length    | 9093243 | packets |
   +----------------------+---------+---------+

                  Table 3
9. Validation
This document permits alternate models and parameter derating, as
described in Section 5.2 and Section 5.3.  In exchange for this
latitude in the modelling process, it requires the ability to
demonstrate that authentic applications and protocol implementations
meet the target end-to-end performance goals over infrastructure
that infinitesimally passes the TDS.
The validation process relies on constructing a test network such
that all of the individual load tests pass only infinitesimally, and
proving that an authentic application running over a real TCP
implementation (or other protocol as appropriate) can be expected to
meet the end-to-end target parameters on such a network.
For example, in the HD streaming video TDS described in Section 8.1,
the bottleneck data rate should be 5 Mb/s, the per packet random
background loss probability should be 1/1453 (for a run length of
1452 packets), the bottleneck queue should be 22 packets, and the
front path should have just enough buffering to withstand 22 packet
line rate bursts.  We want every one of the TDS tests to fail if we
slightly increase the relevant test parameter, so for example
sending a 23 packet slowstart burst should cause excess (possibly
deterministic) packet drops at the dominant queue at the bottleneck.
On this infinitesimally passing network it should be possible for a
real application using a stock TCP implementation in the vendor's
default configuration to attain 5 Mb/s over a 50 mS path.
@@@@ Need to better specify the workload: both short and long flows.
The difficult part of this process is arranging for each subpath to
infinitesimally pass the individual tests.  We suggest two
approaches: constraining resources in devices by configuring them
not to use all available buffer space or data rate; and preloading
subpaths with cross traffic.  Note that it is important that a
single environment is constructed that infinitesimally passes all
tests, otherwise there is a chance that TCP can exploit extra
latitude in some parameters (such as data rate) to partially
compensate for constraints in other parameters.
If a TDS validated according to these procedures is used to inform
public dialog, the validation experiment itself should also be
public with sufficient precision for the experiment to be replicated
by other researchers.  All components should either be open source
or fully specified proprietary implementations that are available to
the research community.
TODO: paper proving the validation process.
10. Acknowledgements
Ganga Maguluri suggested the statistical test for measuring loss
probability in the target run length.

Meredith Whittaker for improving the clarity of the communications.
11. Informative References
[MSMO97] Mathis, M., Semke, J., Mahdavi, J., and T. Ott, "The
Macroscopic Behavior of the TCP Congestion Avoidance
Algorithm", Computer Communications Review volume 27,
number 3, July 1997.
[WPING] Mathis, M., "Windowed Ping: An IP Level Performance
Diagnostic", INET 94, June 1994.
[mpingSource]
Fan, X., Mathis, M., and D. Hamon, "Git Repository for
mping: An IP Level Performance Diagnostic", Sept 2013,
<https://github.com/m-lab/mping>.
[MBMSource]
Hamon, D., "Git Repository for Model Based Metrics",
Sept 2013, <https://github.com/m-lab/MBM>.
[Pathdiag]
Mathis, M., Heffner, J., O'Neil, P., and P. Siemsen,
"Pathdiag: Automated TCP Diagnosis", Passive and Active
Measurement, June 2008.
[BScope] Browserscope, "Browserscope Network tests", Sept 2012,
<http://www.browserscope.org/?category=network>.
[Rtool] R Development Core Team, "R: A language and environment
for statistical computing. R Foundation for Statistical
Computing, Vienna, Austria. ISBN 3-900051-07-0, URL
http://www.R-project.org/", 2011.
[StatQC] Montgomery, D., "Introduction to Statistical Quality
Control - 2nd ed.", ISBN 0-471-51988-X, 1990.
[CVST] Krueger, T. and M. Braun, "R package: Fast Cross-
Validation via Sequential Testing", version 0.1, 11 2012.
[LMCUBIC] Ledesma Goyzueta, R. and Y. Chen, "A Deterministic Loss
Model Based Analysis of CUBIC", IEEE International
Conference on Computing, Networking and Communications
(ICNC), E-ISBN: 978-1-4673-5286-4, January 2013.
Appendix A. Model Derivations

The reference target_run_length described in Section 5.2 is based on
very conservative assumptions: that all window above
target_pipe_size contributes to a standing queue that raises the
RTT, and that classic Reno congestion control is in effect.  In this
section we provide two alternative calculations using different
assumptions.

It may seem out of place to allow such latitude in a measurement
standard, but the section provides offsetting requirements.

These models provide estimates that make the most sense if network
performance is viewed logarithmically.  In the operational Internet,
data rates span more than 8 orders of magnitude, RTT spans more than
3 orders of magnitude, and loss probability spans at least 8 orders
of magnitude.  When viewed logarithmically (as in decibels), these
correspond to 80 dB of dynamic range.  On an 80 dB scale, a 3 dB
error is less than 4% of the scale, even though it might represent a
factor of 2 in raw parameter.
Although this document gives a lot of latitude for calculating
target_run_length, people designing suites of tests need to consider
the effect of their choices on the ongoing conversation and tussle
about the relevance of "TCP friendliness" as an appropriate model
for capacity allocation.  Choosing a target_run_length that is
substantially smaller than the reference target_run_length specified
in Section 5.2 is equivalent to saying that it is appropriate for
the transport research community to abandon "TCP friendliness" as a
fairness model and to develop more aggressive Internet transport
protocols, and for applications to continue (or even increase) the
number of connections that they open concurrently.
A.1. Aggregate Reno
In Section 5.2 it is assumed that the target rate is the same as the
link rate, and any excess window causes a standing queue at the
bottleneck.  This might be representative of a non-shared access
link.  An alternative situation would be a heavily aggregated
subpath where individual flows do not significantly contribute to
the queueing delay, and losses are determined by monitoring the
average data rate, for example by the use of a virtual queue as in
[AFD].  In such a scheme the RTT is constant and TCP's AIMD
congestion control causes the data rate to fluctuate in a sawtooth.
If the traffic is being controlled in a manner that is consistent
with the metrics here, the goal would be to make the actual average
rate equal to the target_data_rate.
We can derive a model for Reno TCP and delayed ACK under the above
set of assumptions: for some value of Wmin, the window will sweep
from Wmin to 2*Wmin in 2*Wmin RTT.  Between losses each sawtooth
delivers (1/2)(Wmin+2*Wmin)(2*Wmin) packets in 2*Wmin round trip
times.  However, unlike the queueing case where Wmin =
target_pipe_size, we want the average of Wmin and 2*Wmin to be the
target_pipe_size, so the average rate is the target rate.  Thus we
want Wmin = (2/3)*target_pipe_size.
(@@@@ something is wrong above)  Substituting these together we get:
   target_run_length = (8/3)(target_pipe_size^2)
Note that this is always 88% of the reference run length.
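A quick arithmetic check of that claim against the Section 8.1
example (target_pipe_size = 22, reference run length 1452 from
Table 1):

   target_pipe_size = 22
   reference_run_length = 3 * target_pipe_size ** 2          # 1452 (Table 1)
   aggregate_reno_run_length = (8 / 3) * target_pipe_size ** 2
   print(round(aggregate_reno_run_length))                   # 1291
   print(aggregate_reno_run_length / reference_run_length)   # 0.888...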
A.2. CUBIC
CUBIC has three operating regions.  The model for the expected value
of window size derived in [LMCUBIC] assumes operation in the
"concave" region only, which is a non-TCP friendly region for long-
lived flows.  The authors make the following assumptions: packet
loss probability, p, is independent and periodic, losses occur one
at a time, and they are true losses due to tail drop or corruption.
This definition of p aligns very well with our definition of
target_run_length and the requirement for progressive loss (AQM).
Although CUBIC window increase depends on continuous time, the
authors transform the time to reach the maximum window size in terms
of RTT and a parameter for the multiplicative rate decrease on
observing loss, beta (whose default value is 0.2 in CUBIC).  The
expected value of the window size, E[W], is also dependent on C, a
parameter of CUBIC that determines its window-growth aggressiveness
(values from 0.01 to 4).
   E[W] = ( C*(RTT/p)^3 * ((4-beta)/beta) )^-4
and, further assuming Poisson arrival, the mean throughput, x, is

   x = E[W]/RTT

We note that under these conditions (deterministic single losses),
the value of E[W] is always greater than 0.8 of the maximum window
size ~= reference_run_length.  (as far as I can tell)
Commentary on the consequence of the choice.
Appendix B. Version Control
Formatted: Mon Oct 21 15:42:35 PDT 2013
Authors' Addresses

Matt Mathis
Google, Inc
1600 Amphitheater Parkway
Mountain View, California 93117
USA

Email: mattmathis@google.com