IP Performance Working Group                                   M. Mathis
Internet-Draft                                               Google, Inc
Intended status: Experimental                                  A. Morton
Expires: August 18, 2014                                       AT&T Labs
                                                       February 14, 2014

                  Model Based Bulk Performance Metrics
               draft-ietf-ippm-model-based-metrics-02.txt

Abstract

   We introduce a new class of model based metrics designed to determine
   if an end-to-end Internet path can meet predefined transport
   performance targets by applying a suite of IP diagnostic tests to
   successive subpaths.  The subpath-at-a-time tests are designed to
   accurately detect if any subpath will prevent the full end-to-end
   path from meeting the specified target performance.  Each IP
   diagnostic test consists of a precomputed traffic pattern and
   statistical criteria for evaluating packet delivery.

   The IP diagnostic tests are based on traffic patterns that are
   precomputed to mimic TCP or another transport protocol over a long
   path, but are independent of the actual behavior of the subpath under
   test.  Likewise the success criteria depend on the target performance
   and not on the actual performance of the subpath.  This makes the
   measurements open loop, eliminating nearly all of the difficulties
   encountered by traditional bulk transport metrics.

   This document does not fully define diagnostic tests, but provides a
   framework for designing suites of diagnostic tests that are tailored
   to confirming the target performance.

   By making the tests open loop, we eliminate standard congestion
   control equilibrium behavior, which otherwise causes every measured
   parameter to be sensitive to every component of the system.  As an
   open loop test, various measurable properties become independent, and
   potentially subject to an algebra enabling several important new
   uses.

   Interim DRAFT Formatted: Fri Feb 14 14:07:33 PST 2014

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on August 18, 2014.

Copyright Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  5
     1.1.  TODO . . . . . . . . . . . . . . . . . . . . . . . . . . .  7
   2.  Terminology  . . . . . . . . . . . . . . . . . . . . . . . . .  7
   3.  New requirements relative to RFC 2330  . . . . . . . . . . . . 10
   4.  Background . . . . . . . . . . . . . . . . . . . . . . . . . . 11
     4.1.  TCP properties . . . . . . . . . . . . . . . . . . . . . . 12
     4.2.  Diagnostic Approach  . . . . . . . . . . . . . . . . . . . 13
   5.  Common Models and Parameters . . . . . . . . . . . . . . . . . 15
     5.1.  Target End-to-end parameters . . . . . . . . . . . . . . . 15
     5.2.  Common Model Calculations  . . . . . . . . . . . . . . . . 15
     5.3.  Parameter Derating . . . . . . . . . . . . . . . . . . . . 16
   6.  Common testing procedures  . . . . . . . . . . . . . . . . . . 17
     6.1.  Traffic generating techniques  . . . . . . . . . . . . . . 17
       6.1.1.  Paced transmission . . . . . . . . . . . . . . . . . . 17
       6.1.2.  Constant window pseudo CBR . . . . . . . . . . . . . . 18
       6.1.3.  Scanned window pseudo CBR  . . . . . . . . . . . . . . 18
       6.1.4.  Concurrent or channelized testing  . . . . . . . . . . 19
       6.1.5.  Intermittent Testing . . . . . . . . . . . . . . . . . 19
       6.1.6.  Intermittent Scatter Testing . . . . . . . . . . . . . 20
     6.2.  Interpreting the Results . . . . . . . . . . . . . . . . . 20
       6.2.1.  Test outcomes  . . . . . . . . . . . . . . . . . . . . 20
       6.2.2.  Statistical criteria for measuring run_length  . . . . 22
         6.2.2.1.  Alternate criteria for measuring run_length  . . . 24
       6.2.3.  Reordering Tolerance . . . . . . . . . . . . . . . . . 25
     6.3.  Test Qualifications  . . . . . . . . . . . . . . . . . . . 26
   7.  Diagnostic Tests . . . . . . . . . . . . . . . . . . . . . . . 27
     7.1.  Basic Data Rate and Run Length Tests . . . . . . . . . . . 27
       7.1.1.  Run Length at Paced Full Data Rate . . . . . . . . . . 27
       7.1.2.  Run Length at Full Data Windowed Rate  . . . . . . . . 28
       7.1.3.  Background Run Length Tests  . . . . . . . . . . . . . 28
     7.2.  Standing Queue tests . . . . . . . . . . . . . . . . . . . 28
       7.2.1.  Congestion Avoidance . . . . . . . . . . . . . . . . . 29
       7.2.2.  Bufferbloat  . . . . . . . . . . . . . . . . . . . . . 30
       7.2.3.  Non excessive loss . . . . . . . . . . . . . . . . . . 30
       7.2.4.  Duplex Self Interference . . . . . . . . . . . . . . . 30
     7.3.  Slowstart tests  . . . . . . . . . . . . . . . . . . . . . 30
       7.3.1.  Full Window slowstart test . . . . . . . . . . . . . . 31
       7.3.2.  Slowstart AQM test . . . . . . . . . . . . . . . . . . 31
     7.4.  Sender Rate Burst tests  . . . . . . . . . . . . . . . . . 31
     7.5.  Combined Tests . . . . . . . . . . . . . . . . . . . . . . 32
       7.5.1.  Sustained burst test . . . . . . . . . . . . . . . . . 32
       7.5.2.  Live Streaming Media . . . . . . . . . . . . . . . . . 33
   8.  Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
     8.1.  Near serving HD streaming video  . . . . . . . . . . . . . 34
     8.2.  Far serving SD streaming video . . . . . . . . . . . . . . 34
     8.3.  Bulk delivery of remote scientific data  . . . . . . . . . 35

   9.  Validation . . . . . . . . . . . . . . . . . . . . . . . . . . 35
   10. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 37
   11. Informative References . . . . . . . . . . . . . . . . . . . . 37
   Appendix A.  Model Derivations . . . . . . . . . . . . . . . . . . 39
     A.1.  Queueless Reno . . . . . . . . . . . . . . . . . . . . . . 39
     A.2.  CUBIC  . . . . . . . . . . . . . . . . . . . . . . . . . . 40
   Appendix B.  Complex Queueing  . . . . . . . . . . . . . . . . . . 41
   Appendix C.  Version Control . . . . . . . . . . . . . . . . . . . 42
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 42

1.  Introduction

   Bulk performance metrics evaluate an Internet path's ability to carry
   bulk data.  Model based bulk performance metrics rely on mathematical
   TCP models to design a targeted diagnostic suite (TDS) of IP
   performance tests which can be applied independently to each subpath
   of the full end-to-end path.  These targeted diagnostic suites allow
   independent tests of subpaths to accurately detect if any subpath
   will prevent the full end-to-end path from delivering bulk data at
   the specified performance target, independent of the measurement
   vantage points or other details of the test procedures used for each
   measurement.

   The end-to-end target performance is determined by the needs of the
   user or application, outside the scope of this document.  For bulk
   data transport, the primary performance parameter of interest is the
   target data rate.  However, since TCP's ability to compensate for
   less than ideal network conditions is fundamentally affected by the
   Round Trip Time (RTT) and the Maximum Transmission Unit (MTU) of the
   entire end-to-end path that the data traverses, these
   parameters must also be specified in advance.  They may reflect a
   specific real path through the Internet or an idealized path
   representing a typical user community.  The target values for these
   three parameters, Data Rate, RTT and MTU, inform the mathematical
   models used to design the TDS.

   Each IP diagnostic test in a TDS consists of a precomputed traffic
   pattern and statistical criteria for evaluating packet delivery.

   Mathematical models are used to design traffic patterns that mimic
   TCP or another bulk transport protocol operating at the target data
   rate, MTU and RTT over a full range of conditions, including flows
   that are bursty at multiple time scales.  The traffic patterns are
   computed in advance, based on the three target parameters of the
   end-to-end path and independent of the properties of individual
   subpaths.  As much as possible the measurement traffic is generated
   deterministically, in ways that minimize the extent to which test
   methodology, measurement points, measurement vantage or path
   partitioning affect the details of the measurement traffic.

   Mathematical models are also used to compute the bounds on the packet
   delivery statistics for acceptable IP performance.  Since these
   statistics, such as packet loss, are typically aggregated from all
   subpaths of the end-to-end path, the end-to-end statistical bounds
   need to be apportioned as separate bounds for each subpath.  Note
   that links that are expected to be bottlenecks are expected to
   contribute more packet loss and/or delay.  In compensation, other
   links have to be constrained to contribute less packet loss and
   delay.  The criteria for passing each test of a TDS are an
   apportioned share of the total bound determined by the mathematical
   model from the end-to-end target performance.

   In addition to passing or failing, a test can be deemed to be
   inconclusive for a number of reasons, including: the precomputed
   traffic pattern was not accurately generated, the measurement results
   were not statistically significant, or some other test precondition
   was not met.

   This document describes a framework for deriving the traffic patterns
   and delivery statistics for model based metrics.  It does not fully
   specify any measurement techniques.  Important details such as packet
   type-p selection, sampling techniques, vantage selection, etc. are
   not specified here.  We imagine Fully Specified Targeted Diagnostic
   Suites (FSTDS) that define all of these details.  We use TDS to refer
   to the subset of such a specification that is in scope for this
   document.  A TDS includes the target parameters, documentation of the
   models and any assumptions or derating used to derive the diagnostic
   test parameters, specifications for the traffic and delivery
   statistics for the tests themselves, and a description of a test
   setup that can be used to validate the tests and models.

   Section 2 defines terminology used throughout this document.

   It has been difficult to develop Bulk Transport Capacity [RFC3148]
   metrics due to some overlooked requirements described in Section 3
   and some intrinsic problems with using protocols for measurement,
   described in Section 4.

   In Section 5 we describe the models and common parameters used to
   derive the targeted diagnostic suite.  In Section 6 we describe
   common testing procedures.  Each subpath is evaluated using a suite
   of far simpler and more predictable diagnostic tests described in
   Section 7.  In Section 8 we present three example TDSs: one that
   might be representative of HD video served fairly close to the user,
   a second that might be representative of standard video served from
   a greater distance, and a third that might be representative of a
   network designed to support high performance bulk data delivered
   over a transcontinental path.

   There exists a small risk that the model based metrics themselves
   might yield a false pass result, in the sense that every subpath of
   an end-to-end path passes every IP diagnostic test and yet a real
   application fails to attain the performance target over the end-to-
   end path.  If this happens, then the validation procedure described
   in Section 9 needs to be used to prove and potentially revise the
   models.

   Future documents will define model based metrics for other traffic
   classes and application types, such as real time streaming media.

1.1.  TODO

   Please send comments on this draft to ippm@ietf.org.  See
   http://goo.gl/02tkD for more information including: interim drafts,
   an up to date todo list and information on contributing.

   Formatted: Fri Feb 14 14:07:33 PST 2014

2.  Terminology

   Terminology about paths, etc.  See [RFC2330] and
   [I-D.morton-ippm-lmap-path].

   [data] sender  Host sending data and receiving ACKs.
   [data] receiver  Host receiving data and sending ACKs.
   subpath  A portion of the full path.  Note that there is no
      requirement that subpaths be non-overlapping.
   Measurement Point  Measurement points as described in
      [I-D.morton-ippm-lmap-path].
   test path  A path between two measurement points that includes a
      subpath of the end-to-end path under test, and could include
      additional infrastructure between the measurement points and the
      subpath.
   [Dominant] Bottleneck  The Bottleneck that generally dominates
      traffic statistics for the entire path.  It typically determines a
      flow's self clock timing, packet loss and ECN marking rate.  See
      Section 4.1.
   front path  The subpath from the data sender to the dominant
      bottleneck.
   back path  The subpath from the dominant bottleneck to the receiver.
   return path  The path taken by the ACKs from the data receiver to the
      data sender.
   cross traffic  Other, potentially interfering, traffic competing for
      resources (network and/or queue capacity).

   Properties determined by the end-to-end path and application.  They
   are described in more detail in Section 5.1.

   Application Data Rate  General term for the data rate as seen by the
      application above the transport layer.  This is the payload data
      rate, and excludes transport and lower level headers (TCP/IP or
      other protocols) as well as retransmissions and other data that
      does not contribute to the total quantity of data delivered to
      the application.

   Link Data Rate  General term for the data rate as seen by the link or
      lower layers.  The link data rate includes transport and IP
      headers, retransmits and other transport layer overhead.  This
      document is agnostic as to whether the link data rate includes or
      excludes framing, MAC, or other lower layer overheads, except that
      they must be treated uniformly.
   end-to-end target parameters:  Application or transport performance
      goals for the end-to-end path.  They include the target data rate,
      RTT and MTU described below.
   Target Data Rate:  The application data rate, typically the ultimate
      user's performance goal.  When converted to a link data rate, it
      must be slightly smaller than the actual link data rate, otherwise
      there is no margin for compensating for the RTT or other path
      properties.  These tests will be excessively brittle if the target
      data rate does not include any built in headroom.
   Target RTT (Round Trip Time):  The baseline (minimum) RTT of the
      longest end-to-end path over which the application expects to meet
      the target performance.  The ability of TCP and other transport
      protocols to compensate for path problems is generally
      proportional to the number of round trips per second.  The Target
      RTT determines both key parameters of the traffic patterns (e.g.
      burst sizes) and the thresholds on acceptable traffic statistics.
      The Target RTT must be specified considering authentic packet
      sizes: MTU sized packets on the forward path, ACK sized packets
      (typically the header_overhead) on the return path.
   Target MTU (Maximum Transmission Unit):  The maximum MTU supported by
      the end-to-end path over which the application expects to meet the
      target performance.  Assume 1500 Bytes per packet unless otherwise
      specified.  If some subpath forces a smaller MTU, then it becomes
      the target MTU, and all model calculations and subpath tests must
      use the same smaller MTU.
   Effective Bottleneck Data Rate:  The bottleneck data rate that might
      be inferred from the ACK stream, by looking at how much data the
      ACK stream reports was delivered per unit time.  If the path is
      thinning ACKs or batching packets the effective bottleneck rate
      can be much higher than the average link rate.  See Section 4.1
      and Appendix B for more details.
   [sender | interface] rate:  The burst data rate, constrained by the
      data sender's interfaces.  Today 1 or 10 Gb/s are typical.
   Header_overhead:  The IP and TCP header sizes, which are the portion
      of each MTU not available for carrying application payload.
      Without loss of generality this is assumed to be the size of the
      returning acknowledgements (ACKs).  For TCP, the Maximum Segment
      Size (MSS) is the Target MTU minus the header_overhead.

   Basic parameters common to models and subpath tests.  They are
   described in more detail in Section 5.2.  Note that these are mixed
   between application transport performance (excludes headers) and link
   IP performance (includes headers).

   pipe size  A general term for the number of packets needed in flight
      (the window size) to exactly fill some network path or subpath.
      This is the window size that normally marks the onset of queueing.
   target_pipe_size:  The number of packets in flight (the window size)
      needed to exactly meet the target rate, with a single stream and
      no cross traffic, for the specified target data rate, RTT, and
      MTU.  It is the amount of circulating data required to meet the
      target data rate, and implies the scale of the bursts that the
      network might experience.
   run length  A general term for the observed, measured, or specified
      number of packets that are (to be) delivered between losses or ECN
      marks.  Nominally one over the loss or ECN marking probability, if
      the losses or marks are independently and identically distributed.
   target_run_length  An estimate of the minimum required headway
      between losses or ECN marks necessary to attain the
      target_data_rate over a path with the specified target_RTT and
      target_MTU, as computed by a mathematical model of TCP congestion
      control.  A reference calculation is shown in Section 5.2, with
      alternatives in Appendix A.

   Ancillary parameters used for some tests:

   derating:  Under some conditions the standard models are too
      conservative.  The modeling framework permits some latitude in
      relaxing or derating some test parameters, as described in
      Section 5.3, in exchange for the more stringent TDS validation
      procedures described in Section 9.
   subpath_data_rate  The maximum IP data rate supported by a subpath.
      This typically includes TCP/IP overhead, such as headers and
      retransmits.
   test_path_RTT  The RTT between two measurement points, using
      appropriate data and ACK packet sizes.
   test_path_pipe  The amount of data necessary to fill a test path.
      Nominally the test path RTT times the subpath_data_rate (which
      should be part of the end-to-end subpath).
   test_window  The window necessary to meet the target_rate over a
      subpath.  Typically test_window = target_data_rate*test_path_RTT/
      (target_MTU - header_overhead).

   Tests can be classified into groups according to their applicability.

   Capacity tests  determine if a network subpath has sufficient
      capacity to deliver the target performance.  As long as the test
      traffic is within the proper envelope for the target end-to-end
      performance, the average packet losses or ECN marks must be below
      the threshold computed by the model.  As such, capacity tests
      reflect parameters that can transition from passing to failing as
      a consequence of cross traffic, additional presented load or the
      actions of other network users.  By definition, capacity tests
      also consume significant network resources (data capacity and/or
      buffer space), and the test schedules must be balanced by their
      cost.
   Monitoring tests  are designed to capture the most important aspects
      of a capacity test, without presenting excessive ongoing load
      themselves.  As such they may miss some details of the network's
      performance, but can serve as a useful reduced-cost proxy for a
      capacity test.
   Engineering tests  evaluate how network algorithms (such as AQM and
      channel allocation) interact with TCP-style self clocked protocols
      and adaptive congestion control based on packet loss and ECN
      marks.  These tests are likely to have complicated interactions
      with other traffic and under some conditions can be inversely
      sensitive to load.  For example a test to verify that an AQM
      algorithm causes ECN marks or packet drops early enough to limit
      queue occupancy may experience a false pass result in the presence
      of bursty cross traffic.  It is important that engineering tests
      be performed under a wide range of conditions, including both in
      situ and bench testing, and over a wide variety of load
      conditions.  Ongoing monitoring is less likely to be useful for
      engineering tests, although sparse in situ testing might be
      appropriate.

   General Terminology:

   Targeted Diagnostic Suite (TDS)  A set of IP diagnostic tests
      designed to determine if a subpath can sustain flows at a specific
      target_data_rate over a path that has a target_RTT, using
      target_MTU sized packets.
   Fully Specified Targeted Diagnostic Suite (FSTDS)  A TDS together
      with additional specifications such as "type-p", which are out of
      scope for this document and need to be drawn from other standards
      documents.
   apportioned  To divide and allocate, as in budgeting packet loss
      rates across multiple subpaths such that they accumulate to less
      than a specified end-to-end loss rate.
   open loop  A control theory term used to describe a class of
      techniques where systems that exhibit circular dependencies can be
      analyzed by suppressing some of the dependencies, such that the
      resulting dependency graph is acyclic.

3.  New requirements relative to RFC 2330

   [Move this entire section to a future paper]

   Model Based Metrics are designed to fulfill some additional
   requirements that were not recognized at the time RFC 2330 was
   written [RFC2330].  These missing requirements may have significantly
   contributed to policy difficulties in the IP measurement space.  Some
   additional requirements are:
   o  IP metrics must be actionable by the ISP - they have to be
      interpreted in terms of behaviors or properties at the IP or lower
      layers that an ISP can test, repair and verify.
   o  Metrics must be vantage point invariant over a significant range
      of measurement point choices, including off path measurement
      points.  The only requirements on MP selection should be that the
      portion of the test path that is not under test is effectively
      ideal (or is non ideal in ways that can be calibrated out of the
      measurements) and that the test RTT between the MPs is below some
      reasonable bound.
   o  Metrics must be repeatable by multiple parties with no specialized
      access to MPs or diagnostic infrastructure.  It must be possible
      for different parties to make the same measurement and observe the
      same results.  In particular it is specifically important that
      both a consumer (or their delegate) and ISP be able to perform the
      same measurement and get the same result.

   NB: All of the metric requirements in RFC 2330 should be reviewed and
   potentially revised.  If such a document is opened soon enough, this
   entire section should be dropped.

4.  Background

   [Move to a future paper, abridge here]

   At the time the IPPM WG was chartered, sound Bulk Transport Capacity
   measurement was known to be beyond our capabilities.  By hindsight it
   is now clear why it is such a hard problem:
   o  TCP is a control system with circular dependencies - everything
      affects performance, including components that are explicitly not
      part of the test.
   o  Congestion control is an equilibrium process, such that transport
      protocols change the network (raise loss probability and/or RTT)
      to conform to their behavior.
   o  TCP's ability to compensate for network flaws is directly
      proportional to the number of roundtrips per second (i.e.
      inversely proportional to the RTT).  As a consequence a flawed
      link may pass a short RTT local test even though it fails when the
      path is extended by a perfect network to some larger RTT.
   o  TCP has a meta Heisenberg problem - Measurement and cross traffic
      interact in unknown and ill defined ways.  The situation is
      actually worse than the traditional physics problem where you can
      at least estimate the relative momentum of the measurement and
      measured particles.  For network measurement you can not in
      general determine the relative "elasticity" of the measurement
      traffic and cross traffic, so you can not even gauge the relative
      magnitude of their effects on each other.

   These properties are a consequence of the equilibrium behavior
   intrinsic to how all throughput optimizing protocols interact with
   the network.  The protocols rely on control systems based on multiple
   network estimators to regulate the quantity of data sent into the
   network.  The data in turn alters the network behavior and the
   properties observed by the estimators, such that there are circular
   dependencies between every component and every property.  Since some
   of these estimators are non-linear, the entire system is nonlinear,
   and any change anywhere causes difficult to predict changes in every
   parameter.

   Model Based Metrics overcome these problems by forcing the
   measurement system to be open loop: the delivery statistics (akin to
   the network estimators) do not affect the traffic.  The traffic and
   traffic patterns (bursts) are computed on the basis of the target
   performance.  In order for a network to pass, the resulting delivery
   statistics and corresponding network estimators have to be such that
   they would not cause the control systems to slow the traffic below
   the target rate.

   A singleton [RFC2330] measurement is a pass/fail evaluation of a
   given path or subpath at a given performance.  Note that measurements
   to confirm that a link passes at one particular performance might not
   be useful to predict if the link will pass at a different
   performance.

   A TDS does have several valuable properties, such as natural ways to
   define several different composition metrics [RFC5835].  The Spatial
   Composition of fundamental IPPM metrics has been studied and
   standardized.  For example, the algebra to combine empirical
   assessments of loss ratio to estimate complete path performance is
   described in section 5.1.5 of [RFC6049].  We intend to use this and
   other composition metrics as necessary.

4.1.  TCP properties

   [Move this entire section to a future paper]

   TCP and SCTP are self clocked protocols.  The dominant steady state
   behavior is to have an approximately fixed quantity of data and
   acknowledgements (ACKs) circulating in the network.  The receiver
   reports arriving data by returning ACKs to the data sender, and the
   data sender typically responds by sending exactly the same quantity
   of data back into the network.  The total quantity of data plus the
   data represented by ACKs circulating in the network is referred to as
   the window.  The mandatory congestion control algorithms
   incrementally adjust the window by sending slightly more or less data
   in response to each ACK.  The fundamentally important property of
   this system is that it is entirely self clocked: the data
   transmissions are a reflection of the ACKs that were delivered by the
   network, and the ACKs are a reflection of the data arriving from the
   network.

   A number of phenomena can cause bursts of data, even in idealized
   networks that are modeled as simple queueing systems.

   During slowstart the data rate is doubled on each RTT by sending
   twice as much data as was delivered to the receiver on the prior RTT.
   For slowstart to be able to fill such a network the network must be
   able to tolerate slowstart bursts up to the full pipe size inflated
   by the anticipated window reduction on the first loss or ECN mark.
   For example, with classic Reno congestion control, an optimal
   slowstart has to end with a burst that is twice the bottleneck rate
   for exactly one RTT in duration.  This burst causes a queue which is
   exactly equal to the pipe size (i.e. the window is exactly twice the
   pipe size), so when the window is halved in response to the first
   loss, the new window will be exactly the pipe size.
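
   A worked example of this arithmetic, with illustrative numbers not
   taken from this document: for a pipe size of 100 packets, an optimal
   Reno slowstart ends with a 200 packet window.  During the final RTT
   data arrives at twice the bottleneck rate, so 100 excess packets
   accumulate in the queue; the first loss halves the window from 200
   back to 100 packets, which then exactly fills the pipe with no
   standing queue.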

   Note that if the bottleneck data rate is significantly slower than
   the rest of the path, the slowstart bursts will not cause significant
   queues anywhere else along the path; they primarily exercise the
   queue at the dominant bottleneck.

   Other sources of bursts include application pauses and channel
   allocation mechanisms.  Appendix B describes the treatment of channel
   allocation systems.  If the application pauses (stops reading or
   writing data) for some fraction of one RTT, state-of-the-art TCP
   catches up to the earlier window size by sending a burst of data at
   the full sender interface rate.  To fill such a network with a
   realistic application, the network has to be able to tolerate
   interface rate bursts from the data sender large enough to cover
   application pauses.

   Although the interface rate bursts are typically smaller than the
   last burst of a slowstart, they are at a higher data rate so they can
   potentially exercise queues at arbitrary points along the front path
   from the data sender up to and including the queue at the dominant
   bottleneck.

   There is no model for how frequent or what sizes of sender rate
   bursts should be tolerated.

   To verify that a path can meet a performance target, it is necessary
   to independently confirm that the path can tolerate bursts in the
   dimensions that can be caused by these mechanisms.  Three cases are
   likely to be sufficient:

   o  Slowstart bursts sufficient to get connections started properly.
   o  Frequent sender interface rate bursts that are small enough that
      they can be assumed not to significantly affect the delivery
      statistics.  (Implicitly derated by selecting the burst size).
   o  Infrequent sender interface rate bursts of the full
      target_pipe_size, which do affect the delivery statistics.
      (Target_run_length may be derated).

4.2.  Diagnostic Approach

   The MBM approach is to open loop TCP by precomputing traffic patterns
   that are typically generated by TCP operating at the given target
   parameters, and evaluating the delivery statistics (packet loss, ECN
   marks and delay).  In this approach the measurement software
   explicitly controls the data rate, transmission pattern or cwnd
   (TCP's primary congestion control state variables) to create
   repeatable traffic patterns that mimic TCP behavior but are
   independent of the actual behavior of the subpath under test.  These
   patterns are manipulated to probe the network to verify that it can
   deliver all of the traffic patterns that a transport protocol is
   likely to generate under normal operation at the target rate and RTT.

   By opening the protocol control loops, we remove most sources of
   temporal and spatial correlation in the traffic delivery statistics,
   such that each subpath's contribution to the end-to-end delivery
   statistics can be assumed to be independent and stationary.  (The
   delivery statistics depend on the fine structure of the data
   transmissions, but not on long time scale state imbedded in the
   sender, receiver or other network components.)  Therefore each
   subpath's contribution to the end-to-end delivery statistics can be
   assumed to be independent, and spatial composition techniques such as
   [RFC5835] apply.

   In typical networks, the dominant bottleneck contributes the majority
   of the packet loss and ECN marks.  Often the rest of the path makes
   an insignificant contribution to these properties.  A TDS should
   apportion the end-to-end budget for the specified parameters
   (primarily packet loss and ECN marks) to each subpath or group of
   subpaths.  For example the dominant bottleneck may be permitted to
   contribute 90% of the loss budget, while the rest of the path is only
   permitted to contribute 10%.

   A TDS or FSTDS MUST apportion all relevant packet delivery statistics
   between different subpaths, such that the spatial composition of the
   metrics yields end-to-end statistics which are within the bounds
   determined by the models.
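
   For illustration only (the 90/10 split above is an example, not a
   recommendation), a minimal sketch of such an apportionment and its
   composition check follows.  It uses the small-ratio approximation in
   which the composed end-to-end loss ratio is roughly the sum of the
   subpath loss ratios (see [RFC6049]):

   # Hypothetical loss-budget apportionment (illustrative sketch).
   end_to_end_loss_bound = 1.0 / 22000    # e.g. 1/target_run_length
   shares = {"dominant_bottleneck": 0.90, "rest_of_path": 0.10}

   # Each subpath is budgeted its apportioned share of the bound.
   subpath_bounds = {name: share * end_to_end_loss_bound
                     for name, share in shares.items()}

   # Spatial composition: for small ratios, the end-to-end loss is
   # approximately the sum of the subpath loss ratios.
   composed = sum(subpath_bounds.values())
   assert composed <= end_to_end_loss_bound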

   A network is expected to be able to sustain a Bulk TCP flow of a
   given data rate, MTU and RTT when the following conditions are met:
   o  The raw link rate is higher than the target data rate.
   o  The observed run length is larger than required by a suitable TCP
      performance model.
   o  There is sufficient buffering at the dominant bottleneck to absorb
      a slowstart rate burst large enough to get the flow out of
      slowstart at a suitable window size.
   o  There is sufficient buffering in the front path to absorb and
      smooth sender interface rate bursts at all scales that are likely
      to be generated by the application, any channel arbitration in the
      ACK path or other mechanisms.
   o  When there is a standing queue at a bottleneck for a shared media
      subpath, there are suitable bounds on how the data and ACKs
      interact, for example due to the channel arbitration mechanism.
   o  When there is a slowly rising standing queue at the bottleneck,
      the onset of packet loss has to be at an appropriate point (time
      or queue depth) and progressive.  This typically requires some
      form of Automatic Queue Management [RFC2309].

   We are developing a tool that can perform many of the tests described
   here [MBMSource].

5.  Common Models and Parameters

5.1.  Target End-to-end parameters

   The target end-to-end parameters are the target data rate, target RTT
   and target MTU as defined in Section 2.  These parameters are
   determined by the needs of the application or the ultimate end user
   and the end-to-end Internet path over which the application is
   expected to operate.  The target parameters are in units that make
   sense to upper layers: payload bytes delivered to the application,
   above TCP.  They exclude overheads associated with TCP and IP
   headers, retransmits and other protocols (e.g.  DNS).

   Other end-to-end parameters defined in Section 2 include the
   effective bottleneck data rate, the sender interface data rate and
   the TCP/IP header sizes (overhead).

   The target data rate must be smaller than all link data rates by
   enough headroom to carry the transport protocol overhead (explicitly
   including retransmissions) and to allow for fluctuations in the
   actual data rate needed to meet the specified average rate.
   Specifying a target rate with insufficient headroom is likely to
   result in brittle measurements having little predictive value.

   Note that the target parameters can be specified for a hypothetical
   path, for example to construct a TDS designed for bench testing in
   the absence of a real application, or for a real physical test, for
   in situ testing of production infrastructure.

   The number of concurrent connections is explicitly not a parameter to
   this model.  If a subpath requires multiple connections in order to
   meet the specified performance, that must be stated explicitly, and
   the procedure described in Section 6.1.4 applies.

5.2.  Common Model Calculations

   The end-to-end target parameters are used to derive the
   target_pipe_size and the reference target_run_length.

   The target_pipe_size is the average window size in packets needed to
   meet the target rate, for the specified target RTT and MTU.  It is
   given by:

   target_pipe_size = target_rate * target_RTT / ( target_MTU -
   header_overhead )

   Target_run_length is an estimate of the minimum required headway
   between losses or ECN marks, as computed by a mathematical model of
   TCP congestion control.  The derivation here follows [MSMO97], and by
   design is quite conservative.  The alternate models described in
   Appendix A generally yield smaller run_lengths (higher loss rates),
   but may not apply in all situations.  In any case alternate models
   should be compared to the reference target_run_length computed here.

   The reference target_run_length is derived as follows: assume the
   subpath_data_rate is infinitesimally larger than the target_data_rate
   plus the required header_overhead.  Then target_pipe_size also
   predicts the onset of queueing.  A larger window will cause a
   standing queue at the bottleneck.

   Assume the transport protocol is using standard Reno style Additive
   Increase, Multiplicative Decrease congestion control [RFC5681] (but
   not Appropriate Byte Counting [RFC3465]) and that the receiver is
   using standard delayed ACKs.  Reno increases the window by one packet
   every pipe_size worth of ACKs.  With delayed ACKs this takes 2 Round
   Trip Times per increase.  To exactly fill the pipe, losses must be no
   closer than when the peak of the AIMD sawtooth reaches exactly twice
   the target_pipe_size; otherwise the multiplicative window reduction
   triggered by the loss would cause the network to be underfilled.
   Following [MSMO97], the number of packets between losses must be the
   area under the AIMD sawtooth.  They must be no more frequent than
   every 1 in ((3/2)*target_pipe_size)*(2*target_pipe_size) packets,
   which simplifies to:

   target_run_length = 3*(target_pipe_size^2)

   Note that this calculation is very conservative and is based on a
   number of assumptions that may not apply.  Appendix A discusses these
   assumptions and provides some alternative models.  If a less
   conservative model is used, a fully specified TDS or FSTDS MUST
   document the actual method for computing target_run_length, along
   with the rationale for the underlying assumptions and the ratio of
   the chosen target_run_length to the reference target_run_length
   calculated above.

   These two parameters, target_pipe_size and target_run_length,
   directly imply most of the individual parameters for the tests in
   Section 7.
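
   As a worked illustration of these two calculations (a sketch using
   assumed example values, not parameters from any normative test), the
   following Python fragment computes target_pipe_size and the reference
   target_run_length for a 10 Mb/s target over a 100 ms path with a
   1500 Byte MTU and 40 Bytes of header_overhead:

   # Reference model calculations (illustrative sketch only).
   def target_pipe_size(target_rate, target_rtt, target_mtu,
                        header_overhead=40):
       # target_rate in bits/second, target_rtt in seconds, sizes in
       # Bytes; returns the window in packets that just meets the rate.
       payload_bits = (target_mtu - header_overhead) * 8
       return target_rate * target_rtt / payload_bits

   def reference_target_run_length(pipe_size):
       # One loss per ((3/2)*pipe_size)*(2*pipe_size) = 3*pipe_size^2.
       return 3 * pipe_size ** 2

   pipe = target_pipe_size(10e6, 0.100, 1500)  # about 85.6 packets
   run = reference_target_run_length(pipe)     # about 22000 packets

   Under these example values an unmodified Reno flow requires roughly
   one loss or ECN mark per 22000 packets, or better, to sustain the
   target rate.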

5.3.  Parameter Derating

   Since some aspects of the models are very conservative, this
   framework permits some latitude in derating test parameters.  Rather
   than trying to formalize more complicated models we permit some test
   parameters to be relaxed as long as they meet some additional
   procedural constraints:
   o  The TDS or FSTDS MUST document and justify the actual method used
      to compute the derated metric parameters.
   o  The validation procedures described in Section 9 must be used to
      demonstrate the feasibility of meeting the performance targets
      with infrastructure that infinitesimally passes the derated tests.
   o  The validation process itself must be documented in such a way
      that other researchers can duplicate the validation experiments.

   Except as noted, all tests below assume no derating.  Tests where
   there is not currently a well established model for the required
   parameters explicitly include derating as a way to indicate
   flexibility in the parameters.

6.  Common testing procedures

6.1.  Traffic generating techniques

6.1.1.  Paced transmission

   Paced (burst) transmissions: send bursts of data on a timer to meet a
   particular target rate and pattern.  In all cases the specified data
   rate can be either the application or the link rate.  Header overheads
   must be included in the calculations as appropriate.
   Paced single packets:  Send individual packets at the specified rate
      or headway.
   Burst:  Send sender interface rate bursts on a timer.  Specify any 3
      of: average rate, packet size, burst size (number of packets) and
      burst headway (burst start to start).  These bursts are typically
      sent as back-to-back packets at the tester's interface rate.
   Slowstart bursts:  Send 4 packet sender interface rate bursts at an
      average data rate equal to twice the effective bottleneck link
      rate (but not more than the sender interface rate).  This
      corresponds to the average rate during a TCP slowstart when
      Appropriate Byte Counting [RFC3465] is present or delayed ACK is
      disabled.  Note
      that if the effective bottleneck link rate is more than half of
      the sender interface rate, slowstart bursts become sender
      interface rate bursts.
   Repeated Slowstart bursts:  Slowstart bursts are typically part of a
      larger scale pattern of repeated bursts, such as sending
      target_pipe_size packets as slowstart bursts on a target_RTT
      headway (burst start to burst start).  Such a stream has three
      different average rates, depending on the averaging interval.  At
      the finest time scale the average rate is the same as the sender
      interface rate, at a medium scale the average rate is twice the
      effective bottleneck link rate and at the longest time scales the
      average rate is equal to the target data rate.
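
   The following Python fragment is an illustrative sketch of the
   headway calculations implied by the patterns above (ours, not
   normative; it assumes rates in bits per second and packet sizes in
   bytes):

   def single_packet_headway(packet_size, target_rate):
       # Paced single packets: one packet per headway yields the
       # specified average rate.
       return packet_size * 8.0 / target_rate

   def slowstart_burst_headway(packet_size, bottleneck_rate,
                               burst_size=4):
       # Slowstart bursts: 4 packet bursts at an average data rate
       # equal to twice the effective bottleneck link rate.
       return burst_size * packet_size * 8.0 / (2.0 * bottleneck_rate)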

   Note that in conventional measurement theory exponential
   distributions are often used to eliminate many sorts of
   correlations.  For the procedures above, the correlations are
   created by the network elements and accurately reflect their
   behavior.  At some point in the future, it may be desirable to
   introduce noise sources into the above pacing models, but they are
   not warranted at this time.

6.1.2.  Constant window pseudo CBR

   Implement pseudo constant bit rate by running a standard protocol
   such as TCP with a fixed bound on the window size.  The rate is only
   maintained on average over each RTT, and is subject to limitations
   of the transport protocol.

   The bound on the window size is computed from the target_data_rate
   and the actual RTT of the test path.

   If the transport protocol fails to maintain the test rate within
   prescribed limits the test would typically be considered
   inconclusive or failing, depending on what mechanism caused the
   reduced rate.  See the discussion of test outcomes in Section 6.2.1.
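
   As an illustrative sketch (ours, assuming the data rate is expressed
   in bits per second and the MTU in bytes), the bound on the window
   size described above can be computed as:

   import math

   def cbr_window_bound(target_data_rate, test_rtt, target_mtu):
       # Fixed window, in packets, that averages target_data_rate
       # over a path with round trip time test_rtt (in seconds).
       return math.ceil(target_data_rate * test_rtt / (target_mtu * 8))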

6.1.3.  Scanned window pseudo CBR

   Same as the above, except the window is scanned across a range of
   sizes designed to include two key events, the onset of queueing and
   the onset of packet loss or ECN marks.  The window is scanned by
   incrementing it by one packet for every 2*target_pipe_size delivered
   packets.  This mimics the additive increase phase of standard
   congestion avoidance and normally separates successive window
   increases by approximately twice the target_RTT.

   There are two versions of this test: one built by applying a window
   clamp to standard congestion control and one built by stiffening a
   non-standard transport protocol.  When standard congestion control
   is in effect, any losses or ECN marks cause the transport to revert
   to a window smaller than the clamp, such that the scanning clamp
   loses control of the window size.  The NPAD pathdiag tool is an
   example of this class of algorithms [Pathdiag].

   Alternatively a non-standard congestion control algorithm can
   respond to losses by transmitting extra data, such that it maintains
   the specified window size independent of losses or ECN marks.  Such
   a stiffened transport explicitly violates mandatory Internet
   congestion control and is not suitable for in situ testing.  It is
   only appropriate for engineering testing under laboratory
   conditions.  The Windowed Ping tool implemented such a test [WPING].
   This tool has been updated and is under test. [mpingSource]

   The test procedures in Section 7.2 describe how to partition the
   scans into regions and how to interpret the results.
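
   The scanning schedule itself is simple; as an illustrative sketch
   (ours, not normative):

   def scanned_window(initial_window, delivered_packets,
                      target_pipe_size):
       # Increment the window by one packet for every
       # 2*target_pipe_size delivered packets, mimicking additive
       # increase with delayed ACKs.
       return initial_window + delivered_packets // (2 * target_pipe_size)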

6.1.4.  Concurrent or channelized testing

   The procedures described in this document are only directly
   applicable to single stream performance measurement, e.g. one TCP
   connection.  In an ideal world, we would disallow all performance
   claims based on multiple concurrent streams, but this is not
   practical due to at least two different issues.  First, many very
   high rate link technologies are channelized and pin individual flows
   to specific channels to minimize reordering or solve other problems;
   and second, TCP itself has scaling limits.  Although the former
   problem might be overcome through different design decisions, the
   latter problem is more deeply rooted.

   All standard [RFC5681] and de facto standard [CUBIC] congestion
   control algorithms have scaling limits, in the sense that as a long
   fast network (LFN) with a fixed RTT and MTU gets faster, all
   congestion control algorithms get less accurate and as a consequence
   have difficulty filling the network [SLowScaling].  These properties
   are a direct consequence of the original Reno AIMD congestion
   control design and the requirement in RFC 5681 that all transport
   protocols have uniform response to congestion.

   There are a number of reasons to want to specify performance in
   terms of multiple concurrent flows.  Although there are a number of
   downsides to @@@@

   The use of multiple connections in the Internet has been very
   controversial since the beginning of the World-Wide-Web [first
   complaint].  Modern browsers open many connections [BScope], and
   this practice has been embraced by the Internet service provider
   community.  Experts associated with the IETF transport area have
   frequently spoken against this practice [long list].  It is not
   inappropriate to assume some small number of concurrent connections
   (e.g. 4 or 6), to compensate for limitations in TCP.  However,
   choosing too large a number is at risk of being interpreted by the
   web browser community as a signal that opening many flows is
   acceptable.  It may not be desirable to send such a signal.

   Note that the current proposal for httpbis [SPDY] is specifically
   designed to work best with a single TCP connection per client-server
   pair, because it uses adaptive compression which requires sending
   separate compression dictionaries per connection.  As long as TCP
   can use IW10 and some of the transport parameters can be cached,
   multiple connections provide a negative gain, due to the replicated
   compression overhead.

   The multiple connections approach is not recommended for data rates
   below several Mb/s, which can be attained with run lengths under
   10000 packets.  Since run length goes as the square of the data
   rate, at higher rates the run lengths can be unfeasibly large, and
   multiple connections might be the only feasible approach.  For an
   example of this problem see Section 8.3.

   If multiple connections are deemed necessary to meet aggregate
   performance targets then this MUST be stated both in the design of
   the TDS and in any claims about network performance.  The tests MUST
   be performed concurrently with the specified number of connections.
   For the tests that use bursty traffic, the bursts should be
   synchronized across flows.

6.1.5.  Intermittent Testing

   Any test which does not depend on queueing (e.g. the CBR tests) or
   experiences periodic zero outstanding data during normal operation
   (e.g. between bursts for the various burst tests), can be formulated
   as an intermittent test, to reduce the perceived impact on other
   traffic.  The approach is to insert periodic pauses in the test at
   any point when there is no expected queue occupancy.

   Intermittent testing can be used for ongoing monitoring for changes
   in subpath quality with minimal disruption to users.  It may false
   fail due to other legitimate congestion causing traffic, or may
   false pass changes in underlying link properties (e.g. a modem
   retraining to an out of contract lower rate).  However it is not
   suitable in environments where there are reactive links [REACTIVE].

6.1.6.  Intermittent Scatter Testing

   Intermittent scatter testing is a technique for non-disruptively
   evaluating the front path from a sender to an ISP subscriber
   aggregation point (CMTS, DSLAM, etc) at full load, by intermittently
   testing across a pool of subscriber access links, such that each
   subscriber sees tolerable test traffic loads.  The load on the front
   path should be no more than that which would be caused by a single
   test to an otherwise idle subscriber.  This test in aggregate mimics
   a full load test from a content provider to the subscriber
   aggregation point.

   Intermittent scatter testing can be used to reduce the measurement
   noise introduced by unknown traffic on customer access links.

6.2.  Interpreting the Results

6.2.1.  Test outcomes

   To perform an exhaustive test of an end-to-end network path, each
   test of the TDS is applied to each subpath of the end-to-end path.
   If any subpath fails any test then an application running over the
   end-to-end path can also be expected to fail to attain the target
   performance under some conditions.

   In addition to passing or failing, a test can be deemed to be
   inconclusive for a number of reasons.  Proper instrumentation and
   treatment of inconclusive outcomes is critical to the accuracy and
   robustness of Model Based Metrics.  Tests can be inconclusive if the
   precomputed traffic pattern was not accurately generated; the
   measurement results were not statistically significant; and other
   causes such as failing to meet some required preconditions for the
   test.

   For example consider a test that implements Constant Window Pseudo
   CBR (Section 6.1.2) by adding rate controls and detailed traffic
   instrumentation to TCP (e.g.  [RFC4898]).  TCP includes built in
   control systems which might interfere with the sending data rate.
   If such a test meets the run length specification while failing to
   attain the specified data rate it must be treated as an inconclusive
   result, because we can not a priori determine if the reduced data
   rate was caused by a TCP problem or a network problem, or if the
   reduced data rate had a material effect on the run length
   measurement itself.

   Note that for load tests such as this example, an observed run
   length that is too small can be considered to have failed the test,
   because it doesn't really matter that the test didn't attain the
   required data rate.

   The really important new properties of MBM, such as vantage
   independence, are a direct consequence of opening the control loops
   in the protocols, such that the test traffic does not depend on
   network conditions or traffic received.  Any mechanism that
   introduces feedback between the traffic measurements and the traffic
   generation is at risk of introducing nonlinearities that spoil these
   properties.  Any exceptional event that indicates that such feedback
   has happened should cause the test to be considered inconclusive.

   One way to view inconclusive tests is that they reflect situations
   where a test outcome is ambiguous between limitations of the network
   and some unknown limitation of the diagnostic test itself, which was
   presumably caused by some uncontrolled feedback from the network.

   Note that procedures that attempt to sweep the target parameter
   space to find the bounds on some parameter (for example to find the
   highest data rate for a subpath) are likely to break the location
   independent properties of Model Based Metrics, because the boundary
   between passing and inconclusive is likely to be RTT sensitive:
   TCP's ability to compensate for problems scales with the number of
   round trips per second.  Repeating the same procedure from another
   vantage point with a different RTT is likely to get a different
   result, because TCP will get lower performance on the path with the
   longer RTT.

   One of the goals for evolving TDS designs will be to keep sharpening
   the distinction between inconclusive, passing and failing tests.
   The criteria for passing, failing and inconclusive tests MUST be
   explicitly stated for every test in the TDS or FSTDS.

   One of the goals of evolving the testing process, procedures, tools
   and measurement point selection should be to minimize the number of
   inconclusive tests.

   It may be useful to keep raw packet delivery statistics for deeper
   study of the behavior of the network path and to measure the tools.
   This can help to drive tool evolution.  Under some conditions it
   might be possible to reevaluate the raw data for satisfying
   alternate performance targets.  However such procedures are likely
   to introduce sampling bias and other implicit feedback which can
   cause false results and exhibit MP vantage sensitivity.

6.2.2.  Statistical criteria for measuring run_length

   When evaluating the observed run_length, we need to determine
   appropriate packet stream sizes and acceptable error levels for
   efficient measurement.  In practice, can we compare the empirically
   estimated packet loss and ECN marking probabilities with the targets
   as the sample size grows?  How large a sample is needed to say that
   the measurements of packet transfer indicate a particular run length
   is present?

   The generalized measurement can be described as recursive testing:
   send packets (individually or in patterns) and observe the packet
   transfer performance (loss ratio, or any other marking we define).

   As each packet is sent and measured, we have an ongoing estimate of
   the performance in terms of the ratio of packet loss or ECN marks to
   total packets (i.e. an empirical probability).  We continue to send
   until conditions support a conclusion or a maximum sending limit has
   been reached.

   We have a target_mark_probability, 1 mark per target_run_length,
   where a "mark" is defined as a lost packet, a packet with ECN mark,
   or other signal.  This constitutes the null Hypothesis:

   H0:  no more than one mark in target_run_length =
      3*(target_pipe_size)^2 packets

   and we can stop sending packets if on-going measurements support
   accepting H0 with the specified Type I error = alpha (= 0.05 for
   example).

   We also have an alternative Hypothesis to evaluate: if performance
   is significantly lower than the target_mark_probability.  Based on
   analysis of typical values and practical limits on measurement
   duration, we choose four times the H0 probability:

   H1:  one or more marks in (target_run_length/4) packets

   and we can stop sending packets if measurements support rejecting H0
   with the specified Type II error = beta (= 0.05 for example), thus
   preferring the alternate hypothesis H1.

   H0 and H1 constitute the Success and Failure outcomes described
   elsewhere in the memo, and while the ongoing measurements do not
   support either hypothesis the current status of measurements is
   inconclusive.

   The problem above is formulated to match the Sequential Probability
   Ratio Test (SPRT) [StatQC].  Note that as originally framed the
   events under consideration were all manufacturing defects.  In
   networking, ECN marks and lost packets are not defects but signals,
   indicating that the transport protocol should slow down.

   The Sequential Probability Ratio Test also starts with a pair of
   hypothesis specified as above:

   H0:  p0 = one defect in target_run_length
   H1:  p1 = one defect in target_run_length/4
   As packets are sent and measurements collected, the tester evaluates
   the cumulative defect count against two boundaries representing H0
   Acceptance or Rejection (and acceptance of H1):

   Acceptance line:  Xa = -h1 + sn
   Rejection line:  Xr = h2 + sn
   where n increases linearly for each packet sent and

   h1 =  { log((1-alpha)/beta) }/k
   h2 =  { log((1-beta)/alpha) }/k
   k  =  log{ (p1(1-p0)) / (p0(1-p1)) }
   s  =  [ log{ (1-p0)/(1-p1) } ]/k
   for p0 and p1 as defined in the null and alternative Hypotheses
   statements above, and alpha and beta as the Type I and Type II error.

   The SPRT specifies simple stopping rules:

   o  Xa < defect_count(n) < Xr: continue testing
   o  defect_count(n) <= Xa: Accept H0
   o  defect_count(n) >= Xr: Accept H1

   The calculations above are implemented in the R-tool for Statistical
   Analysis [Rtool], in the add-on package for Cross-Validation via
   Sequential Testing (CVST) [CVST].

   Using the equations above, we can calculate the minimum number of
   packets (n) needed to accept H0 when x defects are observed.  For
   example, when x = 0:

   Xa = 0  = -h1 + sn
   and  n = h1 / s
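
   The following Python sketch (ours, for illustration) evaluates the
   SPRT boundaries defined above for alpha = beta = 0.05:

   import math

   def sprt_bounds(target_run_length, alpha=0.05, beta=0.05):
       p0 = 1.0 / target_run_length              # H0
       p1 = 4.0 / target_run_length              # H1
       k = math.log((p1 * (1 - p0)) / (p0 * (1 - p1)))
       h1 = math.log((1 - alpha) / beta) / k
       h2 = math.log((1 - beta) / alpha) / k
       s = math.log((1 - p0) / (1 - p1)) / k
       return h1, h2, s

   def sprt_outcome(defect_count, n, h1, h2, s):
       # Apply the stopping rules after n packets have been sent.
       if defect_count <= -h1 + s * n:
           return "accept H0"                    # pass
       if defect_count >= h2 + s * n:
           return "accept H1"                    # fail
       return "continue testing"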

6.2.2.1.  Alternate criteria for measuring run_length

   This alternate calculation was contributed by Alex Gilgur (Google).

   The probability of failure within an interval whose length is
   target_run_length is given by an exponential distribution with
   rate = 1 / target_run_length (a memoryless process).  The
   implication of this is that the predicted failure probability will
   differ depending on the total count of packets that have been
   through the pipe, the formula being:

   P(t1 < T < t2) = R(t1) - R(t2),

   where

 T = number of packets at which a failure will occur with probability P;
 t = number of packets:
 t1 = number of packets (e.g., when failure last occurred)
 t2 = t1 + target_run_length
 R = reliability (survival) function:
 R(t1) = exp (-t1/target_run_length)
 R(t2) = exp (-t2/target_run_length)

   The algorithm:

   import math

   def run_length_check(packet_responses, target_run_length):
       # Compare the observed failure probability against the value
       # predicted by the exponential (memoryless) model above.
       packet_counter = 0
       failed_packet_counter = 0
       for response in packet_responses:
           packet_counter += 1
           if response != "ACK":
               # The packet failed (it was lost or ECN marked)
               failed_packet_counter += 1
           p_fail_observed = failed_packet_counter / packet_counter

           upper_bound = packet_counter + target_run_length / 2
           lower_bound = packet_counter - target_run_length / 2

           r1 = math.exp(-upper_bound / target_run_length)
           r0 = math.exp(-max(0, lower_bound) / target_run_length)

           # P(t1 < T < t2) = R(t1) - R(t2), per the formula above
           p_fail_predicted = r0 - r1

           # Compare p_fail_observed vs. p_fail_predicted
           yield p_fail_observed, p_fail_predicted

   This algorithm allows accurate comparison of the observed failure
   probability with the corresponding values predicted based on a fixed
   target_failure_rate, which is equal to 1.0 / target_run_length.

6.2.3.  Reordering Tolerance

   All tests must be instrumented for packet level reordering [RFC4737].

   However, there is no consensus for how much reordering should be
   acceptable.  Over the last two decades the general trend has been to
   make protocols and applications more tolerant to reordering, in
   response to the gradual increase in reordering in the network.  This
   increase has been due to the gradual deployment of parallelism in the
   network, as a consequence of such technologies as multithreaded route
   lookups and Equal Cost Multipath (ECMP) routing.  These techniques to
   increase network parallelism are critical to enabling overall
   Internet growth to exceed Moore's Law.

   Section 5 of [RFC4737] proposed a metric that may be sufficient to
   designate isolated reordered packets as effectively lost, because
   TCP's retransmission response would be the same.

   [As a strawman, we propose the following:]

   TCP should be able to adapt to reordering as long as the reordering
   extent is no more than the maximum of one half window or 1 mS,
   whichever is larger.  Note that there is a fundamental tradeoff
   between tolerance to reordering and how quickly algorithms such as
   fast retransmit can repair losses.  Within this limit on reorder
   extent, there should be no bound on reordering density.

   NB: Traditional TCP implementations were not compatible with this
   metric; newer implementations still need to be evaluated.

   Parameters:
   Reordering displacement:  the maximum of one half of target_pipe_size
      or 1 mS.

6.3.  Test Qualifications

   This entire section needs to be completely overhauled. @@@@ It might
   be summarized as "needs to be specified in a FSTDS".

6.3.1.  Verify the Traffic Generation Accuracy

   [Excess detail for this doc.  To be summarized]

   In general, failing to accurately generate the test traffic has to
   be treated as an inconclusive test, since it must be presumed that
   the error in traffic generation might have affected the test
   outcome.  To the extent that the network itself had an effect on the
   traffic generation (e.g. in the standing queue tests) the
   possibility exists that allowing too large an error margin in the
   traffic generation might introduce feedback loops that compromise
   the vantage independence properties of these tests.

   Parameters:
   Maximum Data Rate Error  The permitted amount that the test traffic
      can differ from the rate specified for the current test.  This is
      a symmetrical bound.
   Maximum Data Rate Overage  The permitted amount that the test
      traffic can be above the rate specified for the current test.
   Maximum Data Rate Underage  The permitted amount that the test
      traffic can be below the rate specified for the current test.

6.3.2.  Verify the absence of cross traffic

   [Excess detail for this doc.  To be summarized]

   The proper treatment of cross traffic is different for different
   subpaths.  In general when testing infrastructure which is
   associated with only one subscriber, the test should be treated as
   inconclusive if that subscriber is active on the network.  However,
   for shared infrastructure managed by an ISP, the question at hand is
   likely to be testing if the ISP has sufficient total capacity.  In
   such cases the presence of cross traffic due to other subscribers is
   explicitly part of the network conditions and its effects are
   explicitly part of the test.

   These two cases do not cover all subpaths.  For example, WiFi which
   itself shares unmanaged channel space with other devices is unlikely
   to be suitable for any prescriptive measurement.

   Note that canceling tests due to load on subscriber lines may
   introduce sampling bias for testing other parts of the
   infrastructure.  For this reason tests that are scheduled but not
   run due to load should be treated as a special case of
   "inconclusive".

   Use passive packet or SNMP monitoring to verify that the traffic
   volume on the subpath agrees with the traffic generated by a test.
   Ideally this should be performed before, during and after each test.

   The goal is to provide quality assurance on the overall measurement
   process, and specifically to detect the following measurement
   failure: a user observes unexpectedly poor application performance,
   while the ISP observes that the access link is running at the rated
   capacity; both fail to observe that the user's computer has been
   infected by a virus which is spewing traffic as fast as it can.

   Parameters:
   Maximum Cross Traffic Data Rate  The amount of excess traffic
      permitted.  Note that this will be different for different tests.

   One possible method is an adaptation of: www-didc.lbl.gov/papers/
   SCNM-PAM03.pdf D Agarwal etal.  "An Infrastructure for Passive
   Network Monitoring of Application Data Streams".  Use the same
   technique as that paper to trigger the capture of SNMP statistics
   for the link.

6.3.3.  Additional test preconditions

   [Excess detail for this doc.  To be summarized]

   Send pre-load traffic as needed to activate radios with a sleep
   mode, or other "reactive network" elements (term defined in
   [draft-morton-ippm-2330-update-01]).

   Use the procedure above to confirm that the pre-test background
   traffic is low enough.

7.  Diagnostic Tests

   The diagnostic tests below are organized by traffic pattern: basic
   data rate and run length, standing queues, slowstart bursts, and
   sender rate bursts.  We also introduce some combined tests which are
   more efficient at the expense of conflating the signatures of
   different failures.

7.1.  Basic Data Rate and Run Length Tests

   We propose several versions of the basic data rate and run length
   test.  All measure the number of packets delivered between losses or
   ECN marks, using a data stream that is rate controlled at or below
   the target_data_rate.

   The tests below differ in how the data rate is controlled.  The data
   can be paced on a timer, or window controlled at full target data
   rate.  The first two tests implicitly confirm that sub_path has
   sufficient raw capacity to carry the target_data_rate.  They are
   recommended for relatively infrequent testing, such as an
   installation or auditing process.  The third, background run length,
   is a low rate test designed for ongoing monitoring for changes in
   subpath quality.

   All rely on the receiver accumulating packet delivery statistics as
   described in Section 6.2.2 to score the outcome:

   Pass: it is statistically significant that the observed run length is
   larger than the target_run_length.

   Fail: it is statistically significant that the observed run length is
   smaller than the target_run_length.

   A test is considered to be inconclusive if it failed to meet the data
   rate as specified below, meet the qualifications defined in
   Section 6.3 or neither run length statistical hypothesis was
   confirmed in the allotted test duration.

7.1.1.  Run Length at Paced Full Data Rate

   Confirm that the observed run length is at least the
   target_run_length while relying on a timer to send data at the
   target_rate using the procedure described in Section 6.1.1 with a
   burst size of 1 (single packets).

   The test is considered to be inconclusive if the packet transmission
   can not be accurately controlled for any reason.

7.1.2.  Run Length at Full Data Windowed Rate

   Confirm that the observed run length is at least the
   target_run_length while sending at an average rate equal to the
   target_data_rate, by controlling (or clamping) the window size of a
   conventional transport protocol to a fixed value computed from the
   properties of the test path, typically
   test_window=target_data_rate*test_RTT/target_MTU.

   Since losses and ECN marks generally cause transport protocols to at
   least temporarily reduce their data rates, this test is expected to
   be less precise about controlling its data rate.  It should not be
   considered inconclusive as long as at least some of the round trips
   reached the full target_data_rate, without incurring losses.  To pass
   this test the network MUST deliver target_pipe_size packets in
   target_RTT time without any losses or ECN marks at least once per two
   target_pipe_size round trips, in addition to meeting the run length
   statistical test.

7.1.3.  Background Run Length Tests

   The background run length is a low rate version of the target rate
   test above, designed for ongoing lightweight monitoring for
   changes in the observed subpath run length without disrupting users.
   It should be used in conjunction with one of the above full rate
   tests because it does not confirm that the subpath can support raw
   data rate.

   Existing loss metrics such as [RFC6673] might be appropriate for
   measuring background run length.

7.2.  Standing Queue tests

   These tests confirm that the bottleneck is well behaved across the
   onset of packet loss, which typically follows after the onset of
   queueing.  Well behaved generally means lossless for transient
   queues, but once the queue has been sustained for a sufficient period
   of time (or reaches a sufficient queue depth) there should be a small
   number of losses to signal to the transport protocol that it should
   reduce its window.  Losses that are too early can prevent the
   transport from averaging at the target_data_rate.  Losses that are
   too late indicate that the queue might be subject to bufferbloat
   [Bufferbloat] and inflict excess queuing delays on all flows sharing
   the bottleneck queue.  Excess losses make loss recovery problematic
   for the transport protocol.  Non-linear or erratic RTT fluctuations
   suggest poor interactions between the channel acquisition systems and
   the transport self clock.  All of the tests in this section use the
   same basic scanning algorithm but score the link on the basis of how
   well it avoids each of these problems.

   For some technologies the data might not be subject to increasing
   delays, in which case the data rate will vary with the window size
   all the way up to the onset of losses or ECN marks.  For these
   technologies, the discussion of queueing does not apply, but it is
   still required that the onset of losses (or ECN marks) be at an
   appropriate point and progressive.

   Use the procedure in Section 6.1.3 to sweep the window across the
   onset of queueing and the onset of loss.  The tests below all assume
   that the scan emulates standard additive increase and delayed ACK by
   incrementing the window by one packet for every 2*target_pipe_size
   packets delivered.  A scan can be divided into three regions: below
   the onset of queueing, a standing queue, and at or beyond the onset
   of loss.

   Below the onset of queueing the RTT is typically fairly constant, and
   the data rate varies in proportion to the window size.  Once the data
   rate reaches the link rate, the data rate becomes fairly constant,
   and the RTT increases in proportion to the window size.  The precise
   transition from one region to the other can be identified by the
   maximum network power, defined to be the ratio of the data rate over
   the RTT [POWER].
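
   For illustration, a minimal sketch (ours) that locates this
   transition from a completed scan, given per-window measurements of
   data rate and RTT:

   def max_power_window(scan_results):
       # scan_results maps window size (packets) to (data_rate, rtt)
       # pairs; network power is the ratio of data rate over RTT.
       return max(scan_results,
                  key=lambda w: scan_results[w][0] / scan_results[w][1])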

   For technologies that do not have conventional queues, start the scan
   at a window equal to the test_window, i.e. starting at the target
   rate, instead of the power point.

   If there is random background loss (e.g. bit errors, etc), precise
   determination of the onset of packet loss may require multiple scans.
   Above the onset of loss, all transport protocols are expected to
   experience periodic losses.  For the stiffened transport case they
   will be determined by the AQM algorithm in the network or the details
   of how the window increase function responds to loss.  For the
   standard transport case the details of periodic losses are typically
   dominated by the behavior of the transport protocol itself.

7.2.1.  Congestion Avoidance

   A link passes the congestion avoidance standing queue test if more
   than target_run_length packets are delivered between the power point
   (or test_window) and the first loss or ECN mark.  If this test is
   implemented using a standard congestion control algorithm with a
   clamp, it can be used in situ in the production internet as a
   capacity test.  For an example of such a test see [NPAD].

7.2.2.  Bufferbloat

   This test confirms that there is some mechanism to limit buffer
   occupancy (e.g. that prevents bufferbloat).  Note that this is not
   strictly a requirement for single stream bulk performance, however if
   there is no mechanism to limit buffer occupancy then a single stream
   with sufficient data to deliver is likely to cause the problems
   described in [RFC2309] and [Bufferbloat].  This may cause only minor
   symptoms for the dominant flow, but has the potential to make the
   link unusable for all other flows and applications.

   Pass if the onset of loss is before a standing queue has introduced
   more delay than twice the target_RTT, or other well defined limit.
   Note that there is not yet a model for how much standing queue is
   acceptable.  The factor of two chosen here reflects a rule of thumb.
   Note that in conjunction with the previous test, this test implies
   that the first loss should occur at a queueing delay which is between
   one and two times the target_RTT.

7.2.3.  Non excessive loss

   This test confirms that the onset of loss is not excessive.  Pass if
   losses are bounded by the fluctuations in the cross traffic, such
   that transient loads (bursts) do not cause dips in aggregate raw
   throughput, e.g. pass as long as the losses are no more bursty than
   would be expected from a simple drop tail queue.  Although this test
   could be made more precise it is really included here for pedantic
   completeness.

7.2.4.  Duplex Self Interference

   This engineering test confirms a bound on the interactions between
   the forward data path and the ACK return path.  Fail if the RTT rises
   by more than some fixed bound above the expected queueing time
   computed from the excess window divided by the link data rate.
   @@@@ This needs further testing.

7.3.  Slowstart tests

   These tests mimic slowstart: data is sent at twice the effective
   bottleneck rate to exercise the queue at the dominant bottleneck.

   They are deemed inconclusive if the elapsed time to send the data
   burst is not less than half of the time to receive the ACKs. (i.e.
   sending data too fast is ok, but sending it slower than twice the
   actual bottleneck rate as indicated by the ACKs is deemed
   inconclusive).  Space the bursts such that the average data rate is
   equal to the target_data_rate.

7.3.1.  Full Window slowstart test

   This is a capacity test to confirm that slowstart is not likely to
   exit prematurely.  Send slowstart bursts that are target_pipe_size
   total packets.

   Accumulate packet delivery statistics as described in Section 6.2.2
   to score the outcome.  Pass if it is statistically significant that
   the observed run length is larger than the target_run_length.  Fail
   if it is statistically significant that the observed run length is
   smaller than the target_run_length.

   Note that these are the same parameters as the Sender Full Window
   burst test, except the burst rate is at the slowstart rate, rather
   than the sender interface rate.

7.3.2.  Slowstart AQM test

   Do a continuous slowstart (send data continuously at slowstart_rate),
   until the first loss, stop, allow the network to drain and repeat,
   gathering statistics on the last packet delivered before the loss,
   the loss pattern, maximum observed RTT and window size.  Justify the
   results.  There is not currently sufficient theory justifying
   requiring any particular result, however design decisions that affect
   the outcome of this tests also affect how the network balances
   between long and short flows (the "mice and elephants" problem).

   This is an engineering test: It would be best performed on a
   quiescent network or testbed, since cross traffic has the potential
   to change the results.

7.4.  Sender Rate Burst tests

   These tests determine how well the network can deliver bursts sent
   at the sender's interface rate.  Note that this test most heavily
   exercises the front path, and is likely to include infrastructure
   that may be out of scope for a subscriber ISP.

   Also, there are several details that are not precisely defined.  For
   starters there is not a standard server interface rate.  1 Gb/s and
   10 Gb/s are very common today, but higher rates will become cost
   effective and can be expected to be dominant some time in the
   future.

   Current standards permit TCP to send a full window burst following
   an application pause.  Congestion Window Validation [RFC2861] is not
   required, but even if it was, it does not take effect until an
   application pause is longer than an RTO.  Since this is standard
   behavior, it is desirable that the network be able to deliver such
   bursts, otherwise application pauses will cause unwarranted losses.

   It is also understood in the application and serving community that
   interface rate bursts have a cost to the network that has to be
   balanced against other costs in the servers themselves.  For example
   TCP Segmentation Offload [TSO] reduces server CPU in exchange for
   larger network bursts, which increase the stress on network buffer
   memory.

   There is not yet theory to unify these costs or to provide a
   framework for trying to optimize global efficiency.  We do not yet
   have a model for how much the network should tolerate server rate
   bursts.  Some bursts must be tolerated by the network, but it is
   probably unreasonable to expect the network to be able to efficiently
   deliver all data as a series of bursts.

   For this reason, this is the only test for which we explicitly
   encourage derating.  A TDS should include a table of pairs of
   derating parameters: what burst size to use as a fraction of the
   target_pipe_size, and how much each burst size is permitted to
   reduce the run length, relative to the target_run_length. @@@@ Needs
   more work and experimentation.

7.5.  Combined Tests

   These tests are more efficient from a deployment/operational
   perspective, but may not be possible to diagnose if they fail.

7.5.1.  Sustained burst test

   Send target_pipe_size*derate sender interface rate bursts every
   target_RTT*derate, for derate between 0 and 1 (a sketch of this
   schedule appears after the list below).  Verify that the observed
   run length meets target_run_length.  Key observations:
   o  This test is subpath RTT invariant, as long as the tester can
      generate the required pattern.
   o  The subpath under test is expected to go idle for some fraction of
      the time: (subpath_data_rate-target_rate)/subpath_data_rate.
      Failing to do so suggests a problem with the procedure and an
      inconclusive test result.
   o  This test is more strenuous than the slowstart tests: they are not
      needed if the link passes this test with derate=1.
   o  A link that passes this test is likely to be able to sustain
      higher rates (close to subpath_data_rate) for paths with RTTs
      smaller than the target_RTT.  Offsetting this performance
      underestimation is part of the rationale behind permitting
      derating in general.

   o  This test can be implemented with standard instrumented TCP
      [RFC4898], using a specialized measurement application at one end
      and a minimal service at the other end [RFC 863, RFC 864].  It
      may require tweaks to the TCP implementation.  [MBMSource]
   o  This test is efficient to implement, since it does not require
      per-packet timers, and can make use of TSO in modern NIC hardware.
   o  This test is not totally sufficient: the standing window
      engineering tests are also needed to be sure that the link is well
      behaved at and beyond the onset of congestion.
   o  This one test can be proven to be the one capacity test to
      supplant them all.
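
   For illustration, a minimal sketch (ours) of the burst schedule for
   this test, assuming a send_burst(n) primitive that emits n back-to-
   back packets at the sender interface rate:

   import time

   def sustained_burst_test(send_burst, target_pipe_size, target_RTT,
                            derate=1.0, duration=60.0):
       # Send target_pipe_size*derate packet bursts every
       # target_RTT*derate seconds (times in seconds).
       burst = int(target_pipe_size * derate)
       headway = target_RTT * derate
       end = time.monotonic() + duration
       while time.monotonic() < end:
           start = time.monotonic()
           send_burst(burst)
           time.sleep(max(0.0, headway - (time.monotonic() - start)))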

7.5.2.  Live Streaming Media

   Model Based Metrics can be implemented as a side effect of serving
   any non-throughput maximizing traffic*, such as streaming media,
   with some additional controls and instrumentation in the servers.
   The essential requirement is that the traffic be constrained such
   that even with arbitrary application pauses, bursts and data rate
   fluctuations, the traffic stays within the envelope defined by all
   of the individual tests described above, for a specific TDS.

   If the serving_data_rate is less than or equal to the
   target_data_rate and the serving_RTT (the RTT between the sender and
   client) is less than the target_RTT, this constraint is most easily
   implemented by clamping the transport window size to:

   serving_window_clamp=target_data_rate*serving_RTT/
   (target_MTU-header_overhead)

   The serving_window_clamp will limit both the serving data rate and
   burst sizes to be no larger than the procedures in Section 7.1.2 and
   Section 7.4 or Section 7.5.1.  Since the serving RTT is smaller than
   the target_RTT, the worst case bursts that might be generated under
   these conditions will be smaller than called for by Section 7.4 and
   the sender rate burst sizes are implicitly derated by the
   serving_window_clamp divided by the target_pipe_size at the very
   least.  (The traffic might be smoother than specified by the sender
   interface rate bursts test.)
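
   Expressed as a sketch (ours; rate in bits per second, sizes in
   bytes):

   def serving_window_clamp(target_data_rate, serving_rtt, target_mtu,
                            header_overhead):
       # Window clamp, in packets, per the formula above.
       payload_bits = (target_mtu - header_overhead) * 8
       return int(target_data_rate * serving_rtt / payload_bits)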

   Note that if the application tolerates fluctuations in its actual
   data rate (say by use of a playout buffer) it is important that the
   target_data_rate be above the actual average rate needed by the
   application so it can recover after transient pauses caused by
   congestion or the application itself.

   Alternatively the sender data rate and bursts might be explicitly
   controlled by a host shaper or pacing at the sender.  This would
   provide better control and work for serving_RTTs that are larger
   than the target_RTT, but it is substantially more complicated to
   implement.  With this technique, any traffic might be used for
   measurement.

   * Note that this technique might be applied to any content, if users
   are willing to tolerate reduced data rate to inhibit TCP equilibrium
   behavior.

8.  Examples

   In this section we present example TDSes for a few performance
   specifications: 5 Mb/s over a 50 ms path, 1 Mb/s over a 100 ms path
   and 100 Mb/s over a 200 ms path.

8.1.  Near serving HD streaming video

   Today the best quality HD video requires slightly less than 5 Mb/s
   [HDvideo].  Since it is desirable to serve such content locally, we
   assume that the content will be within 50 mS, which is enough to
   cover continental Europe or either US coast from a single site.

                         5 Mb/s over a 50 ms path

                +----------------------+-------+---------+
                | End to End Parameter | Value | units   |
                +----------------------+-------+---------+
                | target_rate          | 5     | Mb/s    |
                | target_RTT           | 50    | ms      |
                 | target_MTU           | 1500  | bytes   |
                | target_pipe_size     | 22    | packets |
                | target_run_length    | 1452  | packets |
                +----------------------+-------+---------+

                                  Table 1

   This example uses the most conservative TCP model and no derating.

8.2.  Far serving SD streaming video

   Standard Quality video typically fits in 1 Mb/s [SDvideo].  This can
   be reasonably delivered via longer paths with larger RTTs.  We
   assume 100 mS.

                          1 Mb/s over a 100 ms path

                +----------------------+-------+---------+
                | End to End Parameter | Value | units   |
                +----------------------+-------+---------+
                | target_rate          | 1     | Mb/s    |
                | target_RTT           | 100   | ms      |
                 | target_MTU           | 1500  | bytes   |
                | target_pipe_size     | 9     | packets |
                | target_run_length    | 243   | packets |
                +----------------------+-------+---------+

                                  Table 2

   This example uses the most conservative TCP model and no derating.

8.3.  Bulk delivery of remote scientific data

   This example corresponds to 100 Mb/s bulk scientific data over a
   moderately long RTT.  Note that the target_run_length is infeasible
   for most networks.

                        100 Mb/s over a 200 ms path

               +----------------------+---------+---------+
               | End to End Parameter | Value   | units   |
               +----------------------+---------+---------+
               | target_rate          | 100     | Mb/s    |
               | target_RTT           | 200     | ms      |
                | target_MTU           | 1500    | bytes   |
               | target_pipe_size     | 1741    | packets |
               | target_run_length    | 9093243 | packets |
               +----------------------+---------+---------+

                                  Table 3
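
   In each of these tables the target_run_length follows directly from
   the target_pipe_size via the formula in Section 6.2.2:
   3*(22)^2 = 1452, 3*(9)^2 = 243 and 3*(1741)^2 = 9093243 packets,
   respectively.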

9.  Validation

   Since some aspects of the models are likely to be too conservative,
   Section 5.2 and Section 5.3 permit alternate protocol models and
   test parameter derating.  In exchange for this latitude in the
   modelling process, we require demonstrations that such a TDS can
   robustly detect links that will prevent authentic applications using
   state-of-the-art protocol implementations from meeting the specified
   performance targets.  This correctness criterion is potentially
   difficult to prove, because it implicitly requires validating a TDS
   against all possible links and subpaths.

   We suggest two strategies, both of which should be applied: first,
   publish a fully open description of the TDS, including what
   assumptions were used and how it was derived, such that the research
   community can evaluate these decisions, test them and comment on
   their applicability; and second, demonstrate that applications
   running over an infinitesimally passing testbed do meet the
   performance targets.

   An infinitesimally passing testbed resembles an epsilon-delta proof
   in calculus.  Construct a test network such that all of the
   individual load tests of the TDS pass by small (infinitesimal)
   margins, and demonstrate that a variety of authentic applications
   running over real TCP implementations (or other protocols as
   appropriate) meet the end-to-end target parameters over such a
   network.  The workloads should include multiple types of streaming
   media and transaction oriented short flows (e.g. synthetic web
   traffic).

   For example, using the HD streaming video TDS described in
   Section 8.1, the bottleneck data rate should be 5 Mb/s, the per
   packet random background loss probability should be 1/1453, for a
   run length of 1452 packets, the bottleneck queue should be 22
   packets and the front path should have just enough buffering to
   withstand 22 packet line rate bursts.  We want every one of the TDS
   tests to fail if we slightly increase the relevant test parameter,
   so for example sending a 23 packet slowstart burst should cause
   excess (possibly deterministic) packet drops at the dominant queue
   at the bottleneck.  On this infinitesimally passing network it
   should be possible for a real application using a stock TCP
   implementation in the vendor's default configuration to attain
   5 Mb/s over a 50 mS path.

   @@@@ Need to better specify the workload: both short and long flows.

   The most difficult part of setting up such a testbed is arranging
   for each subpath to infinitesimally pass the individual tests.  We
   suggest two approaches: constraining resources in the network
   devices by configuring them not to use all available resources
   (limiting available buffer space or data rate); and preloading
   subpaths with cross traffic.  Note that it is important that a
   single environment be constructed which infinitesimally passes all
   tests at the same time, otherwise there is a chance that TCP can
   exploit extra latitude in some parameters (such as data rate) to
   partially compensate for constraints in other parameters (queue
   space, or vice versa).

   To the extent that a TDS validated according to these procedures is
   used to inform public dialog, it should be fully publicly
   documented, including the details of the tests, what assumptions
   were used and how it was derived.  All of the details of the
   validation experiment should also be published with sufficient
   detail for the experiments to be replicated by other researchers.
   All components should either be open source or fully described
   proprietary implementations that are available to the research
   community.

   TODO: paper proving the validation process.

   This work is inspired by open tools running on an open platform,
   using open techniques to collect open data.  See Measurement Lab
   [http://www.measurementlab.net/].

10.  Acknowledgements

   Ganga Maguluri suggested the statistical test for measuring loss
   probability in the target run length.  Alex Gilgur helped with the
   statistics and contributed an alternate model.

   Meredith Whittaker for improving the clarity of the communications.

11.  Informative References

   [RFC2309]  Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering,
              S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G.,
              Partridge, C., Peterson, L., Ramakrishnan, K., Shenker,
              S., Wroclawski, J., and L. Zhang, "Recommendations on
              Queue Management and Congestion Avoidance in the
              Internet", RFC 2309, April 1998.

   [RFC2330]  Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
              "Framework for IP Performance Metrics", RFC 2330,
              May 1998.

   [RFC2861]  Handley, M., Padhye, J., and S. Floyd, "TCP Congestion
              Window Validation", RFC 2861, June 2000.

   [RFC3148]  Mathis, M. and M. Allman, "A Framework for Defining
              Empirical Bulk Transfer Capacity Metrics", RFC 3148,
              July 2001.

   [RFC3465]  Allman, M., "TCP Congestion Control with Appropriate Byte
              Counting (ABC)", RFC 3465, February 2003.

   [RFC4898]  Mathis, M., Heffner, J., and R. Raghunarayan, "TCP
              Extended Statistics MIB", RFC 4898, May 2007.

   [RFC4737]  Morton, A., Ciavattone, L., Ramachandran, G., Shalunov,
              S., and J. Perser, "Packet Reordering Metrics", RFC 4737,
              November 2006.

   [RFC5681]  Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
              Control", RFC 5681, September 2009.

   [RFC5835]  Morton, A. and S. Van den Berghe, "Framework for Metric
              Composition", RFC 5835, April 2010.

   [RFC6049]  Morton, A. and E. Stephan, "Spatial Composition of
              Metrics", RFC 6049, January 2011.

   [RFC6673]  Morton, A., "Round-Trip Packet Loss Metrics", RFC 6673,
              August 2012.

   [I-D.morton-ippm-lmap-path]
              Bagnulo, M., Burbridge, T., Crawford, S., Eardley, P., and
              A. Morton, "A Reference Path and Measurement Points for
              LMAP", draft-morton-ippm-lmap-path-00 (work in progress),
              January 2013.

   [MSMO97]   Mathis, M., Semke, J., Mahdavi, J., and T. Ott, "The
              Macroscopic Behavior of the TCP Congestion Avoidance
              Algorithm", Computer Communications Review volume 27,
              number3, July 1997.

   [WPING]    Mathis, M., "Windowed Ping: An IP Level Performance
              Diagnostic", INET 94, June 1994.

   [mpingSource]
              Fan, X., Mathis, M., and D. Hamon, "Git Repository for
              mping: An IP Level Performance Diagnostic", Sept 2013,
              <https://github.com/m-lab/mping>.

   [MBMSource]
              Hamon, D., "Git Repository for Model Based Metrics",
              Sept 2013, <https://github.com/m-lab/MBM>.

   [Pathdiag]
              Mathis, M., Heffner, J., O'Neil, P., and P. Siemsen,
              "Pathdiag: Automated TCP Diagnosis", Passive and Active
              Measurement , June 2008.

   [BScope]   Broswerscope, "Browserscope Network tests", Sept 2012,
              <http://www.browserscope.org/?category=network>.

   [StatQC]   Montgomery, D., "Introduction to Statistical Quality
              Control - 2nd ed.", ISBN 0-471-51988-X, 1990.

   [Rtool]    R Development Core Team, "R: A language and environment
              for statistical computing", R Foundation for Statistical
              Computing, Vienna, Austria, ISBN 3-900051-07-0,
              <http://www.R-project.org/>, 2011.

   [CVST]     Krueger, T. and M. Braun, "R package: Fast Cross-
              Validation via Sequential Testing", version 0.1, 11 2012.

   [LMCUBIC]  Ledesma Goyzueta, R. and Y. Chen, "A Deterministic Loss
              Model Based Analysis of CUBIC, IEEE International
              Conference on Computing, Networking and Communications
              (ICNC), E-ISBN : 978-1-4673-5286-4", January 2013.

Appendix A.  Model Derivations

   The reference target_run_length described in Section 5.2 is based
   on very conservative assumptions: that all excess window above
   target_pipe_size contributes to a standing queue that raises the
   RTT, and that classic Reno congestion control with delayed ACKs is
   in effect.  In this section we provide two alternative calculations
   using different assumptions.

   It may seem out of place to allow such latitude in a measurement
   standard, but the section provides offsetting requirements.

   The estimates provided by these models make the most sense if
   network performance is viewed logarithmically.  In the operational
   Internet, data rates span more than 8 orders of magnitude, RTT
   spans more than 3 orders of magnitude, and loss probability spans
   at least 8 orders of magnitude.  When viewed logarithmically (as in
   decibels), these correspond to 80 dB of dynamic range.  On an 80 dB
   scale, a 3 dB error is less than 4% of the scale, even though it
   might represent a factor of 2 in the raw, untransformed parameter.
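
   As a purely illustrative aside, the arithmetic behind the decibel
   framing above can be checked with a few lines of Python (the
   numbers are only examples, not part of any model):

      # Non-normative check of the decibel argument above.
      import math

      factor_of_two_db = 10 * math.log10(2)   # ~3.01 dB
      dynamic_range_db = 80                   # 8 orders of magnitude
      print("factor of 2    = %.2f dB" % factor_of_two_db)
      print("share of scale = %.1f%%" %
            (100 * factor_of_two_db / dynamic_range_db))   # ~3.8%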

   This document gives a lot of latitude for calculating
   target_run_length, however people designing a TDS should consider
   the effect of their choices on the ongoing tussle about the
   relevance of "TCP friendliness" as an appropriate model for
   Internet capacity allocation.  Choosing a target_run_length that is
   substantially smaller than the reference target_run_length
   specified in Section 5.2 strengthens the argument that it may be
   appropriate to abandon "TCP friendliness" as the Internet fairness
   model.  This gives developers incentive and permission to develop
   even more aggressive applications and protocols, for example by
   increasing the number of connections that they open concurrently.

A.1.  Queueless Reno

   In Section 5.2 it is assumed that the target rate is the same as
   the link rate, and any excess window causes a standing queue at the
   bottleneck.  This might be representative of a non-shared access
   link.  An alternative situation would be a heavily aggregated
   subpath where individual flows do not significantly contribute to
   the queueing delay, and losses are determined by monitoring the
   average data rate, for example by the use of a virtual queue as in
   [AFD].  In such a scheme the RTT is constant and TCP's AIMD
   congestion control causes the data rate to fluctuate in a sawtooth.
   If the traffic is being controlled in a manner that is consistent
   with the metrics here, the goal would be to make the actual average
   rate equal to the target_data_rate.

   We can derive a model for Reno TCP with delayed ACKs under the
   above set of assumptions: for some value of Wmin, the window will
   sweep from Wmin to 2*Wmin in 2*Wmin round trip times.  Unlike the
   queueing case, where Wmin = target_pipe_size, we want the average
   of Wmin and 2*Wmin to be the target_pipe_size, so the average rate
   is the target rate.  Thus we want Wmin = (2/3)*target_pipe_size.

   Between losses each sawtooth delivers
   (1/2)(Wmin + 2*Wmin)(2*Wmin) = 3*Wmin^2 packets in 2*Wmin round
   trip times.

   Substituting these together we get:

   target_run_length = (4/3)(target_pipe_size^2)

   Note that this is always 44% of the reference run length.  This
   makes sense because under the assumptions in Section 5.2 the AIMD
   sawtooth caused a queue at the bottleneck, which raised the
   effective RTT by 50%.
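
   The following Python sketch is a non-normative illustration of the
   calculation above.  It assumes the Section 5.2 reference run length
   is 3*(target_pipe_size^2), which is consistent with the 44% ratio
   noted above; see Section 5.2 for the authoritative formula.

      # Non-normative sketch of the queueless Reno model above.

      def queueless_reno_run_length(target_pipe_size):
          w_min = (2.0 / 3.0) * target_pipe_size
          # Each sawtooth delivers 3*Wmin^2 packets between losses.
          return 3.0 * w_min ** 2   # = (4/3)*target_pipe_size^2

      def reference_run_length(target_pipe_size):
          # Assumed Section 5.2 reference formula.
          return 3.0 * target_pipe_size ** 2

      tps = 300    # example target_pipe_size in packets
      print(queueless_reno_run_length(tps))      # 120000 packets
      print(queueless_reno_run_length(tps) /
            reference_run_length(tps))           # ~0.44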

A.2.  CUBIC

   CUBIC has three operating regions.  The model for the expected value
   of window size derived in [LMCUBIC] assumes operation in the
   "concave" region only, which is a non-TCP friendly region for long-
   lived flows.  The authors make the following assumptions: packet loss
   probability, p, is independent and periodic, losses occur one at a
   time, and they are true losses due to tail drop or corruption.  This
   definition of p aligns very well with our definition of
   target_run_length and the requirement for progressive loss (AQM).

   Although CUBIC window increase depends on continuous time, the
   authors transform the time to reach the maximum Window size in
   terms of RTT and a parameter for the multiplicative rate decrease
   on observing loss, beta (whose default value is 0.2 in CUBIC).  The
   expected value of Window size, E[W], is also dependent on C, a
   parameter of CUBIC that determines its window-growth aggressiveness
   (values from 0.01 to 4).

   E[W] = ( C*(RTT/p)^3 * ((4-beta)/beta) )^-4

   and, further assuming Poisson arrival, the mean throughput, x, is
   x = E[W]/RTT

   We note that under these conditions (deterministic single losses),
   the value of E[W] is always greater than 0.8 of the maximum window
   size ~= reference_run_length. (as far as I can tell)

Appendix B.  Complex Queueing

   For many network technologies simple queueing models do not apply:
   the network schedules, thins or otherwise alters the timing of ACKs
   and data, generally to raise the efficiency of the channel allocation
   process when confronted with relatively widely spaced small ACKs.
   These efficiency strategies are ubiquitous for half duplex, wireless
   and broadcast media.

   Altering the ACK stream generally has two consequences: it raises
   the effective bottleneck data rate, causing slowstart to burst at
   higher rates (possibly as high as the sender's interface rate), and
   it effectively raises the RTT by the average time that the ACKs
   were delayed.  The first effect can be partially mitigated by
   reclocking ACKs once they are beyond the bottleneck on the return
   path to the sender; however, this further raises the effective RTT.

   The most extreme example of this sort of behavior would be a half
   duplex channel that is not released as long as the end point
   currently holding the channel has pending traffic.  Such
   environments cause self clocked protocols under full load to revert
   to extremely inefficient stop and wait behavior, where they send an
   entire window of data as a single burst, followed by the entire
   window of ACKs on the return path.

   If a particular end-to-end path contains a link or device that
   alters the ACK stream, then the entire path from the sender up to
   the bottleneck must be tested at the burst parameters implied by
   the ACK scheduling algorithm.  The most important parameter is the
   Effective Bottleneck Data Rate, which is the average rate at which
   the ACKs advance snd.una.  Note that thinning the ACKs (relying on
   the cumulative nature of seg.ack to permit discarding some ACKs)
   implies an effectively infinite bottleneck data rate.  It is
   important to note that due to the self clock, ill conceived channel
   allocation mechanisms can increase the stress on upstream links in
   a long path.
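
   As a non-normative sketch, the Effective Bottleneck Data Rate might
   be estimated from a sender-side trace of ACK arrivals as follows;
   the trace format and helper name are hypothetical and not part of
   any defined test:

      # Non-normative sketch.  ack_trace is a hypothetical list of
      # (arrival_time_seconds, snd_una_bytes) pairs observed at the
      # sender, in arrival order.

      def effective_bottleneck_rate(ack_trace):
          (t_first, una_first) = ack_trace[0]
          (t_last, una_last) = ack_trace[-1]
          # Average rate at which ACKs advance snd.una (bytes/s).
          return (una_last - una_first) / (t_last - t_first)

      # Thinned or batched ACKs advance snd.una in large jumps, so
      # the short-term rate (which sets slowstart burst sizes) can
      # be far higher than this long-term average.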

   Holding data or ACKs for channel allocation or other reasons (such as
   error correction) always raises the effective RTT relative to the
   minimum delay for the path.  Therefore it may be necessary to replace
   target_RTT in the calculation in Section 5.2 by an effective_RTT,
   which includes the target_RTT reflecting the fixed part of the choice. path
   plus a term to account for the extra delays introduced by these
   mechanisms.
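
   A minimal sketch of this adjustment, assuming the usual bandwidth-
   delay-product form for the pipe size (the exact definitions are in
   Section 5.2), might look like the following; the parameter names
   and example numbers are illustrative only:

      # Non-normative sketch: recompute the pipe size with an
      # effective_RTT that adds the average hold time introduced by
      # channel allocation (or error correction) to the fixed
      # target_RTT.  Assumes a bandwidth*delay product in packets;
      # see Section 5.2 for the authoritative definitions.

      def effective_pipe_size(target_data_rate, target_RTT,
                              hold_time, target_MTU):
          effective_RTT = target_RTT + hold_time
          # target_data_rate in bits/s, times in seconds, MTU in bytes.
          return target_data_rate * effective_RTT / (8 * target_MTU)

      # Example: 10 Mb/s, 50 ms fixed RTT, 20 ms average hold,
      # 1500 byte MTU -> about 58 packets instead of about 42.
      print(effective_pipe_size(10e6, 0.050, 0.020, 1500))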

Appendix C.  Version Control

   Formatted: Fri Feb 14 14:07:33 PST 2014

Authors' Addresses

   Matt Mathis
   Google, Inc
   1600 Amphitheater Parkway
   Mountain View, California  94043
   USA

   Email: mattmathis@google.com

   Al Morton
   AT&T Labs
   200 Laurel Avenue South
   Middletown, NJ  07748
   USA

   Phone: +1 732 420 1571
   Email: acmorton@att.com
   URI:   http://home.comcast.net/~acmacm/