Network Working Group                                             X. Zhu
Internet-Draft                                                    R. Pan
Intended status: Experimental                               M. A. Ramalho
Expires: April 11, 2016                                           S. Mena
                                                              P. E. Jones
                                                                    J. Fu
                                                            Cisco Systems
                                                              S. D'Aronco
                                                                     EPFL
                                                              C. Ganzhorn
                                                          October 9, 2015


     NADA: A Unified Congestion Control Scheme for Real-Time Media
                        draft-ietf-rmcat-nada-01

Abstract

   This document describes NADA (network-assisted dynamic adaptation),
   a novel congestion control scheme for interactive real-time media
   applications, such as video conferencing.  In the proposed scheme,
   the sender regulates its sending rate based on either implicit or
   explicit congestion signaling, in a unified approach.  The scheme
   can benefit from explicit congestion notification (ECN) markings
   from network nodes.  It also maintains consistent sender behavior in
   the absence of such markings, by reacting to queuing delays and
   packet losses instead.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 11, 2016.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Table of Contents

   1.  Introduction
   2.  Terminology
   3.  System Overview
   4.  Core Congestion Control Algorithm
     4.1.  Mathematical Notations
     4.2.  Receiver-Side Algorithm
     4.3.  Sender-Side Algorithm
   5.  Practical Implementation of NADA
     5.1.  Receiver-Side Operation
       5.1.1.  Estimation of one-way delay and queuing delay
       5.1.2.  Estimation of packet loss/marking ratio
       5.1.3.  Estimation of receiving rate
     5.2.  Sender-Side Operation
       5.2.1.  Rate shaping buffer
       5.2.2.  Adjusting video target rate and sending rate
   6.  Discussions and Further Investigations
     6.1.  Choice of delay metrics
     6.2.  Method for delay, loss, and marking ratio estimation
     6.3.  Impact of parameter values
     6.4.  Sender-based vs. receiver-based calculation
     6.5.  Incremental deployment
   7.  Implementation Status
   8.  IANA Considerations
   9.  Acknowledgements
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Appendix A.  Network Node Operations
     A.1.  Default behavior of drop tail queues
     A.2.  RED-based ECN marking
     A.3.  Random Early Marking with Virtual Queues
   Authors' Addresses

1.  Introduction

   Interactive real-time media applications introduce a unique set of
   challenges for congestion control.  Unlike TCP, the mechanism used
   for real-time media needs to adapt quickly to instantaneous
   bandwidth changes, accommodate fluctuations in the output of video
   encoder rate control, and cause low queuing delay over the network.
   An ideal scheme should also make effective use of all types of
   congestion signals, including packet loss, queuing delay, and
   explicit congestion notification (ECN) [RFC3168] markings.  The
   requirements for the congestion control algorithm are outlined in
   [I-D.ietf-rmcat-cc-requirements].

   This document describes an experimental congestion control scheme
   called network-assisted dynamic adaptation (NADA).  The NADA design
   benefits from explicit congestion control signals (e.g., ECN
   markings) from the network, yet also operates when only implicit
   congestion indicators (delay and/or loss) are available.  In
   addition, it supports weighted bandwidth sharing among competing
   video flows.  The signaling mechanism consists of standard RTP
   timestamp [RFC3550] and standard RTCP feedback reports.

2.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].
3.  System Overview

   Figure 1 shows the end-to-end system for real-time media transport
   that NADA operates in.

    +---------+  r_vin   +--------+        +--------+      +----------+
    |  Media  |<---------|  RTP   |        |Network |      |   RTP    |
    | Encoder |=========>| Sender |=======>|  Node  |=====>| Receiver |
    +---------+  r_vout  +--------+ r_send +--------+      +----------+
                             /|\                                 |
                              |                                  |
                              +----------------------------------+
                                   RTCP Feedback Report

                         Figure 1: System Overview

   o  Media encoder with rate control capabilities.  It encodes the
      source media stream into an RTP stream at a target bit rate
      r_vin.  The actual output rate from the encoder r_vout may
      fluctuate around the target r_vin.  In addition, the encoder can
      only change its bit rate at rather coarse time intervals, e.g.,
      once every 0.5 seconds.

   o  RTP sender: responsible for calculating the NADA reference rate
      based on network congestion indicators (delay, loss, or ECN
      marking reports from the receiver), for updating the video
      encoder with a new target rate r_vin, and for regulating the
      actual sending rate r_send accordingly.  The RTP sender also
      provides an RTP timestamp for each outgoing packet.

   o  RTP receiver: responsible for measuring and estimating end-to-end
      delay based on sender RTP timestamp, packet loss and ECN marking
      ratios, as well as receiving rate (r_recv) of the flow.  It
      calculates the aggregated congestion signal (x_n) that accounts
      for queuing delay, ECN marking, and packet losses, and determines
      the mode for sender rate adaptation (rmode) based on whether the
      flow has encountered any standing non-zero congestion.  The
      receiver sends periodic RTCP reports back to the sender,
      containing values of x_n, rmode, and r_recv.

   o  Network node with several modes of operation.  The system can
      work with the default behavior of a simple drop tail queue.  It
      can also benefit from advanced AQM features such as PIE,
      FQ-CoDel, RED-based ECN marking, and PCN marking using a token
      bucket algorithm.  Note that network node operation is out of
      scope for the design of NADA.
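   For illustration purposes only, the following non-normative Python
   sketch summarizes the per-flow information carried in the periodic
   RTCP feedback reports described above.  The class and field names
   are illustrative and do not define any message format.

   # Non-normative sketch: per-flow feedback state carried in periodic
   # RTCP reports from the NADA receiver back to the sender.
   from dataclasses import dataclass

   @dataclass
   class NadaFeedbackReport:
       rmode: int     # rate update mode: 0 = accelerated ramp-up,
                      #                   1 = gradual update
       x_n: float     # aggregate congestion signal, in seconds
       r_recv: float  # receiving rate of the flow, in bps

   # The sender reacts to each report by recalculating its reference
   # rate r_n and updating the encoder target rate r_vin and the
   # sending rate r_send accordingly.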
4.  Core Congestion Control Algorithm

   Like TCP-Friendly Rate Control (TFRC) [Floyd-CCR00] [RFC5348], NADA
   is a rate-based congestion control algorithm.  In its simplest form,
   the sender reacts to the collection of network congestion indicators
   in the form of an aggregated congestion signal, and operates in one
   of two modes:

   o  Accelerated ramp-up: when the bottleneck is deemed to be
      underutilized, the rate increases multiplicatively with respect
      to the rate of previously successful transmissions.  The rate
      increase multiplier (gamma) is calculated based on observed
      round-trip-time and target feedback interval, so as to limit
      self-inflicted queuing delay.

   o  Gradual rate update: in the presence of a non-zero aggregate
      congestion signal, the sending rate is adjusted in reaction to
      both its value (x_n) and its change in value (x_diff).

   This section introduces the list of mathematical notations and
   describes the core congestion control algorithm at the sender and
   receiver, respectively.  Additional details on recommended practical
   implementations are described in Section 5.1 and Section 5.2.

4.1.  Mathematical Notations

   This section summarizes the list of variables and parameters used in
   the NADA algorithm.

   +--------------+-------------------------------------------------+
   | Notation     | Variable Name                                   |
   +--------------+-------------------------------------------------+
   | t_curr       | Current timestamp                               |
   | t_last       | Last time sending/receiving a feedback          |
   | delta        | Observed interval between current and previous  |
   |              | feedback reports: delta = t_curr-t_last         |
   | r_n          | Reference rate based on network congestion      |
   | r_send       | Sending rate                                    |
   | r_recv       | Receiving rate                                  |
   | r_vin        | Target rate for video encoder                   |
   | r_vout       | Output rate from video encoder                  |
   | d_base       | Estimated baseline delay                        |
   | d_fwd        | Measured and filtered one-way delay             |
   | d_n          | Estimated queuing delay                         |
   | d_tilde      | Equivalent delay after non-linear warping       |
   | p_mark       | Estimated packet ECN marking ratio              |
   | p_loss       | Estimated packet loss ratio                     |
   | x_n          | Aggregate congestion signal                     |
   | x_prev       | Previous value of aggregate congestion signal   |
   | x_diff       | Change in aggregate congestion signal w.r.t.    |
   |              | its previous value: x_diff = x_n - x_prev       |
   | rmode        | Rate update mode: (0 = accelerated ramp-up;     |
   |              | 1 = gradual update)                             |
   | gamma        | Rate increase multiplier in accelerated ramp-up |
   |              | mode                                            |
   | rtt          | Estimated round-trip-time at sender             |
   | buffer_len   | Rate shaping buffer occupancy measured in bytes |
   +--------------+-------------------------------------------------+

                      Figure 2: List of variables.
   +--------------+----------------------------------+----------------+
   | Notation     | Parameter Name                   | Default Value  |
   +--------------+----------------------------------+----------------+
   | PRIO         | Weight of priority of the flow   | 1.0            |
   | RMIN         | Minimum rate of application      | 150 Kbps       |
   |              | supported by media encoder       |                |
   | RMAX         | Maximum rate of application      | 1.5 Mbps       |
   |              | supported by media encoder       |                |
   | X_REF        | Reference congestion level       | 20ms           |
   | KAPPA        | Scaling parameter for gradual    | 0.5            |
   |              | rate update calculation          |                |
   | ETA          | Scaling parameter for gradual    | 2.0            |
   |              | rate update calculation          |                |
   | TAU          | Upper bound of RTT in gradual    | 500ms          |
   |              | rate update calculation          |                |
   | DELTA        | Target feedback interval         | 100ms          |
   | LOGWIN       | Observation window in time for   | 500ms          |
   |              | calculating packet summary       |                |
   |              | statistics at receiver           |                |
   | QEPS         | Threshold for determining queuing| 10ms           |
   |              | delay build up at receiver       |                |
   +..............+..................................+................+
   | QTH          | Delay threshold for non-linear   | 100ms          |
   |              | warping                          |                |
   | QMAX         | Delay upper bound for non-linear | 400ms          |
   |              | warping                          |                |
   | DLOSS        | Delay penalty for loss           | 1.0s           |
   | DMARK        | Delay penalty for ECN marking    | 200ms          |
   +..............+..................................+................+
   | GAMMA_MAX    | Upper bound on rate increase     | 20%            |
   |              | ratio for accelerated ramp-up    |                |
   | QBOUND       | Upper bound on self-inflicted    | 50ms           |
   |              | queuing delay during ramp up     |                |
   +..............+..................................+................+
   | FPS          | Frame rate of incoming video     | 30             |
   | BETA_S       | Scaling parameter for modulating | 0.1            |
   |              | outgoing sending rate            |                |
   | BETA_V       | Scaling parameter for modulating | 0.1            |
   |              | video encoder target rate        |                |
   | ALPHA        | Smoothing factor in exponential  | 0.1            |
   |              | smoothing of packet loss and     |                |
   |              | marking ratios                   |                |
   +--------------+----------------------------------+----------------+

                 Figure 3: List of algorithm parameters.
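   For illustration purposes only, the following non-normative Python
   sketch collects the default parameter values of Figure 3 in one
   place, expressed in seconds and bits per second; the choice of units
   and constant names is an implementation assumption, not a
   requirement of this document.

   # Non-normative sketch: default NADA parameters from Figure 3,
   # expressed in seconds and bits per second.
   PRIO      = 1.0          # weight of priority of the flow
   RMIN      = 150_000.0    # minimum rate supported by media encoder
   RMAX      = 1_500_000.0  # maximum rate supported by media encoder
   X_REF     = 0.020        # reference congestion level (20 ms)
   KAPPA     = 0.5          # scaling parameter for gradual rate update
   ETA       = 2.0          # scaling parameter for gradual rate update
   TAU       = 0.500        # upper bound of RTT in gradual rate update
   DELTA     = 0.100        # target feedback interval
   LOGWIN    = 0.500        # observation window at receiver
   QEPS      = 0.010        # queuing delay build-up threshold
   QTH       = 0.100        # delay threshold for non-linear warping
   QMAX      = 0.400        # delay upper bound for non-linear warping
   DLOSS     = 1.000        # delay penalty for loss
   DMARK     = 0.200        # delay penalty for ECN marking
   GAMMA_MAX = 0.20         # upper bound on ramp-up rate increase ratio
   QBOUND    = 0.050        # self-inflicted queuing delay bound
   FPS       = 30.0         # frame rate of incoming video
   BETA_S    = 0.1          # scaling for sending rate modulation
   BETA_V    = 0.1          # scaling for encoder target rate modulation
   ALPHA     = 0.1          # smoothing factor for loss/marking ratios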
4.2.  Receiver-Side Algorithm

   The receiver-side algorithm can be outlined as below:

   On initialization:
      set d_base = +INFINITY
      set p_loss = 0
      set p_mark = 0
      set r_recv = 0
      set both t_last and t_curr as current time

   On receiving a media packet:
      obtain current timestamp t_curr
      obtain from packet header sending time stamp t_sent
      obtain one-way delay measurement: d_fwd = t_curr - t_sent
      update baseline delay: d_base = min(d_base, d_fwd)
      update queuing delay:  d_n = d_fwd - d_base
      update packet loss ratio estimate p_loss
      update packet marking ratio estimate p_mark
      update measurement of receiving rate r_recv

   On time to send a new feedback report (t_curr - t_last > DELTA):
      calculate non-linear warping of delay d_tilde if packet loss
         exists
      calculate aggregate congestion signal x_n
      determine mode of rate adaptation for sender: rmode
      send RTCP feedback report containing values of: rmode, x_n,
         and r_recv
      update t_last = t_curr

   In order for a delay-based flow to hold its ground when competing
   against loss-based flows (e.g., loss-based TCP), it is important to
   distinguish between different levels of observed queuing delay.  For
   instance, a moderate queuing delay value below 100ms is likely self-
   inflicted or induced by other delay-based flows, whereas a high
   queuing delay value of several hundreds of milliseconds may indicate
   the presence of a loss-based flow that does not refrain from
   increased delay.

   When packet losses are observed, the estimated queuing delay follows
   a non-linear warping inspired by the delay-adaptive congestion
   window backoff policy in [Budzisz-TON11]:

              /  d_n,                  if d_n < QTH;
              |
              |       (QMAX - d_n)^4
   d_tilde = <   QTH ----------------, if QTH < d_n < QMAX;         (1)
              |       (QMAX - QTH)^4
              |
              \  0,                    otherwise.

   Here, the queuing delay value is unchanged when it is below the
   first threshold QTH; it is scaled down following a non-linear curve
   when its value falls between QTH and QMAX; above QMAX, the high
   queuing delay value no longer counts toward congestion control.
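   For illustration purposes only, the non-linear warping in (1) can be
   sketched in Python as below, with delays in seconds and QTH and QMAX
   at their default values from Figure 3.  The function name is
   illustrative.

   # Non-normative sketch of the delay warping in (1).  Delays are in
   # seconds; QTH and QMAX take their default values from Figure 3.
   QTH  = 0.100   # delay threshold for non-linear warping
   QMAX = 0.400   # delay upper bound for non-linear warping

   def warp_delay(d_n: float) -> float:
       """Return d_tilde, the equivalent delay after warping."""
       if d_n < QTH:
           return d_n
       if d_n < QMAX:
           return QTH * ((QMAX - d_n) ** 4) / ((QMAX - QTH) ** 4)
       return 0.0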
   The aggregate congestion signal is:

      x_n = d_tilde + p_mark*DMARK + p_loss*DLOSS.                  (2)

   Here, DMARK is a prescribed delay penalty associated with ECN
   markings and DLOSS is a prescribed delay penalty associated with
   packet losses.  The value of DLOSS and DMARK does not depend on
   configurations at the network node, but does assume that ECN
   markings, when available, occur before losses.  Furthermore, the
   values of DLOSS and DMARK need to be set consistently across all
   NADA flows for them to compete fairly.

   In the absence of packet marking and losses, the value of x_n
   reduces to the observed queuing delay d_n.  In that case the NADA
   algorithm operates in the regime of delay-based adaptation.

   Given observed per-packet delay and loss information, the receiver
   is also in a good position to determine whether the network is
   underutilized and recommend the corresponding rate adaptation mode
   for the sender.  The criteria for operating in accelerated ramp-up
   mode are:

   o  No recent packet losses within the observation window LOGWIN; and

   o  No build-up of queuing delay: d_fwd-d_base < QEPS for all
      previous delay samples within the observation window LOGWIN.

   Otherwise the algorithm operates in gradual update mode.

4.3.  Sender-Side Algorithm

   The sender-side algorithm is outlined as follows:

   on initialization:
      set r_n = RMIN
      set rtt = 0
      set x_prev = 0
      set t_last and t_curr as current time

   on receiving feedback report:
      obtain current timestamp: t_curr
      obtain values of rmode, x_n, and r_recv from feedback report
      update estimation of rtt
      measure feedback interval: delta = t_curr - t_last
      if rmode == 0:
         update r_n following accelerated ramp-up rules
      else:
         update r_n following gradual update rules
      clip rate r_n within the range of [RMIN, RMAX]
      x_prev = x_n
      t_last = t_curr

   In accelerated ramp-up mode, the rate r_n is updated as follows:

                                QBOUND
      gamma = min(GAMMA_MAX, -----------)                           (3)
                              rtt+DELTA

      r_n = (1+gamma) r_recv                                        (4)

   The rate increase multiplier gamma is calculated as a function of
   the upper bound of self-inflicted queuing delay (QBOUND), round-
   trip-time (rtt), and target feedback interval DELTA.  It has a
   maximum value of GAMMA_MAX.  The rationale behind (3)-(4) is that
   the longer it takes for the sender to observe self-inflicted queuing
   delay build-up, the more conservative the sender should be in
   increasing its rate, hence the smaller the rate increase multiplier.
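   For illustration purposes only, the accelerated ramp-up rules in (3)
   and (4) can be sketched in Python as below, with all times in
   seconds and QBOUND, GAMMA_MAX, and DELTA at their default values
   from Figure 3.  The function name is illustrative.

   # Non-normative sketch of the accelerated ramp-up update in (3)-(4).
   QBOUND    = 0.050   # upper bound on self-inflicted queuing delay
   GAMMA_MAX = 0.20    # upper bound on rate increase ratio
   DELTA     = 0.100   # target feedback interval

   def ramp_up_rate(r_recv: float, rtt: float) -> float:
       """Return the updated reference rate r_n in ramp-up mode."""
       gamma = min(GAMMA_MAX, QBOUND / (rtt + DELTA))   # equation (3)
       return (1.0 + gamma) * r_recv                    # equation (4)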
   In gradual update mode, the rate r_n is updated as:

      x_offset = x_n - PRIO*X_REF*RMAX/r_n                          (5)

      x_diff   = x_n - x_prev                                       (6)

                         delta    x_offset
      r_n = r_n - KAPPA*-------*------------*r_n
                          TAU       TAU

                              x_diff
                - KAPPA*ETA*---------*r_n                           (7)
                               TAU

   The rate changes in proportion to the previous rate decision.  It is
   affected by two terms: offset of the aggregate congestion signal
   from its value at equilibrium (x_offset) and its change (x_diff).
   Calculation of x_offset depends on the maximum rate of the flow
   (RMAX), its weight of priority (PRIO), as well as a reference
   congestion signal (X_REF).  The value of X_REF is chosen so that the
   maximum rate of RMAX can be achieved when the observed congestion
   signal level is below PRIO*X_REF.

   At equilibrium, the aggregated congestion signal stabilizes at
   x_n = PRIO*X_REF*RMAX/r_n.  This ensures that when multiple flows
   share the same bottleneck and observe a common value of x_n, their
   rates at equilibrium will be proportional to their respective
   priority levels (PRIO) and maximum rate (RMAX).

   As mentioned in the sender-side algorithm, the final rate is clipped
   within the dynamic range specified by the application:

      r_n = min(r_n, RMAX)                                          (8)

      r_n = max(r_n, RMIN)                                          (9)

   The above operations ignore many practical issues such as clock
   synchronization between sender and receiver, filtering of noise in
   delay measurements, and base delay expiration.  These will be
   addressed in later sections describing practical implementation of
   the NADA algorithm.
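   For illustration purposes only, the gradual update rules in (5)-(7),
   together with the clipping in (8) and (9), can be sketched in Python
   as below, using the default parameter values from Figure 3 and
   ignoring the practical issues noted above.  The function name is
   illustrative.

   # Non-normative sketch of the gradual rate update in (5)-(9).
   PRIO, X_REF     = 1.0, 0.020
   RMIN, RMAX      = 150_000.0, 1_500_000.0
   KAPPA, ETA, TAU = 0.5, 2.0, 0.500

   def gradual_update(r_n: float, x_n: float, x_prev: float,
                      delta: float) -> float:
       """Return the updated reference rate r_n in gradual mode."""
       x_offset = x_n - PRIO * X_REF * RMAX / r_n        # equation (5)
       x_diff = x_n - x_prev                             # equation (6)
       r_n = (r_n
              - KAPPA * (delta / TAU) * (x_offset / TAU) * r_n
              - KAPPA * ETA * (x_diff / TAU) * r_n)      # equation (7)
       return max(RMIN, min(RMAX, r_n))                  # (8)-(9)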
5.  Practical Implementation of NADA

5.1.  Receiver-Side Operation

   The receiver continuously monitors end-to-end per-packet statistics
   in terms of delay, loss, and/or ECN marking ratios.  It then
   aggregates all forms of congestion indicators into the form of an
   equivalent delay and periodically reports this back to the sender.
   In addition, the receiver tracks the receiving rate of the flow and
   includes that in the feedback message.

5.1.1.  Estimation of one-way delay and queuing delay

   The delay estimation process in NADA follows a similar approach as
   in earlier delay-based congestion control schemes, such as LEDBAT
   [RFC6817].  NADA estimates the forward delay as having a constant
   base delay component plus a time varying queuing delay component.
   The base delay is estimated as the minimum value of one-way delay
   observed over a relatively long period (e.g., tens of minutes),
   whereas the individual queuing delay value is taken to be the
   difference between one-way delay and base delay.

   The individual sample values of queuing delay should be further
   filtered against various non-congestion-induced noise, such as
   spikes due to processing "hiccup" at the network nodes.  The current
   implementation employs a 15-tap minimum filter over per-packet
   queuing delay estimates.

5.1.2.  Estimation of packet loss/marking ratio

   The receiver detects packet losses via gaps in the RTP sequence
   numbers of received packets.  Packets arriving out-of-order are
   discarded, and count towards losses.  The instantaneous packet loss
   ratio p_inst is estimated as the ratio between the number of missing
   packets over the number of total transmitted packets within the
   recent observation window LOGWIN.  The packet loss ratio p_loss is
   obtained after exponential smoothing:

      p_loss = ALPHA*p_inst + (1-ALPHA)*p_loss.                    (10)

   The filtered result is reported back to the sender as the observed
   packet loss ratio p_loss.

   Estimation of packet marking ratio p_mark follows the same procedure
   as above.  It is assumed that ECN marking information at the IP
   header can be passed to the transport layer by the receiving
   endpoint.

5.1.3.  Estimation of receiving rate

   It is fairly straightforward to estimate the receiving rate r_recv.
   NADA maintains a recent observation window with time span of LOGWIN,
   and simply divides the total size of packets arriving during that
   window over the time span.  The receiving rate (r_recv) is included
   as part of the feedback report.
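   For illustration purposes only, the following non-normative Python
   sketch combines the estimators of Sections 5.1.1 to 5.1.3: it tracks
   the baseline delay, smooths the loss ratio as in (10), and computes
   the receiving rate over the observation window LOGWIN.  The class,
   method, and variable names are illustrative.

   # Non-normative sketch of receiver-side estimation (Sections
   # 5.1.1-5.1.3).  Times are in seconds, rates in bits per second.
   from collections import deque

   ALPHA, LOGWIN = 0.1, 0.500

   class ReceiverEstimator:
       def __init__(self):
           self.d_base = float("inf")  # baseline (minimum) one-way delay
           self.p_loss = 0.0           # smoothed packet loss ratio
           self.arrivals = deque()     # (t_recv, size_bits) within LOGWIN

       def on_packet(self, t_recv, t_sent, size_bits):
           d_fwd = t_recv - t_sent                # one-way delay sample
           self.d_base = min(self.d_base, d_fwd)  # update baseline delay
           d_n = d_fwd - self.d_base              # queuing delay estimate
           self.arrivals.append((t_recv, size_bits))
           while self.arrivals and t_recv - self.arrivals[0][0] > LOGWIN:
               self.arrivals.popleft()
           return d_n

       def update_loss(self, n_missing, n_total):
           # Equation (10): exponential smoothing of the instantaneous
           # loss ratio observed over the window LOGWIN.
           p_inst = n_missing / max(1, n_total)
           self.p_loss = ALPHA * p_inst + (1 - ALPHA) * self.p_loss
           return self.p_loss

       def receiving_rate(self):
           # Total bits received within LOGWIN divided by the time span.
           return sum(bits for _, bits in self.arrivals) / LOGWIN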
5.2.  Sender-Side Operation

   Figure 4 provides a detailed view of the NADA sender.  Upon receipt
   of an RTCP feedback report from the receiver, the NADA sender
   calculates the reference rate r_n as specified in Section 4.3.  It
   further adjusts both the target rate for the live video encoder
   r_vin and the sending rate r_send over the network based on the
   updated value of r_n and rate shaping buffer occupancy buffer_len.

   The NADA sender behavior stays the same in the presence of all types
   of congestion indicators: delay, loss, and ECN marking.  This
   unified approach allows a graceful transition of the scheme as the
   network shifts dynamically between light and heavy congestion
   levels.

                          +----------------+
                          |   Reference    |
          RTCP report     |     Rate       |
         ---------------->|   Calculator   |
                          +-------+--------+
                                  | r_n
                     +------------+------------+
                     |                         |
                    \|/                       \|/
          +-----------------+        +----------------+
          | Calculate Video |        |   Calculate    |
          |   Target Rate   |        |  Sending Rate  |
          +--------+--------+        +--------+-------+
                   | r_vin             /|\    | r_send
                   |        buffer_len  |     |
                  \|/                   |    \|/
            +-------------+  r_vout   -------------
            |    Video    |---------> ||||||||||||| ============>
            |   Encoder   |           -------------   RTP packets
            +-------------+        Rate Shaping Buffer

                     Figure 4: NADA Sender Structure

5.2.1.  Rate shaping buffer

   The operation of the live video encoder is out of the scope of the
   design for the congestion control scheme in NADA.  Instead, its
   behavior is treated as a black box.

   A rate shaping buffer is employed to absorb any instantaneous
   mismatch between encoder rate output r_vout and regulated sending
   rate r_send.  Its current level of occupancy is measured in bytes
   and is denoted as buffer_len.

   A large rate shaping buffer contributes to higher end-to-end delay,
   which may harm the performance of real-time media communications.
   Therefore, the sender has a strong incentive to prevent the rate
   shaping buffer from building up.  The mechanisms adopted are:

   o  To deplete the rate shaping buffer faster by increasing the
      sending rate r_send; and

   o  To limit incoming packets of the rate shaping buffer by reducing
      the video encoder target rate r_vin.
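   For illustration purposes only, the occupancy of the rate shaping
   buffer can be tracked as in the following non-normative Python
   sketch; the class and method names are illustrative, and the exact
   bookkeeping is an implementation choice.

   # Non-normative sketch: tracking the rate shaping buffer occupancy
   # (buffer_len, in bytes).  Packets from the encoder enter the
   # buffer; packets transmitted onto the network leave it.
   class RateShapingBuffer:
       def __init__(self):
           self.buffer_len = 0   # occupancy in bytes

       def on_encoder_output(self, nbytes: int):
           # encoder output at rate r_vout enters the buffer
           self.buffer_len += nbytes

       def on_packet_sent(self, nbytes: int):
           # packets drained at the regulated sending rate r_send
           self.buffer_len = max(0, self.buffer_len - nbytes)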
5.2.2.  Adjusting video target rate and sending rate

   The target rate for the live video encoder deviates from the network
   congestion control rate r_n based on the level of occupancy in the
   rate shaping buffer:

      r_vin  = r_n - BETA_V*8*buffer_len*FPS.                      (11)

   The actual sending rate r_send is regulated in a similar fashion:

      r_send = r_n + BETA_S*8*buffer_len*FPS.                      (12)

   In (11) and (12), the first term indicates the rate calculated from
   network congestion feedback alone.  The second term indicates the
   influence of the rate shaping buffer.  A large rate shaping buffer
   nudges the encoder target rate slightly below -- and the sending
   rate slightly above -- the reference rate r_n.

   Intuitively, the amount of extra rate offset needed to completely
   drain the rate shaping buffer within the duration of a single video
   frame is given by 8*buffer_len*FPS, where FPS stands for the frame
   rate of the video.  The scaling parameters BETA_V and BETA_S can be
   tuned to balance between the competing goals of maintaining a small
   rate shaping buffer and deviating the system from the reference rate
   point.
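   For illustration purposes only, the adjustments in (11) and (12) can
   be sketched in Python as below, with buffer_len in bytes, rates in
   bps, and BETA_V, BETA_S, and FPS at their default values from
   Figure 3.  The function name is illustrative.

   # Non-normative sketch of the rate adjustments in (11)-(12).
   BETA_V, BETA_S, FPS = 0.1, 0.1, 30.0

   def adjust_rates(r_n: float, buffer_len: int):
       """Return (r_vin, r_send) given the reference rate r_n (bps)
       and rate shaping buffer occupancy buffer_len (bytes)."""
       r_vin  = r_n - BETA_V * 8 * buffer_len * FPS   # equation (11)
       r_send = r_n + BETA_S * 8 * buffer_len * FPS   # equation (12)
       return r_vin, r_send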
6.  Discussions and Further Investigations

6.1.  Choice of delay metrics

   The current design works with relative one-way-delay (OWD) as the
   main indication of congestion.  The value of the relative OWD is
   obtained by maintaining the minimum value of observed OWD over a
   relatively long time horizon and subtracting that out from the
   observed absolute OWD value.  Such an approach cancels out the fixed
   difference between the sender and receiver clocks.  It has been
   widely adopted by other delay-based congestion control approaches
   such as [RFC6817].  As discussed in [RFC6817], the time horizon for
   tracking the minimum OWD needs to be chosen with care: it must be
   long enough for an opportunity to observe the minimum OWD with zero
   queuing delay along the path, and sufficiently short so as to timely
   reflect "true" changes in minimum OWD introduced by route changes
   and other rare events.

   The potential drawback in relying on relative OWD as the congestion
   signal is that when multiple flows share the same bottleneck, the
   flow arriving late at the network experiencing a non-empty queue may
   mistakenly consider the standing queuing delay as part of the fixed
   path propagation delay.  This will lead to slightly unfair bandwidth
   sharing among the flows.

   Alternatively, one could move the per-packet statistical handling to
   the sender instead and use relative round-trip-time (RTT) in lieu of
   relative OWD, assuming that per-packet acknowledgements are
   available.  The main drawback of the RTT-based approach is the noise
   in the measured delay in the reverse direction.

   Note that the choice of either delay metric (relative OWD vs. RTT)
   involves no change in the proposed rate adaptation algorithm.
   Therefore, comparing the pros and cons regarding which delay metric
   to adopt can be kept as an orthogonal direction of investigation.

6.2.  Method for delay, loss, and marking ratio estimation

   Like other delay-based congestion control schemes, performance of
   NADA depends on the accuracy of its delay measurement and estimation
   module.  Appendix A in [RFC6817] provides an extensive discussion on
   this aspect.

   The current recommended practice of simply applying a 15-tap minimum
   filter suffices in guarding against processing delay outliers
   observed in wired connections.  For wireless connections with a
   higher packet delay variation (PDV), more sophisticated techniques
   on de-noising, outlier rejection, and trend analysis may be needed.

   More sophisticated methods in packet loss ratio calculation, such as
   that adopted by [Floyd-CCR00], will likely be beneficial.  These
   alternatives are currently under investigation.
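   For illustration purposes only, the 15-tap minimum filter mentioned
   above can be sketched in Python as below; the class and method names
   are illustrative.

   # Non-normative sketch: 15-tap minimum filter over per-packet
   # queuing delay estimates.
   from collections import deque

   class MinFilter:
       def __init__(self, taps: int = 15):
           self.samples = deque(maxlen=taps)

       def update(self, d_n: float) -> float:
           """Add a queuing delay sample; return the filtered value."""
           self.samples.append(d_n)
           return min(self.samples)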
6.3.  Impact of parameter values

   In the gradual rate update mode, the parameter TAU indicates the
   upper bound of round-trip-time (RTT) in the feedback control loop.
   Typically, the observed feedback interval delta is close to the
   target feedback interval DELTA, and the relative ratio of delta/TAU
   versus ETA dictates the relative strength of influence from the
   aggregate congestion signal offset term (x_offset) versus its recent
   change (x_diff), respectively.  These two terms are analogous to the
   integral and proportional terms in a proportional-integral (PI)
   controller.  The recommended choice of TAU=500ms, DELTA=100ms and
   ETA=2.0 corresponds to a relative ratio of 1:10 between the gains of
   the integral and proportional terms.  Consequently, the rate
   adaptation is mostly driven by the change in the congestion signal
   with a long-term shift towards its equilibrium value driven by the
   offset term.  Finally, the scaling parameter KAPPA determines the
   overall speed of the adaptation and needs to strike a balance
   between responsiveness and stability.

   The choice of the target feedback interval DELTA needs to strike the
   right balance between timely feedback and low RTCP feedback message
   counts.  A target feedback interval of DELTA=100ms is recommended,
   corresponding to a feedback bandwidth of 16Kbps with 200 bytes per
   feedback message -- less than 0.1% overhead for a 1 Mbps flow.
   Furthermore, both simulation studies and frequency-domain analysis
   have established that a feedback interval below 250ms will not break
   up the feedback control loop of NADA congestion control.

   In calculating the non-linear warping of delay in (1), the current
   design uses fixed values of QTH and QMAX.  It is possible to adapt
   the value of both based on past observations of queuing delay in the
   presence of packet losses.

   In calculating the aggregate congestion signal x_n, the choice of
   DMARK and DLOSS influences the steady-state packet loss/marking
   ratio experienced by the flow at a given available bandwidth.
   Higher values of DMARK and DLOSS result in lower steady-state loss/
   marking ratios, but are more susceptible to the impact of individual
   packet loss/marking events.  While the values of DMARK and DLOSS are
   fixed and predetermined in the current design, a scheme for
   automatically tuning these values based on desired bandwidth sharing
   behavior in the presence of other competing loss-based flows (e.g.,
   loss-based TCP) is under investigation.

   [Editor's note: Choice of start value: is this in scope of
   congestion control, or should this be decided by the application?]

6.4.  Sender-based vs. receiver-based calculation

   In the current design, the aggregated congestion signal x_n is
   calculated at the receiver, keeping the sender operation completely
   independent of the form of actual network congestion indications
   (delay, loss, or marking).  Alternatively, one can move the logics
   of (1) and (2) to the sender.  Such an approach requires slightly
   higher overhead in the feedback messages, which should contain
   individual fields on queuing delay (d_n), packet loss ratio
   (p_loss), packet marking ratio (p_mark), receiving rate (r_recv),
   and recommended rate adaptation mode (rmode).
6.5.  Incremental deployment

   One nice property of NADA is the consistent video endpoint behavior
   irrespective of network node variations.  This facilitates gradual,
   incremental adoption of the scheme.

   To start off with, the proposed congestion control mechanism can be
   implemented without any explicit support from the network, and
   relies solely on observed one-way delay measurements and packet loss
   ratios as implicit congestion signals.

   When ECN is enabled at the network nodes with RED-based marking, the
   receiver can fold its observations of ECN markings into the
   calculation of the equivalent delay.  The sender can react to these
   explicit congestion signals without any modification.

   Ultimately, networks equipped with proactive marking based on token
   bucket level metering can reap the additional benefits of zero
   standing queues and lower end-to-end delay and work seamlessly with
   existing senders and receivers.

7.  Implementation Status

   The NADA scheme has been implemented in the [ns-2] and [ns-3]
   simulation platforms.  Extensive ns-2 simulation evaluations of an
   earlier version of the draft are documented in [Zhu-PV13].
   Evaluation results of the current draft over several test cases in
   [I-D.ietf-rmcat-eval-test] have been presented at recent IETF
   meetings [IETF-90][IETF-91].

   The scheme has also been implemented and evaluated in a lab setting
   as described in [IETF-90].  Preliminary evaluation results of NADA
   in single-flow and multi-flow scenarios have been presented in
   [IETF-91].

8.  IANA Considerations

   This document makes no request of IANA.

9.  Acknowledgements

   The authors would like to thank Randell Jesup, Luca De Cicco, Piers
   O'Hanlon, Ingemar Johansson, Stefan Holmer, Cesar Ilharco Magalhaes,
   Safiqul Islam, Mirja Kuhlewind, and Karen Elisabeth Egede Nielsen
   for their valuable questions and comments on earlier versions of
   this draft.

10.  References

10.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <http://www.rfc-editor.org/info/rfc2119>.

   [RFC3168]  Ramakrishnan, K., Floyd, S., and D. Black, "The Addition
              of Explicit Congestion Notification (ECN) to IP",
              RFC 3168, DOI 10.17487/RFC3168, September 2001,
              <http://www.rfc-editor.org/info/rfc3168>.

   [RFC3550]  Schulzrinne, H., Casner, S., Frederick, R., and V.
              Jacobson, "RTP: A Transport Protocol for Real-Time
              Applications", STD 64, RFC 3550, DOI 10.17487/RFC3550,
              July 2003, <http://www.rfc-editor.org/info/rfc3550>.

   [I-D.ietf-rmcat-eval-criteria]
              Singh, V. and J. Ott, "Evaluating Congestion Control for
              Interactive Real-time Media", draft-ietf-rmcat-eval-
              criteria-03 (work in progress), March 2015.

   [I-D.ietf-rmcat-eval-test]
              Sarker, Z., Singh, V., Zhu, X., and M. Ramalho, "Test
              Cases for Evaluating RMCAT Proposals", draft-ietf-rmcat-
              eval-test-02 (work in progress), September 2015.

   [I-D.ietf-rmcat-cc-requirements]
              Jesup, R. and Z. Sarker, "Congestion Control Requirements
              for Interactive Real-Time Media", draft-ietf-rmcat-cc-
              requirements-09 (work in progress), December 2014.
10.2.  Informative References

   [RFC2309]  Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering,
              S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G.,
              Partridge, C., Peterson, L., Ramakrishnan, K., Shenker,
              S., Wroclawski, J., and L. Zhang, "Recommendations on
              Queue Management and Congestion Avoidance in the
              Internet", RFC 2309, DOI 10.17487/RFC2309, April 1998,
              <http://www.rfc-editor.org/info/rfc2309>.

   [RFC5348]  Floyd, S., Handley, M., Padhye, J., and J. Widmer, "TCP
              Friendly Rate Control (TFRC): Protocol Specification",
              RFC 5348, DOI 10.17487/RFC5348, September 2008,
              <http://www.rfc-editor.org/info/rfc5348>.

   [RFC6660]  Briscoe, B., Moncaster, T., and M. Menth, "Encoding Three
              Pre-Congestion Notification (PCN) States in the IP Header
              Using a Single Diffserv Codepoint (DSCP)", RFC 6660,
              DOI 10.17487/RFC6660, July 2012,
              <http://www.rfc-editor.org/info/rfc6660>.

   [RFC6817]  Shalunov, S., Hazel, G., Iyengar, J., and M. Kuehlewind,
              "Low Extra Delay Background Transport (LEDBAT)",
              RFC 6817, DOI 10.17487/RFC6817, December 2012,
              <http://www.rfc-editor.org/info/rfc6817>.

   [Floyd-CCR00]
              Floyd, S., Handley, M., Padhye, J., and J. Widmer,
              "Equation-based Congestion Control for Unicast
              Applications", ACM SIGCOMM Computer Communications
              Review, vol. 30, no. 4, pp. 43-56, October 2000.

   [Budzisz-TON11]
              Budzisz, L., Stanojevic, R., Schlote, A., Baker, F., and
              R. Shorten, "On the Fair Coexistence of Loss- and Delay-
              Based TCP", IEEE/ACM Transactions on Networking, vol. 19,
              no. 6, pp. 1811-1824, December 2011.

   [Zhu-PV13] Zhu, X. and R. Pan, "NADA: A Unified Congestion Control
              Scheme for Low-Latency Interactive Video", in Proc. IEEE
              International Packet Video Workshop (PV'13), San Jose,
              CA, USA, December 2013.

   [ns-2]     "The Network Simulator - ns-2",
              <http://www.isi.edu/nsnam/ns/>.

   [ns-3]     "The Network Simulator - ns-3",
              <https://www.nsnam.org/>.

   [IETF-90]  Zhu, X., Ramalho, M., Ganzhorn, C., Jones, P., and R.
              Pan, "NADA Update: Algorithm, Implementation, and Test
              Case Evaluation Results", July 2014,
              <https://tools.ietf.org/agenda/90/slides/
              slides-90-rmcat-6.pdf>.

   [IETF-91]  Zhu, X., Pan, R., Ramalho, M., Mena, S., Ganzhorn, C.,
              Jones, P., and S. D'Aronco, "NADA Algorithm Update and
              Test Case Evaluations", November 2014,
              <http://www.ietf.org/proceedings/interim/2014/11/09/
              rmcat/slides/slides-interim-2014-rmcat-1-2.pdf>.

Appendix A.  Network Node Operations

   NADA can work with different network queue management schemes and
   does not assume any specific network node operation.  As an example,
   this appendix describes three variants of queue management behavior
   at the network node, leading to either implicit or explicit
   congestion signals.

   In all three flavors described below, the network queue operates
   with the simple first-in-first-out (FIFO) principle.
   There is no need to maintain per-flow state.  The system can scale
   easily with a large number of video flows and at high link capacity.

A.1.  Default behavior of drop tail queues

   In a conventional network with drop tail or RED queues, congestion
   is inferred from the estimation of end-to-end delay and/or packet
   loss.  Packet drops at the queue are detected at the receiver, and
   contribute to the calculation of the aggregated congestion signal
   x_n.  No special action is required at the network node.

A.2.  RED-based ECN marking

   In this mode, the network node randomly marks the ECN field in the
   IP packet header following the Random Early Detection (RED)
   algorithm [RFC2309].  Calculation of the marking probability
   involves the following steps:

   on packet arrival:
      update smoothed queue size q_avg as:
         q_avg = w*q + (1-w)*q_avg.

      calculate marking probability p as:

             /  0,                      if q < q_lo;
             |
             |        q_avg - q_lo
         p = <  p_max*--------------,   if q_lo <= q < q_hi;
             |         q_hi - q_lo
             |
             \  1,                      if q >= q_hi.

   Here, q_lo and q_hi correspond to the low and high thresholds of
   queue occupancy.  The maximum marking probability is p_max.

   The ECN marking events will contribute to the calculation of an
   equivalent delay x_n at the receiver.  No changes are required at
   the sender.

A.3.  Random Early Marking with Virtual Queues

   Advanced network nodes may support random early marking based on a
   token bucket algorithm originally designed for Pre-Congestion
   Notification (PCN) [RFC6660].  The ECN bit in the IP header of each
   packet is marked randomly, with a marking probability derived from
   the token bucket state.  The target link utilization is set as 90%;
   the marking probability is designed to grow linearly with the token
   bucket size when it varies between 1/3 and 2/3 of the full token
   bucket limit:

   upon packet arrival:
      meter packet against token bucket (r,b);

      update token level b_tk;

      calculate the marking probability as:

             /  0,                      if b-b_tk < b_lo;
             |
             |         b-b_tk-b_lo
         p = <  p_max*--------------,   if b_lo <= b-b_tk < b_hi;
             |          b_hi-b_lo
             |
             \  1,                      if b-b_tk >= b_hi.

   Here, the token bucket lower and upper limits are denoted by b_lo
   and b_hi, respectively.  The parameter b indicates the size of the
   token bucket.  The parameter r is chosen to be below capacity,
   resulting in slight under-utilization of the link.  The maximum
   marking probability is p_max.

   The ECN marking events will contribute to the calculation of an
   equivalent delay x_n at the receiver.  No changes are required at
   the sender.  The virtual queuing mechanism from the PCN-based
   marking algorithm will lead to additional benefits such as zero
   standing queues.
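   For illustration purposes only, the marking probability calculations
   of A.2 and A.3 can be sketched in Python as below.  The function
   names are illustrative, and the thresholds (q_lo, q_hi, b_lo, b_hi)
   and maximum marking probability p_max are configuration parameters
   of the network node.

   # Non-normative sketch of the marking probability calculations in
   # A.2 (RED-based ECN marking) and A.3 (token bucket based marking).
   def red_marking_prob(q, q_avg, q_lo, q_hi, p_max):
       # A.2: thresholds compared against the queue size q, linear
       # term computed from the smoothed queue size q_avg.
       if q < q_lo:
           return 0.0
       if q < q_hi:
           return p_max * (q_avg - q_lo) / (q_hi - q_lo)
       return 1.0

   def vq_marking_prob(b, b_tk, b_lo, b_hi, p_max):
       # A.3: marking probability from the token bucket occupancy
       # b_tk of a bucket of size b.
       depth = b - b_tk
       if depth < b_lo:
           return 0.0
       if depth < b_hi:
           return p_max * (depth - b_lo) / (b_hi - b_lo)
       return 1.0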
Authors' Addresses

   Xiaoqing Zhu
   Cisco Systems
   12515 Research Blvd., Building 4
   Austin, TX  78759
   USA

   Email: xiaoqzhu@cisco.com


   Rong Pan
   Cisco Systems
   3625 Cisco Way
   San Jose, CA  95134
   USA

   Email: ropan@cisco.com


   Michael A. Ramalho
   Cisco Systems, Inc.
   8000 Hawkins Road
   Sarasota, FL  34241
   USA

   Phone: +1 919 476 2038
   Email: mramalho@cisco.com


   Sergio Mena de la Cruz
   Cisco Systems
   EPFL, Quartier de l'Innovation, Batiment E
   Ecublens, Vaud  1015
   Switzerland

   Email: semena@cisco.com


   Paul E. Jones
   Cisco Systems
   7025 Kit Creek Rd.
   Research Triangle Park, NC  27709
   USA

   Email: paulej@packetizer.com


   Jiantao Fu
   Cisco Systems
   707 Tasman Drive
   Milpitas, CA  95035
   USA

   Email: jianfu@cisco.com


   Stefano D'Aronco
   Ecole Polytechnique Federale de Lausanne
   EPFL STI IEL LTS4, ELD 220 (Batiment ELD), Station 11
   Lausanne  CH-1015
   Switzerland

   Email: stefano.daronco@epfl.ch


   Charles Ganzhorn
   7900 International Drive, International Plaza, Suite 400
   Bloomington, MN  55425
   USA

   Email: charles.ganzhorn@gmail.com