DualQ Coupled AQM for Low Latency, Low Loss and Scalable Throughput

Koen De Schepper, Nokia Bell Labs, Antwerp, Belgium
   koen.de_schepper@nokia.com  https://www.bell-labs.com/usr/koen.de_schepper
Bob Briscoe, Simula Research Lab
   ietf@bobbriscoe.net  http://bobbriscoe.net/
Olga Bondarenko, Simula Research Lab, Lysaker, Norway
   olgabnd@gmail.com  https://www.simula.no/people/olgabo
Ing-jyh Tsang, Nokia Bell Labs, Antwerp, Belgium
   ing-jyh.tsang@nokia.com

Transport Area working group (tsvwg)                      Internet-Draft

Data Centre TCP (DCTCP) was designed to provide predictably low
queuing latency, near-zero loss, and throughput scalability using
explicit congestion notification (ECN) and an extremely simple marking
behaviour on switches. However, DCTCP does not co-exist with existing
TCP traffic---throughput starves. So, until now, DCTCP could only be
deployed where a clean-slate environment could be arranged, such as in
private data centres. This specification defines `DualQ Coupled Active
Queue Management (AQM)' to allow scalable congestion controls like DCTCP
to safely co-exist with classic Internet traffic. The Coupled AQM
ensures that a flow runs at about the same rate whether it uses DCTCP or
TCP Reno/Cubic, but without inspecting transport layer flow identifiers.
When tested in a residential broadband setting, DCTCP achieved
sub-millisecond average queuing delay and zero congestion loss under a
wide range of mixes of DCTCP and `Classic' broadband Internet traffic,
without compromising the performance of the Classic traffic. The
solution also reduces network complexity and eliminates network
configuration.

Latency is becoming the critical performance factor for many
(most?) applications on the public Internet, e.g. interactive Web, Web
services, voice, conversational video, interactive video, interactive
remote presence, instant messaging, online gaming, remote desktop,
cloud-based applications, and video-assisted remote control of
machinery and industrial processes. In the developed world, further
increases in access network bit-rate offer diminishing returns,
whereas latency is still a multi-faceted problem. In the last decade
or so, much has been done to reduce propagation time by placing caches
or servers closer to users. However, queuing remains a major component
of latency.

The Diffserv architecture provides Expedited Forwarding, so that low
latency traffic can jump the queue of
other traffic. However, on access links dedicated to individual sites
(homes, small enterprises or mobile devices), often all traffic at any
one time will be latency-sensitive. Then Diffserv is of little use.
Instead, we need to remove the causes of any unnecessary delay.

The bufferbloat project has shown that excessively-large buffering
(`bufferbloat') has been introducing significantly more delay than the
underlying propagation time. These delays appear only
intermittently—only when a capacity-seeking (e.g. TCP) flow is
long enough for the queue to fill the buffer, making every packet in
other flows sharing the buffer sit through the queue.

Active queue management (AQM) was originally developed to solve
this problem (and others). Unlike Diffserv, which gives low latency to
some traffic at the expense of others, AQM controls latency for all traffic in a class. In general, AQMs
introduce an increasing level of discard from the buffer the longer
the queue persists above a shallow threshold. This gives sufficient
signals to capacity-seeking (aka. greedy) flows to keep the buffer
empty for its intended purpose: absorbing bursts. However,
RED and other algorithms from the 1990s
were sensitive to their configuration and hard to set correctly. So,
AQM was not widely deployed.

More recent state-of-the-art AQMs, e.g. fq_CoDel, PIE and Adaptive
RED, are easier to configure, because they define the
queuing threshold in time not bytes, so it is invariant for different
link rates. However, no matter how good the AQM, the sawtoothing rate
of TCP will either cause queuing delay to vary or cause the link to be
under-utilized. Even with a perfectly tuned AQM, the additional
queuing delay will be of the same order as the underlying
speed-of-light delay across the network. Flow-queuing can isolate one
flow from another, but it cannot isolate a TCP flow from the delay
variations it inflicts on itself, and it has other problems - it
overrides the flow rate decisions of variable rate video applications,
it does not recognise the flows within IPSec VPN tunnels and it is
relatively expensive to implement.

It seems that further changes to the network alone will now yield
diminishing returns. Data Centre TCP (DCTCP ) teaches us that a small but radical
change to TCP is needed to cut two major outstanding causes of queuing
delay variability:

  *  the `sawtooth' varying rate of TCP itself;

  *  the smoothing delay deliberately introduced into AQMs to permit
     bursts without triggering losses.

The former causes a flow's round trip time (RTT) to vary from
about 1 to 2 times the base RTT between the machines in question. The
latter delays the system's response to change by a worst-case
(transcontinental) RTT, which could be hundreds of times the actual
RTT of typical traffic from localized CDNs.

Latency is not our only concern: it was known when TCP was first
developed that it would not
scale to high bandwidth-delay products.Given regular broadband bit-rates over WAN distances are
already beyond the scaling range of
`classic' TCP Reno, `less unscalable' Cubic and Compound variants of TCP have been
successfully deployed. However, these are now approaching their
scaling limits. Unfortunately, fully scalable TCPs such as DCTCP cause
`classic' TCP to starve itself, which is why they have been confined
to private data centres or research testbeds (until now).

This document specifies a `DualQ Coupled AQM' extension that solves
the problem of coexistence between scalable and classic flows, without
having to inspect flow identifiers. The AQM is not like flow-queuing
approaches that classify
packets by flow identifier into numerous separate queues in order to
isolate sparse flows from the higher latency in the queues assigned to
heavier flows. In contrast, the AQM exploits the behaviour of scalable
congestion controls like DCTCP so that every packet in every flow
sharing the queue for DCTCP-like traffic can be served with very low
latency.

This AQM extension can be combined with any single-queue AQM that
generates a statistical or deterministic mark/drop probability driven
by the queue dynamics. In many cases it simplifies the basic control
algorithm, and requires little extra processing. Therefore it is
believed the Coupled AQM would be applicable and easy to deploy in all
types of buffers: buffers in cost-reduced mass-market residential
equipment; buffers in end-system stacks; buffers in carrier-scale
equipment including remote access servers, routers, firewalls and
Ethernet switches; buffers in network interface cards, buffers in
virtualized network appliances, hypervisors, and so on.

The overall L4S architecture is described in . The supporting papers
give the full rationale
for the AQM's design, both discursively and in more precise
mathematical form.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in . In this document, these words will appear with
that interpretation only when in ALL CAPS. Lower case uses of these
words are not to be interpreted as carrying RFC-2119 significance.

The DualQ Coupled AQM uses two queues for two services. Each of the
following terms identifies both the service and the queue that
provides the service:

  *  The `Classic' service is intended for all the behaviours that
     currently co-exist with TCP Reno (TCP Cubic, Compound, SCTP, etc.).

  *  The `L4S' service is intended for a set of congestion controls
     with scalable properties such as DCTCP (e.g. Relentless).

Either service can cope with a proportion of unresponsive or
less-responsive traffic as well (e.g. DNS, VoIP, etc), just as a
single queue AQM can. The DualQ Coupled AQM behaviour is similar to a
single FIFO queue with respect to unresponsive and overload
traffic.

The AQM couples marking and/or dropping across the two queues such
that a flow will get roughly the same throughput whichever queue it uses.
Therefore both queues can feed into the full capacity of a link and no
rates need to be configured for the queues. The L4S queue enables
scalable congestion controls like DCTCP to give stunningly low and
predictably low latency, without compromising the performance of
competing 'Classic' Internet traffic. Thousands of tests have been
conducted in a typical fixed residential broadband setting. Typical
experiments used base round trip delays up to 100ms between the data
centre and home network, and large amounts of background traffic in
both queues. For every L4S packet, the AQM kept the average queuing
delay below 1ms (or 2 packets if serialization delay is bigger for
slow links), and no losses at all were introduced by the AQM. Details
of the extensive experiments will be made available.

Subjective testing was also conducted using a demanding panoramic
interactive video application run over a stack with DCTCP enabled and
deployed on the testbed. Each user could pan or zoom their own high
definition (HD) sub-window of a larger video scene from a football
match. Even though the user was also downloading large amounts of L4S
and Classic data, latency was so low that the picture appeared to
stick to their finger on the touchpad (all the L4S data achieved the
same ultra-low latency). With an alternative AQM, the video noticeably
lagged behind the finger gestures.

Unlike Diffserv Expedited Forwarding, the L4S queue does not have
to be limited to a small proportion of the link capacity in order to
achieve low delay. The L4S queue can be filled with a heavy load of
capacity-seeking flows like DCTCP and still achieve low delay. The L4S
queue does not rely on the presence of other traffic in the Classic
queue that can be 'overtaken'. It gives low latency to L4S traffic
whether or not there is Classic traffic, and the latency of Classic
traffic does not suffer when a proportion of the traffic is L4S. The
two queues are only necessary because DCTCP-like flows cannot keep
latency predictably low and keep utilization high if they are mixed
with legacy TCP flows.

The experiments used the Linux implementation of DCTCP that is
deployed in private data centres, without any modification despite its
known deficiencies. Nonetheless, certain modifications will be
necessary before DCTCP is safe to use on the Internet, which are
recorded in Appendix A of .
However, the focus of this specification is to get the network service
in place. Then, without any management intervention, applications can
exploit it by migrating to scalable controls like DCTCP, which can
then evolve while their benefits are being
enjoyed by everyone on the Internet.

There are two main aspects to the algorithm:

  *  the Coupled AQM that addresses throughput equivalence between
     Classic (e.g. Reno, Cubic) flows and L4S (e.g. DCTCP) flows;

  *  the Dual Queue structure that provides latency separation for L4S
     flows to isolate them from the typically large Classic queue.

In the 1990s, the `TCP formula' was derived for the relationship
between TCP's congestion window, cwnd, and its drop probability, p. To
a first order approximation, cwnd of TCP Reno is inversely
proportional to the square root of p. TCP Cubic implements a
Reno-compatibility mode, which is the only relevant mode for typical
RTTs under 20ms, while the throughput of a single flow is less than
about 500Mb/s. Therefore we can assume that Cubic traffic behaves
similarly to Reno (but with a slightly different constant of
proportionality), and we shall use the term 'Classic' for the
collection of Reno and Cubic in Reno mode.

In our supporting paper , we derive the
equivalent rate equation for DCTCP, for which cwnd is inversely
proportional to p (not the square root), where in this case p is the
ECN marking probability. DCTCP is not the only congestion control that
behaves like this, so we use the term 'L4S' traffic for all similar
behaviour.

In order to make a DCTCP flow run at roughly the same rate as a
Reno TCP flow (all other factors being equal), we make the drop or
marking probability for Classic traffic, p_C distinct from the marking
probability for L4S traffic, p_L (in contrast to RFC3168 which
requires them to be the same). We make the Classic drop probability
p_C proportional to the square of the L4S marking probability p_L.
This is because we need to make the Reno flow rate equal the DCTCP
flow rate, so we have to square the square root of p_C in the Reno
rate equation to make it the same as the straight p_L in the DCTCP
rate equation.There is a really simple way to implement the square of a
probability - by testing the queue against two random numbers not one.
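As an illustrative sketch (the helper name and Monte Carlo check are ours, not the draft's pseudocode), testing the same probability p against two independent uniform random numbers succeeds with probability p^2, with no explicit squaring:

```python
import random

def drop_squared(p, rng=random.random):
    """Return True with probability p^2, by testing p against two
    independent uniform random numbers instead of squaring p."""
    return rng() < p and rng() < p

# Monte Carlo check that the empirical drop rate approximates p^2
random.seed(1)
p = 0.3
n = 200_000
rate = sum(drop_squared(p) for _ in range(n)) / n  # close to p^2 = 0.09
```

The second random number is only drawn when the first test succeeds, so on average the extra cost is incurred for a fraction p of packets.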
This is the approach adopted in both example algorithms given in the
appendices.

Stating this as a formula, the relation between Classic drop
probability, p_C, and L4S marking probability, p_L, needs to take the
form:

   p_C = ( p_L / k )^2                                 (1)

where k is the constant of proportionality. Optionally, k can be
expressed as a power of 2, so k=2^k', where k' is another constant.
Then implementations can avoid costly division by shifting p_L by k'
bits to the right.

Classic traffic builds a large queue, so a separate queue is
provided for L4S traffic, and it is scheduled with strict priority.
Nonetheless, coupled marking ensures that giving priority to L4S
traffic still leaves the right amount of spare scheduling time for
Classic flows to each get equivalent throughput to DCTCP flows (all
other factors such as RTT being equal). The algorithm achieves this
without having to inspect flow identifiers.

Both the Coupled AQM and DualQ mechanisms need an identifier to
distinguish L4S and Classic packets. A separate draft recommends using the ECT(1)
codepoint of the ECN field as this identifier, having assessed various
alternatives.

Given L4S work is currently on the experimental track, but the
definition of the ECN field is on the standards track , another standards track document has proved
necessary to make the ECT(1) codepoint available for experimentation.

In the Dual Queue, L4S packets MUST be given priority over Classic,
although strict priority MAY not be appropriate.

All L4S traffic MUST be ECN-capable, although some Classic traffic
MAY also be ECN-capable.

Whatever identifier is used for L4S traffic, it will still be
necessary to agree on the meaning of an ECN marking on L4S traffic,
relative to a drop of Classic traffic. In order to prevent starvation
of Classic traffic by scalable L4S traffic (e.g. DCTCP) the drop
probability of Classic traffic MUST be proportional to the square of
the marking probability of L4S traffic. In other words, the power to
which p_L is raised in Eqn. (1) MUST be 2.

The constant of proportionality, k, in Eqn (1) determines the
relative flow rates of Classic and L4S flows when the AQM concerned is
the bottleneck (all other factors being equal). k does not have to be
standardized because differences do not prevent interoperability.
However, k has to take some value, and each operator can make that
choice.

A value of k=2 is currently RECOMMENDED as the default for Internet
access networks. Assuming scalable congestion controls for the
Internet will be as aggressive as DCTCP, this will ensure their
congestion window will be roughly the same as that of a standards
track TCP congestion control (Reno) and other
so-called TCP-friendly controls such as TCP Cubic in its TCP-friendly
mode.

The requirements for scalable congestion controls on the Internet
(termed the TCP Prague requirements) are not necessarily final. If the
aggressiveness of DCTCP is not defined as the benchmark for scalable
controls on the Internet, the recommended value of k will also be
subject to change.

Whatever value is recommended, the choice of k is a matter of
operator policy, and operators MAY choose a different value using
the guidelines in .

Typically, access network operators isolate customers from
other with some form of layer-2 multiplexing (TDM in DOCSIS, CDMA in
3G) or L3 scheduling (WRR in broadband), rather than relying on TCP to
share capacity between customers . In such
cases, the choice of k will solely affect relative flow rates within
each customer's access capacity, not between customers. Also, k will
not affect relative flow rates at any times when all flows are Classic
or all L4S, and it will not affect small flows.

Example DualQ Coupled AQM algorithms called PI2 and Curvy RED are
given in the appendices. Either example AQM can be used to couple
packet marking and dropping across a dual Q. Curvy RED requires fewer
operations per packet than RED and can be used if the range of RTTs is
limited. PI2 is a simplification of PIE with stable
Proportional-Integral control for both Classic and L4S congestion
controls. Nonetheless, it would be possible to control the queues with
other alternative AQMs, as long as the above normative requirements
(those expressed in capitals) are observed, which are intended to be
independent of the specific AQM.

{ToDo: Add management and monitoring requirements}

This specification contains no IANA considerations.

Where the interests of users or flows might conflict, it could be
necessary to police traffic to isolate any harm to performance. This
is a policy issue that needs to be separable from a basic AQM, but an
AQM does need to handle overload. A trade-off needs to be made between
complexity and the risk of either class harming the other. It is an
operator policy to define what must happen if the service time of the
classic queue becomes too great. In the following subsections three
optional non-exclusive overload protections are defined. Their
objective is for the overload behaviour of the DualQ AQM to be similar
to a single queue AQM. The example implementation in implements the 'delay on overload'
policy. Other overload protections can be envisaged:By replacing the priority
scheduler with a weighted round robin scheduler, a minimum
throughput service can be guaranteed for Classic traffic.
Typically the scheduling weight of the Classic queue will be small
(e.g. 5%) to avoid interference with the coupling but big enough
to avoid complete starvation of Classic traffic.

  *  To control milder overload of responsive traffic, particularly when
close to the maximum congestion signal, delay can be used as an
alternative congestion control mechanism. The Dual Queue Coupled
AQM can be made to behave like a single First-In First-Out (FIFO)
queue with different service times by replacing the priority
scheduler with a very simple scheduler that could be called a
"time-shifted FIFO", which is the same as the Modifier Earliest
Deadline First (MEDF) scheduler of . The
scheduler adds T_m to the queue delay of the next L4S packet,
before comparing it with the queue delay of the next Classic
packet, then it selects the packet with the greater adjusted queue
delay. Under regular conditions, this time-shifted FIFO scheduler
behaves just like a strict priority scheduler. But under moderate
or high overload it prevents starvation of the Classic queue,
because the time-shift defines the maximum extra queuing delay
(T_m) of Classic packets relative to L4S.

  *  On severe overload, e.g. due to non-responsive traffic, queues will
     typically overflow and packet drop will be unavoidable. It is
typically overflow and packet drop will be unavoidable. It is
important to avoid unresponsive ECN traffic (either Classic or
L4S) driving the AQM to 100% drop and mark probability. Congestion
controls that have a minimum congestion window will become
unresponsive to ECN marking when the marking probability is high.
This situation can be avoided by applying the drop probability to
all packets of all traffic types when it exceeds a certain
threshold or by limiting the drop and marking probabilities to a
lower maximum value (up to where fairness between the different
traffic types is still guaranteed) and relying on delay to control
temporary high congestion and eventually queue overflow. If the
classic drop probability is applied to all types of traffic when
it is higher than a threshold probability the queueing delay can
be controlled up to any overload situation, and no further
measures are required. If a maximum classic and coupled L4S
probability of less than 100% is used, both queues need scheduling
opportunities and should eventually experience drop. This can be
achieved with a scheduler that guarantees a minimum throughput for
each queue, such as a weighted round robin or time-shifted FIFO
scheduler. In that case a common queue limit can be configured
that will drop packets of both types of traffic.

To keep the throughput of both L4S and Classic flows equal
over the full load range, a different control strategy needs to be
defined above the point where one congestion control first saturates
to a probability of 100% (if k>1, L4S will saturate first).
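To make that saturation point concrete (a small worked check using the symbols of Eqn. (1); the function name is ours): the coupled L4S marking probability is p_L = k*sqrt(p_C), so p_L reaches 100% when p_C = 1/k^2, i.e. at a Classic drop probability of only 25% for the recommended k=2:

```python
import math

def coupled_l4s_marking(p_C, k):
    """L4S marking probability implied by Eqn. (1), p_C = (p_L/k)^2,
    i.e. p_L = k * sqrt(p_C), capped at 100%."""
    return min(1.0, k * math.sqrt(p_C))

k = 2
p_C_saturation = 1 / k**2  # Classic drop probability at which p_L saturates
```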
Possible strategies include: also dropping L4S; increasing the
queueing delay for both; or ensuring that L4S traffic still responds
to marking below a window of 2 segments (see ).

Thanks to Anil Agarwal for detailed review comments and suggestions
on how to make our explanation clearer.

The authors' contributions are part-funded by the European Community
under its Seventh Framework Programme through the Reducing Internet
Transport Latency (RITE) project (ICT-317700). The views expressed here
are solely those of the authors.

References

   Floyd, S., Gummadi, R. and S. Shenker, "Adaptive RED: An Algorithm
   for Increasing the Robustness of RED's Active Queue Management",
   ACIRI.

   "Identifying Modified Explicit Congestion Notification (ECN)
   Semantics for Ultra-Low Queuing Delay" (defines the identifier to
   be used on IP packets for the L4S service).

   "Explicit Congestion Notification (ECN) Experimentation" (updates
   RFC 3168 to allow ECN experiments to proceed, records the
   conclusion of the ECN Nonce experiment in RFC 3540, and
   reclassifies it as Historic to enable new experimental use of the
   ECT(1) codepoint).

   "Low Latency, Low Loss, Scalable Throughput (L4S) Internet
   Service: Architecture".

   Mathis, M., "Relentless Congestion Control", PSC.

   De Schepper, K., Bondarenko, O., Briscoe, B. and I. Tsang, "`Data
   Centre to the Home': Ultra-Low Latency for All", Nokia Bell Labs /
   Simula Research Lab / BT, (Under submission).

   De Schepper, K., Bondarenko, O., Briscoe, B. and I. Tsang, "PI2: A
   Linearized AQM for both Classic and Scalable TCP", Nokia Bell Labs /
   Simula Research Lab / BT, (To appear).

   Briscoe, B., "Insights from Curvy RED (Random Early Detection)",
   BT.

   Nichols, K. and V. Jacobson, "Controlling Queue Delay", Pollere
   Inc / PARC.

   Menth, M. et al., "MEDF - a simple scheduling algorithm for two
   real-time transport service classes with application in the
   UTRAN", University of Wuerzburg / Infosim AG / Siemens.

As a first concrete example, the pseudocode below gives the DualQ
Coupled AQM algorithm based on the PI2 Classic AQM that we used and tested.
For this example only the pseudo code is given. An open source
implementation for Linux is available at:
https://github.com/olgabo/dualpi2.

When packets arrive, first a common queue limit is checked as shown
in line 3 of the enqueuing pseudocode in . Note that the limit is
deliberately tested before enqueue to avoid any bias against larger
packets (so the actual buffer has to be one packet larger than limit).
If limit is not exceeded, the packet will be classified and enqueued to
the Classic or L4S queue dependent on the least significant bit of the
ECN field in the IP header (line 6). Packets with a codepoint having an
LSB of 0 (Not-ECT and ECT(0)) will be enqueued in the Classic queue.
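This LSB test can be sketched as follows (codepoint values per RFC 3168; the function name is illustrative, not part of the DualPI2 code):

```python
# ECN codepoints (the two-bit ECN field), per RFC 3168
NOT_ECT = 0b00
ECT1    = 0b01
ECT0    = 0b10
CE      = 0b11

def classify(ecn_field):
    """Select the queue from the least significant bit of the ECN field:
    LSB 0 (Not-ECT, ECT(0)) -> Classic; LSB 1 (ECT(1), CE) -> L4S."""
    return "L4S" if ecn_field & 1 else "Classic"
```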
Otherwise, ECT(1) and CE packets will be enqueued in the L4S queue.

The pseudocode in
summarises the per packet dequeue implementation of the DualPI2 code.
Line 3 implements the time-shifted FIFO scheduling. It takes the packet
that waited the longest, biased by a time-shift of tshift for the
Classic traffic. If an L4S packet is scheduled, lines 5 and 6 mark the
packet if either the L4S threshold T is exceeded, or if a random marking
decision is drawn according to k times the probability p (maintained by
the dualpi2_update() function discussed below). The coupling factor is
applied here for determining the L4S marking probability so that Classic
TCP control is independent from the L4S coupling factor. If a Classic
packet is scheduled, lines 10 to 16 drop or mark the packet based on 2
random decisions resulting in the squared probability p^2 (hence the
name PI2 for Classic traffic). Note that p is not reduced here by the
factor k, as p has already been multiplied by the factor k when it was
used to mark the L4S traffic. The coupling factor gives Classic TCP and
DCTCP traffic equal throughput. Because L4S marking is factored up by k,
the dynamic gain parameters alpha and beta also have to be factored up
by k for the L4S queue, which is necessary to ensure that Classic TCP
and DCTCP controls have the same stability.

The probability p is kept up to date by the core PI algorithm in
which is executed every Tupdate
( now recommends 16ms, but in our
testing so far we have used the earlier recommendation of 32ms). Note
that p solely depends on the queuing time in the Classic queue. In line
2, the current queuing delay is evaluated by inspecting the timestamp of
the next packet to schedule in the Classic queue. The function cq.time()
subtracts the time stamped at enqueue from the current time and
implicitly takes the current queuing delay as 0 if the queue is empty.
Lines 3 and 4 only need to be executed when the configuration parameters
are changed. Alpha and beta in Hz are gain factors per 1 second. If a
briefer update time is configured, alpha_U and beta_U (_U = per Tupdate)
also have to be reduced, to ensure that the same response is given over
time. As such, a smaller Tupdate will only result in a response with
smaller and finer steps, not a more aggressive response. The new
probability is calculated in line 5, where target is the target queuing
delay, as defined in . In corner cases,
p can overflow the range [0,1] so the resulting value of p has to be
bounded (omitted from the pseudocode). Unlike PIE, alpha_U and beta_U
are not tuned dependent on p, every Tupdate. Instead, in PI2 alpha_U and
beta_U can be constants because the squaring applied to Classic traffic
tunes them inherently, as explained in .

In our experiments so far (building on experiments with PIE) on
broadband access links ranging from 4 Mb/s to 200 Mb/s with base RTTs
from 5 ms to 100 ms, PI2 achieves good results with the following
parameters:

   tshift  = 40 ms
   T       = max(1 ms, serialization time of 2 MTU)
   target  = 20 ms
   Tupdate = 32 ms
   k       = 2
   alpha   = 10 Hz  (alpha*k = 20 Hz for L4S)
   beta    = 100 Hz (beta*k = 200 Hz for L4S)

As another example, the pseudocode below gives the Curvy RED based
DualQ Coupled AQM algorithm we used and tested. Although we designed the
AQM to be efficient in integer arithmetic, to aid understanding it is
first given using real-number arithmetic. Then, one possible
optimization for integer arithmetic is given, also in pseudocode. To aid
comparison, the line numbers are kept in step between the two by using
letter suffixes where the longer code needs extra lines.

Packet classification code is not shown, as it is no different from
. Potential classification
schemes are discussed in . Overload
protection code will be included in a future draft {ToDo}.

At the outer level, the structure of dualq_dequeue() implements
strict priority scheduling. The code is written assuming the AQM is
applied on dequeue (Note ). Every time dualq_dequeue() is called,
the if-block in lines 2-6 determines whether there is an L4S packet to
dequeue by calling lq.dequeue(pkt), and otherwise the while-block in
lines 7-13 determines whether there is a Classic packet to dequeue, by
calling cq.dequeue(pkt). (Note )

In the lower priority Classic queue, a while loop is used so that, if
the AQM determines that a classic packet should be dropped, it continues
to test for classic packets, deciding whether to drop each until it
actually forwards one. Thus, every call to dualq_dequeue() returns one
packet if at least one is present in either queue, otherwise it returns
NULL at line 14. (Note )

Within each queue, the decision whether to drop or mark is taken as
follows (to simplify the explanation, it is assumed that U=1):

  *  If the test at line 2 determines there is an L4S
packet to dequeue, the tests at lines 3a and 3c determine whether to
mark it. The first is a simple test of whether the L4S queue
(lq.byt() in bytes) is greater than a step threshold T in bytes
(Note ). The second
test is similar to the random ECN marking in RED, but with the
following differences: i) the marking function does not start with a
plateau of zero marking until a minimum threshold, rather the
marking probability starts to increase as soon as the queue is
positive; ii) marking depends on queuing time, not bytes, in order
to scale for any link rate without being reconfigured; iii) marking
of the L4S queue does not depend on itself; instead it depends
on the queuing time of the other (Classic)
queue, where cq.sec() is the queuing time of the packet at the head
of the Classic queue (zero if empty); iv) marking depends on the
instantaneous queuing time (of the other Classic queue), not a
smoothed average; v) the queue is compared with the maximum of U
random numbers (but if U=1, this is the same as the single random
number used in RED).

Specifically, in line 3a
the marking probability p_L is set to the Classic queueing time
cq.sec() in seconds divided by the L4S scaling parameter 2^S_L,
which represents the queuing time (in seconds) at which marking
probability would hit 100%. Then in line 3d (if U=1) the result is
compared with a uniformly distributed random number between 0 and 1,
which ensures that marking probability will linearly increase with
queueing time. The scaling parameter is expressed as a power of 2 so
that division can be implemented as a right bit-shift (>>) in
line 3 of the integer variant of the pseudocode ().

If the test at line 7 determines that there
is at least one Classic packet to dequeue, the test at line 9b
determines whether to drop it. But before that, line 8b updates Q_C,
which is an exponentially weighted moving average (Note ) of the queuing time
in the Classic queue, where pkt.sec() is the instantaneous queueing
time of the current Classic packet and alpha is the EWMA constant
for the classic queue. In line 8a, alpha is represented as an
integer power of 2, so that in line 8 of the integer code the
division needed to weight the moving average can be implemented by a
right bit-shift (>> f_C).

Lines 9a and
9b implement the drop function. In line 9a the averaged queuing time
Q_C is divided by the Classic scaling parameter 2^S_C, in the same
way that queuing time was scaled for L4S marking. This scaled
queuing time is given the variable name sqrt_p_C because it will be
squared to compute Classic drop probability, so before it is squared
it is effectively the square root of the drop probability. The
squaring is done by comparing it with the maximum out of two random
numbers (assuming U=1). Comparing it with the maximum out of two is
the same as the logical `AND' of two tests, which ensures drop
probability rises with the square of queuing time (Note ). Again, the
scaling parameter is expressed as a power of 2 so that division can
be implemented as a right bit-shift in line 9 of the integer
pseudocode.

The marking/dropping functions in each queue (lines 3 & 9) are
two cases of a new generalization of RED called Curvy RED, motivated as
follows. When we compared the performance of our AQM with fq_CoDel and
PIE, we came to the conclusion that their goal of holding queuing delay
to a fixed target is misguided . As the
number of flows increases, if the AQM does not allow TCP to increase
queuing delay, it has to introduce abnormally high levels of loss. Then
loss rather than queuing becomes the dominant cause of delay for short
flows, due to timeouts and tail losses.

Curvy RED constrains delay with a softened target that allows some
increase in delay as load increases. This is achieved by increasing drop
probability on a convex curve relative to queue growth (the square curve
in the Classic queue, if U=1). Like RED, the curve hugs the zero axis
while the queue is shallow. Then, as load increases, it introduces a
growing barrier to higher delay. But, unlike RED, it requires only one
parameter, the scaling, not three. The disadvantage of Curvy RED is
that it is not adapted to a wide range of RTTs. Curvy RED can be used
as-is when the range of RTTs to support is limited; otherwise an
adaptation mechanism is required.

There follows a summary listing of the two parameters used for each
of the two queues:

S_C :  The scaling factor of the dropping function
       scales Classic queuing times in the range [0, 2^(S_C)] seconds
       into a dropping probability in the range [0,1]. To make
       division efficient, it is constrained to be an integer power
       of two;

f_C :  To smooth the queuing time of the Classic
       queue and make multiplication efficient, we use a negative
       integer power of two for the dimensionless EWMA constant,
       which we define as 2^(-f_C);

S_L :  As for the Classic queue, the
       scaling factor of the L4S marking function scales Classic
       queueing times in the range [0, 2^(S_L)] seconds into a
       probability in the range [0,1]. Note that S_L = S_C + k, where
       k is the coupling between the queues (). So S_L and k count as
       only one parameter;

T :    The queue size in bytes at which step
       threshold marking starts in the L4S queue.

{ToDo: These are the raw parameters used within the algorithm.
A configuration front-end could accept more meaningful parameters and
convert them into these raw parameters.}

From our experiments so far, recommended values for these parameters
are: S_C = -1; f_C = 5; T = 5 * MTU for the range of base RTTs typical
on the public Internet. explains why
these parameters are applicable whatever the rate of the link on which
this AQM implementation is deployed, and how the parameters would need
to be adjusted for a scenario with a different range of RTTs (e.g. a
data centre) {ToDo: incorporate a summary of that report into this
draft}. The setting of k depends on policy (see and
respectively for its recommended setting and guidance on
alternatives).

There is also a cUrviness parameter, U, which is a small positive
integer. It is likely to take the same hard-coded value for all
implementations, once experiments have determined a good value. We have
solely used U=1 in our experiments so far, but results might be even
better with U=2 or higher.

Note that the dropping function at line 9 calls maxrand(2*U), which
gives twice as much curviness as the call to maxrand(U) in the marking
function at line 3. This is the trick that implements the square rule in
equation (1) (). This is based on the fact
that, given a number X from 1 to 6, the probability that two dice throws
will both be less than X is the square of the probability that one throw
will be less than X. So, when U=1, the L4S marking function is linear
and the Classic dropping function is squared. If U=2, L4S would be a
square function and Classic would be quartic. And so on.

The maxrand(u) function in lines 16-21 simply generates u random
numbers and returns the maximum (Note ). Typically, maxrand(u) could be
run in parallel out of band. For instance, if U=1, the Classic queue
would require the maximum of two random numbers. So, instead of calling
maxrand(2*U) in-band, the maximum of every pair of values from a
pseudorandom number generator could be generated out-of-band, and held
in a buffer ready for the Classic queue to consume.

Notes:

1.  The drain rate of the queue can vary
if it is scheduled relative to other queues, or to cater for
fluctuations in a wireless medium. To auto-adjust to changes in
drain rate, the queue must be measured in time, not bytes or packets
. In our Linux implementation, it was easiest
to measure queuing time at dequeue. Queuing time can be estimated
when a packet is enqueued by measuring the queue length in bytes and
dividing by the recent drain rate.

2.  An implementation has to use priority queueing, but it need not
    implement strict priority.

3.  If packets can be enqueued while
processing dequeue code, an implementer might prefer to place the
while loop around both queues so that it goes back to test again
whether any L4S packets arrived while it was dropping a Classic
packet.

4.  In order not to change too many factors
at once, for now, we keep the marking function for DCTCP-only
traffic as similar as possible to DCTCP. However, unlike DCTCP, all
processing is at dequeue, so we determine whether to mark a packet
at the head of the queue by the byte-length of the queue behind it. We plan to test whether using
queuing time will work in all circumstances, and if we find that the
step can cause oscillations, we will investigate replacing it with a
steep random marking curve.

5.  An EWMA is only one possible way to
filter bursts; other more adaptive smoothing methods could be valid
and it might be appropriate to decrease the EWMA faster than it
increases.

6.  In practice, at line 10 the
Classic queue would probably test for ECN capability on the packet
to determine whether to drop or mark the packet. However, for
brevity such detail is omitted. All packets classified into the L4S
queue have to be ECN-capable, so no dropping logic is necessary at
line 3. Nonetheless, L4S packets could be dropped by overload code
(see ).

7.  In the integer variant of the
pseudocode () real numbers are
all represented as integers scaled up by 2^32. In lines 3 & 9
the function maxrand() is arranged to return an integer in the range
0 <= maxrand() < 2^32. Queuing times are also scaled up by
2^32, but in two stages: i) In lines 3 and 8 queuing times cq.ns()
and pkt.ns() are returned in integer nanoseconds, making the values
about 2^30 times larger than when the units were seconds, ii) then
in lines 3 and 9 an adjustment of -2 to the right bit-shift
multiplies the result by 2^2, to complete the scaling by 2^32.

   +---------------+------+-------+
   | RTT_C / RTT_L | Reno | Cubic |
   +---------------+------+-------+
   |       1       | k=1  |  k=0  |
   |       2       | k=2  |  k=1  |
   |       3       | k=2  |  k=2  |
   |       4       | k=3  |  k=2  |
   |       5       | k=3  |  k=3  |
   +---------------+------+-------+

To determine the appropriate policy, the operator first has to judge
whether it wants DCTCP flows to have roughly equal throughput with Reno
or with Cubic (because, even in its Reno-compatibility mode, Cubic is
about 1.4 times more aggressive than Reno). Then the operator needs to
decide at what ratio of RTTs it wants DCTCP and Classic flows to have
roughly equal throughput. For example, choosing the recommended value
of k=0 will make DCTCP throughput roughly the same as Cubic, if their
RTTs are the same.

However, even if the base RTTs are the same, the actual RTTs are
unlikely to be the same, because Classic (Cubic or Reno) traffic needs a
large queue to avoid under-utilization and excess drop, whereas L4S
(DCTCP) does not. The operator might still choose this policy if it
judges that DCTCP throughput should be rewarded for keeping its own
queue short.

On the other hand, the operator will choose one of the higher values
for k, if it wants to slow DCTCP down to roughly the same throughput as
Classic flows, to compensate for Classic flows slowing themselves down
by causing themselves extra queuing delay.

The values for k in the table are derived from the formula that
was developed in :

For localized traffic from a particular ISP's data centre, we used
the measured RTTs to calculate that a value of k=3 would achieve
throughput equivalence, and our experiments verified the formula very
closely.
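To tie the Curvy RED walk-through together, the sketch below models the
dequeue logic in Python. It is illustrative only, not the draft's
pseudocode: the class name, the queue representation (deques of
(bytes, queuing-time) pairs), the MTU-based default for T and the
defaults S_C=-1, f_C=5, k=1, U=1 are assumptions made for this sketch,
loosely based on the recommendations above, and it uses the
real-number (not integer) variant of the arithmetic.

```python
import random
from collections import deque


class CurvyRedDualQ:
    """Illustrative model of the Curvy RED DualQ Coupled AQM dequeue
    logic described above (real-number variant)."""

    def __init__(self, S_C=-1, k=1, f_C=5, T_bytes=5 * 1514, U=1):
        self.S_C = S_C        # Classic scaling: 2^S_C seconds maps to p=1
        self.S_L = S_C + k    # L4S scaling, coupled to S_C via k
        self.f_C = f_C        # EWMA constant is 2^(-f_C)
        self.T = T_bytes      # L4S step-marking threshold (bytes)
        self.U = U            # cUrviness
        self.Q_C = 0.0        # EWMA of Classic queuing time (seconds)
        self.lq = deque()     # L4S queue: (bytes, queuing time in s)
        self.cq = deque()     # Classic queue: same representation

    def maxrand(self, u):
        # Maximum of u uniform random numbers in [0, 1)
        return max(random.random() for _ in range(u))

    def dequeue(self):
        """Strict priority: serve L4S if present, else Classic.
        Returns (packet, verdict), where verdict is 'mark' or None."""
        if self.lq:                                  # L4S queue first
            pkt = self.lq.popleft()
            # Coupled marking: depends on the *Classic* queuing time
            cq_sec = self.cq[0][1] if self.cq else 0.0
            p_L = cq_sec / 2 ** self.S_L
            lq_bytes_behind = sum(b for b, _ in self.lq)
            mark = (lq_bytes_behind > self.T          # step threshold
                    or p_L > self.maxrand(self.U))    # curvy marking
            return pkt, ('mark' if mark else None)
        while self.cq:                               # then Classic
            pkt = self.cq.popleft()
            # EWMA of the instantaneous Classic queuing time
            self.Q_C += (pkt[1] - self.Q_C) / 2 ** self.f_C
            sqrt_p_C = self.Q_C / 2 ** self.S_C
            # Comparing with the max of 2*U uniforms squares the
            # probability (when U=1)
            if sqrt_p_C > self.maxrand(2 * self.U):
                continue                             # drop; test next
            return pkt, None
        return None, None                            # both queues empty
```

As a quick check of the square rule, for a threshold x in [0,1],
P(x > maxrand(2)) = x^2, which is why the Classic drop curve is the
square of the L4S marking curve when U=1. A production implementation
would instead use the integer-arithmetic variant, distinguish
drop from ECN marking in the Classic queue, and add overload
protection.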