Two versions of PCC

Papers: PCC Allegro (NSDI '15), PCC Vivace (NSDI '18)

PCC Allegro

Recall the utility function of sender $i$:

$u_i(x) = T_i(x) \cdot \mathrm{Sigmoid}_\alpha(L(x) - 0.05) - x_i \cdot L(x)$

where the vector $x$ is the global state of sending rates, $L(x) = \max\{ 0, 1 - \frac{C}{\sum_j x_j}\}$ denotes the per-packet loss probability ($C$ is the bottleneck link capacity), and $T_i(x) = x_i (1 - L(x))$ denotes sender $i$'s throughput.

Note: $\mathrm{Sigmoid}_\alpha(y) = \frac{1}{1 + e^{\alpha y}}$ with $\alpha > 0$, so it drops off sharply once the loss rate exceeds the 5% threshold.
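As a concrete reference, here is a minimal Python sketch of this utility under the definitions above; the default $\alpha$ is an illustrative choice, and `rates` is the list of all senders' rates.

```python
import math

def loss_rate(rates, capacity):
    """L(x) = max(0, 1 - C / sum_j x_j): per-packet loss probability."""
    return max(0.0, 1.0 - capacity / sum(rates))

def sigmoid(y, alpha=100.0):
    """Sigmoid_alpha(y) = 1 / (1 + e^(alpha*y)); decreasing in y for alpha > 0."""
    return 1.0 / (1.0 + math.exp(alpha * y))

def allegro_utility(i, rates, capacity, alpha=100.0):
    """u_i(x) = T_i(x) * Sigmoid_alpha(L(x) - 0.05) - x_i * L(x)."""
    L = loss_rate(rates, capacity)
    throughput = rates[i] * (1.0 - L)   # T_i(x) = x_i * (1 - L(x))
    return throughput * sigmoid(L - 0.05, alpha) - rates[i] * L
```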

Starting State

PCC starts at a rate of $2 \cdot \mathrm{MSS}/\mathrm{RTT}$ and doubles it at each Monitor Interval (MI) until the utility decreases, at which point it enters the Decision Making State.
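A minimal sketch of this phase, assuming a hypothetical `measure_utility(rate)` callback that runs one MI at the given rate and returns the observed utility; keeping the last good rate on a utility drop is how the paper describes the exit from this state.

```python
MSS = 1460   # bytes (assumed)
RTT = 0.05   # seconds (assumed)

def starting_state(measure_utility):
    """Double the rate each MI until utility drops, then keep the last good rate."""
    rate = 2 * MSS / RTT
    best_utility = measure_utility(rate)
    while True:
        candidate = 2 * rate                     # double every MI
        utility = measure_utility(candidate)
        if utility < best_utility:
            return rate                          # hand this rate to the Decision Making State
        rate, best_utility = candidate, utility
```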

Decision Making State

In four consecutive MIs, divided into two pairs, PCC tests rate $r(1-\epsilon)$ and rate $r(1+\epsilon)$ once within each pair (a sketch of the decision logic follows the list below).

  • If either the higher or the lower rate yields higher utility in both pairs, PCC adjusts its rate to that rate.
  • If the results are inconclusive, PCC keeps its current rate and increases $\epsilon$ by $\epsilon_{min}$, up to $\epsilon_{max}$ ($\epsilon_{min} = 0.01$, $\epsilon_{max} = 0.05$).
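A sketch of that trial logic, again assuming the hypothetical `measure_utility(rate)` callback; the randomized order within each pair and the reset of $\epsilon$ after a conclusive decision are simplifications of the paper's behaviour, and the transition into the Rate Adjusting State is omitted.

```python
import random

EPS_MIN, EPS_MAX = 0.01, 0.05

def decision_making(r, eps, measure_utility):
    """One round of the Decision Making State.

    Returns (new_rate, new_eps); new_rate == r means the trial was inconclusive.
    """
    votes = []
    for _ in range(2):                         # two pairs of MIs
        lo, hi = r * (1 - eps), r * (1 + eps)
        order = [lo, hi]
        random.shuffle(order)                  # randomized order within each pair
        utility = {rate: measure_utility(rate) for rate in order}
        votes.append(hi if utility[hi] > utility[lo] else lo)

    if votes[0] == votes[1]:                   # both pairs agree: adopt that rate
        return votes[0], EPS_MIN
    # Inconclusive: keep the current rate and probe more aggressively next time.
    return r, min(eps + EPS_MIN, EPS_MAX)
```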

Rate Adjusting State

$r_n = r_{n-1} \cdot (1 + n \cdot \epsilon_{min} \cdot \mathrm{dir})$, where $\mathrm{dir} = \pm 1$ is the direction chosen in the Decision Making State.

If $U(r_n) < U(r_{n-1})$, PCC reverts its rate to $r_{n-1}$ and moves back to the Decision Making State.
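And the corresponding sketch of the Rate Adjusting State, with the same `measure_utility(rate)` assumption; `direction` is the $\mathrm{dir} = \pm 1$ chosen in the Decision Making State.

```python
EPS_MIN = 0.01

def rate_adjusting(r, direction, measure_utility):
    """Accelerate in the chosen direction with a growing step until utility drops."""
    best_rate, best_utility = r, measure_utility(r)
    n = 1
    while True:
        candidate = best_rate * (1 + n * EPS_MIN * direction)   # r_n = r_{n-1}(1 + n*eps_min*dir)
        utility = measure_utility(candidate)
        if utility < best_utility:
            return best_rate        # revert to r_{n-1}, go back to Decision Making
        best_rate, best_utility = candidate, utility
        n += 1
```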


PCC Vivace

vs. Allegro:

  • incorporates latency awareness, mitigating bufferbloat and the resulting packet loss and latency inflation.
  • friendlier to TCP and to senders with different utility functions.
  • converges faster.
  • reacts more quickly to changes in network conditions.

utility function:

$u(x_i, \frac{d(RTT_i)}{dT}, L_i ) = x_i^t - bx_i\frac{d(RTT_i)}{dT} - cx_i \times L_i $

where $0 < t < 1$, $b \ge 0$, $c > 0$; $x_i$ is sender $i$'s sending rate, $L_i$ is its observed loss rate, and $\frac{d(RTT_i)}{dT}$ is the RTT gradient observed during the MI, i.e., the rate at which latency is increasing.
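A small sketch of this utility; the default coefficients below are illustrative values satisfying the stated constraints ($0 < t < 1$, $b \ge 0$, $c > 0$), not values taken from these notes.

```python
def vivace_utility(rate, rtt_gradient, loss_rate, t=0.9, b=900.0, c=11.35):
    """u = x^t - b*x*d(RTT)/dT - c*x*L for one sender (coefficients are illustrative)."""
    return rate ** t - b * rate * rtt_gradient - c * rate * loss_rate
```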

Vivace estimates the gradient $\gamma$ of the utility function and, in the basic scheme, changes the rate by $\theta \gamma$, where the step size $\theta$ decays from an initially large value toward 0.

But this basic scheme has two problems:

  • the initial step is very large, leading to high loss rates and latency inflation;
  • over time, $\theta$ becomes very small, leading to slow reaction when network conditions change.

From Utility Function to Rate

Suppose the current rate is $r$. In the next two MIs, test the rates $r(1+\epsilon)$ and $r(1-\epsilon)$, compute the corresponding utility values $u_1$ and $u_2$, and estimate the gradient $\gamma = \frac{u_1 - u_2}{2\epsilon r}$.
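As a sketch, with the same hypothetical `measure_utility(rate)` callback as before:

```python
def estimate_gradient(r, eps, measure_utility):
    """Finite-difference estimate of the utility gradient around the current rate r."""
    u1 = measure_utility(r * (1 + eps))   # one MI at r(1 + eps)
    u2 = measure_utility(r * (1 - eps))   # one MI at r(1 - eps)
    return (u1 - u2) / (2 * eps * r)      # gamma = (u1 - u2) / (2*eps*r)
```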

The rate change is $\Delta_r = m(\tau) \cdot \theta_0 \cdot \gamma$, where $\theta_0$ is the base step size, $m(\tau)$ denotes the confidence amplifier with $m(0) = 1$, and $\tau$ denotes the number of consecutive decisions to change the rate in the same direction. When the direction is reversed, $\tau$ is set back to 0.

A dynamic change boundary $\omega$ prevents drastic rate changes when measurements are unreliable or conditions change sharply between MIs. When $\Delta_r > \omega r$, the rate change is capped at $\omega r$ and the boundary is widened to $\omega = \omega_0 + k \cdot \delta$, where $k$ counts the consecutive rate adjustments whose proposed change exceeded the boundary. Whenever $\Delta_r \le \omega r$, $k$ is reduced to the smallest value for which $\Delta_r \le r \cdot (\omega_0 + k \cdot \delta)$ still holds. $k$ is also set back to 0 when the direction of the rate adjustment changes.
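A minimal sketch tying these pieces together; `THETA0`, `OMEGA0`, `DELTA`, and the linear amplifier schedule are illustrative assumptions rather than values from these notes.

```python
THETA0 = 1.0     # base step size theta_0
OMEGA0 = 0.05    # initial boundary omega_0 (fraction of the current rate)
DELTA  = 0.05    # boundary growth step delta

def confidence_amplifier(tau):
    # m(0) = 1; grows with consecutive same-direction decisions (illustrative schedule).
    return 1 + tau

class RateController:
    def __init__(self):
        self.tau = 0          # consecutive same-direction decisions
        self.k = 0            # consecutive boundary-exceeding adjustments
        self.prev_dir = 0

    def update(self, r, gamma):
        """Return the new rate given the current rate r and gradient estimate gamma."""
        direction = 1 if gamma > 0 else -1
        if direction != self.prev_dir:
            self.tau = 0      # direction reversed: reset the amplifier...
            self.k = 0        # ...and the boundary counter
        self.prev_dir = direction

        delta_r = confidence_amplifier(self.tau) * THETA0 * gamma
        omega = OMEGA0 + self.k * DELTA

        if abs(delta_r) > omega * r:
            delta_r = direction * omega * r        # cap the change at the boundary
            self.k += 1                            # widen the boundary next time
        else:
            # shrink k to the smallest value whose boundary still admits delta_r
            while self.k > 0 and abs(delta_r) <= (OMEGA0 + (self.k - 1) * DELTA) * r:
                self.k -= 1

        self.tau += 1
        return r + delta_r
```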