
AI-derived “just-in-time” capacity – CUBIC vs. BBR

August 17, 2019

Google has recently developed a new algorithm called BBR (Bottleneck Bandwidth and Round-Trip propagation time) which dramatically changes the way in which information flows on the Internet. BBR's throughput and latency improvements have been observed independently at Amazon CloudFront, where BBR was deployed during March and April 2019. So significant are the results that one US operator reported improvements in average download speeds of more than 100% for CloudFront requests!

 

At TrueUX, Artificial Intelligence (AI) is being used to reason about how much capacity is required to achieve optimal performance levels whilst taking into account all the necessary real-world variables, such as the arrival rate of connections and Round-Trip-Time (RTT) variability.

 

[Figure 1 – SureLink-XG analysis shows that TCP CUBIC flows (top left & bottom left) that start when a few other flows are in steady state can fail to achieve a fair share of the bandwidth because sharing is based on a "full queue" model. TCP BBR flows (top right & bottom right), whose sharing is based on an "empty queue" model, perform significantly better, particularly on long-distance, high-speed links.]

 

 

 

SureLink-XG’s powerful computational engine has the unique capability of monitoring network activity levels, reasoning about the “manoeuvrability” of congestion control algorithms on a massive scale, and deriving ROI- and performance-optimal resource allocations across the entire network – all in real time.

 

How many lanes does a highway need to ease congestion?

 

The information superhighway functions in much the same way as a typical highway in that lanes are not dedicated – individual vehicles may manoeuvre by speeding up, slowing down, or overtaking to maximise the use of available lanes. This is clearly an efficient approach to sharing the available resources (lanes/bandwidth), with the one possible downside being that it is difficult to predict the outcome (speed) for any given number of lanes allocated – keeping with the highway analogy, middle-lane hogs can easily cause congestion even when an adequate number of lanes has been provisioned.

 

How much network capacity does the Internet need for a fast connection speed?

 

Manoeuvrability on the Internet is enabled by congestion control algorithms (specifically, variants of the Transmission Control Protocol – TCP). These distributed algorithms run on the very edges of the Internet (user devices & servers), enabling “always on” connectivity by adapting connection data rates to the prevailing conditions on the Internet. In contrast, legacy “dial up” connections effectively dedicated resources to individual Internet connections – a reliable but significantly inefficient approach (to appreciate the difference, consider calling up the highway agency to reserve lanes for every leg of the journey from a home location to a given destination)!

 

1. BBR Bottleneck Bandwidth-based pacing versus Acknowledgement-based pacing 

 

The main mechanism that TCP uses for active feedback on the state of the Internet is the concept of ack-pacing or self-clocking in which a sender injects new packets into the network only when an old packet has exited. This mechanism was first proposed by Van Jacobson in 1988 in order to solve a congestion collapse event (the equivalent of total gridlock on a highway) that happened between the Lawrence Berkeley Laboratory and UC Berkeley.
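
As a rough illustration of ack-pacing (a sketch only, not any real TCP stack's code), the sender below releases a new packet only when an acknowledgement signals that an old one has left the network:

```python
from collections import deque

def transmit(packet):
    """Stand-in for handing a packet to the network layer."""
    print(f"sent {packet}")

class AckClockedSender:
    """Toy sender obeying the 'conservation of packets' principle."""

    def __init__(self, cwnd_packets):
        self.cwnd = cwnd_packets   # packets allowed in flight at once
        self.in_flight = 0
        self.backlog = deque()     # application data waiting to be sent

    def enqueue(self, packet):
        self.backlog.append(packet)
        self._try_send()

    def on_ack(self):
        # An old packet has exited the network; the freed slot "clocks
        # out" the next packet -- this is self-clocking.
        self.in_flight -= 1
        self._try_send()

    def _try_send(self):
        while self.backlog and self.in_flight < self.cwnd:
            self.in_flight += 1
            transmit(self.backlog.popleft())
```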

 

This concept of the “conservation of packets” is so robust that it has been implemented in state-of-the-art congestion control algorithms such as TCP CUBIC (2005), which has become (almost) ubiquitous on the Internet. However, a problem that persists even with this advanced congestion control algorithm lies in its loss-based definition of network congestion.

 

Indeed, underperformance on the Internet has been attributed to a large extent to the fact that TCP interprets congestion as the loss of packets, which occurs only when the congestion control algorithm has (in a sense) caused an overflow of the delivery PIPE. An interesting fact is that – regardless of how many links a connection traverses or what their individual speeds are – an arbitrarily complex path behaves as a single link operating at a bottleneck rate. Anyone who has been stuck in traffic only to emerge and find no apparent cause of the delay would readily attest to this observation.

 

Google's new algorithm BBR (proposed by Cardwell, Van Jacobson et al., 2017) re-defines congestion as the point at which the PIPE is transmitting packets at the maximum possible rate. The only way to ease congestion at this point is to pace the sending of packets (the arrival rate) into the network at the Bottleneck Bandwidth rate and to keep the number of packets in the system (network) at no more than the Bandwidth-Delay Product – recall that “delay” refers to the time required for active feedback, i.e., the time taken to send a packet and receive an acknowledgement from the receiver: the Round-Trip-Time, or RTT. BBR simultaneously meets both conditions, and the resulting queuing performance is indeed remarkable.
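
As a back-of-envelope illustration of these two conditions (the link speed and RTT below are assumed values, not figures from this article):

```python
# Illustrative arithmetic for BBR's two operating conditions: pace at
# the Bottleneck Bandwidth, and keep no more than one Bandwidth-Delay
# Product (BDP) of data in the network. Link numbers are assumptions.

BOTTLENECK_BW_BPS = 100e6   # assumed 100 Mbit/s bottleneck
RTT_S = 0.08                # assumed 80 ms round-trip time
MSS_BYTES = 1448            # typical TCP payload per packet

bdp_bits = BOTTLENECK_BW_BPS * RTT_S        # 8,000,000 bits = 1 MB
bdp_packets = bdp_bits / (MSS_BYTES * 8)    # ~690 packets in flight

# Pacing: one packet every MSS / bottleneck-bandwidth seconds.
pacing_interval_s = (MSS_BYTES * 8) / BOTTLENECK_BW_BPS   # ~116 microseconds

print(f"BDP: {bdp_bits / 8 / 1e6:.1f} MB (~{bdp_packets:.0f} packets)")
print(f"Pacing interval: {pacing_interval_s * 1e6:.0f} us per packet")
```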

 

2. BBR “manoeuvrability” when responding to the network state

 

Whilst TCP CUBIC already has mechanisms to measure the RTT, BBR requires a new implementation to track the state of the network in terms of the Bottleneck Bandwidth. BBR relies on the resulting network model (Bottleneck Bandwidth & RTT) to speed-up and slow-down transmission accordingly.
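
A minimal sketch of such a model, assuming windowed max/min filters in the spirit of the BBR paper (the window lengths here are illustrative, not the values of any particular implementation):

```python
from collections import deque

class WindowedFilter:
    """Track the max (or min) of samples over a sliding time window."""

    def __init__(self, window_s, use_max=True):
        self.window_s = window_s
        self.best = max if use_max else min
        self.samples = deque()   # (timestamp, value) pairs

    def update(self, now_s, value):
        self.samples.append((now_s, value))
        # Drop samples that have aged out of the window.
        while self.samples and now_s - self.samples[0][0] > self.window_s:
            self.samples.popleft()
        return self.best(v for _, v in self.samples)

# Bottleneck Bandwidth estimate: the max delivery rate recently observed.
btlbw_filter = WindowedFilter(window_s=1.0, use_max=True)
# Propagation-delay estimate: the min RTT observed over a longer window.
rtprop_filter = WindowedFilter(window_s=10.0, use_max=False)
```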

 

In addition to the network model, BBR has a novel response to the network state when compared to CUBIC, which transitions through the traditional TCP states:

 

Slow-Start → Fast Re-transmit → Fast Recovery → Congestion Avoidance

 

Instead of this traditional trajectory, BBR follows a different sequence of states, each of which is triggered by specific conditions that are met during a data transfer:

 

Startup → (fast re-transmit & fast recovery) Drain → Probe Bandwidth (Probe-BW) → Probe RTT (Probe-RTT)

 

 

[Figure 2 – The BBR state transition diagram determines how BBR develops a model of the network and adapts the rate at which packets are sent into the network.]
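
A toy rendering of these transitions might look as follows; the trigger predicates are hypothetical paraphrases of the conditions described below, not the actual kernel implementation:

```python
from enum import Enum, auto

class BBRState(Enum):
    STARTUP = auto()     # grow the rate exponentially to find the bottleneck
    DRAIN = auto()       # remove the queue built up during STARTUP
    PROBE_BW = auto()    # steady state: cycle the pacing gain around 1.0
    PROBE_RTT = auto()   # periodically drain to re-measure the minimum RTT

def next_state(state, bw_plateaued, queue_drained,
               min_rtt_expired, rtt_probe_done):
    if state is BBRState.STARTUP and bw_plateaued:
        return BBRState.DRAIN        # bandwidth measurements stopped growing
    if state is BBRState.DRAIN and queue_drained:
        return BBRState.PROBE_BW     # in-flight data is back down to ~1 BDP
    if state is BBRState.PROBE_BW and min_rtt_expired:
        return BBRState.PROBE_RTT    # the RTT estimate has gone stale
    if state is BBRState.PROBE_RTT and rtt_probe_done:
        return BBRState.PROBE_BW     # resume bandwidth probing
    return state
```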

 

In accordance with the network model, any increase in the number of packets in the network as directed by BBR is limited both by the pacing rate and by the congestion window. In the BBR Startup state, the pacing rate and the congestion window grow with a gain of 2/ln(2), which effectively doubles the sending rate each round trip whilst the delivery rate is still increasing.

 

Once the BBR sender “perceives” that the number of packets in the network has reached a maximum (i.e., this number is greater than or equal to the Bandwidth-Delay Product – BDP, i.e., Bottleneck Bandwidth × RTT – and there has thus not been any increase in bandwidth measurements), BBR transitions to the Drain state which, as the name implies, drains network queues of the backlog accumulated during the Startup phase. This is achieved primarily by pacing at the much lower gain of ln(2)/2.
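
The two gains are reciprocals, so pacing at ln(2)/2 for roughly as long as the start-up overshoot removes the queue that the 2/ln(2) gain created; a quick check:

```python
import math

startup_gain = 2 / math.log(2)   # ~2.885: doubles the sending rate per RTT
drain_gain = math.log(2) / 2     # ~0.347: drains the start-up backlog

print(f"start-up gain: {startup_gain:.3f}")
print(f"drain gain:    {drain_gain:.3f}")
print(f"product:       {startup_gain * drain_gain:.3f}")  # = 1: drain undoes start-up
```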

 

As soon as the PIPE has been drained of the queue backlog, BBR enters steady state – the so-called Probe-BW state. In Probe-BW, the pacing rate is cycled between 8 different gain values, each maintained for approximately the duration of a Round-Trip Time (RTT). In the first phase of the cycle, the pacing gain increases the number of packets in the network by 25%; this is followed by a decrease of 25%, and then by six phases of no change (a gain of 1.0).
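
A sketch of this gain cycle, assuming the commonly cited eight-value sequence of 1.25, 0.75, then six rounds at 1.0 (real implementations also randomise the cycle's starting phase):

```python
# Probe-BW pacing-gain cycle: eight values, each held for roughly one
# RTT. 1.25 probes for extra bandwidth, 0.75 drains any queue the probe
# created, and six rounds of 1.0 cruise at the current estimate.

PROBE_BW_GAINS = [1.25, 0.75, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]

def pacing_rate_bps(btlbw_bps, rtt_count):
    """Pacing rate for the current position in the gain cycle."""
    gain = PROBE_BW_GAINS[rtt_count % len(PROBE_BW_GAINS)]
    return gain * btlbw_bps

# Example: a 100 Mbit/s bottleneck estimate over one full cycle.
for rtt in range(8):
    print(f"RTT {rtt}: pacing at {pacing_rate_bps(100e6, rtt) / 1e6:.0f} Mbit/s")
```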

 

In the final phase, the Probe-RTT state, BBR attempts to determine the network “delay” – the time taken to send a packet and receive an acknowledgement from the receiver. The reason for this state lies in a measurement uncertainty principle: if the bottleneck link is full, it is possible to determine the maximum sending rate but not the minimum delay, and vice versa. Details of the precise points of transition between the different states can be found in the draft IETF RFC.

 

3. BBR's new "sharing-is-caring" model

 

The third distinctive feature of BBR is that – unlike TCP CUBIC – sharing is effected by calculating the point at which the network queue is empty (sending at the maximum rate) rather than the point at which the network queue is full (packets are being dropped).

 

Specifically, in the Probe-BW phase described above, bigger flows yield much more bandwidth to the smaller flows, resulting in each flow “learning” its fair share of the available bandwidth. However, unfairness can still persist if shorter/later flows overestimate the Round-Trip Time. It is therefore necessary to combine the operation of the Probe-BW phase with the effects of the Probe-RTT phase, during which flows drain the number of packets in the network to a bare minimum so that each flow can revise its estimate of the delay (RTT). Synchronisation is effected by the feature that starts a new Probe-RTT cycle every time a reduced delay measurement is observed.

 

It is noted that "new" BBR flows tend to contend for capacity much more aggressively by filling the PIPE in the Startup phase, but quickly settle to a fair share of the available bandwidth. SureLink-XG analysis shows that CUBIC's loss-based approach may result in flows failing to achieve their fair share of network resources and thus delivering data at very low rates. SureLink-XG analysis clearly shows that it is indeed possible for a multi-Gigabit infrastructure to deliver only a few Mbps over intercontinental distances, as described in the article BBR: Congestion-Based Congestion Control.
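
A hedged back-of-envelope using the standard CUBIC window function suggests why. After a loss, CUBIC needs K = (W_max(1 − β)/C)^(1/3) seconds to regain its previous window, and on a high-BDP path W_max (in packets) is enormous; the link parameters below are assumptions chosen for illustration:

```python
# Time for a CUBIC flow to return to its pre-loss window after a single
# loss, using the standard CUBIC constants. Link speed and RTT are
# assumed values chosen to represent a "Long Fat PIPE".

BETA = 0.7        # multiplicative-decrease factor (window drops to 0.7*W_max)
C = 0.4           # CUBIC scaling constant (packets per second cubed)
MSS_BYTES = 1448

def recovery_time_s(bw_bps, rtt_s):
    w_max = (bw_bps * rtt_s) / (MSS_BYTES * 8)   # window (packets) at full rate
    return (w_max * (1 - BETA) / C) ** (1 / 3)

# Intercontinental 10 Gbit/s path with a 150 ms RTT:
print(f"~{recovery_time_s(10e9, 0.15):.0f} s to regain full rate after one loss")
```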

 

0 – 25 Mbps in 30 seconds!

 

SureLink-XG was used to calculate the amount of capacity required to support Netflix HD vs. Netflix U-HD vs. Google Stadia U-HD game streaming based on the BBR congestion control algorithm and real-world connection arrival rates, using datasets from the MAWI working group traffic archive. The performance of CUBIC was then determined for the BBR-optimised capacity over an evaluation period of 30 seconds, a period during which long-lived flows had achieved steady state (Congestion Avoidance and Probe-BW respectively).

 

 

[Table 1 – If the Bottleneck Bandwidth is equivalent to the width of a PIPE and the RTT is equivalent to its length, SureLink-XG analysis shows that BBR significantly outperforms CUBIC, by more than 100% in terms of download speed over a 30-second interval on a "Long Fat PIPE".]

 

 

The performance improvements owing to BBR, and its stability in delivering high throughput, have the potential to drastically reduce the amount of network capacity that needs to be provisioned. It can be expected that flows will achieve a fair share of the available capacity, with predictable performance levels for a given Internet connection arrival rate.

 

SureLink-XG's sophisticated approach takes full account of the complexity of BBR's exact operation when determining the exact amount of "just-in-time" capacity that delivers optimal performance levels while also minimising network capacity provisioning costs.

 

 

Acknowledgements: 

 

This work is dedicated to my daughters, nieces and nephews – here's to a bright and positive future!

 

 

 

 

 
