A Quality of Experience and Smart Queue Management system for ISPs. Leverage CAKE to improve network responsiveness, enforce bandwidth plans, and reduce bufferbloat.

LibreQoS

LibreQoS is designed for Internet Service Providers (such as Fixed Wireless Internet Service Providers) to manage customer traffic, improve the customer experience, prevent bufferbloat, and keep the network responsive.

Because customers see better performance, ISPs receive fewer support tickets and calls, and carry less network traffic thanks to fewer retransmissions.

A sub-$600 computer running LibreQoS should be able to shape traffic for hundreds or thousands of customers at 2 Gbps.

How does LibreQoS work?

ISPs use LibreQoS to enforce customer plan bandwidth, improve responsiveness, reduce latency, reduce bufferbloat, and improve overall network performance.

LibreQoS runs on a computer that sits between your upstream provider and the core of your network (see graphic below). It manages all customer traffic using HTB for rate shaping, paired with the CAKE or fq_codel Active Queue Management (AQM) algorithms.

LibreQoS directs each customer's traffic into a Hierarchy Token Bucket (HTB) class, where traffic can be shaped both by Access Point capacity and by the subscriber's allocated plan bandwidth.
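To make the HTB + CAKE pairing concrete, here is a minimal sketch in the spirit of how LibreQoS drives tc from Python. The interface name, tc handles/class IDs, and the 95% rate headroom are illustrative assumptions, not LibreQoS's actual values.

```python
# Illustrative sketch: building the tc commands for one subscriber.
# Interface name, handles, class IDs, and the 5% headroom are hypothetical.

def build_client_commands(interface, parent, classid, down_mbps):
    """Return tc commands that create an HTB class capped at the
    subscriber's plan rate, with a CAKE qdisc attached as its leaf."""
    return [
        # HTB class enforces the plan rate (rate slightly below ceil
        # so the leaf qdisc, not the link, is where queuing happens).
        f"tc class add dev {interface} parent {parent} classid {classid} "
        f"htb rate {down_mbps * 0.95:.0f}mbit ceil {down_mbps}mbit",
        # CAKE leaf qdisc provides fair queuing and AQM inside that class.
        f"tc qdisc add dev {interface} parent {classid} cake diffserv4",
    ]

commands = build_client_commands("eth1", "1:1", "1:5", down_mbps=100)
for cmd in commands:
    print(cmd)
```

Attaching the AQM qdisc as the leaf of a rate-limited HTB class is what keeps the queue, and therefore the latency, under LibreQoS's control rather than in upstream buffers.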

Who should use LibreQoS?

The target for LibreQoS is ISPs that have a modest number of subscribers. LibreQoS runs on an inexpensive computer and handles hundreds or thousands of subscribers.

Individuals can reduce bufferbloat or latency on their home internet connections (whether or not their service provider offers an AQM solution) by using a router that supports fq_codel, such as an IQrouter or Ubiquiti's EdgeRouter-X (be sure to enable fq_codel under advanced queue settings), or by installing OpenWrt or DD-WRT on their existing router.

Large Internet Service Providers with significantly more subscribers may benefit from using commercially supported alternatives with NMS/CRM integrations such as Preseem or Saisei. See the table below.

A comparison of LibreQoS and Preseem

╔══════════════════════╦══════════════════════╦══════════════════╗
║ Feature              ║ LibreQoS             ║ Preseem          ║
╠══════════════════════╬══════════════════════╬══════════════════╣
║ IPv4                 ║ ✔                    ║ ✔                ║
╠══════════════════════╬══════════════════════╬══════════════════╣
║ IPv6                 ║ v0.8 only            ║ ✔                ║
╠══════════════════════╬══════════════════════╬══════════════════╣
║ fq_codel             ║ ✔                    ║ ✔                ║
╠══════════════════════╬══════════════════════╬══════════════════╣
║ cake                 ║ ✔                    ║                  ║
╠══════════════════════╬══════════════════════╬══════════════════╣
║ Fair Queuing         ║ ✔                    ║ ✔                ║
╠══════════════════════╬══════════════════════╬══════════════════╣
║ VoIP Prioritization  ║ ✔ cake diffserv4 [1] ║                  ║
╠══════════════════════╬══════════════════════╬══════════════════╣
║ Video Prioritization ║ ✔ cake diffserv4 [1] ║                  ║
╠══════════════════════╬══════════════════════╬══════════════════╣
║ CRM Integration      ║                      ║ ✔                ║
╠══════════════════════╬══════════════════════╬══════════════════╣
║ Metrics              ║                      ║ ✔                ║
╠══════════════════════╬══════════════════════╬══════════════════╣
║ Shape By             ║ Site, AP, Client     ║ Site, AP, Client ║
╠══════════════════════╬══════════════════════╬══════════════════╣
║ Throughput           ║ 10G+ (v0.9+)         ║ 20G+ [2]         ║
╚══════════════════════╩══════════════════════╩══════════════════╝

How do Cake and fq_codel work?

These AQM techniques direct each customer's traffic into its own queue, where LibreQoS can shape it both by Access Point capacity and by the subscriber's allocated plan bandwidth.

The difference is dramatic: the chart below shows the ping times during a Realtime Response Under Load (RRUL) test before and after enabling LibreQoS AQM. The RRUL test sends full-rate traffic in both directions, then measures latency during the transfer. Note that the latency drops from ~20 msec (green, no LibreQoS) to well under 1 msec (brown, using LibreQoS).

The impact of fq_codel on a 3000Mbps connection vs hard rate limiting — a 30x latency reduction.

“FQ_Codel provides great isolation... if you've got low-rate videoconferencing and low rate web traffic they never get dropped. A lot of issues with IW10 go away, because all the other traffic sees is the front of the queue. You don't know how big its window is, but you don't care because you are not affected by it. FQ_Codel increases utilization across your entire networking fabric, especially for bidirectional traffic... If we're sticking code into boxes to deploy codel, don't do that. Deploy fq_codel. It's just an across the board win.”

  • Van Jacobson | IETF 84 Talk

References

Typical Client Results

Here are the DSLReports Speed Test results for a Fixed Wireless client averaging 20ms to the test server. LibreQoS keeps added latency below 5ms in each direction.

Network Design

  • Edge and Core routers with MTU 1500 on links between them
    • If you use MPLS, you would terminate MPLS traffic at the core router. LibreQoS cannot decapsulate MPLS on its own.
  • OSPF primary link (low cost) through the server running LibreQoS
  • OSPF backup link

Diagram

v0.8 (Stable - IPv4 & IPv6) 2 July 2021

Features

  • Dual stack: client can be shaped by same qdisc for both IPv4 and IPv6
  • Up to 1000 clients (IPv4/IPv6)
  • Real-world asymmetrical throughput: between 2Gbps and 4.5Gbps, depending on CPU single-thread performance.
  • HTB+fq_codel or HTB+cake
  • Shape Clients by Access Point / Node capacity
  • TC filters split into groups through hashing filters to increase throughput
  • Simple client management via csv file
  • Simple statistics - table shows top 20 subscribers by packet loss, with APs listed
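The flat-file client management mentioned above can be sketched as follows. The column names and values here are hypothetical; the actual CSV schema used by v0.8 may differ.

```python
import csv
import io

# Hypothetical client CSV in the spirit of LibreQoS's flat-file
# management; the real column names in v0.8 may differ.
CLIENT_CSV = """\
deviceID,AP,ipv4,downloadMbps,uploadMbps
1001,AP_A,100.64.0.2,100,15
1002,AP_A,100.64.0.3,50,10
1003,AP_B,100.64.0.4,200,30
"""

def load_clients(text):
    """Parse the CSV into one dict per subscriber, with plan rates as ints."""
    clients = []
    for row in csv.DictReader(io.StringIO(text)):
        row["downloadMbps"] = int(row["downloadMbps"])
        row["uploadMbps"] = int(row["uploadMbps"])
        clients.append(row)
    return clients

clients = load_clients(CLIENT_CSV)
print(len(clients), "subscribers loaded")
```

Each parsed row carries everything the shaper needs: the subscriber's IP to match traffic on, the AP it hangs off (for per-AP capacity limits), and the plan rates for the HTB class.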

Limitations

  • Qdisc locking problem limits throughput of HTB used in v0.8 (solved in v0.9). Tested up to 4Gbps/500Mbps asymmetrical throughput using Microsoft Ethr with n=500 streams. High quantities of small packets will reduce max throughput in practice.
  • Linux tc hash tables can only handle ~4000 rules each. This limits total possible clients to 1000 in v0.8.

v0.9 (Stable - IPv4) 11 Jul 2021

Features

  • XDP-CPUMAP-TC integration greatly improves throughput, allows many more IPv4 clients, and lowers CPU use. Latency is reduced by half on networks previously limited by the single-CPU / TC qdisc locking problem in v0.8.
  • Tested up to 10Gbps asymmetrical throughput on dedicated server (lab only had 10G router). v0.9 is estimated to be capable of an asymmetrical throughput of 20Gbps-40Gbps on a dedicated server with 12+ cores.
  • MQ+HTB+fq_codel or MQ+HTB+cake
  • Now defaults to 'cake diffserv4' for optimal client performance
  • Client limit raised from 1,000 to 32,767
  • Shape Clients by Access Point / Node capacity
  • APs equally distributed among CPUs / NIC queues to greatly increase throughput
  • Simple client management via csv file

Considerations

  • Each Node / Access Point is tied to a queue and CPU core. Access Points are evenly distributed across CPUs. Since each CPU can usually only accommodate up to 4Gbps, ensure that no single Node / Access Point will require more than 4Gbps of throughput.
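The even distribution described above can be sketched as a simple round-robin assignment with a sanity check against the per-core ceiling. The AP names, capacities, and the exact 4Gbps figure are illustrative; LibreQoS's real mapping logic may differ.

```python
# Sketch of the per-CPU mapping described above: APs are dealt out
# round-robin across cores, and any AP whose capacity exceeds what a
# single core can shape (~4 Gbps) is flagged. Figures are illustrative.

PER_CORE_LIMIT_MBPS = 4000

def assign_aps_to_cores(aps, core_count):
    """Map each (name, capacity_mbps) AP to a CPU core index, round-robin."""
    assignment = {}
    for i, (name, capacity_mbps) in enumerate(aps):
        if capacity_mbps > PER_CORE_LIMIT_MBPS:
            raise ValueError(
                f"{name} needs {capacity_mbps} Mbps, but one core can "
                f"only shape ~{PER_CORE_LIMIT_MBPS} Mbps")
        assignment[name] = i % core_count
    return assignment

aps = [("AP_A", 1000), ("AP_B", 2500), ("AP_C", 500), ("AP_D", 3000)]
assignment = assign_aps_to_cores(aps, core_count=2)
print(assignment)
```

Because an AP cannot be split across cores, the per-core throughput ceiling becomes a hard per-AP ceiling, which is exactly why the consideration above matters when planning capacity.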

Limitations

  • Not dual stack: in v0.9, clients can only be shaped by IPv4 address. Once IPv6 support is added to XDP-CPUMAP-TC, we can shape IPv6 as well.
  • XDP's cpumap-redirect achieves higher throughput on a server with direct access to the NIC (XDP offloading possible) vs as a VM with bridges (generic XDP).

v1.0 (Stable - IPv4) 11 Dec 2021

Features

  • Can now shape by Site, in addition to by AP and by Client

Considerations

  • If you shape by Site, each Site is tied to a queue and CPU core. Sites are evenly distributed across CPUs. Since each CPU can usually only accommodate up to 4Gbps, ensure that no single Site will require more than 4Gbps of throughput.
  • If you shape by Access Point, each Access Point is tied to a queue and CPU core. Access Points are evenly distributed across CPUs. Since each CPU can usually only accommodate up to 4Gbps, ensure that no single Access Point will require more than 4Gbps of throughput.

Limitations

  • As with v0.9, not yet dual stack: clients can only be shaped by IPv4 address until IPv6 support is added to XDP-CPUMAP-TC. Once that happens, we can shape IPv6 as well.
  • XDP's cpumap-redirect achieves higher throughput on a server with direct access to the NIC (XDP offloading possible) vs as a VM with bridges (generic XDP).

General Requirements

  • VM or physical server. A physical server will perform better and make fuller use of all CPU cores.
  • One management network interface, completely separate from the traffic-shaping interfaces.
  • NIC supporting two interfaces for traffic shaping.
  • Ubuntu Server recommended. Ubuntu Desktop is not recommended as it uses NetworkManager instead of Netplan.
  • v0.9: Requires kernel version 5.9 or above on physical servers, and kernel version 5.14 or above in VMs.
  • v0.8: Requires kernel version 5.1 or above.
  • Python 3, PIP, and some modules (listed in respective guides).
  • Choose a CPU with solid single-thread performance within your budget. Generally speaking, any new CPU above $200 can probably handle shaping up to 2Gbps.

Installation and Usage Guide

Best Performance, IPv4 Only:

📄 LibreQoS v0.9 Installation & Usage Guide Physical Server and Ubuntu 21.10

Good Performance, IPv4 Only:

📄 LibreQoS v0.9 Installation & Usage Guide Proxmox and Ubuntu 21.10

OK Performance, IPv4 and IPv6:

📄 LibreQoS 0.8 Installation and Usage Guide - Proxmox and Ubuntu 20.04 LTS

Donate

LibreQoS itself is Open-Source/GPL software: there is no cost to use it.

LibreQoS makes great use of fq_codel - an open source project led by Dave Täht, and contributed to by dozens of others. Without Dave's work, there would be no LibreQoS, Preseem, or Saisei.

If LibreQoS helps your network, please consider donating to Dave's Patreon account. Donating just $0.20/sub/month ($100/month for 500 subs) comes out to roughly 60% less than typical proprietary solutions, and you help ensure continued development of fq_codel and its successor, CAKE.

Donate

Special Thanks

Special thanks to Dave Täht, Jesper Dangaard Brouer, Toke Høiland-Jørgensen, Kumar Kartikeya Dwivedi, Maxim Mikityanskiy, Yossi Kuperman, and Rony Efraim for their many contributions to the Linux networking stack. Thank you Phil Sutter, Bert Hubert, Gregory Maxwell, Remco van Mook, Martijn van Oosterhout, Paul B Schroeder, and Jasper Spaans for contributing to the guides and documentation listed below. Thanks to Leo Manuel Magpayo for his help improving documentation and for testing. Thanks to everyone on the Bufferbloat mailing list for your help and contributions.


License

Copyright (C) 2020-2021 Robert Chacón

LibreQoS is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version.

LibreQoS is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with LibreQoS. If not, see http://www.gnu.org/licenses/.