mirror of https://github.com/LibreQoE/LibreQoS.git
synced 2025-02-25 18:55:32 -06:00

feat: ✨ move all existing wiki docs into RTD
120 docs/ChangeNotes/v1.4.md Normal file
@@ -0,0 +1,120 @@
# LibreQoS v1.3.1 to v1.4 Change Summary

Version 1.4 is a huge milestone: a whole new back-end, a new GUI, 30%+ performance improvements, and support for single-interface mode.

## Some Statistics

- **564** Commits since 1.3.1
- **28,399** Lines of Code
  - **10,142** lines of Rust
  - **5,448** lines of HTML & JavaScript
  - **3,126** lines of Python
  - **2,023** lines of C

## Peak Performance (So Far)

- Tested single-stream performance of just under 10 Gbit/s on a 16-core Xeon Gold (single-interface architecture, using 8 cores for each direction). The flow was shaped with Cake, and retained good (<10 ms RTT latency) performance.
- Tested 25 Gbit/s total throughput on the same system. CPU was not saturated; we didn't have a bigger network to test!
- Running live at ISPs with 11 Gbit/s of real customer traffic and plenty of room to grow.

## New Architecture

- Rust-based back-end provides:
  - `lqosd` - a daemon that:
    - Loads, sets up, and unloads the eBPF programs.
    - Gathers statistics directly from eBPF.
    - Provides a local "bus" for transporting data between components.
    - Sets "tunables", replacing the need for a separate offloading service.
  - `lqtop` - a console-based utility for viewing current activity.
  - `lqos_node_manager` - a web-based GUI that:
    - Monitors current activity.
    - Monitors system status.
    - Provides "best/worst" summaries of RTT.
    - Provides visibility into the working of queues.
    - Categorizes traffic to match your network hierarchy, letting you quickly find the bottlenecks.
    - Lets you browse and search your shaped devices.
    - Lists "unknown IP addresses" that are passing through the shaper but do not have a rule associated.
    - Allows you to view and edit the LibreQoS configuration.
  - `lqos_python` - provides Python access to the bus system.
  - `lqos_setup` - builds enough configuration files to get you started.
  - `lqos_users` - authentication for the GUIs.
- High-performance Python script:
  - Batches TC commands for fast execution.
  - Batches bus transactions to associate IP subnets with users for fast execution.
- Improved scheduler for InfluxDB graphing.

## High Performance Bridge (Bifrost)

- Optionally replace the Linux bridge system with an XDP-based bridge accelerator.
- Throughput is 30% higher in this mode.

## Packet and Flow Analysis (Heimdall)

- Viewing a circuit in the web UI displays a summary of IP traffic flows for that circuit.
- A "capture" button will capture packet headers, and allow nanosecond-level analysis of traffic data.
- You can download the packet captures in `libpcap` format, for analysis in Wireshark and similar tools.
- Configure the capture delay in `/etc/lqos.conf` (a sketch follows this list).
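For example, the capture duration might be set like this (a hypothetical excerpt; the key name `packet_capture_time` is an assumption to verify against the shipped `lqos.example`):

```toml
# /etc/lqos.conf (excerpt) - capture duration in seconds (assumed key name;
# check your installed lqos.conf / lqos.example for the exact spelling)
packet_capture_time = 10
```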
## Single-interface Mode

- Operate with a single network interface and VLANs for "in" and "out".

## Graphs

- Graph current throughput, shaped and unshaped.
- Graph CPU and RAM performance.
- Graph individual Cake shaper tins, backlog, delays.
- TCP "round trip time" histogram showing overall network latency performance.
- Per-network node traffic graph.
- Per-network node RTT latency histogram, to let you zero in on trouble spots.

## Miscellaneous

- `build_rust.sh` builds the entire package from a Git update, with minimal (<1 second) downtime.
- `build_dpkg.sh` assembles the entire system into an Ubuntu/Debian `.deb` installer.
- Sample `.service` files for `systemd` integration.
- Real-time adjustment to tunables.
- Redact text into Klingon to allow screenshots without sharing customer data.
- Preliminary support for reading IP data inside MPLS packets, as long as they are ordered "VLAN->MPLS->VPLS" and not the other way around.
- Automatically trim network trees that exceed 9 levels deep.
- Very accurate timing functions for better statistics.
- Greatly improved documentation.
- Improved rejection of TCP round-trip-time outliers (from long-polled connections).
- Improved Splynx and UISP integrations.

## Better Distribution

> This is in alpha testing. It has worked on some test setups, but needs production testing.

Installation via `apt-get` and LibreQoS's own repo. Add the `libreqos` repo, and you can use `apt-get` to install/update the traffic shaper. This doesn't get you the development toolchain.

```sh
echo "deb http://stats.libreqos.io/ubuntu jammy main" | sudo tee /etc/apt/sources.list.d/libreqos.list
wget -O - -q http://stats.libreqos.io/repo.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get install libreqos
```

You will be asked some questions about your configuration, and the management daemon and webserver will automatically start. Go to `http://<your_ip>:9123/` to finish installation.

## Gallery

### Node Manager - Dashboard


*The node manager displays current activity on your network*

### Node Manager - Circuit View


*Find out exactly what's going on inside each circuit, monitoring all of the queue stats - you can even view the details of each category tin*

### Node Manager - Flow Analysis


*Analyze what's going on for a specific client, viewing real-time traffic flow data. No need to run `torch` or equivalent on their router. Ideal for finding connectivity problems.*

### Node Manager - Packet Capture


*Capture traffic and analyze per-packet in an intuitive, zoomable traffic analysis system. You can view down to nanosecond granularity to find performance problems, and see what's really going on with your traffic. Click "Download PCAP Dump" to analyze the same data in Wireshark.*
434 docs/Legacy/v1.3.1.md Normal file
@@ -0,0 +1,434 @@
# LibreQoS v1.3.1

## LibreQoS v1.3.1 Installation & Usage Guide - Physical Server and Ubuntu 22.04

## Notes for upgrading from v1.2 or prior

### Custom CRM Integrations

If you use a custom CRM integration, please ensure your integration uses a unique circuit identifier for the 'Circuit ID' field in ShapedDevices.csv. This is now required in v1.3 in order to make partial reloading possible. A good choice for this ID would be the unique ID of the internet service plan, or the subscriber site ID your CRM provides for customer service locations. Multiple devices within the same circuit would use the same 'Circuit ID', but aside from that, all Circuit IDs should be distinct, as in the sketch below. The built-in Splynx and UISP integrations for v1.3 handle this automatically.
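For example, two devices at one customer site share a Circuit ID, while every other circuit gets its own (an abbreviated, hypothetical sketch; follow ShapedDevices.example.csv for the authoritative column layout):

```text
Circuit ID, Circuit Name,    Device Name, IPv4
1001,       Smith Residence, Router,      100.64.1.10
1001,       Smith Residence, IPTV Box,    100.64.1.11
1002,       Jones Residence, Router,      100.64.1.12
```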
## Network Design Assumptions

Officially supported configuration:

- Edge and Core routers with MTU 1500 on links between them
- If you use MPLS, you would terminate MPLS traffic at the core router. LibreQoS cannot decapsulate MPLS on its own.
- OSPF primary link (low cost) through the server running LibreQoS
- OSPF backup link



It is possible to use LibreQoS in-line without a core router, but that setup requires depending on STP instead of OSPF, which can cause issues. Such configurations are not officially supported.

## Network Interface Card

LibreQoS requires a NIC with 2 or more RX/TX queues and XDP support. While many cards theoretically meet these requirements, less commonly used cards tend to have unreported driver bugs which impede XDP functionality and make them unusable for our purposes. At this time we can only recommend Intel x520, Intel x710, and Nvidia (ConnectX-5 or newer) NICs.

## Server Setup

Disable hyperthreading in the BIOS/UEFI of your host system. Hyperthreading is also known as Simultaneous Multithreading (SMT) on AMD systems. Disabling this is very important for optimal performance of the XDP cpumap filtering and, in turn, throughput and latency.

- Boot, pressing the appropriate key to enter the BIOS settings
- For AMD systems, you will have to navigate the settings to find the "SMT Control" setting. Usually it is under something like ```Advanced -> AMD CBS -> CPU Common Options -> Thread Enablement -> SMT Control```. Once you find it, switch it to "Disabled" or "Off".
- For Intel systems, you will also have to navigate the settings to find the "Hyperthreading" toggle option. On HP servers it's under ```System Configuration > BIOS/Platform Configuration (RBSU) > Processor Options > Intel (R) Hyperthreading Options```.
- Save changes and reboot

## Install Ubuntu

Download Ubuntu Server 22.04 from [https://ubuntu.com/download/server](https://ubuntu.com/download/server).

1. Boot Ubuntu Server from USB.
2. Follow the steps to install Ubuntu Server.
3. If you use a Mellanox network card, the Ubuntu Server installer will ask you whether to install the Mellanox/Intel NIC drivers. Check the box to confirm. This extra driver is important.
4. On the Networking settings step, it is recommended to assign a static IP address to the management NIC.
5. Ensure the SSH server is enabled so you can more easily log into the server later.
6. You can use scp or sftp to access files on your LibreQoS server for easier file editing, as shown below. Here's how to access via scp or sftp using an [Ubuntu](https://www.addictivetips.com/ubuntu-linux-tips/sftp-server-ubuntu/) or [Windows](https://winscp.net/eng/index.php) machine.
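For example, from another Linux machine (assuming the server's management IP is 10.0.0.12 and your user is `YOUR_USERNAME`; both are placeholders):

```shell
# Pull ispConfig.py from the LibreQoS server for local editing...
scp YOUR_USERNAME@10.0.0.12:/home/YOUR_USERNAME/LibreQoS/src/ispConfig.py .
# ...then push the edited copy back to the server
scp ispConfig.py YOUR_USERNAME@10.0.0.12:/home/YOUR_USERNAME/LibreQoS/src/
```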
## Use Installer Script (For Sponsors - Skip If Not Applicable)

Sponsors can use the LibreQoS-Installer script. This script does the following:

- Disables IRQbalance
- Disables required offloading types using a service
- Creates a bridge between two interfaces - applied by the above service at each boot
- Installs LibreQoS and cpumap-pping

Once complete - skip to [this section](https://github.com/LibreQoE/LibreQoS/wiki/LibreQoS-v1.3-Installation-&-Usage-Guide-Physical-Server-and-Ubuntu-22.04#install-influxdb-for-graphing) of the guide.

## Setup

### Disable IRQbalance

```shell
sudo systemctl stop irqbalance
sudo systemctl disable irqbalance
```

### Disable Offloading

We need to disable certain hardware offloading features, as they break XDP, which XDP-CPUMAP-TC uses to send traffic to the appropriate CPUs.
You can create a bash script to disable these offload features upon boot.

```shell
sudo nano /usr/local/sbin/offloadOff.sh
```

Enter the following

```shell
#!/bin/sh
ethtool --offload eth1 gso off tso off lro off sg off gro off
ethtool --offload eth2 gso off tso off lro off sg off gro off
```

Replace eth1 and eth2 with your two shaper interfaces (order doesn't matter).
Then create

```shell
sudo nano /etc/systemd/system/offloadOff.service
```

With the following

```text
[Unit]
After=network.service

[Service]
ExecStart=/usr/local/sbin/offloadOff.sh

[Install]
WantedBy=default.target
```

Then change permissions and enable the service with

```shell
sudo chmod 664 /etc/systemd/system/offloadOff.service
sudo chmod 744 /usr/local/sbin/offloadOff.sh
sudo systemctl daemon-reload
sudo systemctl enable offloadOff.service
sudo reboot
```

### Add a bridge between edge/core interfaces

From the Ubuntu VM, create a Linux interface bridge - br0 - with the two shaping interfaces.
Find your existing .yaml file in /etc/netplan/ with

```shell
cd /etc/netplan/
ls
```

Then edit the .yaml file there with

```shell
sudo nano XX-cloud-init.yaml
```

with XX corresponding to the name of the existing file.

Editing the .yaml file, we need to define the shaping interfaces (here, ens19 and ens20) and add the bridge with those two interfaces. Assuming your interfaces are ens18, ens19, and ens20, here is what your file might look like:

```yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens18:
      addresses:
        - 10.0.0.12/24
      routes:
        - to: default
          via: 10.0.0.1
      nameservers:
        addresses:
          - 1.1.1.1
          - 8.8.8.8
        search: []
    ens19:
      dhcp4: no
    ens20:
      dhcp4: no
  version: 2
  bridges:
    br0:
      interfaces:
        - ens19
        - ens20
```

Make sure to replace 10.0.0.12/24 with your LibreQoS VM's address and subnet, and to replace the default gateway 10.0.0.1 with whatever your default gateway is.

Then run

```shell
sudo netplan apply
```
### Install LibreQoS and dependencies

Change to your preferred directory, download the latest release, and install the dependencies:

```shell
cd /home/$USER/
sudo apt update
sudo apt install python3-pip clang gcc gcc-multilib llvm libelf-dev git nano graphviz
git clone https://github.com/rchac/LibreQoS.git
cd LibreQoS
git checkout v1.3.1
python3 -m pip install -r requirements.txt
sudo python3 -m pip install -r requirements.txt
```

### Install and compile cpumap-pping

```shell
cd /home/$USER/LibreQoS/src
git submodule update --init
cd cpumap-pping/
git submodule update --init
cd src/
make
```

### Install InfluxDB for Graphing

To install InfluxDB 2.x, follow the steps at [https://portal.influxdata.com/downloads/](https://portal.influxdata.com/downloads/).

For high throughput networks (5+ Gbps) you will likely want to install InfluxDB on a separate machine or VM from that of the LibreQoS server to avoid CPU load.

Restart your system that is running InfluxDB

```shell
sudo reboot
```

Check to ensure InfluxDB is running properly. This command should show "Active: active" with a green dot.

```shell
sudo service influxdb status
```

Check that the Web UI is running by visiting:

```text
http://SERVER_IP_ADDRESS:8086
```

Create Bucket

- Data > Buckets > Create Bucket

Call the bucket "libreqos" (all lowercase, without quotes).
Have it store as many days of data as you prefer. 7 days is standard.

Import Dashboard

- Boards > Create Dashboard > Import Dashboard

Then upload the file [influxDBdashboardTemplate.json](https://github.com/rchac/LibreQoS/blob/main/src/influxDBdashboardTemplate.json) to InfluxDB.

[Generate an InfluxDB Token](https://docs.influxdata.com/influxdb/cloud/security/tokens/create-token/). It will be added to ispConfig.py in the following steps.

### Modify ispConfig.py

Copy ispConfig.example.py to ispConfig.py and edit as needed (a sketch follows the list below).

```shell
cd /home/$USER/LibreQoS/src/
cp ispConfig.example.py ispConfig.py
nano ispConfig.py
```

- Set upstreamBandwidthCapacityDownloadMbps and upstreamBandwidthCapacityUploadMbps to match the bandwidth in Mbps of your network's upstream / WAN internet connection. The same can be done for generatedPNDownloadMbps and generatedPNUploadMbps.
- Set interfaceA to the interface facing your core router (or bridged internal network if your network is bridged)
- Set interfaceB to the interface facing your edge router
- Set ```enableActualShellCommands = True``` to allow the program to actually run the commands.
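For example, the relevant lines in ispConfig.py might look like this (a minimal sketch with placeholder values; only the variable names mentioned above are taken from this guide):

```python
# ispConfig.py (excerpt) - placeholder values, adjust to your network
upstreamBandwidthCapacityDownloadMbps = 1000
upstreamBandwidthCapacityUploadMbps = 1000
generatedPNDownloadMbps = 1000
generatedPNUploadMbps = 1000
interfaceA = 'eth1'  # facing the core router
interfaceB = 'eth2'  # facing the edge router
enableActualShellCommands = True
```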
### Integrations

Integrations now share a common framework thanks to [this pull](https://github.com/rchac/LibreQoS/pull/145). This also allows for graphing the network topology with graphviz.

#### UISP Integration

To run the UISP Integration, use

```shell
python3 integrationUISP.py
```

On the first successful run, it will create a network.json and ShapedDevices.csv file.
If a network.json file exists, it will not be overwritten.
You can modify the network.json file to more accurately reflect bandwidth limits.
ShapedDevices.csv will be overwritten every time the UISP integration is run.
You have the option to run integrationUISP.py automatically on boot and every 30 minutes, which is recommended. This can be enabled by setting ```automaticImportUISP = True``` in ispConfig.py

### Network.json

Network.json allows ISP operators to define a Hierarchical Network Topology, or Flat Network Topology.

For networks with no Parent Nodes (no strictly defined Access Points or Sites), edit network.json to use a Flat Network Topology by running ```nano network.json``` and setting the following file content:

```json
{}
```

If you plan to use the built-in UISP or Splynx integrations, you do not need to create a network.json file quite yet.

If you plan to use the built-in UISP integration, it will create this automatically on its first run (assuming network.json is not already present). You can then modify the network.json to more accurately reflect your topology.

If you will not be using an integration, you can manually define the network.json following the template file - network.example.json

```text
+-----------------------------------------------------------------------+
|                            Entire Network                             |
+-----------------------+-----------------------+-----------------------+
|      Parent Node A    |      Parent Node B    |      Parent Node C    |
+-----------------------+-------+-------+-------+-----------------------+
| Parent Node D | Sub 3 | Sub 4 | Sub 5 | Sub 6 | Sub 7 | Parent Node F |
+-------+-------+-------+-------+-------+-------+-------+-------+-------+
| Sub 1 | Sub 2 |       |       |       |       |       | Sub 8 | Sub 9 |
+-------+-------+-------+-----------------------+-------+-------+-------+
```

#### Manual Editing

- Modify the network.json file using your preferred JSON editor (Geany for example)
- Parent node name must match that used for clients in ShapedDevices.csv

### ShapedDevices.csv

If you are using an integration, this file will be automatically generated. If you are not using an integration, you can manually edit the file.

- Modify the ShapedDevices.csv file using your preferred spreadsheet editor (LibreOffice Calc, Excel, etc), following the template file - ShapedDevices.example.csv
- An IPv4 address or IPv6 address is required for each entry.
- The Access Point or Site name should be set in the Parent Node field. Parent Node can be left blank for flat networks.
- The ShapedDevices.csv file allows you to set minimum guaranteed, and maximum allowed bandwidth per subscriber.
- The minimum allowed plan rate for Circuits is 2 Mbit. Bandwidth min and max should both be above that threshold.
- Recommendation: set the min bandwidth to something like 25/10 and max to 1.15X the advertised plan rate by using bandwidthOverheadFactor = 1.15 (see the worked example after this list)
  - This way, when an AP hits its ceiling, users have any remaining AP capacity fairly distributed between them.
  - Ensure a reasonable bandwidth minimum for every subscriber, allowing them to utilize up to the maximum provided when AP utilization is below 100%.
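Worked example: a subscriber on a 100/20 Mbps plan with bandwidthOverheadFactor = 1.15 would get a maximum of 115/23 Mbps, with the minimum set to something like 25/10 Mbps.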
Note regarding SLAs: For customers with SLA contracts that guarantee them a minimum bandwidth, set their plan rate as the minimum bandwidth. That way when an AP approaches its ceiling, SLA customers will always get that amount.



## How to run LibreQoS

### One-Time / Debug Runs

One-time runs show the response from the terminal for each filter rule applied, and can be very helpful for debugging and to make sure it is correctly configured.

- Modify setting parameters in ispConfig.py to suit your environment
- For one-time runs, use

```shell
sudo ./LibreQoS.py
```

- To use the debug mode with more verbose output, use:

```shell
sudo ./LibreQoS.py --debug
```

### Running as a service

To run as a service, we create a systemd service to run scheduler.py.
scheduler.py does the following:

- On start: Run a full setup of queues
- Every 30 minutes: Update queues, pulling new configuration from CRM integration if enabled

On Linux distributions that use systemd, such as Ubuntu, we create

```shell
sudo nano /etc/systemd/system/LibreQoS.service
```

Then paste the text below, replacing "/home/YOUR_USERNAME/LibreQoS" with wherever you downloaded LibreQoS to. Be sure to replace YOUR_USERNAME with your actual username, because otherwise when the root user executes it, it will look in the wrong directory.

```text
[Unit]
After=network.service

[Service]
WorkingDirectory=/home/YOUR_USERNAME/LibreQoS/src
ExecStart=/usr/bin/python3 /home/YOUR_USERNAME/LibreQoS/src/scheduler.py
ExecStopPost=/bin/bash -c '/usr/bin/python3 /home/YOUR_USERNAME/LibreQoS/src/LibreQoS.py --clearrules'
ExecStop=/bin/bash -c '/usr/bin/python3 /home/YOUR_USERNAME/LibreQoS/src/LibreQoS.py --clearrules'
Restart=always

[Install]
WantedBy=default.target
```

Then run

```shell
sudo chmod 664 /etc/systemd/system/LibreQoS.service
sudo systemctl daemon-reload
sudo systemctl enable LibreQoS.service
```

You can start the service using

```shell
sudo systemctl start LibreQoS.service
```

You can check the status of the service using

```shell
sudo systemctl status LibreQoS.service
```

You can restart the service to refresh any changes you've made to the ShapedDevices.csv file by doing

```shell
sudo systemctl restart LibreQoS.service
```

You can also stop the service to remove all queues and IP rules by doing

```shell
sudo systemctl stop LibreQoS.service
```

### Crontab

- At 4AM: Runs a full reload of all queues to make sure they perfectly match queueStructure.py and that any changes to network.json can be applied.

First, check to make sure the cron job does not already exist.

```shell
sudo crontab -l | grep -q 'LibreQoS' && echo 'entry exists' || echo 'entry does not exist'
```

The above should output "entry does not exist". If so, proceed to add it with:

```shell
(sudo crontab -l 2>/dev/null; echo "0 4 * * * /bin/systemctl try-restart LibreQoS") | sudo crontab -
sudo /etc/init.d/cron start
```

## Common Issues

### Program Running, But Traffic Not Shaping

In ispConfig.py, make sure the edge and core interfaces correspond correctly to the edge and core. Try swapping the interfaces to see if shaping starts to work.

### RTNETLINK answers: Invalid argument

This tends to show up when the MQ qdisc cannot be added correctly to the NIC interface. This would suggest the NIC has insufficient RX/TX queues. Please make sure you are using the [recommended NICs](#network-interface-card).

## Performance Tuning

### OSPF

It is recommended to tune the OSPF timers of both OSPF neighbors (core and edge router) to minimize downtime upon a reboot of the LibreQoS server.

- hello interval
- dead interval
20 docs/Quickstart/networkdesignassumptions.md Normal file
@@ -0,0 +1,20 @@
## Network Design Assumptions

Officially supported configuration:

- LibreQoS placed inline in the network, usually between an edge router (NAT, firewall) and core router (distribution to sites across the network).
  - If you use NAT/CG-NAT, place LibreQoS inline south of where NAT is applied, as LibreQoS needs to shape internal addresses (100.64.0.0/12), not public post-NAT IPs.
- Edge and Core routers should have 1500 MTU on links between them.
- If you use MPLS, you would terminate MPLS traffic at the core router. LibreQoS cannot decapsulate MPLS on its own.
- OSPF primary link (low cost) through the server running LibreQoS.
- OSPF backup link (high cost, maybe 200 for example).



It is possible to use LibreQoS in-line without a core router, but that setup requires depending on STP instead of OSPF, which can cause issues. Such configurations are not officially supported.

### Network Interface Card

You must have one of these:

* a single NIC with two interfaces,
* two NICs with a single interface each,
* 2x VLAN interfaces (using one or two NICs).

LibreQoS requires NICs to have 2 or more RX/TX queues and XDP support. While many cards theoretically meet these requirements, less commonly used cards tend to have unreported driver bugs which impede XDP functionality and make them unusable for our purposes. At this time we recommend the Intel x520, Intel x710, and Nvidia (ConnectX-5 or newer) NICs. We cannot guarantee compatibility with other cards.
212 docs/Quickstart/quickstart-libreqos-1.4.md Normal file
@@ -0,0 +1,212 @@
## Install LibreQoS 1.4

### Updating from v1.3

#### Remove offloadOff.service

```
sudo systemctl disable offloadOff.service
sudo rm /usr/local/sbin/offloadOff.sh /etc/systemd/system/offloadOff.service
```

#### Remove cron tasks from v1.3

Run ```sudo crontab -e``` and remove any entries pertaining to LibreQoS from v1.3.

### Simple install via .Deb package (Recommended)

Use the deb package from the [latest v1.4 release](https://github.com/LibreQoE/LibreQoS/releases/).

### Complex install (Not Recommended)

#### Clone the repo

The recommended install location is `/opt/libreqos`.
Go to the install location, and clone the repo:

```
cd /opt/
git clone https://github.com/LibreQoE/LibreQoS.git libreqos
sudo chown -R YOUR_USER /opt/libreqos
```

By specifying `libreqos` at the end, git will ensure the folder name is lowercase.

#### Install Dependencies from apt and pip

You need to have a few packages from `apt` installed:

```
sudo apt-get install -y python3-pip clang gcc gcc-multilib llvm libelf-dev git nano graphviz curl screen pkg-config linux-tools-common linux-tools-`uname -r` libbpf-dev
```

Then you need to install some Python dependencies:

```
cd /opt/libreqos
python3 -m pip install -r requirements.txt
sudo python3 -m pip install -r requirements.txt
```

#### Install the Rust development system

Go to [RustUp](https://rustup.rs) and follow the instructions. Basically, run the following:

```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```

When Rust finishes installing, it will tell you to execute a command to place the Rust build tools into your path. You need to either execute this command or log out and back in again.

Once that's done, please run:

```
cd /opt/libreqos/src/
./build_rust.sh
```

This will take a while the first time, but it puts everything in the right place.

Now, to build the Rust crates, run:

```
cd rust
cargo build --all
```

### Configure LibreQoS

#### Configure lqos.conf

Copy the lqosd daemon configuration file to `/etc`:

```
cd /opt/libreqos/src
sudo cp lqos.example /etc/lqos.conf
```

Now edit the file to match your setup with

```
sudo nano /etc/lqos.conf
```

Change `enp1s0f1` and `enp1s0f2` to match your network interfaces. It doesn't matter which one is which. Note that the configuration pairs the interfaces: if the first line names `enp1s0f1`, its `redirect_to` parameter is `enp1s0f2`, and vice versa (replace both with your actual interface names).

- First Line: `name = "enp1s0f1", redirect_to = "enp1s0f2"`
- Second Line: `name = "enp1s0f2", redirect_to = "enp1s0f1"`

Then, if using Bifrost/XDP, set `use_xdp_bridge = true` under that same `[bridge]` section, as in the sketch below.
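Putting it together, the `[bridge]` section might look roughly like this (a sketch built from the lines above; the `interface_mapping`, `scan_vlans`, and `vlan_mapping` field names are assumptions taken from `lqos.example` and may vary between releases):

```
# /etc/lqos.conf (excerpt) - sketch only; compare against your shipped lqos.example
[bridge]
use_xdp_bridge = true
interface_mapping = [
    { name = "enp1s0f1", redirect_to = "enp1s0f2", scan_vlans = false },
    { name = "enp1s0f2", redirect_to = "enp1s0f1", scan_vlans = false }
]
vlan_mapping = []
```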
#### Configure ispConfig.py

Copy ispConfig.example.py to ispConfig.py and edit as needed

```
cd /opt/libreqos/src/
cp ispConfig.example.py ispConfig.py
nano ispConfig.py
```

* Set upstreamBandwidthCapacityDownloadMbps and upstreamBandwidthCapacityUploadMbps to match the bandwidth in Mbps of your network's upstream / WAN internet connection. The same can be done for generatedPNDownloadMbps and generatedPNUploadMbps.
* Set interfaceA to the interface facing your core router (or bridged internal network if your network is bridged)
* Set interfaceB to the interface facing your edge router
* Set ```enableActualShellCommands = True``` to allow the program to actually run the commands.

### Network.json

Network.json allows ISP operators to define a Hierarchical Network Topology, or Flat Network Topology.

For networks with no Parent Nodes (no strictly defined Access Points or Sites), edit network.json to use a Flat Network Topology by running ```nano network.json``` and setting the following file content:

```
{}
```

If you plan to use the built-in UISP or Splynx integrations, you do not need to create a network.json file quite yet.

If you plan to use the built-in UISP integration, it will create this automatically on its first run (assuming network.json is not already present). You can then modify the network.json to more accurately reflect your topology.

If you will not be using an integration, you can manually define the network.json following the template file - network.example.json (a hierarchical sketch follows the diagram below).

```
+-----------------------------------------------------------------------+
|                            Entire Network                             |
+-----------------------+-----------------------+-----------------------+
|      Parent Node A    |      Parent Node B    |      Parent Node C    |
+-----------------------+-------+-------+-------+-----------------------+
| Parent Node D | Sub 3 | Sub 4 | Sub 5 | Sub 6 | Sub 7 | Parent Node F |
+-------+-------+-------+-------+-------+-------+-------+-------+-------+
| Sub 1 | Sub 2 |       |       |       |       |       | Sub 8 | Sub 9 |
+-------+-------+-------+-----------------------+-------+-------+-------+
```
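A minimal hierarchical network.json might look like the following (an illustrative assumption based on the format of network.example.json; verify key names such as `downloadBandwidthMbps` against the template):

```
{
  "Parent Node A": {
    "downloadBandwidthMbps": 1000,
    "uploadBandwidthMbps": 1000,
    "children": {
      "Parent Node D": {
        "downloadBandwidthMbps": 500,
        "uploadBandwidthMbps": 500
      }
    }
  }
}
```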
#### Manual Setup

You can use

```
python3 csvToNetworkJSON.py
```

to convert manualNetwork.csv to a network.json file.
manualNetwork.csv can be copied from the template file, manualNetwork.template.csv

Note: The parent node name must match that used for clients in ShapedDevices.csv

### ShapedDevices.csv

If you are using an integration, this file will be automatically generated. If you are not using an integration, you can manually edit the file.

#### Manual Editing

* Modify the ShapedDevices.csv file using your preferred spreadsheet editor (LibreOffice Calc, Excel, etc), following the template file - ShapedDevices.example.csv
* Circuit ID is required. It must be a string of some sort (an int is fine; it gets parsed as a string). It must NOT include any number symbols (#).
* An IPv4 address or IPv6 address is required for each entry.
* The Access Point or Site name should be set in the Parent Node field. Parent Node can be left blank for flat networks.
* The ShapedDevices.csv file allows you to set minimum guaranteed, and maximum allowed bandwidth per subscriber.
* The minimum allowed plan rate for Circuits is 2 Mbit. Bandwidth min and max should both be above that threshold.
* Recommendation: set the min bandwidth to something like 25/10 and max to 1.15X the advertised plan rate by using bandwidthOverheadFactor = 1.15
  * This way, when an AP hits its ceiling, users have any remaining AP capacity fairly distributed between them.
  * Ensure a reasonable bandwidth minimum for every subscriber, allowing them to utilize up to the maximum provided when AP utilization is below 100%.

Note regarding SLAs: For customers with SLA contracts that guarantee them a minimum bandwidth, set their plan rate as the minimum bandwidth. That way when an AP approaches its ceiling, SLA customers will always get that amount.



### LibreQoS daemons

lqosd

* Manages the actual XDP code. Built with Rust.

lqos_node_manager

* Runs the GUI available at http://a.b.c.d:9123

lqos_scheduler

* lqos_scheduler handles statistics and performs continuous refreshes of LibreQoS' shapers, including pulling from any enabled CRM Integrations (UISP, Splynx).
* On start: Run a full setup of queues
* Every 10 seconds: Graph bandwidth and latency stats
* Every 30 minutes: Update queues, pulling new configuration from CRM integration if enabled

### Run daemons with systemd

You can set up `lqosd`, `lqos_node_manager`, and `lqos_scheduler` as systemd services.

```
sudo cp /opt/libreqos/src/bin/lqos_node_manager.service.example /etc/systemd/system/lqos_node_manager.service
sudo cp /opt/libreqos/src/bin/lqosd.service.example /etc/systemd/system/lqosd.service
sudo cp /opt/libreqos/src/bin/lqos_scheduler.service.example /etc/systemd/system/lqos_scheduler.service
```

Finally, run

```
sudo systemctl daemon-reload
sudo systemctl enable lqosd lqos_node_manager lqos_scheduler
```

You can now point a web browser at `http://a.b.c.d:9123` (replace `a.b.c.d` with the management IP address of your shaping server) and enjoy a real-time view of your network.

### Debugging lqos_scheduler

In the background, lqos_scheduler runs scheduler.py, which in turn runs LibreQoS.py

One-time runs of these individual components can be very helpful for debugging and to make sure everything is correctly configured.

First, stop lqos_scheduler

```
sudo systemctl stop lqos_scheduler
```

For one-time runs of LibreQoS.py, use

```
sudo ./LibreQoS.py
```

* To use the debug mode with more verbose output, use:

```
sudo ./LibreQoS.py --debug
```

To confirm that lqos_scheduler (scheduler.py) is able to work correctly, run:

```
sudo python3 scheduler.py
```

Once you have any errors eliminated, restart lqos_scheduler with

```
sudo systemctl start lqos_scheduler
```
107 docs/Quickstart/quickstart-prereq.md Normal file
@@ -0,0 +1,107 @@
## Server Setup - Pre-requisites

Disable hyperthreading in the BIOS/UEFI of your host system. Hyperthreading is also known as Simultaneous Multithreading (SMT) on AMD systems. Disabling this is very important for optimal performance of the XDP cpumap filtering and, in turn, throughput and latency.

* Boot, pressing the appropriate key to enter the BIOS settings
* For AMD systems, you will have to navigate the settings to find the "SMT Control" setting. Usually it is under something like ```Advanced -> AMD CBS -> CPU Common Options -> Thread Enablement -> SMT Control```. Once you find it, switch it to "Disabled" or "Off".
* For Intel systems, you will also have to navigate the settings to find the "Hyperthreading" toggle option. On HP servers it's under ```System Configuration > BIOS/Platform Configuration (RBSU) > Processor Options > Intel (R) Hyperthreading Options```.
* Save changes and reboot

### Install Ubuntu Server

We recommend Ubuntu Server because its kernel version tends to track closely with the mainline Linux releases. Our current documentation assumes Ubuntu Server. To run LibreQoS v1.4, Linux kernel 5.11 or greater is required, as 5.11 includes some important XDP patches. Ubuntu Server 22.04 uses kernel 5.13, which meets that requirement.

You can download Ubuntu Server 22.04 from [https://ubuntu.com/download/server](https://ubuntu.com/download/server).

1. Boot Ubuntu Server from USB.
2. Follow the steps to install Ubuntu Server.
3. If you use a Mellanox network card, the Ubuntu Server installer will ask you whether to install the Mellanox/Intel NIC drivers. Check the box to confirm. This extra driver is important.
4. On the Networking settings step, it is recommended to assign a static IP address to the management NIC.
5. Ensure the SSH server is enabled so you can more easily log into the server later.
6. You can use scp or sftp to access files on your LibreQoS server for easier file editing. Here's how to access via scp or sftp using an [Ubuntu](https://www.addictivetips.com/ubuntu-linux-tips/sftp-server-ubuntu/) or [Windows](https://winscp.net/eng/index.php) machine.

### Choose Bridge Type

There are two options for the bridge to pass data through your two interfaces:

* Bifrost XDP-Accelerated Bridge
* Regular Linux Bridge

The Bifrost Bridge is faster and generally recommended, but may not work perfectly in a VM setup using virtualized NICs.
To use the Bifrost bridge, skip the regular Linux bridge section below, and be sure to enable Bifrost/XDP in lqos.conf a few sections below.

### Adding a regular Linux bridge (if not using Bifrost XDP bridge)

From the Ubuntu VM, create a Linux interface bridge - br0 - with the two shaping interfaces.
Find your existing .yaml file in /etc/netplan/ with

```
cd /etc/netplan/
ls
```

Then edit the .yaml file there with

```
sudo nano XX-cloud-init.yaml
```

with XX corresponding to the name of the existing file.

Editing the .yaml file, we need to define the shaping interfaces (here, ens19 and ens20) and add the bridge with those two interfaces. Assuming your interfaces are ens18, ens19, and ens20, here is what your file might look like:

```
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens18:
      addresses:
        - 10.0.0.12/24
      routes:
        - to: default
          via: 10.0.0.1
      nameservers:
        addresses:
          - 1.1.1.1
          - 8.8.8.8
        search: []
    ens19:
      dhcp4: no
    ens20:
      dhcp4: no
  version: 2
  bridges:
    br0:
      interfaces:
        - ens19
        - ens20
```

Make sure to replace 10.0.0.12/24 with your LibreQoS VM's address and subnet, and to replace the default gateway 10.0.0.1 with whatever your default gateway is.

Then run

```
sudo netplan apply
```

### Install InfluxDB (Optional but Recommended)

InfluxDB allows you to track long-term stats beyond what lqos_node_manager can so far.

To install InfluxDB 2.x, follow the steps at [https://portal.influxdata.com/downloads/](https://portal.influxdata.com/downloads/).

For high throughput networks (5+ Gbps) you will likely want to install InfluxDB on a separate machine or VM from that of the LibreQoS server to avoid CPU load.

Restart your system that is running InfluxDB

```
sudo reboot
```

Check to ensure InfluxDB is running properly. This command should show "Active: active" with a green dot.

```
sudo service influxdb status
```

Check that the Web UI is running by visiting:

```
http://SERVER_IP_ADDRESS:8086
```

Create Bucket

* Data > Buckets > Create Bucket

Call the bucket `libreqos` (all lowercase).
Have it store as many days of data as you prefer. 7 days is standard.

Import Dashboard

* Boards > Create Dashboard > Import Dashboard

Then upload the file [influxDBdashboardTemplate.json](https://github.com/rchac/LibreQoS/blob/main/src/influxDBdashboardTemplate.json) to InfluxDB.

[Generate an InfluxDB Token](https://docs.influxdata.com/influxdb/cloud/security/tokens/create-token/). It will be added to ispConfig.py in the following steps, as sketched below.
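The InfluxDB settings in ispConfig.py look roughly like this (a sketch; the variable names are assumptions to be checked against ispConfig.example.py, and the values shown are placeholders):

```
# ispConfig.py (excerpt) - InfluxDB graphing settings (placeholder values)
influxDBEnabled = True
influxDBurl = "http://localhost:8086"
influxDBBucket = "libreqos"
influxDBOrg = "Your ISP Name Here"
influxDBtoken = "PASTE_YOUR_GENERATED_TOKEN_HERE"
```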
31 docs/Quickstart/share.md Normal file
@@ -0,0 +1,31 @@
## Share your before and after

We ask that you please share an anonymized screenshot of your LibreQoS deployment before (monitor only mode) and after (queuing enabled) to our [Matrix Channel](https://matrix.to/#/#libreqos:matrix.org). This helps us gauge the impact of our software. It also makes us smile.

1. Enable monitor only mode
2. Klingon mode (Redact customer info)
3. Screenshot
4. Resume regular queuing
5. Screenshot

### Enable monitor only mode

```shell
sudo systemctl stop lqos_scheduler
sudo systemctl restart lqosd
sudo systemctl restart lqos_node_manager
```

### Klingon mode

Please go to the Web UI and click Configuration. Toggle Redact Customer Information (screenshot mode) and then Apply Changes.

### Resume regular queuing

```shell
sudo systemctl start lqos_scheduler
```

### Screenshot

To generate a screenshot - please go to the Web UI and click Configuration. Toggle Redact Customer Information (screenshot mode), Apply Changes, and then return to the dashboard to take a screenshot.
55 docs/SystemRequirements/Compute.md Normal file
@@ -0,0 +1,55 @@
## System Requirements

### VM or physical server

* For VMs, NIC passthrough is required for optimal throughput and latency (XDP vs generic XDP). Using Virtio / bridging is much slower than NIC passthrough. Virtio / bridging should not be used for large amounts of traffic.

### CPU

* 2 or more CPU cores
* A CPU with solid [single-thread performance](https://www.cpubenchmark.net/singleThread.html#server-thread) within your budget. Queuing is very CPU-intensive, and requires high single-thread performance.

Single-thread CPU performance will determine the max throughput of a single HTB (CPU core), and in turn, what max speed plan you can offer customers.

| Customer Max Plan | Passmark Single-Thread |
| ----------------- | ---------------------- |
| 100 Mbps          | 1000                   |
| 250 Mbps          | 1500                   |
| 500 Mbps          | 2000                   |
| 1 Gbps            | 2500                   |
| 2 Gbps            | 3000                   |

Below is a table of approximate aggregate throughput capacity, assuming a CPU with a [single thread](https://www.cpubenchmark.net/singleThread.html#server-thread) performance of 2700 or greater:

| Aggregate Throughput | CPU Cores |
| -------------------- | --------- |
| 500 Mbps             | 2         |
| 1 Gbps               | 4         |
| 5 Gbps               | 6         |
| 10 Gbps              | 8         |
| 20 Gbps              | 16        |
| 50 Gbps*             | 32        |

(* Estimated)

So for example, an ISP delivering 1Gbps service plans with 10Gbps aggregate throughput would choose a CPU with a 2500+ single-thread score and 8 cores, such as the Intel Xeon E-2388G @ 3.20GHz.

### Memory

* Minimum RAM = 2 + (0.002 x Subscriber Count) GB (see the worked example below the table)
* Recommended RAM:

| Subscribers | RAM   |
| ----------- | ----- |
| 100         | 4 GB  |
| 1,000       | 8 GB  |
| 5,000       | 16 GB |
| 10,000*     | 18 GB |
| 50,000*     | 24 GB |

(* Estimated)
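For example, applying the minimum-RAM formula to 5,000 subscribers gives 2 + (0.002 x 5,000) = 12 GB; the recommended table rounds this up to 16 GB for headroom.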
### Server Recommendations

It is most cost-effective to buy a used server with specifications matching your unique requirements, as laid out in the System Requirements section above.
For those who do not have the time to do that, here are some off-the-shelf options to consider:

* 1 Gbps | [Supermicro SuperServer E100-9W-L](https://www.thinkmate.com/system/superserver-e100-9w-l)
* 10 Gbps | [Supermicro SuperServer 510T-ML (Choose E-2388G)](https://www.thinkmate.com/system/superserver-510t-ml)
* 20 Gbps | [Dell R450 Config](https://www.dell.com/en-us/shop/servers-storage-and-networking/poweredge-r450-rack-server/spd/poweredge-r450/pe_r450_15127_vi_vp?configurationid=a7663c54-6e4a-4c96-9a21-bc5a69d637ba)

The [AsRock 1U4LW-B6502L2T](https://www.thinkmate.com/system/asrock-1u4lw-b6502l2t/635744) can be a great lower-cost option as well.
10 docs/SystemRequirements/Networking.md Normal file
@@ -0,0 +1,10 @@
## Network Interface Requirements

* One management network interface completely separate from the traffic shaping interfaces. Usually this would be the Ethernet interface built into the motherboard.
* Dedicated Network Interface Card for Shaping Interfaces
  * NIC must have 2 or more interfaces for traffic shaping.
  * NIC must have multiple TX/RX transmit queues. [Here's how to check from the command line](https://serverfault.com/questions/772380/how-to-tell-if-nic-has-multiqueue-enabled) (see also the quick check below this list).
  * Known supported cards:
    * [NVIDIA Mellanox MCX512A-ACAT](https://www.fs.com/products/119649.html)
    * NVIDIA Mellanox MCX416A-CCAT
    * [Intel X710](https://www.fs.com/products/75600.html)
    * Intel X520
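As a quick check, `ethtool` can report how many queues (channels) a NIC exposes; the interface name `eth1` below is a placeholder:

```shell
# Show the NIC's RX/TX queue (channel) counts; "Combined" should be 2 or more
ethtool -l eth1
```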
12 docs/TechnicalDocs/extras.md Normal file
@@ -0,0 +1,12 @@
# Extras

## Flamegraph

```shell
git clone https://github.com/brendangregg/FlameGraph.git
cd FlameGraph
sudo perf record -F 99 -a -g -- sleep 60
sudo perf script > out.perf
./stackcollapse-perf.pl out.perf > out.folded
./flamegraph.pl --title LibreQoS --width 7200 out.folded > libreqos.svg
```
28 docs/TechnicalDocs/integrations.md Normal file
@@ -0,0 +1,28 @@
## Integrations

### UISP Integration

First, set the relevant parameters for UISP (uispAuthToken, UISPbaseURL, etc.) in ispConfig.py (see the sketch below).

To test the UISP Integration, use

```
python3 integrationUISP.py
```

On the first successful run, it will create a network.json and ShapedDevices.csv file.
If a network.json file exists, it will not be overwritten.
You can modify the network.json file to more accurately reflect bandwidth limits.
ShapedDevices.csv will be overwritten every time the UISP integration is run.
You have the option to run integrationUISP.py automatically on boot and every 30 minutes, which is recommended. This can be enabled by setting ```automaticImportUISP = True``` in ispConfig.py
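The UISP block in ispConfig.py looks roughly like this (a sketch; the variable names come from this page, and the values shown are placeholders):

```
# ispConfig.py (excerpt) - UISP integration settings (placeholder values)
automaticImportUISP = True
uispAuthToken = 'YOUR_UISP_API_TOKEN'
UISPbaseURL = 'https://uisp.example.com'
```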
### Splynx Integration

First, set the relevant parameters for Splynx (splynx_api_key, splynx_api_secret, etc.) in ispConfig.py.

To test the Splynx Integration, use

```
python3 integrationSplynx.py
```

On the first successful run, it will create a ShapedDevices.csv file.
You can manually create your network.json file to more accurately reflect bandwidth limits.
ShapedDevices.csv will be overwritten every time the Splynx integration is run.
You have the option to run integrationSplynx.py automatically on boot and every 30 minutes, which is recommended. This can be enabled by setting ```automaticImportSplynx = True``` in ispConfig.py
28 docs/TechnicalDocs/performance-tuning.md Normal file
@@ -0,0 +1,28 @@
# Performance Tuning

## Ubuntu Starts Slowly (~2 minutes)

### List all services which require the network

```shell
systemctl show -p WantedBy network-online.target
```

### For Ubuntu 22.04 this command can help

```shell
systemctl disable cloud-config iscsid cloud-final
```

### Set the proper governor for the CPU (bare-metal/hypervisor host)

```shell
cpupower frequency-set --governor performance
```

### OSPF

It is recommended to tune the OSPF timers of both OSPF neighbors (core and edge router) to minimize downtime upon a reboot of the LibreQoS server (an illustrative snippet follows the list):

* hello interval
* dead interval
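For illustration, lowering the timers might look like this in Cisco/FRR-style interface configuration (hypothetical values; syntax varies by router platform):

```text
interface eth0
 ip ospf hello-interval 1
 ip ospf dead-interval 4
```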
12 docs/TechnicalDocs/troubleshooting.md Normal file
@@ -0,0 +1,12 @@
# Troubleshooting

## Common Issues

### LibreQoS Is Running, But Traffic Not Shaping

- In ispConfig.py, make sure the edge and core interfaces correspond correctly to the edge and core. Try swapping the interfaces to see if shaping starts to work.
- Make sure your services are running properly: `lqosd`, `lqos_node_manager`, and `lqos_scheduler`, as checked below. The node manager and scheduler depend on `lqosd` being in a healthy, running state.
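A quick status check (assuming the systemd service names used in the quickstart guide):

```shell
# All three should report "active (running)"
sudo systemctl status lqosd lqos_node_manager lqos_scheduler
```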
### RTNETLINK answers: Invalid argument

This tends to show up when the MQ qdisc cannot be added correctly to the NIC interface. This would suggest the NIC has insufficient RX/TX queues. Please make sure you are using the [recommended NICs](#network-interface-card).
15 docs/Updates/update.md Normal file
@@ -0,0 +1,15 @@
## Updating 1.4 To Latest Version

Note: If you use the XDP bridge, traffic will stop passing through the bridge during the update (the XDP bridge only operates while lqosd runs).

1. Change to your LibreQoS source directory (e.g. `cd /opt/libreqos/src`)
2. Update from Git: `git pull`
3. Recompile: `./build_rust.sh`
4. `sudo rust/remove_pinned_maps.sh`
5. Restart the services:

```
sudo systemctl restart lqosd
sudo systemctl restart lqos_node_manager
sudo systemctl restart lqos_scheduler
```