Merge pull request #564 from LibreQoE/ui_stability

UI Stability, Memory Management and More - B3 Work List
This commit is contained in:
Robert Chacón 2024-10-31 13:18:59 -07:00 committed by GitHub
commit 90c1b1823f
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
200 changed files with 4345 additions and 2681 deletions

View File

@ -7,8 +7,8 @@ jobs:
strategy:
matrix:
os: [ubuntu-latest]
runs-on: ${{ matrix.os }}
runs-on: [ self-hosted, ubuntu-latest ]
steps:
- uses: actions/checkout@v2
- name: Install dependencies

View File

@ -29,6 +29,13 @@ Please support the continued development of LibreQoS by sponsoring us via [GitHu
Our Zulip chat server is available at [https://chat.libreqos.io/join/fvu3cerayyaumo377xwvpev6/](https://chat.libreqos.io/join/fvu3cerayyaumo377xwvpev6/).
## LibreQoS Social
- https://www.youtube.com/@LibreQoS
- https://www.linkedin.com/company/libreqos/
- https://www.facebook.com/libreqos
- https://twitter.com/libreqos
- https://fosstodon.org/@LibreQoS/
## Long-Term Stats (LTS)
Long-Term Stats (LTS) is an analytics service built for LibreQoS that revolutionizes the way you track and analyze your network.

View File

@ -85,44 +85,3 @@ Then run
```shell
sudo netplan apply
```
### Install InfluxDB (Optional but Recommended)
InfluxDB allows you to track long-term stats beyond what lqos_node_manager currently provides.
To install InfluxDB 2.x, follow the steps at [https://portal.influxdata.com/downloads/](https://portal.influxdata.com/downloads/).
For high-throughput networks (5+ Gbps) you will likely want to install InfluxDB on a separate machine or VM from the LibreQoS server to avoid CPU load.
Restart the system that is running InfluxDB
```shell
sudo reboot
```
Check to ensure InfluxDB is running properly. This command should show "Active: active" with a green dot.
```shell
sudo service influxdb status
```
Check that the Web UI is running by visiting:
```text
http://SERVER_IP_ADDRESS:8086
```
Create a bucket:
- Data > Buckets > Create Bucket
Call the bucket `libreqos` (all lowercase).
Have it store as many days of data as you prefer; 7 days is standard.
Import the dashboard via `Boards > Create Dashboard > Import Dashboard`.
Then upload the file [influxDBdashboardTemplate.json](https://github.com/rchac/LibreQoS/blob/main/src/influxDBdashboardTemplate.json) to InfluxDB.
[Generate an InfluxDB Token](https://docs.influxdata.com/influxdb/cloud/security/tokens/create-token/). It will be added to ispConfig.py in the following steps.
```{note}
You may want to install a reverse proxy in front of the web interfaces for InfluxDB and lqos. Setting these up is outside the scope of this document, but some examples are [Caddy](https://caddyserver.com/) and [Nginx Proxy Manager](https://nginxproxymanager.com/)
```

View File

@ -1,6 +1,7 @@
## System Requirements
### VM or physical server
* For VMs, NIC passthrough is required for optimal throughput and latency (XDP vs generic XDP). Using Virtio / bridging is much slower than NIC passthrough. Virtio / bridging should not be used for large amounts of traffic.
### Physical server
* LibreQoS requires a dedicated, physical x86_64 device.
* While it is technically possible to run LibreQoS in a VM, it is not officially supported, and doing so comes at a significant 30% performance penalty (even when using NIC passthrough). For VMs, NIC passthrough is required for throughput above 1 Gbps (XDP vs generic XDP).
### CPU
* 2 or more CPU cores
@ -14,47 +15,38 @@ Single-thread CPU performance will determine the max throughput of a single HTB
| 250 Mbps | 1250 |
| 500 Mbps | 1500 |
| 1 Gbps | 2000 |
| 3 Gbps | 3000 |
| 10 Gbps | 4000 |
| 2.5 Gbps | 3000 |
| 5 Gbps | 4000 |
Below is a table of approximate aggregate throughput capacity, assuming a CPU with a [single thread](https://www.cpubenchmark.net/singleThread.html#server-thread) performance of 2700 or greater:
Below is a table of approximate aggregate throughput capacity, assuming a CPU with a [single thread](https://www.cpubenchmark.net/singleThread.html#server-thread) performance of 2700 or 4000 (matching the two columns below):
| Aggregate Throughput | CPU Cores |
| ------------------------| ------------- |
| 500 Mbps | 2 |
| 1 Gbps | 4 |
| 5 Gbps | 6 |
| 10 Gbps | 8 |
| 20 Gbps | 16 |
| 50 Gbps | 32 |
| 100 Gbps * | 64 |
(* Estimated)
| Aggregate Throughput | CPU Cores Needed (>2700 single-thread) | CPU Cores Needed (>4000 single-thread) |
| ------------------------| -------------------------------------- | -------------------------------------- |
| 500 Mbps | 2 | 2 |
| 1 Gbps | 4 | 2 |
| 5 Gbps | 6 | 4 |
| 10 Gbps | 8 | 6 |
| 20 Gbps | 16 | 8 |
| 50 Gbps | 32 | 16 |
| 100 Gbps | 64 | 32 |
So for example, an ISP delivering 1 Gbps service plans with 10 Gbps aggregate throughput would choose a CPU with a 2700+ single-thread score and 8 cores, such as the Intel Xeon E-2388G @ 3.20GHz.
### Memory
* Minimum RAM = 2 + (0.002 x Subscriber Count) GB (for example, 5,000 subscribers need roughly 2 + 10 = 12 GB)
* Recommended RAM:
| Subscribers | RAM |
| ------------- | ------------- |
| 100 | 4 GB |
| 1,000 | 8 GB |
| 5,000 | 16 GB |
| 10,000* | 18 GB |
| 50,000* | 24 GB |
(* Estimated)
| Subscribers | RAM |
| ------------- | ------------- |
| 100 | 8 GB |
| 1,000 | 16 GB |
| 5,000 | 64 GB |
| 10,000 | 128 GB |
| 20,000 | 256 GB |
### Server Recommendations
It is most cost-effective to buy a used server with specifications matching your unique requirements, as laid out in the System Requirements section above.
For those who do not have the time to do that, here are some off-the-shelf options to consider:
| Aggregate | 100Mbps Plans | 1Gbps Plans | 4Gbps Plans |
| ------------- | ------------- | ------------- | ------------- |
| 1 Gbps Total | A | | |
| 10 Gbps Total | B or C | B or C | C |
* A | [Lanner L-1513-4C](https://www.whiteboxsolution.com/product/l-1513/) (Select L-1513-4C)
* B | [Supermicro SuperServer 510T-ML](https://www.thinkmate.com/system/superserver-510t-ml) (Select E-2388G)
* C | [Supermicro AS-1015A-MT](https://store.supermicro.com/us_en/as-1015a-mt.html) (Ryzen 9 7700X, 2x16GB DDR5 4800MHz ECC, 1xSupermicro 10-Gigabit XL710+ X557)
Here are some convenient, off-the-shelf server options to consider:
| Throughput | Model | CPU Option | RAM Option | NIC Option | Extras | Temp Range |
| --- | --- | --- | --- | --- | --- | --- |
| 2.5 Gbps | [Supermicro SYS-E102-13R-E](https://store.supermicro.com/us_en/compact-embedded-iot-i5-1350pe-sys-e102-13r-e.html) | Default | 2x8GB | Built-in | [USB-C RJ45](https://www.amazon.com/Anker-Ethernet-PowerExpand-Aluminum-Portable/dp/B08CK9X9Z8/)| 0°C ~ 40°C (32°F ~ 104°F) |
| 10 Gbps | [Supermicro AS -1115S-FWTRT](https://store.supermicro.com/us_en/1u-amd-epyc-8004-compact-server-as-1115s-fwtrt.html) | 8124P | 2x16GB | Mellanox (2 x SFP28) | | 0°C ~ 40°C (32°F ~ 104°F) |
| 25 Gbps | [Supermicro AS -1115S-FWTRT](https://store.supermicro.com/us_en/1u-amd-epyc-8004-compact-server-as-1115s-fwtrt.html) | 8534P | 4x16GB | Mellanox (2 x SFP28) | | 0°C ~ 40°C (32°F ~ 104°F) |

View File

@ -0,0 +1,103 @@
# Configure LibreQoS
## Configure lqos.conf
Copy the lqosd daemon configuration file to `/etc`:
```shell
cd /opt/libreqos/src
sudo cp lqos.example /etc/lqos.conf
```
Now edit the file to match your setup with
```shell
sudo nano /etc/lqos.conf
```
Change `enp1s0f1` and `enp1s0f2` to match your network interfaces. It doesn't matter which one is which. Notice that it's pairing the interfaces, so when you enter enp1s0f**1** in the first line, the `redirect_to` parameter is enp1s0f**2** (replace both with your actual interface names).
- First Line: `name = "enp1s0f1", redirect_to = "enp1s0f2"`
- Second Line: `name = "enp1s0f2", redirect_to = "enp1s0f1"`
Then, if using Bifrost/XDP set `use_xdp_bridge = true` under that same `[bridge]` section.
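Taken together, the `[bridge]` section might look roughly like this (interface names are placeholders, and the `interface_mapping` array layout is assumed from lqos.example, which may carry additional fields):
```toml
[bridge]
# Set to true only when using the Bifrost/XDP bridge
use_xdp_bridge = false
# Assumed layout: each interface redirects to its partner
interface_mapping = [
    { name = "enp1s0f1", redirect_to = "enp1s0f2" },
    { name = "enp1s0f2", redirect_to = "enp1s0f1" }
]
```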
## Configure ispConfig.py
Copy ispConfig.example.py to ispConfig.py and edit as needed
```shell
cd /opt/libreqos/src/
cp ispConfig.example.py ispConfig.py
nano ispConfig.py
```
- Set `upstreamBandwidthCapacityDownloadMbps` and `upstreamBandwidthCapacityUploadMbps` to match the bandwidth in Mbps of your network's upstream / WAN internet connection. The same can be done for `generatedPNDownloadMbps` and `generatedPNUploadMbps`.
- Set interfaceA to the interface facing your core router (or bridged internal network if your network is bridged)
- Set interfaceB to the interface facing your edge router
- Set ```enableActualShellCommands = True``` to allow the program to actually run the commands.
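A sketch of how these ispConfig.py entries might look once edited (interface names and rates are placeholders):
```python
# ispConfig.py (excerpt) - illustrative values
upstreamBandwidthCapacityDownloadMbps = 1000  # WAN download capacity in Mbps
upstreamBandwidthCapacityUploadMbps = 1000    # WAN upload capacity in Mbps
generatedPNDownloadMbps = 1000                # default per-Parent-Node download cap
generatedPNUploadMbps = 1000                  # default per-Parent-Node upload cap
interfaceA = "enp1s0f1"                       # faces your core router
interfaceB = "enp1s0f2"                       # faces your edge router
enableActualShellCommands = True              # actually apply the shaping commands
```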
## Network.json
Network.json allows ISP operators to define a Hierarchical Network Topology, or Flat Network Topology.
For networks with no Parent Nodes (no strictly defined Access Points or Sites) edit the network.json to use a Flat Network Topology with
```nano network.json```
setting the following file content:
```json
{}
```
If you plan to use the built-in UISP or Splynx integrations, you do not need to create a network.json file quite yet.
If you plan to use the built-in UISP integration, it will create this automatically on its first run (assuming network.json is not already present). You can then modify the network.json to more accurately reflect your topology.
If you will not be using an integration, you can manually define the network.json following the template file - network.example.json
```text
+-----------------------------------------------------------------------+
| Entire Network |
+-----------------------+-----------------------+-----------------------+
| Parent Node A | Parent Node B | Parent Node C |
+-----------------------+-------+-------+-------+-----------------------+
| Parent Node D | Sub 3 | Sub 4 | Sub 5 | Sub 6 | Sub 7 | Parent Node F |
+-------+-------+-------+-------+-------+-------+-------+-------+-------+
| Sub 1 | Sub 2 | | | | Sub 8 | Sub 9 |
+-------+-------+-------+-----------------------+-------+-------+-------+
```
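As a sketch, a hierarchical network.json covering part of the diagram above might look like this (bandwidth figures are placeholders; key names are assumed to follow network.example.json):
```json
{
  "Parent Node A": {
    "downloadBandwidthMbps": 1000,
    "uploadBandwidthMbps": 1000,
    "children": {
      "Parent Node D": {
        "downloadBandwidthMbps": 500,
        "uploadBandwidthMbps": 500
      }
    }
  }
}
```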
## Manual Setup
You can use
```shell
python3 csvToNetworkJSON.py
```
to convert manualNetwork.csv to a network.json file.
manualNetwork.csv can be copied from the template file, manualNetwork.template.csv
Note: The parent node name must match that used for clients in ShapedDevices.csv
## ShapedDevices.csv
If you are using an integration, this file will be automatically generated. If you are not using an integration, you can manually edit the file.
### Manual Editing
- Modify the ShapedDevices.csv file using your preferred spreadsheet editor (LibreOffice Calc, Excel, etc), following the template file - ShapedDevices.example.csv
- Circuit ID is required. Must be a string of some sort (int is fine, gets parsed as string). Must NOT include any number symbols (#). Every circuit needs a unique CircuitID - they cannot be reused. Here, circuit essentially means customer location. If a customer has multiple locations on different parts of your network, use a unique CircuitID for each of those locations.
- At least one IPv4 address or IPv6 address is required for each entry.
- The Access Point or Site name should be set in the Parent Node field. Parent Node can be left blank for flat networks.
- The ShapedDevices.csv file allows you to set minimum guaranteed, and maximum allowed bandwidth per subscriber.
- The minimum allowed plan rate for Circuits is 2 Mbit. Bandwidth min and max should both be above that threshold.
- Recommendation: set the min bandwidth to something like 25/10 and max to 1.15X advertised plan rate by using bandwidthOverheadFactor = 1.15
- This way, when an AP hits its ceiling, users have any remaining AP capacity fairly distributed between them.
- Ensure a reasonable minimum bandwidth for every subscriber, allowing them to utilize up to the maximum provided when AP utilization is below 100%.
Note regarding SLAs: For customers with SLA contracts that guarantee them a minimum bandwidth, set their plan rate as the minimum bandwidth. That way when an AP approaches its ceiling, SLA customers will always get that amount.
![image](https://user-images.githubusercontent.com/22501920/200134960-28709d0f-48fe-4129-b4fd-70b204cade2c.png)
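For illustration, a minimal ShapedDevices.csv entry might look like this (values are placeholders; the column layout is assumed to follow ShapedDevices.example.csv):
```text
Circuit ID,Circuit Name,Device ID,Device Name,Parent Node,MAC,IPv4,IPv6,Download Min Mbps,Upload Min Mbps,Download Max Mbps,Upload Max Mbps,Comment
1,"Smith Residence",1,CPE-Smith,AP_A,,100.64.1.10,,25,10,115,115,
```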
Once your configuration is complete, you're ready to run the application and start the [Daemons](./services-and-run.md)

View File

@ -0,0 +1,42 @@
# Network Design Assumptions
## Officially supported configuration
- LibreQoS placed inline in network, usually between an edge router (NAT, firewall) and core router (distribution to sites across network).
- If you use NAT/CG-NAT, place LibreQoS inline south of where NAT is applied, as LibreQoS needs to shape internal addresses (100.64.0.0/12) not public post-NAT IPs.
- Edge and Core routers should have 1500 MTU on links between them
- If you use MPLS, you would terminate MPLS traffic at the core router. LibreQoS cannot decapsulate MPLS on its own.
- OSPF primary link (low cost) through the server running LibreQoS
- OSPF backup link (high cost, maybe 200 for example)
![Official Configuration](https://raw.githubusercontent.com/LibreQoE/LibreQoS/main/docs/design.png)
### Network Interface Card
```{note}
You must have one of these:
- single NIC with two interfaces,
- two NICs with single interface,
- 2x VLANs interface (using one or two NICs).
```
LibreQoS requires NICs to have 2 or more RX/TX queues and XDP support. While many cards theoretically meet these requirements, less commonly used cards tend to have unreported driver bugs which impede XDP functionality and make them unusable for our purposes. At this time we recommend the Intel x520, Intel x710, and Nvidia (ConnectX-5 or newer) NICs. We cannot guarantee compatibility with other cards.
## Alternate configuration (Not officially supported)
This alternate configuration uses Spanning Tree Protocol (STP) to modify the data path in the event the LibreQoS device is offline for maintenance or another problem.
```{note}
Most of the same considerations apply to the alternate configuration as they do to the officially supported configuration
```
- LibreQoS placed inline in network, usually between an edge router (NAT, firewall) and core router (distribution to sites across network).
- If you use NAT/CG-NAT, place LibreQoS inline south of where NAT is applied, as LibreQoS needs to shape internal addresses (100.64.0.0/12) not public post-NAT IPs.
- Edge router and Core switch should have 1500 MTU on links between them
- If you use MPLS, you would terminate MPLS traffic somewhere south of the core/distribution switch. LibreQoS cannot decapsulate MPLS on its own.
- Spanning Tree primary link (low cost) through the server running LibreQoS
- Spanning Tree backup link (high cost, maybe 80 for example)
Keep in mind that if you use different bandwidth links, for example, 10 Gbps through LibreQoS, and 1 Gbps between core switch and edge router, you may need to be more intentional with your STP costs.
![Alternate Configuration](../stp-diagram.png)

View File

@ -0,0 +1,37 @@
# Install LibreQoS 1.4
## Updating from v1.3
### Remove offloadOff.service
```shell
sudo systemctl disable offloadOff.service
sudo rm /usr/local/sbin/offloadOff.sh /etc/systemd/system/offloadOff.service
```
### Remove cron tasks from v1.3
Run ```sudo crontab -e``` and remove any entries pertaining to LibreQoS from v1.3.
## Simple install via .deb package (Recommended)
Use the deb package from the [latest v1.4 release](https://github.com/LibreQoE/LibreQoS/releases/).
```shell
sudo echo "deb http://stats.libreqos.io/ubuntu jammy main" | sudo tee -a /etc/apt/sources.list.d/libreqos.list
sudo wget -O - -q http://stats.libreqos.io/repo.asc | sudo apt-key add -
apt-get update
apt-get install libreqos
```
You will be asked some questions about your configuration, and the management daemon and webserver will automatically start. Go to http://<your_ip>:9123/ to finish installation.
## Complex Install (Not Recommended)
```{note}
Use this install if you'd like to constantly deploy from the main branch on Github. For experienced users only!
```
[Complex Installation](../TechnicalDocs/complex-install.md)
You are now ready to [Configure](./configuration.md) LibreQoS!

View File

@ -0,0 +1,87 @@
# Server Setup - Pre-requisites
Disable hyperthreading in the BIOS/UEFI of your host system. Hyperthreading is also known as Simultaneous Multithreading (SMT) on AMD systems. Disabling this is very important for optimal performance of the XDP cpumap filtering and, in turn, throughput and latency.
- Boot, pressing the appropriate key to enter the BIOS settings
- For AMD systems, you will have to navigate the settings to find the "SMT Control" setting. Usually it is under something like ```Advanced -> AMD CBS -> CPU Common Options -> Thread Enablement -> SMT Control```. Once you find it, switch it to "Disabled" or "Off"
- For Intel systems, you will also have to navigate the settings to find the "Hyperthreading" toggle option. On HP servers it's under ```System Configuration > BIOS/Platform Configuration (RBSU) > Processor Options > Intel (R) Hyperthreading Options```.
- Save changes and reboot
## Install Ubuntu Server
We recommend Ubuntu Server because its kernel version tends to track closely with the mainline Linux releases. Our current documentation assumes Ubuntu Server. To run LibreQoS v1.4, Linux kernel 5.11 or greater is required, as 5.11 includes some important XDP patches. Ubuntu Server 22.04 ships with kernel 5.15, which meets that requirement.
You can download Ubuntu Server 22.04 from <a href="https://ubuntu.com/download/server">https://ubuntu.com/download/server</a>.
1. Boot Ubuntu Server from USB.
2. Follow the steps to install Ubuntu Server.
3. If you use a Mellanox network card, the Ubuntu Server installer will ask you whether to install the Mellanox/Intel NIC drivers. Check the box to confirm. This extra driver is important.
4. On the Networking settings step, it is recommended to assign a static IP address to the management NIC.
5. Ensure SSH server is enabled so you can more easily log into the server later.
6. You can use scp or sftp to access files from your LibreQoS server for easier file editing. Here's how to access via scp or sftp using an [Ubuntu](https://www.addictivetips.com/ubuntu-linux-tips/sftp-server-ubuntu/) or [Windows](https://winscp.net/eng/index.php) machine.
### Choose Bridge Type
There are two options for the bridge to pass data through your two interfaces:
- Bifrost XDP-Accelerated Bridge
- Regular Linux Bridge
The Bifrost Bridge is recommended for Intel NICs with XDP support, such as the X520 and X710.
The regular Linux bridge is recommended for Nvidia/Mellanox NICs such as the ConnectX-5 series (which have superior bridge performance), and VM setups using virtualized NICs.
To use the Bifrost bridge, skip the regular Linux bridge section below, and be sure to enable Bifrost/XDP in lqos.conf a few sections below.
### Adding a regular Linux bridge (if not using Bifrost XDP bridge)
From the Ubuntu VM, create a Linux interface bridge - br0 - with the two shaping interfaces.
Find your existing .yaml file in /etc/netplan/ with
```shell
cd /etc/netplan/
ls
```
Then edit the .yaml file there with
```shell
sudo nano XX-cloud-init.yaml
```
With XX corresponding to the name of the existing file.
Editing the .yaml file, we need to define the shaping interfaces (here, ens19 and ens20) and add the bridge with those two interfaces. Assuming your interfaces are ens18, ens19, and ens20, here is what your file might look like:
```yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens18:
      addresses:
        - 10.0.0.12/24
      routes:
        - to: default
          via: 10.0.0.1
      nameservers:
        addresses:
          - 1.1.1.1
          - 8.8.8.8
        search: []
    ens19:
      dhcp4: no
    ens20:
      dhcp4: no
  version: 2
  bridges:
    br0:
      interfaces:
        - ens19
        - ens20
```
Make sure to replace 10.0.0.12/24 with your LibreQoS VM's address and subnet, and to replace the default gateway 10.0.0.1 with whatever your default gateway is.
Then run
```shell
sudo netplan apply
```

View File

@ -0,0 +1,71 @@
# LibreQoS daemons
lqosd
- Manages the actual XDP code. Built with Rust.
lqos_node_manager
- Runs the GUI available at http://a.b.c.d:9123
lqos_scheduler
- lqos_scheduler handles statistics and performs continuous refreshes of LibreQoS' shapers, including pulling from any enabled CRM Integrations (UISP, Splynx).
- On start: Run a full setup of queues
- Every 10 seconds: Graph bandwidth and latency stats
- Every 30 minutes: Update queues, pulling new configuration from CRM integration if enabled
## Run daemons with systemd
You can set up `lqosd`, `lqos_node_manager`, and `lqos_scheduler` as systemd services.
```shell
sudo cp /opt/libreqos/src/bin/lqos_node_manager.service.example /etc/systemd/system/lqos_node_manager.service
sudo cp /opt/libreqos/src/bin/lqosd.service.example /etc/systemd/system/lqosd.service
sudo cp /opt/libreqos/src/bin/lqos_scheduler.service.example /etc/systemd/system/lqos_scheduler.service
```
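If you are curious what one of these unit files contains, here is a minimal sketch of a scheduler unit (paths assumed from this guide; the shipped .example files are authoritative):
```ini
[Unit]
Description=LibreQoS Scheduler
After=network.target lqosd.service

[Service]
WorkingDirectory=/opt/libreqos/src
ExecStart=/usr/bin/python3 /opt/libreqos/src/scheduler.py
Restart=always

[Install]
WantedBy=multi-user.target
```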
Finally, run
```shell
sudo systemctl daemon-reload
sudo systemctl enable lqosd lqos_node_manager lqos_scheduler
```
You can now point a web browser at `http://a.b.c.d:9123` (replace `a.b.c.d` with the management IP address of your shaping server) and enjoy a real-time view of your network.
## Debugging lqos_scheduler
In the background, lqos_scheduler runs scheduler.py, which in turn runs LibreQoS.py
One-time runs of these individual components can be very helpful for debugging and to make sure everything is correctly configured.
First, stop lqos_scheduler
```shell
sudo systemctl stop lqos_scheduler
```
For one-time runs of LibreQoS.py, use
```shell
sudo ./LibreQoS.py
```
- To use the debug mode with more verbose output, use:
```shell
sudo ./LibreQoS.py --debug
```
To confirm that lqos_scheduler (scheduler.py) is able to work correctly, run:
```shell
sudo python3 scheduler.py
```
Once you have any errors eliminated, restart lqos_scheduler with
```shell
sudo systemctl start lqos_scheduler
```

View File

@ -0,0 +1,31 @@
# Share your before and after
We ask that you please share an anonymized screenshot of your LibreQoS deployment before (monitor only mode) and after (queuing enabled) to the [LibreQoS Chat](https://chat.libreqos.io/join/fvu3cerayyaumo377xwvpev6/). This helps us gauge the impact of our software. It also makes us smile.
1. Enable monitor only mode
2. Klingon mode (Redact customer info)
3. Screenshot
4. Resume regular queuing
5. Screenshot
## Enable monitor only mode
```shell
sudo systemctl stop lqos_scheduler
sudo systemctl restart lqosd
sudo systemctl restart lqos_node_manager
```
## Klingon mode
Please go to the Web UI and click Configuration. Toggle Redact Customer Information (screenshot mode) and then Apply Changes.
## Resume regular queuing
```shell
sudo systemctl start lqos_scheduler
```
## Screenshot
To generate a screenshot - please go to the Web UI and click Configuration. Toggle Redact Customer Information (screenshot mode), Apply Changes, and then return to the dashboard to take a screenshot.

View File

@ -0,0 +1,56 @@
# Complex install (Not Recommended)
## Clone the repo
The recommended install location is `/opt/libreqos`
Go to the install location, and clone the repo:
```shell
cd /opt/
git clone https://github.com/LibreQoE/LibreQoS.git libreqos
sudo chown -R YOUR_USER /opt/libreqos
```
By specifying `libreqos` at the end, git will ensure the folder name is lowercase.
## Install Dependencies from apt and pip
You need to have a few packages from `apt` installed:
```shell
sudo apt-get install -y python3-pip clang gcc gcc-multilib llvm libelf-dev git nano graphviz curl screen llvm pkg-config linux-tools-common linux-tools-`uname -r` libbpf-dev libssl-dev
```
Then you need to install some Python dependencies:
```shell
cd /opt/libreqos
python3 -m pip install -r requirements.txt
sudo python3 -m pip install -r requirements.txt
```
## Install the Rust development system
Go to [RustUp](https://rustup.rs) and follow the instructions. Basically, run the following:
```shell
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
When Rust finishes installing, it will tell you to execute a command to place the Rust build tools into your path. You need to either execute this command or logout and back in again.
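On most Linux systems, the command rustup prints is the following (shown for convenience; use whatever the installer actually tells you):
```shell
source "$HOME/.cargo/env"
```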
Once that's done, please run:
```shell
cd /opt/libreqos/src/
./build_rust.sh
```
This will take a while the first time, but it puts everything in the right place.
Now, to build rust crates, run:
```shell
cd rust
cargo build --all
```

View File

@ -0,0 +1,12 @@
# Extras
## Flamegraph
```shell
git clone https://github.com/brendangregg/FlameGraph.git
cd FlameGraph
sudo perf record -F 99 -a -g -- sleep 60
perf script > out.perf
./stackcollapse-perf.pl out.perf > out.folded
./flamegraph.pl --title LibreQoS --width 7200 out.folded > libreqos.svg
```

View File

@ -0,0 +1,65 @@
# Integrations
## UISP Integration
First, set the relevant parameters for UISP (uispAuthToken, UISPbaseURL, etc.) in ispConfig.py.
To test the UISP Integration, use
```shell
python3 integrationUISP.py
```
On the first successful run, it will create a network.json and ShapedDevices.csv file.
If a network.json file exists, it will not be overwritten.
You can modify the network.json file to more accurately reflect bandwidth limits.
ShapedDevices.csv will be overwritten every time the UISP integration is run.
You have the option to run integrationUISP.py automatically on boot and every 10 minutes, which is recommended. This can be enabled by setting ```automaticImportUISP = True``` in ispConfig.py
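For reference, a sketch of the relevant ispConfig.py entries (values are placeholders; the parameter names are the ones used in this guide):
```python
# ispConfig.py (excerpt) - UISP integration, illustrative values
automaticImportUISP = True                # run the import on boot and every 10 minutes
uispAuthToken = "YOUR_UISP_API_TOKEN"     # API token from your UISP instance
UISPbaseURL = "https://uisp.example.com"  # base URL of your UISP instance
```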
## Powercode Integration
First, set the relevant parameters for Powercode (powercode_api_key, powercode_api_url, etc.) in ispConfig.py.
To test the Powercode Integration, use
```shell
python3 integrationPowercode.py
```
On the first successful run, it will create a ShapedDevices.csv file.
You can modify the network.json file manually to reflect Site/AP bandwidth limits.
ShapedDevices.csv will be overwritten every time the Powercode integration is run.
You have the option to run integrationPowercode.py automatically on boot and every 10 minutes, which is recommended. This can be enabled by setting ```automaticImportPowercode = True``` in ispConfig.py
## Sonar Integration
First, set the relevant parameters for Sonar (sonar_api_key, sonar_api_url, etc.) in ispConfig.py.
To test the Sonar Integration, use
```shell
python3 integrationSonar.py
```
On the first successful run, it will create a ShapedDevices.csv file.
If a network.json file exists, it will not be overwritten.
You can modify the network.json file to more accurately reflect bandwidth limits.
ShapedDevices.csv will be overwritten every time the Sonar integration is run.
You have the option to run integrationSonar.py automatically on boot and every 10 minutes, which is recommended. This can be enabled by setting ```automaticImportSonar = True``` in ispConfig.py
## Splynx Integration
First, set the relevant parameters for Splynx (splynx_api_key, splynx_api_secret, etc.) in ispConfig.py.
The Splynx Integration uses Basic authentication. For this type of authentication, please make sure you enable [Unsecure access](https://splynx.docs.apiary.io/#introduction/authentication) in your Splynx API key settings. The Splynx API key must also be granted the necessary permissions.
To test the Splynx Integration, use
```shell
python3 integrationSplynx.py
```
On the first successful run, it will create a ShapedDevices.csv file.
You can manually create your network.json file to more accurately reflect bandwidth limits.
ShapedDevices.csv will be overwritten every time the Splynx integration is run.
You have the option to run integrationSplynx.py automatically on boot and every 10 minutes, which is recommended. This can be enabled by setting ```automaticImportSplynx = True``` in ispConfig.py

View File

@ -0,0 +1,28 @@
# Performance Tuning
## Ubuntu Starts Slowly (~2 minutes)
### List all services which require network
```shell
systemctl show -p WantedBy network-online.target
```
### For Ubuntu 22.04 this command can help
```shell
systemctl disable cloud-config iscsid cloud-final
```
### Set proper governor for CPU (baremetal/hypervisor host)
```shell
cpupower frequency-set --governor performance
```
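To verify the governor took effect, read it back from sysfs:
```shell
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c
```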
### OSPF
It is recommended to set the OSPF timers of both OSPF neighbors (core and edge router) to minimize downtime upon a reboot of the LibreQoS server.
* hello interval
* dead interval

View File

@ -0,0 +1,42 @@
# Troubleshooting
## Common Issues
### LibreQoS Is Running, But Traffic Not Shaping
In ispConfig.py, make sure the edge and core interfaces are assigned correctly to the edge and core. Try swapping the interfaces to see if shaping starts to work.
Make sure your services are running properly
- `lqosd.service`
- `lqos_node_manager`
- `lqos_scheduler`
Node manager and scheduler are dependent on the `lqosd.service` being in a healthy, running state.
For example to check the status of lqosd, run:
```sudo systemctl status lqosd```
### lqosd not running or failed to start
At the command-line, type ```sudo RUST_LOG=info /opt/libreqos/src/bin/lqosd``` which will provide specifics regarding why it failed to start.
### RTNETLINK answers: Invalid argument
This tends to show up when the MQ qdisc cannot be added correctly to the NIC interface. This would suggest the NIC has insufficient RX/TX queues. Please make sure you are using the [recommended NICs](../SystemRequirements/Networking.md).
### InfluxDB "Failed to update bandwidth graphs"
The scheduler (scheduler.py) runs the InfluxDB integration within a try/except statement. If it fails to update InfluxDB, it will report "Failed to update bandwidth graphs".
To find the exact cause of the failure, please run ```python3 graphInfluxDB.py``` which will provide more specific errors.
### All customer IPs are listed under Unknown IPs, rather than Shaped Devices in GUI
```
cd /opt/libreqos/src
sudo systemctl stop lqos_scheduler
sudo python3 LibreQoS.py
```
The console output from running LibreQoS.py directly provides more specific errors regarding issues with ShapedDevices.csv and network.json.
Once you have identified the error and fixed ShapedDevices.csv and/or network.json, run
```sudo systemctl start lqos_scheduler```

View File

@ -0,0 +1,24 @@
# Updating 1.4 To Latest Version
```{warning}
If you use the XDP bridge, traffic will stop passing through the bridge during the update (the XDP bridge only operates while lqosd runs).
```
## If you installed with Git
1. Change to your `LibreQoS` directory (e.g. `cd /opt/libreqos`)
2. Update from Git: `git pull`
3. Recompile: `./build_rust.sh`
4. `sudo rust/remove_pinned_maps.sh`
Run the following commands to reload the LibreQoS services.
```shell
sudo systemctl restart lqosd
sudo systemctl restart lqos_node_manager
sudo systemctl restart lqos_scheduler
```
## If you installed through the APT repository
All you should have to do in this case is run `sudo apt update && sudo apt upgrade` and LibreQoS should install the new package.

1
docs/v1.4/test.txt Normal file
View File

@ -0,0 +1 @@

View File

@ -0,0 +1,17 @@
# LibreQoS v1.4 to v1.5 Change Summary
NLNet Milestones: 2B and 2C.
This is a relatively huge development branch. Major features:
* The kernel-side XDP now performs all dissection and analysis in the XDP side, not the TC side. This results in better CPU usage overall.
* If your kernel/drivers support it, use eBPF metadata functionality to completely skip a secondary LPM check - for a substantial CPU usage decrease.
* Packets are divided into "flows" (by a source IP/dest IP/protocol/src port/dst port tuple).
* Flows gather TCP retransmission data, as well as byte/packet counts.
* Flows are scanned for RTT (by time sequence). When a sample occurs, instead of using a regularly polled map (which proved slow), it is sent to the userspace daemon by a kernel ringbuffer/message system.
* RTT messages are received by userspace and compared with an "ignore" list. If they aren't ignored, they are categorized by remote IP for ASN information, and the RTT data is placed in a large ringbuffer.
* Flows are expired after a TCP FIN or RST event, or 30 seconds (configurable) after they cease sending data.
* Once a flow expires, it is sent to the "finished flow system".
* The finished flow system categorizes by target ASN, target location (geolocated via a free database), IP protocol and ethertype. These are displayed in the GUI.
* Optionally, finished flows can be sent to another host in summary form via Netflow V5 or Netflow V9 protocols - allowing for further analysis with tools such as `ntop`.
* Quite a bit of UI work to accommodate all of this.
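As a sketch only (the names below are hypothetical, not the crate's actual types), the five-tuple flow key described above maps naturally onto a hashable struct:
```rust
use std::net::IpAddr;

/// Hypothetical flow key: one entry per (src, dst, protocol, ports) tuple.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct FlowKey {
    src_ip: IpAddr,  // source IP (v4 or v6)
    dst_ip: IpAddr,  // destination IP
    protocol: u8,    // IP protocol number (6 = TCP, 17 = UDP)
    src_port: u16,
    dst_port: u16,
}
```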

View File

@ -0,0 +1,101 @@
# Configure LibreQoS
## Configure lqos.conf
If you installed LibreQoS using the complex (Git) installation, copy the lqosd daemon configuration file to `/etc`. This is not necessary if you installed using the .deb:
```shell
cd /opt/libreqos/src
sudo cp lqos.example /etc/lqos.conf
```
Now edit the file to match your setup with
```shell
sudo nano /etc/lqos.conf
```
In the ```[bridge]``` section, change `to_internet` and `to_network` to match your network interfaces.
- `to_internet = "enp1s0f1"`
- `to_network = "enp1s0f2"`
Then, if using Bifrost/XDP, set `use_xdp_bridge = true` under that same `[bridge]` section. If you're not sure whether you need this, we recommend leaving it as `false`.
- Set `downlink_bandwidth_mbps` and `uplink_bandwidth_mbps` to match the bandwidth in Mbps of your network's upstream / WAN internet connection. The same can be done for `generated_pn_download_mbps` and `generated_pn_upload_mbps`.
- to_internet would be the interface facing your edge router and the broader internet
- to_network would be the interface facing your core router (or bridged internal network if your network is bridged)
Note: If you find that traffic is not being shaped when it should, please make sure to swap the interface order and restart lqosd as well as lqos_scheduler with ```sudo systemctl restart lqosd lqos_scheduler```.
After changing any part of `/etc/lqos.conf` it is highly recommended to always restart lqosd, using `sudo systemctl restart lqosd`. This re-parses any new values in lqos.conf, making those new values accessible to both the Rust and Python sides of the code.
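Pulling the settings above together, here is an illustrative `/etc/lqos.conf` excerpt. The interface names and rates are placeholders, and the exact section placement of the bandwidth keys should be taken from your shipped lqos.example rather than from this sketch:
```toml
[bridge]
use_xdp_bridge = false            # set true only when using Bifrost/XDP
to_internet = "enp1s0f1"          # faces your edge router and the internet
to_network = "enp1s0f2"           # faces your core router

# Bandwidth keys referenced above (placement per lqos.example):
# downlink_bandwidth_mbps = 10000
# uplink_bandwidth_mbps = 10000
# generated_pn_download_mbps = 10000
# generated_pn_upload_mbps = 10000
```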
### Integrations
Learn more about [configuring integrations here](../TechnicalDocs/integrations.md).
## Network.json
Network.json allows ISP operators to define a Hierarchical Network Topology, or Flat Network Topology.
If you plan to use the built-in UISP or Splynx integrations, you do not need to create a network.json file quite yet.
If you plan to use the built-in UISP integration, it will create this automatically on its first run (assuming network.json is not already present).
If you will not be using an integration, you can manually define the network.json following the template file - network.example.json
```text
+-----------------------------------------------------------------------+
| Entire Network |
+-----------------------+-----------------------+-----------------------+
| Parent Node A | Parent Node B | Parent Node C |
+-----------------------+-------+-------+-------+-----------------------+
| Parent Node D | Sub 3 | Sub 4 | Sub 5 | Sub 6 | Sub 7 | Parent Node F |
+-------+-------+-------+-------+-------+-------+-------+-------+-------+
| Sub 1 | Sub 2 | | | | Sub 8 | Sub 9 |
+-------+-------+-------+-----------------------+-------+-------+-------+
```
For networks with no Parent Nodes (no strictly defined Access Points or Sites) edit the network.json to use a Flat Network Topology with
```nano network.json```
setting the following file content:
```json
{}
```
## CSV to JSON format helper
You can use
```shell
python3 csvToNetworkJSON.py
```
to convert manualNetwork.csv to a network.json file.
manualNetwork.csv can be copied from the template file, manualNetwork.template.csv
Note: The parent node name must match that used for clients in ShapedDevices.csv
## ShapedDevices.csv
If you are using an integration, this file will be automatically generated. If you are not using an integration, you can manually edit the file using either the WebUI or by directly editing the ShapedDevices.csv file through the CLI.
### Manual Editing by WebUI
Navigate to the LibreQoS WebUI (http://a.b.c.d:9123) and select Configuration > Shaped Devices.
### Manual Editing by CLI
- Modify the ShapedDevices.csv file using your preferred spreadsheet editor (LibreOffice Calc, Excel, etc), following the template file - ShapedDevices.example.csv
- Circuit ID is required. Must be a string of some sort (int is fine, gets parsed as string). Must NOT include any number symbols (#). Every circuit needs a unique CircuitID - they cannot be reused. Here, circuit essentially means customer location. If a customer has multiple locations on different parts of your network, use a unique CircuitID for each of those locations.
- At least one IPv4 address or IPv6 address is required for each entry.
- The Access Point or Site name should be set in the Parent Node field. Parent Node can be left blank for flat networks.
- The ShapedDevices.csv file allows you to set minimum guaranteed, and maximum allowed bandwidth per subscriber.
- The minimum allowed plan rate for Circuits is 2 Mbit. Bandwidth min and max should both be above that threshold.
- Recommendation: set the min bandwidth to something like 25/10 and max to 1.15X advertised plan rate by using bandwidthOverheadFactor = 1.15
- This way, when an AP hits its ceiling, users have any remaining AP capacity fairly distributed between them.
- Ensure a reasonable minimum bandwidth for every subscriber, allowing them to utilize up to the maximum provided when AP utilization is below 100%.
Note regarding SLAs: For customers with SLA contracts that guarantee them a minimum bandwidth, set their plan rate as the minimum bandwidth. That way when an AP approaches its ceiling, SLA customers will always get that amount.
![image](https://user-images.githubusercontent.com/22501920/200134960-28709d0f-48fe-4129-b4fd-70b204cade2c.png)
Once your configuration is complete, you're ready to run the application and start the [Daemons](./services-and-run.md)

View File

@ -0,0 +1,46 @@
# Network Design Assumptions
## Officially supported configuration
- LibreQoS placed inline in network, usually between an edge router (NAT, firewall) and core router (distribution to sites across network).
- If you use NAT/CG-NAT, place LibreQoS inline south of where NAT is applied, as LibreQoS needs to shape internal addresses (100.64.0.0/12) not public post-NAT IPs.
- Edge and Core routers should have 1500 MTU on links between them
- If you use MPLS, you would terminate MPLS traffic at the core router. LibreQoS cannot decapsulate MPLS on its own.
- OSPF primary link (low cost) through the server running LibreQoS
- OSPF backup link (high cost, maybe 200 for example)
![Official Configuration](https://raw.githubusercontent.com/LibreQoE/LibreQoS/main/docs/design.png)
## Testbed configuration
When you are first testing out LibreQoS, we recommend deploying a small-scale testbed to see it in action.
![image](https://github.com/user-attachments/assets/6174bd29-112d-4b00-bea8-41314983d37a)
### Network Interface Card
```{note}
You must have one of these:
- single NIC with two interfaces,
- two NICs with single interface,
- 2x VLANs interface (using one or two NICs).
```
LibreQoS requires NICs to have 2 or more RX/TX queues and XDP support. While many cards theoretically meet these requirements, less commonly used cards tend to have unreported driver bugs which impede XDP functionality and make them unusable for our purposes. At this time we recommend the Intel x520, Intel x710, and Nvidia (ConnectX-5 or newer) NICs. We cannot guarantee compatibility with other cards.
## Alternate configuration (Not officially supported)
This alternate configuration uses Spanning Tree Protocol (STP) to modify the data path in the event the LibreQoS device is offline for maintenance or another problem.
```{note}
Most of the same considerations apply to the alternate configuration as they do to the officially supported configuration
```
- LibreQoS placed inline in network, usually between an edge router (NAT, firewall) and core router (distribution to sites across network).
- If you use NAT/CG-NAT, place LibreQoS inline south of where NAT is applied, as LibreQoS needs to shape internal addresses (100.64.0.0/12) not public post-NAT IPs.
- Edge router and Core switch should have 1500 MTU on links between them
- If you use MPLS, you would terminate MPLS traffic somewhere south of the core/distribution switch. LibreQoS cannot decapsulate MPLS on its own.
- Spanning Tree primary link (low cost) through the server running LibreQoS
- Spanning Tree backup link (high cost, maybe 80 for example)
Keep in mind that if you use different bandwidth links, for example, 10 Gbps through LibreQoS, and 1 Gbps between core switch and edge router, you may need to be more intentional with your STP costs.
![Alternate Configuration](../stp-diagram.png)

View File

@ -0,0 +1,24 @@
# Install LibreQoS 1.5
## Step 1 - Complete The Prerequisites
[LibreQoS Installation Prerequisites](quickstart-prereq.md)
## Step 2 - Install
### Download .DEB Package (Recommended Method)
Download the latest .deb from https://libreqos.io/#download .
Unzip the .zip file and transfer the .deb to your LibreQoS box, then install it with:
```shell
sudo apt install ./[deb file name]
```
### Git Install (For Developers Only - Not Recommended)
[Complex Installation](../TechnicalDocs/git-install.md)
## Step 3 - Configure
You are now ready to [Configure](./configuration.md) LibreQoS!

View File

@ -0,0 +1,141 @@
# Server Setup - Pre-requisites
Disable hyperthreading in the BIOS/UEFI of your host system. Hyperthreading is also known as Simultaneous Multithreading (SMT) on AMD systems. Disabling this is very important for optimal performance of the XDP cpumap filtering and, in turn, throughput and latency.
- Boot, pressing the appropriate key to enter the BIOS settings
- For AMD systems, you will have to navigate the settings to find the "SMT Control" setting. Usually it is under something like ```Advanced -> AMD CBS -> CPU Common Options -> Thread Enablement -> SMT Control```. Once you find it, switch it to "Disabled" or "Off"
- For Intel systems, you will also have to navigate the settings to find the "Hyperthreading" toggle option. On HP servers it's under ```System Configuration > BIOS/Platform Configuration (RBSU) > Processor Options > Intel (R) Hyperthreading Options```.
- Save changes and reboot
## Install Ubuntu Server
We recommend Ubuntu Server because its kernel version tends to track closely with the mainline Linux releases. Our current documentation assumes Ubuntu Server. To run LibreQoS v1.5, Linux kernel 5.11 or greater is required, as 5.11 includes some important XDP patches. Ubuntu Server 22.04 ships with kernel 5.15, which meets that requirement.
You can download Ubuntu Server 22.04 from <a href="https://ubuntu.com/download/server">https://ubuntu.com/download/server</a>.
1. Boot Ubuntu Server from USB.
2. Follow the steps to install Ubuntu Server.
3. If you use a Mellanox network card, the Ubuntu Server installer will ask you whether to install the Mellanox/Intel NIC drivers. Check the box to confirm. This extra driver is important.
4. On the Networking settings step, it is recommended to assign a static IP address to the management NIC.
5. Ensure SSH server is enabled so you can more easily log into the server later.
6. You can use scp or sftp to access files from your LibreQoS server for easier file editing. Here's how to access via scp or sftp using an [Ubuntu](https://www.addictivetips.com/ubuntu-linux-tips/sftp-server-ubuntu/) or [Windows](https://winscp.net/eng/index.php) machine.
### Choose Bridge Type
There are two options for the bridge to pass data through your two interfaces:
- Bifrost XDP-Accelerated Bridge
- Regular Linux Bridge
The regular Linux bridge is recommended for Nvidia/Mellanox NICs such as the ConnectX-5 series (which have superior bridge performance), and VM setups using virtualized NICs. The Bifrost Bridge is recommended for Intel NICs with XDP support, such as the X520 and X710.
To use the Bifrost bridge, be sure to enable Bifrost/XDP in lqos.conf in the [Configuration](configuration.md) section.
Below are the instructions to configure Netplan, whether using the Linux Bridge or Bifrost XDP bridge:
## Netplan config
### Netplan for a regular Linux bridge (if not using Bifrost XDP bridge)
From the Ubuntu VM, create a Linux interface bridge - br0 - with the two shaping interfaces.
Find your existing .yaml file in /etc/netplan/ with
```shell
cd /etc/netplan/
ls
```
Then edit the .yaml file there with
```shell
sudo nano XX-cloud-init.yaml
```
With XX corresponding to the name of the existing file.
Editing the .yaml file, we need to define the shaping interfaces (here, ens19 and ens20) and add the bridge with those two interfaces. Assuming your interfaces are ens18, ens19, and ens20, here is what your file might look like:
```yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens18:
      addresses:
        - (addr goes here)
      gateway4: (gateway goes here)
      nameservers:
        addresses:
          - 1.1.1.1
          - 8.8.8.8
        search: []
    ens19:
      dhcp4: no
      dhcp6: no
    ens20:
      dhcp4: no
      dhcp6: no
  version: 2
  bridges:
    br0:
      interfaces:
        - ens19
        - ens20
```
By setting `dhcp4: no` and `dhcp6: no`, the interfaces will be brought up as part of the normal boot cycle, despite not having IP addresses assigned.
Make sure to replace `(addr goes here)` with your LibreQoS VM's address and subnet CIDR, and to replace `(gateway goes here)` with whatever your default gateway is.
Then run
```shell
sudo netplan apply
```
### Netplan for the Bifrost XDP bridge
Find your existing .yaml file in /etc/netplan/ with
```shell
cd /etc/netplan/
ls
```
Then edit the .yaml file there with
```shell
sudo nano XX-cloud-init.yaml
```
With XX corresponding to the name of the existing file.
Editing the .yaml file, we need to define the shaping interfaces (here, ens19 and ens20) and add the bridge with those two interfaces. Assuming your interfaces are ens18, ens19, and ens20, here is what your file might look like:
```yaml
network:
  ethernets:
    ens18:
      addresses:
        - (addr goes here)
      gateway4: (gateway goes here)
      nameservers:
        addresses:
          - (etc)
        search: []
    ens19:
      dhcp4: no
      dhcp6: no
    ens20:
      dhcp4: no
      dhcp6: no
```
By setting `dhcp4: no` and `dhcp6: no`, the interfaces will be brought up as part of the normal boot cycle, despite not having IP addresses assigned.
Make sure to replace (addr goes here) with your LibreQoS VM's address and subnet CIDR, and to replace `(gateway goes here)` with whatever your default gateway is.
Once everything is in place, run:
```shell
sudo netplan apply
```

View File

@ -0,0 +1,69 @@
# LibreQoS daemons
lqosd
- Manages the actual XDP code. Built with Rust.
- Runs the GUI available at http://a.b.c.d:9123
lqos_scheduler
- lqos_scheduler handles statistics and performs continuous refreshes of LibreQoS' shapers, including pulling from any enabled CRM Integrations (UISP, Splynx).
- On start: Run a full setup of queues
- Every 30 minutes: Update queues, pulling new configuration from CRM integration if enabled
- Minute interval is adjustable with the setting `queue_refresh_interval_mins` in `/etc/lqos.conf`.
## Run daemons with systemd
Note: If you used the .deb installer, you can skip this section. The .deb installer automatically sets these up.
You can set up `lqosd` and `lqos_scheduler` as systemd services.
```shell
sudo cp /opt/libreqos/src/bin/lqosd.service.example /etc/systemd/system/lqosd.service
sudo cp /opt/libreqos/src/bin/lqos_scheduler.service.example /etc/systemd/system/lqos_scheduler.service
```
Finally, run
```shell
sudo systemctl daemon-reload
sudo systemctl enable lqosd lqos_scheduler
```
You can now point a web browser at `http://a.b.c.d:9123` (replace `a.b.c.d` with the management IP address of your shaping server) and enjoy a real-time view of your network.
## Debugging lqos_scheduler
In the background, lqos_scheduler runs scheduler.py, which in turn runs LibreQoS.py
One-time runs of these individual components can be very helpful for debugging and to make sure everything is correctly configured.
First, stop lqos_scheduler
```shell
sudo systemctl stop lqos_scheduler
```
For one-time runs of LibreQoS.py, use
```shell
sudo ./LibreQoS.py
```
- To use the debug mode with more verbose output, use:
```shell
sudo ./LibreQoS.py --debug
```
To confirm that lqos_scheduler (scheduler.py) is able to work correctly, run:
```shell
sudo python3 scheduler.py
```
Once you have any errors eliminated, restart lqos_scheduler with
```shell
sudo systemctl start lqos_scheduler
```

View File

@ -0,0 +1,31 @@
# Share your before and after
We ask that you please share an anonymized screenshot of your LibreQoS deployment before (monitor only mode) and after (queuing enabled) to the [LibreQoS Chat](https://chat.libreqos.io/join/fvu3cerayyaumo377xwvpev6/). This helps us gauge the impact of our software. It also makes us smile.
1. Enable monitor only mode
2. Klingon mode (Redact customer info)
3. Screenshot
4. Resume regular queuing
5. Screenshot
## Enable monitor only mode
```shell
sudo systemctl stop lqos_scheduler
sudo systemctl restart lqosd
sudo systemctl restart lqos_node_manager
```
## Klingon mode
Please go to the Web UI and click Configuration. Toggle Redact Customer Information (screenshot mode) and then Apply Changes.
## Resume regular queuing
```shell
sudo systemctl start lqos_scheduler
```
## Screenshot
To generate a screenshot - please go to the Web UI and click Configuration. Toggle Redact Customer Information (screenshot mode), Apply Changes, and then return to the dashboard to take a screenshot.

View File

@ -0,0 +1,12 @@
# Extras
## Flamegraph
```shell
git clone https://github.com/brendangregg/FlameGraph.git
cd FlameGraph
sudo perf record -F 99 -a -g -- sleep 60
perf script > out.perf
./stackcollapse-perf.pl out.perf > out.folded
./flamegraph.pl --title LibreQoS --width 7200 out.folded > libreqos.svg
```

View File

@ -0,0 +1,66 @@
# Git install
## Clone the repo
The recommended install location is `/opt/libreqos`
Go to the install location, and clone the repo:
```shell
cd /opt/
git clone https://github.com/LibreQoE/LibreQoS.git libreqos
sudo chown -R YOUR_USER /opt/libreqos
cd /opt/libreqos/
git switch develop
```
By specifying `libreqos` at the end, git will ensure the folder name is lowercase.
## Install Dependencies from apt and pip
You need to have a few packages from `apt` installed:
```shell
sudo apt-get install -y python3-pip clang gcc gcc-multilib llvm libelf-dev git nano graphviz curl screen llvm pkg-config linux-tools-common linux-tools-`uname -r` libbpf-dev libssl-dev
```
Then you need to install some Python dependencies:
```shell
cd /opt/libreqos
pip install -r requirements.txt --break-system-packages
sudo pip install -r requirements.txt --break-system-packages
```
## Python 3.10 quirk (will fix later)
```shell
cd /opt/libreqos/src/rust
cargo update
sudo cp /usr/lib/x86_64-linux-gnu/libpython3.11.so /usr/lib/x86_64-linux-gnu/libpython3.10.so.1.0
```
## Install the Rust development system
Run the following:
```shell
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
When Rust finishes installing, it will tell you to execute a command to place the Rust build tools into your path. You need to either execute this command or logout and back in again.
Once that's done, please run:
```shell
cd /opt/libreqos/src/
./build_rust.sh
```
This will take a while the first time, but it puts everything in the right place.
Now, to build rust crates, run:
```shell
cd rust
cargo build --all
```

View File

@ -0,0 +1,151 @@
# Integrations
## Splynx Integration
First, set the relevant parameters for Splynx (splynx_api_key, splynx_api_secret, etc.) in `/etc/lqos.conf`.
The Splynx Integration uses Basic authentication. For this type of authentication, please make sure you enable [Unsecure access](https://splynx.docs.apiary.io/#introduction/authentication) in your Splynx API key settings. The Splynx API key must also be granted the necessary permissions.
To test the Splynx Integration, use
```shell
python3 integrationSplynx.py
```
On the first successful run, it will create a ShapedDevices.csv file and network.json.
ShapedDevices.csv will be overwritten every time the Splynx integration is run.
To ensure the network.json is always overwritten with the newest version pulled in by the integration, please edit `/etc/lqos.conf` with the command `sudo nano /etc/lqos.conf`.
Edit the file to set the value of `always_overwrite_network_json` to `true`.
Then, run `sudo systemctl restart lqosd`.
You have the option to run integrationSplynx.py automatically on boot and every X minutes (set by the parameter `queue_refresh_interval_mins`), which is highly recommended. This can be enabled by setting ```enable_spylnx = true``` in `/etc/lqos.conf`.
Once set, run `sudo systemctl restart lqos_scheduler`.
### Splynx Overrides
You can also modify the file `integrationSplynxBandwidths.csv` to override the default bandwidths for each Node (Site, AP).
A template is available in the `/opt/libreqos/src` folder. To utilize the template, copy the file `integrationSplynxBandwidths.template.csv` (removing the `.template` part of the filename) and set the appropriate information inside the file. For example, if you want to change the set bandwidth for a site, you would do:
```
sudo cp /opt/libreqos/src/integrationSplynxBandwidths.template.csv /opt/libreqos/src/integrationSplynxBandwidths.csv
```
And edit the CSV using LibreOffice or your preferred CSV editor.
## UISP Integration
First, set the relevant parameters for UISP (token, url, automatic_import_uisp, etc.) in `/etc/lqos.conf`.
```
# Whether to run the UISP integration automatically in the lqos_scheduler service
enable_uisp = true
# Your UISP API Access Token
token = ""
# Your UISP URL (include https://, but omit anything past .com, .net, etc)
url = "https://uisp.your_domain.com"
# The site here refers to the Root site you want UISP to base its topology "perspective" from.
# Default value is a blank string.
site = "Site_name"
# Strategy type. "full" is recommended. "flat" can be used if only client shaping is desired.
strategy = "full"
# Suspension strategy:
# * "none" - do not handle suspensions
# * "ignore" - do not add suspended customers to the network map
# * "slow" - limit suspended customers to 1mbps
suspended_strategy = "none"
# UISP's reported AP capacities for AirMax can be a bit optimistic. For AirMax APs, we limit
# to 65% of what UISP claims an AP's capacity is, by default. This is adjustable.
airmax_capacity = 0.65
# UISP's reported AP capacities for LTU are more accurate, but to be safe we adjust to 95%
# of those capacities. This is adjustable.
ltu_capacity = 0.95
# If you want to exclude sites in UISP from appearing in your LibreQoS network.json, simply
# include them here. For example, exclude_sites = ["Site_1", "Site_2"]
exclude_sites = []
# If you use DHCPv6, and want to pull in IPv6 CIDRs corresponding to each customer's IPv4
# address, you can do so with this. If enabled, be sure to fill out mikrotikDHCPRouterList.csv
# and run `python3 mikrotikFindIPv6.py` to test its functionality.
ipv6_with_mikrotik = false
# If you want customers to receive a bit more or less than their allocated speed plan, set
# it here. For example, 1.15 is 15% above their allotted speed plan.
bandwidth_overhead_factor = 1.15
# By default, the customer "minimum" is set to 98% of the maximum (CIR).
commit_bandwidth_multiplier = 0.98
exception_cpes = []
# If you have some sites branched off PtMP Access Points, set `true`
use_ptmp_as_parent = true
uisp_use_burst = true
```
To test the UISP Integration, use
```shell
cd /opt/libreqos/src
sudo /opt/libreqos/src/bin/uisp_integration
```
On the first successful run, it will create a network.json and ShapedDevices.csv file.
If a network.json file exists, it will not be overwritten, unless you set ```always_overwrite_network_json = true```.
ShapedDevices.csv will be overwritten every time the UISP integration is run.
To ensure the network.json is always overwritten with the newest version pulled in by the integration, please edit `/etc/lqos.conf` with the command `sudo nano /etc/lqos.conf`.
Edit the file to set the value of `always_overwrite_network_json` to `true`.
Then, run `sudo systemctl restart lqosd`.
You have the option to run integrationUISP.py automatically on boot and every X minutes (set by the parameter `queue_refresh_interval_mins`), which is highly recommended. This can be enabled by setting ```enable_uisp = true``` in `/etc/lqos.conf`. Once set, run `sudo systemctl restart lqos_scheduler`.
### UISP Overrides
You can also modify the following files to more accurately reflect your network:
- integrationUISPbandwidths.csv
- integrationUISProutes.csv
Each of the files above has a template available in the `/opt/libreqos/src` folder. If you don't find them there, you can navigate [here](https://github.com/LibreQoE/LibreQoS/tree/develop/src). To utilize a template, copy the file (removing the `.template` part of the filename) and set the appropriate information inside each file.
For example, if you want to change the set bandwidth for a site, you would do:
```shell
sudo cp /opt/libreqos/src/integrationUISPbandwidths.template.csv /opt/libreqos/src/integrationUISPbandwidths.csv
```
And edit the CSV using LibreOffice or your preferred CSV editor.
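For illustration only, a bandwidth override row might look like the following (the column names here are hypothetical; the header row in the shipped template defines the real layout and takes precedence over this sketch):
```
ParentNode,Download,Upload
AP_7,500,100
```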
## Powercode Integration
First, set the relevant parameters for Powercode (powercode_api_key, powercode_api_url, etc.) in `/etc/lqos.conf`.
To test the Powercode Integration, use
```shell
python3 integrationPowercode.py
```
On the first successful run, it will create a ShapedDevices.csv file.
You can modify the network.json file manually to reflect Site/AP bandwidth limits.
ShapedDevices.csv will be overwritten every time the Powercode integration is run.
You have the option to run integrationPowercode.py automatically on boot and every X minutes (set by the parameter `queue_refresh_interval_mins`), which is highly recommended. This can be enabled by setting ```enable_powercode = true``` in `/etc/lqos.conf`.
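As a rough sketch, the relevant section of `/etc/lqos.conf` might look like this (values are placeholders; the `[powercode_integration]` table name follows the field naming used by the Rust config loader later in this changeset):
```toml
# Hypothetical example - substitute your own key and URL
[powercode_integration]
enable_powercode = true
powercode_api_key = "your-api-key"
powercode_api_url = "https://powercode.your_domain.com"
```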
## Sonar Integration
First, set the relevant parameters for Sonar (sonar_api_key, sonar_api_url, etc.) in `/etc/lqos.conf`.
To test the Sonar Integration, use
```shell
python3 integrationSonar.py
```
On the first successful run, it will create a ShapedDevices.csv file.
If a network.json file exists, it will not be overwritten, unless you set ```always_overwrite_network_json = true```.
You can modify the network.json file to more accurately reflect bandwidth limits.
ShapedDevices.csv will be overwritten every time the Sonar integration is run.
You have the option to run integrationSonar.py automatically on boot and every X minutes (set by the parameter `queue_refresh_interval_mins`), which is highly recommended. This can be enabled by setting ```enable_sonar = true``` in `/etc/lqos.conf`.
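A similar sketch for Sonar (again, values are placeholders, and the `[sonar_integration]` table name is inferred from the config loader's field naming):
```toml
# Hypothetical example - substitute your own key and URL
[sonar_integration]
enable_sonar = true
sonar_api_url = "https://your_instance.sonar.software/api/graphql"
sonar_api_key = "your-api-key"
snmp_community = "public"
```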
View File
@ -0,0 +1,28 @@
# Performance Tuning
## Ubuntu Starts Slowly (~2 minutes)
### List all services which require the network
```shell
systemctl show -p WantedBy network-online.target
```
### For Ubuntu 22.04, this command can help
```shell
systemctl disable cloud-config iscsid cloud-final
```
### Set the proper CPU governor (bare-metal/hypervisor host)
```shell
cpupower frequency-set --governor performance
```
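To verify the governor took effect across all cores, you can read it back from sysfs:
```shell
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c
```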
### OSPF
It is recommended to shorten the OSPF timers of both OSPF neighbors (core and edge router) to minimize downtime upon a reboot of the LibreQoS server. The relevant timers are (a configuration sketch follows the list):
* hello interval
* dead interval
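A minimal sketch, assuming FRR on the neighboring routers (the interface name and timer values are illustrative; whatever values you choose must match on both neighbors, or the adjacency will not form):
```shell
# Illustrative FRR example - apply the same timers on both OSPF neighbors
vtysh -c 'configure terminal' \
      -c 'interface eth0' \
      -c 'ip ospf hello-interval 1' \
      -c 'ip ospf dead-interval 4'
```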
View File
@ -0,0 +1,48 @@
# Troubleshooting
## Common Issues
### LibreQoS Is Running, But Traffic Not Shaping
- In `/etc/lqos.conf`, swap the interfaces
- Restart lqosd, then re-run LibreQoS.py:
```shell
cd /opt/libreqos/src
sudo python3 LibreQoS.py
```
If that fixes it, and if you are using the scheduler service, run ```sudo systemctl restart lqos_scheduler```
Make sure your services are running properly:
- `lqosd.service`
- `lqos_scheduler`
The Web UI and lqos_scheduler are dependent on `lqosd.service` being in a healthy, running state.
For example, to check the status of lqosd, run:
```sudo systemctl status lqosd```
### lqosd not running or failed to start
At the command-line, type ```sudo RUST_LOG=info /opt/libreqos/src/bin/lqosd``` which will provide specifics regarding why it failed to start.
### RTNETLINK answers: Invalid argument
This tends to show up when the MQ qdisc cannot be added correctly to the NIC interface. This would suggest the NIC has insufficient RX/TX queues. Please make sure you are using the [recommended NICs](../SystemRequirements/Networking.md).
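To check how many RX/TX channels your NIC exposes, query it with ethtool (replace `eth0` with your interface name):
```shell
ethtool -l eth0
```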
### InfluxDB "Failed to update bandwidth graphs"
The scheduler (scheduler.py) runs the InfluxDB integration within a try/except statement. If it fails to update InfluxDB, it will report "Failed to update bandwidth graphs".
To find the exact cause of the failure, please run ```python3 graphInfluxDB.py``` which will provide more specific errors.
### All customer IPs are listed under Unknown IPs, rather than Shaped Devices in GUI
```shell
cd /opt/libreqos/src
sudo systemctl stop lqos_scheduler
sudo python3 LibreQoS.py
```
The console output from running LibreQoS.py directly provides more specific errors regarding issues with ShapedDevices.csv and network.json
Once you have identified the error and fixed ShapedDevices.csv and/or network.json, run
```sudo systemctl start lqos_scheduler```
View File
@ -0,0 +1,19 @@
# Updating 1.5 To Latest Version
```{warning}
If you use the XDP bridge, traffic will briefly stop passing through the bridge when lqosd restarts (XDP bridge is only operating while lqosd runs).
```
## If you installed with Git
1. Change to your `LibreQoS` directory (e.g. `cd /opt/LibreQoS`)
2. Update from Git: `git pull`
3. Switch to the develop branch: ```git switch develop```
4. Recompile: `./build-rust.sh`
5. Remove the pinned eBPF maps: `sudo rust/remove_pinned_maps.sh`
Run the following commands to reload the LibreQoS services.
```shell
sudo systemctl restart lqosd lqos_scheduler
```
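To confirm both services came back up cleanly:
```shell
sudo systemctl status lqosd lqos_scheduler
```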
View File
@ -17,6 +17,8 @@ Welcome to the LibreQoS documentation!
:caption: Read me first!
docs/Quickstart/networkdesignassumptions
docs/SystemRequirements/Compute
docs/SystemRequirements/Networking
.. toctree::
:maxdepth: 1
@ -26,31 +28,39 @@ Welcome to the LibreQoS documentation!
.. toctree::
:maxdepth: 1
:caption: Quickstart:
:caption: v1.4:
docs/SystemRequirements/Compute
docs/SystemRequirements/Networking
docs/Quickstart/quickstart-prereq
docs/Quickstart/quickstart-libreqos-1.4
docs/Quickstart/configuration
docs/Quickstart/services-and-run
docs/Quickstart/share
docs/v1.4/Quickstart/quickstart-prereq
docs/v1.4/Quickstart/quickstart-libreqos-1.4
docs/v1.4/Quickstart/configuration
docs/v1.4/Quickstart/services-and-run
docs/v1.4/Quickstart/share
.. toctree::
:maxdepth: 1
:caption: Updates:
docs/v1.4/Updates/update
docs/Updates/update
docs/v1.4/TechnicalDocs/complex-install
docs/v1.4/TechnicalDocs/troubleshooting
docs/v1.4/TechnicalDocs/integrations
docs/v1.4/TechnicalDocs/extras
docs/v1.4/TechnicalDocs/performance-tuning
.. toctree::
:maxdepth: 1
:caption: Technical Documentation:
:caption: v1.5:
docs/TechnicalDocs/complex-install
docs/TechnicalDocs/troubleshooting
docs/TechnicalDocs/integrations
docs/TechnicalDocs/extras
docs/TechnicalDocs/performance-tuning
docs/v1.5/Quickstart/quickstart-prereq
docs/v1.5/Quickstart/quickstart-libreqos-1.5
docs/v1.5/Quickstart/configuration
docs/v1.5/Quickstart/services-and-run
docs/v1.5/Quickstart/share
docs/v1.5/Updates/update
docs/v1.5/TechnicalDocs/complex-install
docs/v1.5/TechnicalDocs/troubleshooting
docs/v1.5/TechnicalDocs/integrations
docs/v1.5/TechnicalDocs/extras
docs/v1.5/TechnicalDocs/performance-tuning
.. toctree::
:maxdepth: 1
View File
@ -61,8 +61,8 @@ echo "#!/bin/bash" >> postinst
echo "# Install Python Dependencies" >> postinst
echo "pushd /opt/libreqos" >> postinst
# - Setup Python dependencies as a post-install task
echo "python3 -m pip install --break-system-packages -r src/requirements.txt" >> postinst
echo "sudo python3 -m pip install --break-system-packages -r src/requirements.txt" >> postinst
echo "PIP_BREAK_SYSTEM_PACKAGES=1 python3 -m pip install -r src/requirements.txt" >> postinst
echo "PIP_BREAK_SYSTEM_PACKAGES=1 sudo python3 -m pip install -r src/requirements.txt" >> postinst
# - Run lqsetup
echo "/opt/libreqos/src/bin/lqos_setup" >> postinst
# - Setup the services
View File
@ -1,5 +1,7 @@
from pythonCheck import checkPythonVersion
checkPythonVersion()
import requests
import warnings
import os
@ -13,6 +15,8 @@ from requests.auth import HTTPBasicAuth
if find_ipv6_using_mikrotik() == True:
from mikrotikFindIPv6 import pullMikrotikIPv6
from integrationCommon import NetworkGraph, NetworkNode, NodeType
import os
import csv
def buildHeaders():
"""
@ -299,4 +303,4 @@ def importFromSplynx():
createShaper()
if __name__ == '__main__':
importFromSplynx()
importFromSplynx()
View File
@ -473,8 +473,8 @@ def buildFullGraph():
continue
if uisp_suspended_strategy() == "slow":
print("WARNING: Site " + name + " is suspended")
download = 1
upload = 1
download = 2
upload = 2
if site['identification']['status'] == "disconnected":
print("WARNING: Site " + name + " is disconnected")
805
src/rust/Cargo.lock generated
File diff suppressed because it is too large
View File
@ -44,6 +44,8 @@ anyhow = "1"
thiserror = "1"
tokio = { version = "1", features = [ "full" ] }
log = "0"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
bincode = "1"
once_cell = "1"
nix = { version = "0", features = ["time"] }
@ -55,18 +57,23 @@ ip_network_table = "0"
ip_network = "0"
sha2 = "0"
uuid = { version = "1", features = ["v4", "fast-rng" ] }
dashmap = "5"
dashmap = "5.1.0"
toml = "0.8.8"
zerocopy = {version = "0.6.1", features = [ "simd" ] }
sysinfo = "=0.30.13"
zerocopy = {version = "0.8.5", features = [ "derive", "zerocopy-derive", "simd" ] }
sysinfo = { version = "0", default-features = false, features = [ "system" ] }
default-net = "0"
reqwest = { version = "0", features = ["json", "blocking"] }
pyo3 = "0.20.3"
colored = "2"
miniz_oxide = "0.7"
miniz_oxide = "0.8"
byteorder = "1"
num-traits = "0.2.19"
clap = { version = "4", features = ["derive"] }
timerfd = "1.6"
crossbeam-channel = { version = "0.5" }
crossbeam-queue = "0.3.11"
arc-swap = "1.7.1"
# May have to change this one for ARM?
jemallocator = "0.5"
mimalloc = "0.1.43"
View File
@ -8,8 +8,9 @@ license = "GPL-2.0-only"
tokio = { version = "1.25.0", features = ["full"] }
anyhow = { workspace = true }
env_logger = "0"
log = { workspace = true }
tracing = { workspace = true }
lqos_bus = { path = "../lqos_bus" }
serde_cbor = { workspace = true }
sqlite = "0.30.4"
axum = "0.6"
log = "0.4"
View File
@ -16,7 +16,7 @@ lqos_config = { path = "../lqos_config" }
lqos_utils = { path = "../lqos_utils" }
lts_client = { path = "../lts_client" }
tokio = { workspace = true }
log = { workspace = true }
tracing = { workspace = true }
nix = { workspace = true }
serde_cbor = { workspace = true }
View File
@ -1,6 +1,7 @@
mod v1;
use serde::{Serialize, Deserialize};
use thiserror::Error;
use tracing::warn;
pub use v1::*;
#[derive(Debug, Clone, Serialize, Deserialize)]
@ -19,8 +20,8 @@ pub fn build_stats(stats: &AnonymousUsageV1) -> Result<Vec<u8>, StatsError> {
let mut result = Vec::new();
let payload = serde_cbor::to_vec(stats);
if let Err(e) = payload {
log::warn!("Unable to serialize statistics. Not sending them.");
log::warn!("{e:?}");
warn!("Unable to serialize statistics. Not sending them.");
warn!("{e:?}");
return Err(StatsError::SerializeFail);
}
let payload = payload.unwrap();
View File
@ -3,7 +3,7 @@ use crate::{
bus::BusClientError, decode_response, encode_request, BusRequest, BusResponse, BusSession,
BUS_SOCKET_PATH,
};
use log::error;
use tracing::error;
use tokio::{
io::{AsyncReadExt, AsyncWriteExt},
net::UnixStream,
View File
@ -7,7 +7,7 @@ mod session;
mod unix_socket_server;
mod queue_data;
pub use client::bus_request;
use log::error;
use tracing::error;
pub use persistent_client::BusClient;
pub use reply::BusReply;
pub use request::{BusRequest, StatsRequest, TopFlowType};
View File
@ -3,7 +3,7 @@ use crate::{
decode_response, encode_request, BusRequest, BusResponse, BusSession,
BUS_SOCKET_PATH,
};
use log::{error, warn};
use tracing::{error, warn};
use std::time::Duration;
use tokio::{
io::{AsyncReadExt, AsyncWriteExt},
View File
@ -132,6 +132,9 @@ pub enum BusRequest {
/// The parent of the map to retrieve
parent: usize,
},
/// Request the full network tree
GetFullNetworkMap,
/// Retrieves the top N queues from the root level, and summarizes
/// the others as "other"
@ -139,6 +142,9 @@ pub enum BusRequest {
/// Retrieve node names from network.json
GetNodeNamesFromIds(Vec<usize>),
/// Get all circuits and usage statistics
GetAllCircuits,
/// Retrieve stats for all queues above a named circuit id
GetFunnel {
@ -190,6 +196,9 @@ pub enum BusRequest {
/// Lat/Lon of Endpoints
CurrentEndpointLatLon,
/// Duration of flows
FlowDuration,
/// Ether Protocol Summary
EtherProtocolSummary,
View File
@ -1,7 +1,5 @@
use super::QueueStoreTransit;
use crate::{
ip_stats::{FlowbeeSummaryData, PacketHeader}, IpMapping, IpStats, XdpPpingResult,
};
use crate::{ip_stats::{FlowbeeSummaryData, PacketHeader}, Circuit, IpMapping, IpStats, XdpPpingResult};
use lts_client::transport_data::{StatsTotals, StatsHost, StatsTreeNode};
use serde::{Deserialize, Serialize};
use std::net::IpAddr;
@ -82,6 +80,9 @@ pub enum BusResponse {
/// Named nodes from network.json
NodeNames(Vec<(usize, String)>),
/// Circuit data
CircuitData(Vec<Circuit>),
/// Statistics from lqosd
LqosdStats {
@ -138,6 +139,9 @@ pub enum BusResponse {
/// Current Lat/Lon of endpoints
CurrentLatLon(Vec<(f64, f64, String, u64, f32)>),
/// Duration of flows
FlowDuration(Vec<(usize, u64)>),
/// Summary of Ether Protocol
EtherProtocols{
/// Number of IPv4 Bytes
View File
@ -2,7 +2,7 @@ use crate::{
decode_request, encode_response, BusReply, BusRequest, BusResponse,
BUS_SOCKET_PATH,
};
use log::{error, warn};
use tracing::{debug, error, info, warn};
use std::{ffi::CString, fs::remove_file};
use thiserror::Error;
use tokio::{
@ -92,8 +92,9 @@ impl UnixSocketServer {
pub async fn listen(
&self,
handle_bus_requests: fn(&[BusRequest], &mut Vec<BusResponse>),
mut bus_rx: tokio::sync::mpsc::Receiver<(tokio::sync::oneshot::Sender<BusReply>, BusRequest)>,
) -> Result<(), UnixSocketServerError> {
// Setup the listener and grant permissions to it
// Set up the listener and grant permissions to it
let listener = UnixListener::bind(BUS_SOCKET_PATH);
if listener.is_err() {
error!("Unable to bind to {BUS_SOCKET_PATH}");
@ -102,42 +103,56 @@ impl UnixSocketServer {
}
let listener = listener.unwrap();
Self::make_socket_public()?;
warn!("Listening on: {}", BUS_SOCKET_PATH);
info!("Listening on: {}", BUS_SOCKET_PATH);
loop {
let ret = listener.accept().await;
if ret.is_err() {
error!("Unable to listen for requests on bound {BUS_SOCKET_PATH}");
error!("{:?}", ret);
return Err(UnixSocketServerError::ListenFail);
}
let (mut socket, _) = ret.unwrap();
tokio::spawn(async move {
loop {
let mut buf = vec![0; READ_BUFFER_SIZE];
let bytes_read = socket.read(&mut buf).await;
if bytes_read.is_err() {
warn!("Unable to read from client socket. Server remains alive.");
warn!("This is probably harmless.");
warn!("{:?}", bytes_read);
break; // Escape out of the thread
}
if let Ok(request) = decode_request(&buf) {
tokio::select!(
ret = bus_rx.recv() => {
// We received a channel-based message
if let Some((reply_channel, msg)) = ret {
let mut response = BusReply { responses: Vec::with_capacity(8) };
handle_bus_requests(&request.requests, &mut response.responses);
let _ =
reply_unix(&encode_response(&response).unwrap(), &mut socket)
.await;
if !request.persist {
break;
handle_bus_requests(&[msg], &mut response.responses);
if let Err(e) = reply_channel.send(response) {
warn!("Unable to send response back to client: {:?}", e);
}
} else {
warn!("Invalid data on local socket");
break;
}
}
});
},
ret = listener.accept() => {
// We received a UNIX socket message
if ret.is_err() {
error!("Unable to listen for requests on bound {BUS_SOCKET_PATH}");
error!("{:?}", ret);
return Err(UnixSocketServerError::ListenFail);
}
let (mut socket, _) = ret.unwrap();
tokio::spawn(async move {
loop {
let mut buf = vec![0; READ_BUFFER_SIZE];
let bytes_read = socket.read(&mut buf).await;
if bytes_read.is_err() {
debug!("Unable to read from client socket. Server remains alive.");
debug!("This is probably harmless.");
debug!("{:?}", bytes_read);
break; // Escape out of the thread
}
if let Ok(request) = decode_request(&buf) {
let mut response = BusReply { responses: Vec::with_capacity(8) };
handle_bus_requests(&request.requests, &mut response.responses);
let _ =
reply_unix(&encode_response(&response).unwrap(), &mut socket)
.await;
if !request.persist {
break;
}
} else {
warn!("Invalid data on local socket");
break;
}
}
});
},
);
}
//Ok(()) // unreachable
}
@ -155,8 +170,8 @@ async fn reply_unix(
) -> Result<(), UnixSocketServerError> {
let ret = socket.write_all(response).await;
if ret.is_err() {
warn!("Unable to write to UNIX socket. This is usually harmless, meaning the client went away.");
warn!("{:?}", ret);
debug!("Unable to write to UNIX socket. This is usually harmless, meaning the client went away.");
debug!("{:?}", ret);
return Err(UnixSocketServerError::WriteFail);
};
Ok(())
View File
@ -1,3 +1,4 @@
use std::net::IpAddr;
use crate::TcHandle;
use serde::{Deserialize, Serialize};
use lqos_utils::units::DownUpOrder;
@ -182,3 +183,30 @@ pub struct FlowbeeSummaryData {
/// Circuit Name
pub circuit_name: String,
}
/// Circuit statistics for transmit
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)]
pub struct Circuit {
/// The IP address of the host.
pub ip: IpAddr,
/// Current bytes-per-second passing through this host.
pub bytes_per_second: DownUpOrder<u64>,
/// Median latency for this host at the current time.
pub median_latency: Option<f32>,
/// TCP Retransmits for this host at the current time.
pub tcp_retransmits: DownUpOrder<u64>,
/// The mapped circuit ID
pub circuit_id: Option<String>,
/// The mapped device ID
pub device_id: Option<String>,
/// The parent node of the device
pub parent_node: Option<String>,
/// The circuit name
pub circuit_name: Option<String>,
/// The device name
pub device_name: Option<String>,
/// The current plan for this circuit.
pub plan: DownUpOrder<u32>,
/// The last time this host was seen, in nanoseconds since boot.
pub last_seen_nanos: u64,
}
View File
@ -14,14 +14,15 @@ mod bus;
mod ip_stats;
pub use ip_stats::{
tos_parser, IpMapping, IpStats, PacketHeader,
XdpPpingResult, FlowbeeSummaryData, FlowbeeProtocol
XdpPpingResult, FlowbeeSummaryData, FlowbeeProtocol,
Circuit
};
mod tc_handle;
pub use bus::{
bus_request, decode_request, decode_response, encode_request,
encode_response, BusClient, BusReply, BusRequest, BusResponse, BusSession,
CakeDiffTinTransit, CakeDiffTransit, CakeTransit, QueueStoreTransit,
UnixSocketServer, BUS_SOCKET_PATH, StatsRequest, TopFlowType
UnixSocketServer, BUS_SOCKET_PATH, StatsRequest, TopFlowType,
};
pub use tc_handle::TcHandle;
View File
@ -1,4 +1,4 @@
use log::error;
use tracing::error;
use lqos_utils::hex_string::read_hex_string;
use serde::{Deserialize, Serialize};
use thiserror::Error;
View File
@ -14,8 +14,10 @@ ip_network_table = { workspace = true }
ip_network = { workspace = true }
sha2 = { workspace = true }
uuid = { workspace = true }
log = { workspace = true }
tracing = { workspace = true }
dashmap = { workspace = true }
pyo3 = { workspace = true }
toml = { workspace = true }
lqos_utils = { path = "../lqos_utils" }
arc-swap = { workspace = true }
once_cell = { workspace = true }
View File
@ -1,7 +1,7 @@
//! The `authentication` module provides authorization for use of the
//! local web UI on LibreQoS boxes. It maps to `/<install dir>/lqusers.toml`
use log::{error, warn};
use tracing::{error, warn};
use serde::{Deserialize, Serialize};
use sha2::{Digest, Sha256};
use std::{
@ -61,7 +61,7 @@ impl WebUsers {
fn path() -> Result<PathBuf, AuthenticationError> {
let base_path = crate::load_config()
.map_err(|_| AuthenticationError::UnableToLoadEtcLqos)?
.lqos_directory;
.lqos_directory.clone();
let filename = Path::new(&base_path).join("lqusers.toml");
Ok(filename)
}
View File
@ -1,5 +1,5 @@
//! Manages the `/etc/lqos.conf` file.
use log::error;
use tracing::{error, info};
use serde::{Deserialize, Serialize};
use toml_edit::{DocumentMut, value};
use std::{fs, path::Path};
@ -171,7 +171,7 @@ impl EtcLqos {
}
pub(crate) fn load_from_string(raw: &str) -> Result<Self, EtcLqosError> {
log::info!("Trying to load old TOML version from /etc/lqos.conf");
info!("Trying to load old TOML version from /etc/lqos.conf");
let document = raw.parse::<DocumentMut>();
match document {
Err(e) => {
@ -203,14 +203,14 @@ impl EtcLqos {
let cfg_path = Path::new("/etc/lqos.conf");
let backup_path = Path::new("/etc/lqos.conf.backup");
if let Err(e) = std::fs::copy(cfg_path, backup_path) {
log::error!("Unable to backup /etc/lqos.conf");
log::error!("{e:?}");
error!("Unable to backup /etc/lqos.conf");
error!("{e:?}");
return Err(EtcLqosError::BackupFail);
}
let new_cfg = document.to_string();
if let Err(e) = fs::write(cfg_path, new_cfg) {
log::error!("Unable to write to /etc/lqos.conf");
log::error!("{e:?}");
error!("Unable to write to /etc/lqos.conf");
error!("{e:?}");
return Err(EtcLqosError::WriteFail);
}
Ok(())
@ -249,8 +249,8 @@ pub fn enable_long_term_stats(license_key: String) {
let new_cfg = config_doc.to_string();
if let Err(e) = fs::write(Path::new("/etc/lqos.conf"), new_cfg) {
log::error!("Unable to write to /etc/lqos.conf");
log::error!("{e:?}");
error!("Unable to write to /etc/lqos.conf");
error!("{e:?}");
return;
}
}
@ -278,8 +278,8 @@ fn check_config(cfg_doc: &mut DocumentMut, cfg: &mut EtcLqos) {
cfg_doc["node_id"] = value(format!("{:x}", hash));
println!("Updating");
if let Err(e) = cfg.save(cfg_doc) {
log::error!("Unable to save /etc/lqos.conf");
log::error!("{e:?}");
error!("Unable to save /etc/lqos.conf");
error!("{e:?}");
}
}
}
View File
@ -7,6 +7,7 @@ use super::{
};
use thiserror::Error;
use toml_edit::DocumentMut;
use tracing::{debug, error, info};
#[derive(Debug, Error)]
pub enum MigrationError {
@ -23,7 +24,7 @@ pub enum MigrationError {
}
pub fn migrate_if_needed() -> Result<(), MigrationError> {
log::info!("Checking config file version");
debug!("Checking config file version");
let raw =
std::fs::read_to_string("/etc/lqos.conf").map_err(|e| MigrationError::ReadError(e))?;
@ -31,18 +32,18 @@ pub fn migrate_if_needed() -> Result<(), MigrationError> {
.parse::<DocumentMut>()
.map_err(|e| MigrationError::ParseError(e))?;
if let Some((_key, version)) = doc.get_key_value("version") {
log::info!("Configuration file is at version {}", version.as_str().unwrap());
debug!("Configuration file is at version {}", version.as_str().unwrap());
if version.as_str().unwrap().trim() == "1.5" {
log::info!("Configuration file is already at version 1.5, no migration needed");
debug!("Configuration file is already at version 1.5, no migration needed");
return Ok(());
} else {
log::error!("Configuration file is at version {}, but this version of lqos only supports version 1.5", version.as_str().unwrap());
error!("Configuration file is at version {}, but this version of lqos only supports version 1.5", version.as_str().unwrap());
return Err(MigrationError::UnknownVersion(
version.as_str().unwrap().to_string(),
));
}
} else {
log::info!("No version found in configuration file, assuming 1.4x and migration is needed");
info!("No version found in configuration file, assuming 1.4x and migration is needed");
let new_config = migrate_14_to_15()?;
// Backup the old configuration
std::fs::rename("/etc/lqos.conf", "/etc/lqos.conf.backup14")
View File
@ -6,8 +6,13 @@ use std::path::Path;
use self::migration::migrate_if_needed;
pub use self::v15::Config;
pub use etclqos_migration::*;
use std::sync::Mutex;
use std::sync::{Arc, Mutex};
use std::sync::atomic::AtomicBool;
use arc_swap::ArcSwap;
use once_cell::sync::Lazy;
use thiserror::Error;
use tracing::{debug, error, info};
mod migration;
mod python_migration;
#[cfg(test)]
@ -15,55 +20,75 @@ pub mod test_data;
mod v15;
pub use v15::{Tunables, BridgeConfig};
static CONFIG: Mutex<Option<Config>> = Mutex::new(None);
static CONFIG_LOADED: AtomicBool = AtomicBool::new(false);
static CONFIG: Lazy<ArcSwap<Config>> = Lazy::new(|| {
match load_config() {
Ok(config) => {
CONFIG_LOADED.store(true, std::sync::atomic::Ordering::SeqCst);
ArcSwap::new(config)
},
Err(e) => {
error!("Unable to load configuration: {:?}", e);
ArcSwap::new(Arc::new(Config::default()))
}
}
});
static LOADER_MUTEX: Mutex<bool> = Mutex::new(false);
/// Load the configuration from `/etc/lqos.conf`.
pub fn load_config() -> Result<Config, LibreQoSConfigError> {
let mut config_location = "/etc/lqos.conf".to_string();
if let Ok(lqos_config) = std::env::var("LQOS_CONFIG") {
config_location = lqos_config;
log::info!("Overriding lqos.conf location from environment variable.");
pub fn load_config() -> Result<Arc<Config>, LibreQoSConfigError> {
// If we have a cached version, return it
let mut lock = LOADER_MUTEX.lock().unwrap();
*lock = !(*lock); // Not actually useful, prevents it from being optimized away
if CONFIG_LOADED.load(std::sync::atomic::Ordering::SeqCst) {
let clone = CONFIG.load().clone();
return Ok(clone);
}
let config_location = if let Ok(lqos_config) = std::env::var("LQOS_CONFIG") {
info!("Overriding lqos.conf location from environment variable.");
lqos_config
} else {
"/etc/lqos.conf".to_string()
};
let mut lock = CONFIG.lock().unwrap();
if lock.is_none() {
log::info!("Loading configuration file {config_location}");
migrate_if_needed().map_err(|e| {
log::error!("Unable to migrate configuration: {:?}", e);
LibreQoSConfigError::FileNotFoud
})?;
debug!("Loading configuration file {config_location}");
migrate_if_needed().map_err(|e| {
error!("Unable to migrate configuration: {:?}", e);
LibreQoSConfigError::FileNotFoud
})?;
let file_result = std::fs::read_to_string(&config_location);
if file_result.is_err() {
log::error!("Unable to open {config_location}");
return Err(LibreQoSConfigError::FileNotFoud);
}
let raw = file_result.unwrap();
let file_result = std::fs::read_to_string(&config_location);
if file_result.is_err() {
error!("Unable to open {config_location}");
return Err(LibreQoSConfigError::FileNotFoud);
}
let raw = file_result.unwrap();
let config_result = Config::load_from_string(&raw);
if config_result.is_err() {
log::error!("Unable to parse /etc/lqos.conf");
log::error!("Error: {:?}", config_result);
return Err(LibreQoSConfigError::ParseError(format!(
"{:?}",
config_result
)));
}
let mut final_config = config_result.unwrap(); // We know it's good at this point
// Check for environment variable overrides
if let Ok(lqos_dir) = std::env::var("LQOS_DIRECTORY") {
final_config.lqos_directory = lqos_dir;
}
log::info!("Set cached version of config file");
*lock = Some(final_config);
let config_result = Config::load_from_string(&raw);
if config_result.is_err() {
error!("Unable to parse /etc/lqos.conf");
error!("Error: {:?}", config_result);
return Err(LibreQoSConfigError::ParseError(format!(
"{:?}",
config_result
)));
}
let mut final_config = config_result.unwrap(); // We know it's good at this point
// Check for environment variable overrides
if let Ok(lqos_dir) = std::env::var("LQOS_DIRECTORY") {
final_config.lqos_directory = lqos_dir;
}
Ok(lock.as_ref().unwrap().clone())
debug!("Set cached version of config file");
let new_config = Arc::new(final_config.clone());
Ok(new_config)
}
/// Enables LTS reporting in the configuration file.
/*/// Enables LTS reporting in the configuration file.
pub fn enable_long_term_stats(license_key: String) -> Result<(), LibreQoSConfigError> {
let mut config = load_config()?;
let mut lock = CONFIG.lock().unwrap();
@ -83,13 +108,12 @@ pub fn enable_long_term_stats(license_key: String) -> Result<(), LibreQoSConfigE
*lock = Some(config);
Ok(())
}
}*/
/// Update the configuration on disk
pub fn update_config(new_config: &Config) -> Result<(), LibreQoSConfigError> {
log::info!("Updating stored configuration");
let mut lock = CONFIG.lock().unwrap();
*lock = Some(new_config.clone());
debug!("Updating stored configuration");
CONFIG.store(Arc::new(new_config.clone()));
// Does the configuration exist?
let config_path = Path::new("/etc/lqos.conf");
@ -97,7 +121,7 @@ pub fn update_config(new_config: &Config) -> Result<(), LibreQoSConfigError> {
let backup_path = Path::new("/etc/lqos.conf.webbackup");
std::fs::copy(config_path, backup_path)
.map_err(|e| {
log::error!("Unable to create backup configuration: {e:?}");
error!("Unable to create backup configuration: {e:?}");
LibreQoSConfigError::CannotCopy
})?;
}
@ -105,12 +129,12 @@ pub fn update_config(new_config: &Config) -> Result<(), LibreQoSConfigError> {
// Serialize the new one
let serialized = toml::to_string_pretty(new_config)
.map_err(|e| {
log::error!("Unable to serialize new configuration to TOML: {e:?}");
error!("Unable to serialize new configuration to TOML: {e:?}");
LibreQoSConfigError::SerializeError
})?;
std::fs::write(config_path, serialized)
.map_err(|e| {
log::error!("Unable to write new configuration: {e:?}");
error!("Unable to write new configuration: {e:?}");
LibreQoSConfigError::CannotWrite
})?;
@ -122,15 +146,15 @@ pub fn update_config(new_config: &Config) -> Result<(), LibreQoSConfigError> {
/// intended for use when the XDP bridge is disabled by pre-flight
/// because of a Linux bridge.
pub fn disable_xdp_bridge() -> Result<(), LibreQoSConfigError> {
let mut config = load_config()?;
let mut lock = CONFIG.lock().unwrap();
let config = load_config()?;
let mut config = (*config).clone();
if let Some(bridge) = &mut config.bridge {
bridge.use_xdp_bridge = false;
}
// Write the lock
*lock = Some(config);
CONFIG.store(Arc::new(config));
Ok(())
}
View File
@ -71,6 +71,9 @@ pub struct Config {
/// InfluxDB Configuration
pub influxdb: super::influxdb::InfluxDbConfig,
/// Option to disable the webserver for headless/CLI operation
pub disable_webserver: Option<bool>,
}
impl Config {
@ -137,6 +140,7 @@ impl Default for Config {
packet_capture_time: 10,
queue_check_period_ms: 1000,
flows: None,
disable_webserver: None,
}
}
}
View File
@ -14,9 +14,9 @@ mod shaped_devices;
pub use authentication::{UserRole, WebUsers};
pub use etc::{load_config, Config, enable_long_term_stats, Tunables, BridgeConfig, update_config, disable_xdp_bridge};
pub use network_json::{NetworkJson, NetworkJsonNode, NetworkJsonTransport, NetworkJsonCounting};
pub use network_json::{NetworkJson, NetworkJsonNode, NetworkJsonTransport};
pub use program_control::load_libreqos;
pub use shaped_devices::{ConfigShapedDevices, ShapedDevice};
/// Used as a constant in determining buffer preallocation
pub const SUPPORTED_CUSTOMERS: usize = 16_000_000;
pub const SUPPORTED_CUSTOMERS: usize = 100_000;
View File
@ -1,18 +1,16 @@
mod network_json_node;
mod network_json_transport;
mod network_json_counting;
use dashmap::DashSet;
use log::{error, info};
use tracing::{debug, error, warn};
use serde_json::{Map, Value};
use std::{
fs, path::{Path, PathBuf},
};
use std::collections::HashSet;
use thiserror::Error;
use lqos_utils::units::DownUpOrder;
pub use network_json_node::NetworkJsonNode;
pub use network_json_transport::NetworkJsonTransport;
pub use network_json_counting::NetworkJsonCounting;
/// Holder for the network.json representation.
/// This is condensed into a single level vector with index-based referencing
@ -21,7 +19,7 @@ pub use network_json_counting::NetworkJsonCounting;
pub struct NetworkJson {
/// Nodes that make up the tree, flattened and referenced by index number.
/// TODO: We should add a primary key to nodes in network.json.
nodes: Vec<NetworkJsonNode>,
pub nodes: Vec<NetworkJsonNode>,
}
impl Default for NetworkJson {
@ -36,6 +34,11 @@ impl NetworkJson {
Self { nodes: Vec::new() }
}
/// Retrieves the length and capacity for the nodes vector.
pub fn len_and_capacity(&self) -> (usize, usize) {
(self.nodes.len(), self.nodes.capacity())
}
/// The path to the current `network.json` file, determined
/// by acquiring the prefix from the `/etc/lqos.conf` configuration
/// file.
@ -66,7 +69,7 @@ impl NetworkJson {
current_marks: DownUpOrder::zeroed(),
parents: Vec::new(),
immediate_parent: None,
rtts: DashSet::new(),
rtts: HashSet::new(),
node_type: None,
}];
if !Self::exists() {
@ -137,24 +140,76 @@ impl NetworkJson {
/// Obtains a reference to nodes once we're sure that
/// doing so will provide valid data.
pub fn get_nodes_when_ready(&self) -> &Vec<NetworkJsonNode> {
//log::warn!("Awaiting the network tree");
//atomic_wait::wait(&self.busy, 1);
//log::warn!("Acquired");
&self.nodes
}
/// Starts an update cycle. This clones the nodes into
/// another structure - work will be performed on the clone.
pub fn begin_update_cycle(&self) -> NetworkJsonCounting {
NetworkJsonCounting::begin_update_cycle(self.nodes.clone())
/// Sets all current throughput values to zero
/// Note that due to interior mutability, this does not require mutable
/// access.
pub fn zero_throughput_and_rtt(&mut self) {
//log::warn!("Locking network tree for throughput cycle");
self.nodes.iter_mut().for_each(|n| {
n.current_throughput.set_to_zero();
n.current_tcp_retransmits.set_to_zero();
n.rtts.clear();
n.current_drops.set_to_zero();
n.current_marks.set_to_zero();
});
}
/// Finishes an update cycle. This is called after all updates
/// have been made to the clone, and the clone is then copied back
/// into the main structure.
pub fn finish_update_cycle(&mut self, counting: NetworkJsonCounting) {
if !counting.nodes.is_empty() {
self.nodes = counting.nodes;
/// Add throughput numbers to node entries. Note that this does *not* require
/// mutable access due to atomics and interior mutability - so it is safe to use
/// a read lock.
pub fn add_throughput_cycle(
&mut self,
targets: &[usize],
bytes: (u64, u64),
) {
for idx in targets {
// Safety first: use "get" to ensure that the node exists
if let Some(node) = self.nodes.get_mut(*idx) {
node.current_throughput.checked_add_tuple(bytes);
} else {
warn!("No network tree entry for index {idx}");
}
}
}
/// Record RTT time in the tree. Note that due to interior mutability,
/// this does not require mutable access.
pub fn add_rtt_cycle(&mut self, targets: &[usize], rtt: f32) {
for idx in targets {
// Safety first: use "get" to ensure that the node exists
if let Some(node) = self.nodes.get_mut(*idx) {
node.rtts.insert((rtt * 100.0) as u16);
} else {
warn!("No network tree entry for index {idx}");
}
}
}
/// Record TCP Retransmits in the tree.
pub fn add_retransmit_cycle(&mut self, targets: &[usize], tcp_retransmits: DownUpOrder<u64>) {
for idx in targets {
// Safety first; use "get" to ensure that the node exists
if let Some(node) = self.nodes.get_mut(*idx) {
node.current_tcp_retransmits.checked_add(tcp_retransmits);
} else {
warn!("No network tree entry for index {idx}");
}
}
}
/// Adds a series of CAKE marks and drops to the tree structure.
pub fn add_queue_cycle(&mut self, targets: &[usize], marks: &DownUpOrder<u64>, drops: &DownUpOrder<u64>) {
for idx in targets {
// Safety first; use "get" to ensure that the node exists
if let Some(node) = self.nodes.get_mut(*idx) {
node.current_marks.checked_add(*marks);
node.current_drops.checked_add(*drops);
} else {
warn!("No network tree entry for index {idx}");
}
}
}
}
@ -178,7 +233,7 @@ fn recurse_node(
parents: &[usize],
immediate_parent: usize,
) {
info!("Mapping {name} from network.json");
debug!("Mapping {name} from network.json");
let mut parents = parents.to_vec();
let my_id = if name != "children" {
parents.push(nodes.len());
@ -198,7 +253,7 @@ fn recurse_node(
current_marks: DownUpOrder::zeroed(),
name: name.to_string(),
immediate_parent: Some(immediate_parent),
rtts: DashSet::new(),
rtts: HashSet::new(),
node_type: json.get("type").map(|v| v.as_str().unwrap().to_string()),
};
View File
@ -1,91 +0,0 @@
use log::warn;
use lqos_utils::units::DownUpOrder;
use crate::NetworkJsonNode;
/// Type used while updating the network tree with new data.
/// Rather than have a race condition while the updates are performed
/// (and potentially new requests come in, and receive invalid data),
/// we copy the network tree into this structure, and then update this
/// structure. Once the updates are complete, we copy the data back
/// into the main network tree.
pub struct NetworkJsonCounting {
pub(super) nodes: Vec<NetworkJsonNode>,
}
impl NetworkJsonCounting {
/// Starts an update cycle. This clones the nodes into
/// the `NetworkJsonCounting` structure - work will be performed on the clone.
pub fn begin_update_cycle(nodes: Vec<NetworkJsonNode>) -> Self {
Self { nodes }
}
/// Sets all current throughput values to zero
/// Note that due to interior mutability, this does not require mutable
/// access.
pub fn zero_throughput_and_rtt(&mut self) {
//log::warn!("Locking network tree for throughput cycle");
self.nodes.iter_mut().for_each(|n| {
n.current_throughput.set_to_zero();
n.current_tcp_retransmits.set_to_zero();
n.rtts.clear();
n.current_drops.set_to_zero();
n.current_marks.set_to_zero();
});
}
/// Add throughput numbers to node entries. Note that this does *not* require
/// mutable access due to atomics and interior mutability - so it is safe to use
/// a read lock.
pub fn add_throughput_cycle(
&mut self,
targets: &[usize],
bytes: (u64, u64),
) {
for idx in targets {
// Safety first: use "get" to ensure that the node exists
if let Some(node) = self.nodes.get_mut(*idx) {
node.current_throughput.checked_add_tuple(bytes);
} else {
warn!("No network tree entry for index {idx}");
}
}
}
/// Record RTT time in the tree. Note that due to interior mutability,
/// this does not require mutable access.
pub fn add_rtt_cycle(&self, targets: &[usize], rtt: f32) {
for idx in targets {
// Safety first: use "get" to ensure that the node exists
if let Some(node) = self.nodes.get(*idx) {
node.rtts.insert((rtt * 100.0) as u16);
} else {
warn!("No network tree entry for index {idx}");
}
}
}
/// Record TCP Retransmits in the tree.
pub fn add_retransmit_cycle(&mut self, targets: &[usize], tcp_retransmits: DownUpOrder<u64>) {
for idx in targets {
// Safety first; use "get" to ensure that the node exists
if let Some(node) = self.nodes.get_mut(*idx) {
node.current_tcp_retransmits.checked_add(tcp_retransmits);
} else {
warn!("No network tree entry for index {idx}");
}
}
}
/// Adds a series of CAKE marks and drops to the tree structure.
pub fn add_queue_cycle(&mut self, targets: &[usize], marks: &DownUpOrder<u64>, drops: &DownUpOrder<u64>) {
for idx in targets {
// Safety first; use "get" to ensure that the node exists
if let Some(node) = self.nodes.get_mut(*idx) {
node.current_marks.checked_add(*marks);
node.current_drops.checked_add(*drops);
} else {
warn!("No network tree entry for index {idx}");
}
}
}
}
View File
@ -1,4 +1,4 @@
use dashmap::DashSet;
use std::collections::HashSet;
use lqos_utils::units::DownUpOrder;
use crate::NetworkJsonTransport;
@ -26,7 +26,7 @@ pub struct NetworkJsonNode {
/// Approximate RTTs reported for this level of the tree.
/// It's never going to be as statistically accurate as the actual
/// numbers, being based on medians.
pub rtts: DashSet<u16>,
pub rtts: HashSet<u16>,
/// A list of indices in the `NetworkJson` vector of nodes
/// linking to parent nodes
View File
@ -1,4 +1,4 @@
use log::error;
use tracing::error;
use thiserror::Error;
use std::{
path::{Path, PathBuf},
View File
@ -2,9 +2,8 @@ mod serializable;
mod shaped_device;
use std::net::IpAddr;
use crate::SUPPORTED_CUSTOMERS;
use csv::{QuoteStyle, ReaderBuilder, WriterBuilder};
use log::error;
use tracing::{debug, error};
use serializable::SerializableShapedDevice;
pub use shaped_device::ShapedDevice;
use std::path::{Path, PathBuf};
@ -40,7 +39,7 @@ impl ConfigShapedDevices {
crate::load_config().map_err(|_| ShapedDevicesError::ConfigLoadError)?;
let base_path = Path::new(&cfg.lqos_directory);
let full_path = base_path.join("ShapedDevices.csv");
log::info!("ShapedDevices.csv path: {:?}", full_path);
debug!("ShapedDevices.csv path: {:?}", full_path);
Ok(full_path)
}
@ -69,20 +68,20 @@ impl ConfigShapedDevices {
// Example: StringRecord(["1", "968 Circle St., Gurnee, IL 60031", "1", "Device 1", "", "", "192.168.101.2", "", "25", "5", "10000", "10000", ""])
let mut devices = Vec::with_capacity(SUPPORTED_CUSTOMERS);
let mut devices = Vec::new(); // Note that this used to be supported_customers, but we're going to let it grow organically
for result in reader.records() {
if let Ok(result) = result {
let device = ShapedDevice::from_csv(&result);
if let Ok(device) = device {
devices.push(device);
} else {
log::error!("Error reading Device line: {:?}", &device);
error!("Error reading Device line: {:?}", &device);
return Err(ShapedDevicesError::DeviceDecode(format!(
"DEVICE DECODE: {device:?}"
)));
}
} else {
log::error!("Error reading CSV record: {:?}", result);
error!("Error reading CSV record: {:?}", result);
if let csv::ErrorKind::UnequalLengths { pos, expected_len, len } =
result.as_ref().err().as_ref().unwrap().kind()
{
@ -116,7 +115,7 @@ impl ConfigShapedDevices {
/// Replace the current shaped devices list with a new one
pub fn replace_with_new_data(&mut self, devices: Vec<ShapedDevice>) {
self.devices = devices;
log::info!("{:?}", self.devices);
debug!("{:?}", self.devices);
self.trie = ConfigShapedDevices::make_trie(&self.devices);
}
View File
@ -1,5 +1,5 @@
use csv::StringRecord;
use log::error;
use tracing::error;
use serde::{Deserialize, Serialize};
use std::net::{Ipv4Addr, Ipv6Addr};
View File
@ -9,8 +9,9 @@ lqos_utils = { path = "../lqos_utils" }
lqos_bus = { path = "../lqos_bus" }
lqos_sys = { path = "../lqos_sys" }
lqos_config = { path = "../lqos_config" }
log = { workspace = true }
tracing = { workspace = true }
zerocopy = { workspace = true }
once_cell = { workspace = true}
dashmap = { workspace = true }
anyhow = { workspace = true }
anyhow = { workspace = true }
timerfd = { workspace = true }
View File
@ -6,13 +6,17 @@ mod config;
/// Interface to the performance tracking system
pub mod perf_interface;
pub mod stats;
use std::time::Duration;
use tracing::{debug, error, warn};
use timerfd::{SetTimeFlags, TimerFd, TimerState};
pub use config::{HeimdalConfig, HeimdallMode};
mod timeline;
pub use timeline::{n_second_packet_dump, n_second_pcap, hyperfocus_on_target};
mod pcap;
mod watchlist;
use lqos_utils::fdtimer::periodic;
pub use watchlist::{heimdall_expire, heimdall_watch_ip, set_heimdall_mode};
use anyhow::Result;
use crate::timeline::expire_timeline;
@ -29,22 +33,38 @@ const TIMELINE_EXPIRE_SECS: u64 = 10;
const SESSION_EXPIRE_SECONDS: u64 = 600;
/// Interface to running Heimdall (start this when lqosd starts)
/// This is async to match the other spawning systems.
pub async fn start_heimdall() {
pub fn start_heimdall() -> Result<()> {
if set_heimdall_mode(HeimdallMode::WatchOnly).is_err() {
log::error!(
error!(
"Unable to set Heimdall Mode. Packet watching will be unavailable."
);
return;
anyhow::bail!("Unable to set Heimdall Mode.");
}
let interval_ms = 1000; // 1 second
log::info!("Heimdall check period set to {interval_ms} ms.");
debug!("Heimdall check period set to {interval_ms} ms.");
std::thread::spawn(move || {
periodic(interval_ms, "Heimdall Packet Watcher", &mut || {
std::thread::Builder::new()
.name("Heimdall Packet Watcher".to_string())
.spawn(move || {
let mut tfd = TimerFd::new().unwrap();
assert_eq!(tfd.get_state(), TimerState::Disarmed);
tfd.set_state(TimerState::Periodic{
current: Duration::from_millis(interval_ms),
interval: Duration::from_millis(interval_ms) }
, SetTimeFlags::Default
);
loop {
heimdall_expire();
expire_timeline();
});
});
let missed_ticks = tfd.read();
if missed_ticks > 1 {
warn!("Heimdall Missed {} ticks", missed_ticks - 1);
}
}
})?;
Ok(())
}
View File
@ -1,8 +1,8 @@
use std::time::Duration;
use zerocopy::AsBytes;
use zerocopy::{Immutable, IntoBytes};
use crate::perf_interface::{HeimdallEvent, PACKET_OCTET_SIZE};
#[derive(AsBytes)]
#[derive(IntoBytes, Immutable)]
#[repr(C)]
pub(crate) struct PcapFileHeader {
magic: u32,
@ -28,7 +28,7 @@ impl PcapFileHeader {
}
}
#[derive(AsBytes)]
#[derive(IntoBytes, Immutable)]
#[repr(C)]
pub(crate) struct PcapPacketHeader {
ts_sec: u32,
View File
@ -1,4 +1,5 @@
use std::{ffi::c_void, slice};
use tracing::warn;
use lqos_utils::XdpIpAddress;
use zerocopy::FromBytes;
use crate::timeline::store_on_timeline;
@ -67,7 +68,7 @@ pub unsafe extern "C" fn heimdall_handle_events(
) -> i32 {
const EVENT_SIZE: usize = std::mem::size_of::<HeimdallEvent>();
if data_size < EVENT_SIZE {
log::warn!("Warning: incoming data too small in Heimdall buffer");
warn!("Warning: incoming data too small in Heimdall buffer");
return 0;
}
@ -75,7 +76,7 @@ pub unsafe extern "C" fn heimdall_handle_events(
let data_u8 = data as *const u8;
let data_slice : &[u8] = slice::from_raw_parts(data_u8, EVENT_SIZE);
if let Some(incoming) = HeimdallEvent::read_from(data_slice) {
if let Ok(incoming) = HeimdallEvent::read_from_bytes(data_slice) {
store_on_timeline(incoming);
} else {
println!("Failed to decode");
View File
@ -16,7 +16,8 @@ use std::{
sync::atomic::{AtomicBool, AtomicUsize},
time::Duration,
};
use zerocopy::AsBytes;
use tracing::warn;
use zerocopy::IntoBytes;
impl HeimdallEvent {
fn as_header(&self) -> PacketHeader {
@ -116,7 +117,7 @@ pub fn hyperfocus_on_target(ip: XdpIpAddress) -> Option<(usize, usize)> {
};
let new_id =
FOCUS_SESSION_ID.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
std::thread::spawn(move || {
let _ = std::thread::Builder::new().name("HeimdalTimeline".to_string()).spawn(move || {
for _ in 0..capture_time {
let _ = set_heimdall_mode(HeimdallMode::Analysis);
heimdall_watch_ip(ip);
@ -142,7 +143,7 @@ pub fn hyperfocus_on_target(ip: XdpIpAddress) -> Option<(usize, usize)> {
});
Some((new_id, capture_time))
} else {
log::warn!(
warn!(
"Heimdall was busy and won't start another collection session."
);
None
View File
@ -4,6 +4,7 @@ use lqos_sys::bpf_map::BpfMap;
use lqos_utils::{unix_time::time_since_boot, XdpIpAddress};
use once_cell::sync::Lazy;
use std::time::Duration;
use tracing::{debug, info};
const HEIMDALL_CFG_PATH: &str = "/sys/fs/bpf/heimdall_config";
const HEIMDALL_WATCH_PATH: &str = "/sys/fs/bpf/heimdall_watching";
@ -35,7 +36,7 @@ impl HeimdallWatching {
}
fn stop_watching(&mut self) {
log::info!("Heimdall stopped watching {}", self.ip_address.as_ip().to_string());
info!("Heimdall stopped watching {}", self.ip_address.as_ip().to_string());
let mut map =
BpfMap::<XdpIpAddress, u32>::from_path(HEIMDALL_WATCH_PATH).unwrap();
map.delete(&mut self.ip_address).unwrap();
@ -70,7 +71,7 @@ pub fn heimdall_watch_ip(ip: XdpIpAddress) {
watch.expiration = expire.as_nanos();
}
} else if let Ok(h) = HeimdallWatching::new(ip) {
log::info!("Heimdall is watching {}", ip.as_ip().to_string());
debug!("Heimdall is watching {}", ip.as_ip().to_string());
HEIMDALL_WATCH_LIST.insert(ip, h);
}
}
View File
@ -101,8 +101,8 @@ pub(crate) fn get_weights_rust() -> Result<Vec<DeviceWeightResponse>> {
println!("Using LTS weights");
let config = load_config().unwrap();
let org_key = config.long_term_stats.license_key.unwrap();
let node_id = config.node_id;
let org_key = config.long_term_stats.license_key.clone().unwrap();
let node_id = config.node_id.clone();
// Get current local time as unix timestamp
let now = chrono::Utc::now().timestamp();
View File
@ -288,7 +288,7 @@ fn is_libre_already_running() -> PyResult<bool> {
let sys = System::new_all();
let pid = sysinfo::Pid::from(pid as usize);
if let Some(process) = sys.processes().get(&pid) {
if process.name().contains("python") {
if process.name().to_string_lossy().contains("python") {
return Ok(true);
}
}
@ -373,7 +373,7 @@ fn use_bin_packing_to_balance_cpu() -> PyResult<bool> {
let config = lqos_config::load_config().unwrap();
Ok(config.queues.use_binpacking)
}
#[pyfunction]
fn monitor_mode_only() -> PyResult<bool> {
let config = lqos_config::load_config().unwrap();
@ -480,19 +480,22 @@ fn exception_cpes() -> PyResult<Vec<PyExceptionCpe>> {
#[pyfunction]
fn uisp_site() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.uisp_integration.site)
let site = config.uisp_integration.site.clone();
Ok(site)
}
#[pyfunction]
fn uisp_strategy() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.uisp_integration.strategy)
let strategy = config.uisp_integration.strategy.clone();
Ok(strategy)
}
#[pyfunction]
fn uisp_suspended_strategy() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.uisp_integration.suspended_strategy)
let strategy = config.uisp_integration.suspended_strategy.clone();
Ok(strategy)
}
#[pyfunction]
@ -516,31 +519,36 @@ fn use_ptmp_as_parent() -> PyResult<bool> {
#[pyfunction]
fn uisp_base_url() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.uisp_integration.url)
let url = config.uisp_integration.url.clone();
Ok(url)
}
#[pyfunction]
fn uisp_auth_token() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.uisp_integration.token)
let token = config.uisp_integration.token.clone();
Ok(token)
}
#[pyfunction]
fn splynx_api_key() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.spylnx_integration.api_key)
let key = config.spylnx_integration.api_key.clone();
Ok(key)
}
#[pyfunction]
fn splynx_api_secret() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.spylnx_integration.api_secret)
let secret = config.spylnx_integration.api_secret.clone();
Ok(secret)
}
#[pyfunction]
fn splynx_api_url() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.spylnx_integration.url)
let url = config.spylnx_integration.url.clone();
Ok(url)
}
#[pyfunction]
@ -570,13 +578,15 @@ fn automatic_import_powercode() -> PyResult<bool> {
#[pyfunction]
fn powercode_api_key() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.powercode_integration.powercode_api_key)
let key = config.powercode_integration.powercode_api_key.clone();
Ok(key)
}
#[pyfunction]
fn powercode_api_url() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.powercode_integration.powercode_api_url)
let url = config.powercode_integration.powercode_api_url.clone();
Ok(url)
}
#[pyfunction]
@ -588,37 +598,43 @@ fn automatic_import_sonar() -> PyResult<bool> {
#[pyfunction]
fn sonar_api_url() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.sonar_integration.sonar_api_url)
let url = config.sonar_integration.sonar_api_url.clone();
Ok(url)
}
#[pyfunction]
fn sonar_api_key() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.sonar_integration.sonar_api_key)
let key = config.sonar_integration.sonar_api_key.clone();
Ok(key)
}
#[pyfunction]
fn snmp_community() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.sonar_integration.snmp_community)
let key = config.sonar_integration.snmp_community.clone();
Ok(key)
}
#[pyfunction]
fn sonar_airmax_ap_model_ids() -> PyResult<Vec<String>> {
let config = lqos_config::load_config().unwrap();
Ok(config.sonar_integration.airmax_model_ids)
let key = config.sonar_integration.airmax_model_ids.clone();
Ok(key)
}
#[pyfunction]
fn sonar_ltu_ap_model_ids() -> PyResult<Vec<String>> {
let config = lqos_config::load_config().unwrap();
Ok(config.sonar_integration.ltu_model_ids)
let key = config.sonar_integration.ltu_model_ids.clone();
Ok(key)
}
#[pyfunction]
fn sonar_active_status_ids() -> PyResult<Vec<String>> {
let config = lqos_config::load_config().unwrap();
Ok(config.sonar_integration.active_status_ids)
let key = config.sonar_integration.active_status_ids.clone();
Ok(key)
}
#[pyfunction]
@ -630,25 +646,29 @@ fn influx_db_enabled() -> PyResult<bool> {
#[pyfunction]
fn influx_db_bucket() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.influxdb.bucket)
let bucket = config.influxdb.bucket.clone();
Ok(bucket)
}
#[pyfunction]
fn influx_db_org() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.influxdb.org)
let org = config.influxdb.org.clone();
Ok(org)
}
#[pyfunction]
fn influx_db_token() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.influxdb.token)
let token = config.influxdb.token.clone();
Ok(token)
}
#[pyfunction]
fn influx_db_url() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.influxdb.url)
let url = config.influxdb.url.clone();
Ok(url)
}
#[pyfunction]
@ -674,10 +694,11 @@ pub fn get_tree_weights() -> PyResult<Vec<device_weights::NetworkNodeWeight>> {
#[pyfunction]
pub fn get_libreqos_directory() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.lqos_directory)
let dir = config.lqos_directory.clone();
Ok(dir)
}
#[pyfunction]
pub fn is_network_flat() -> PyResult<bool> {
Ok(lqos_config::NetworkJson::load().unwrap().get_nodes_when_ready().len() == 1)
}
}
View File
@ -12,11 +12,12 @@ lqos_bus = { path = "../lqos_bus" }
lqos_config = { path = "../lqos_config" }
lqos_sys = { path = "../lqos_sys" }
lqos_utils = { path = "../lqos_utils" }
log = { workspace = true }
log-once = "0.4.0"
tracing = { workspace = true }
tokio = { workspace = true }
once_cell = { workspace = true}
dashmap = { workspace = true }
anyhow = { workspace = true }
arc-swap = { workspace = true }
[dev-dependencies]
criterion = { version = "0", features = [ "html_reports"] }
View File
@ -1,5 +1,5 @@
use crate::queue_types::QueueType;
use log::error;
use tracing::error;
use serde::Serialize;
use thiserror::Error;

View File
mod queing_structure_json_monitor;
mod queue_network;
mod queue_node;
use log::error;
use tracing::error;
pub use queing_structure_json_monitor::spawn_queue_structure_monitor;
pub(crate) use queing_structure_json_monitor::QUEUE_STRUCTURE;
use queue_network::QueueNetwork;
View File
@ -1,17 +1,16 @@
use std::sync::RwLock;
use std::sync::Arc;
use arc_swap::ArcSwap;
use crate::queue_structure::{
queue_network::QueueNetwork, queue_node::QueueNode, read_queueing_structure,
};
use log::{error, info};
use tracing::{debug, error, info};
use lqos_utils::file_watcher::FileWatcher;
use once_cell::sync::Lazy;
use thiserror::Error;
use tokio::task::spawn_blocking;
use crate::tracking::ALL_QUEUE_SUMMARY;
pub(crate) static QUEUE_STRUCTURE: Lazy<RwLock<QueueStructure>> =
Lazy::new(|| RwLock::new(QueueStructure::new()));
pub(crate) static QUEUE_STRUCTURE: Lazy<ArcSwap<QueueStructure>> =
Lazy::new(|| ArcSwap::new(Arc::new(QueueStructure::new())));
#[derive(Clone)]
pub(crate) struct QueueStructure {
@ -26,28 +25,27 @@ impl QueueStructure {
Self { maybe_queues: None }
}
}
fn update(&mut self) {
ALL_QUEUE_SUMMARY.clear();
if let Ok(queues) = read_queueing_structure() {
self.maybe_queues = Some(queues);
} else {
self.maybe_queues = None;
}
}
}
/// Global file watched for `queueStructure.json`.
/// Reloads the queue structure when it is available.
pub async fn spawn_queue_structure_monitor() {
spawn_blocking(|| {
let _ = watch_for_queueing_structure_changing();
});
pub fn spawn_queue_structure_monitor() -> anyhow::Result<()> {
std::thread::Builder::new()
.name("Queue Structure Monitor".to_string())
.spawn(|| {
if let Err(e) = watch_for_queueing_structure_changing() {
error!("Error watching for queueingStructure.json: {:?}", e);
}
})?;
Ok(())
}
fn update_queue_structure() {
info!("queueingStructure.json reloaded");
QUEUE_STRUCTURE.write().unwrap().update();
debug!("queueingStructure.json reloaded");
let new_queue_structure = QueueStructure::new();
ALL_QUEUE_SUMMARY.clear();
QUEUE_STRUCTURE.store(Arc::new(new_queue_structure));
}
/// Fires up a Linux file system watcher that notifies
View File
@ -1,5 +1,5 @@
use super::{queue_node::QueueNode, QueueStructureError};
use log::error;
use tracing::error;
use serde_json::Value;
use std::path::{Path, PathBuf};
View File
@ -1,5 +1,5 @@
use super::QueueStructureError;
use log::error;
use tracing::{error, warn};
use lqos_bus::TcHandle;
use lqos_utils::hex_string::read_hex_string;
use serde_json::Value;
@ -216,18 +216,18 @@ impl QueueNode {
result.circuits.push(n.unwrap());
}
} else {
log::warn!("Children was not an object");
log::warn!("{:?}", value);
warn!("Children was not an object");
warn!("{:?}", value);
}
}
"idForCircuitsWithoutParentNodes" | "type" => {
// Ignore
}
_ => log::error!("I don't know how to parse key: [{key}]"),
_ => error!("I don't know how to parse key: [{key}]"),
}
}
} else {
log::warn!("Unable to parse node structure for [{key}]");
warn!("Unable to parse node structure for [{key}]");
}
Ok(result)
}
View File
@ -2,7 +2,7 @@ pub(crate) mod tc_cake;
mod tc_fq_codel;
mod tc_htb;
mod tc_mq;
use log::warn;
use tracing::{debug, warn};
use serde::Serialize;
use serde_json::Value;
use thiserror::Error;
@ -30,7 +30,7 @@ impl QueueType {
"cake" => Ok(QueueType::Cake(tc_cake::TcCake::from_json(map)?)),
"clsact" => Ok(QueueType::ClsAct),
_ => {
warn!("I don't know how to parse qdisc type {kind}");
debug!("I don't know how to parse qdisc type {kind}");
Err(QDiscError::UnknownQdisc(format!("Unknown queue kind: {kind}")))
}
}
@ -90,11 +90,11 @@ macro_rules! parse_tc_handle {
if let Ok(handle) = TcHandle::from_string(s) {
$target = handle;
} else {
info_once!("Unable to extract TC handle from string");
tracing::info!("Unable to extract TC handle from string");
$target = TcHandle::default();
}
} else {
info_once!("Unable to extract string for TC handle");
tracing::info!("Unable to extract string for TC handle");
$target = TcHandle::default();
}
};
View File
@ -1,7 +1,6 @@
use super::QDiscError;
use crate::parse_tc_handle;
use log::warn;
use log_once::info_once;
use tracing::{error, info, warn};
use lqos_bus::TcHandle;
use lqos_utils::{dashy_table_enum, string_table_enum};
use serde::{Deserialize, Serialize};
@ -140,7 +139,7 @@ impl TcCake {
}
"kind" => {}
_ => {
log::error!("Unknown entry in Tc-cake: {key}");
error!("Unknown entry in Tc-cake: {key}");
}
}
}
@ -182,7 +181,7 @@ impl TcCakeOptions {
parse_tc_handle!(result.fwmark, value);
}
_ => {
info_once!(
info!(
"Unknown entry in tc-cake-options json decoder: {key}"
);
}
@ -259,7 +258,7 @@ impl TcCakeTin {
result.flow_quantum = value.as_u64().unwrap_or(0) as u16
}
_ => {
info_once!("Unknown entry in tc-cake-tin json decoder: {key}");
info!("Unknown entry in tc-cake-tin json decoder: {key}");
}
}
}

View File

@ -7,10 +7,10 @@
use super::QDiscError;
use crate::parse_tc_handle;
use log_once::info_once;
use lqos_bus::TcHandle;
use serde::Serialize;
use serde_json::Value;
use tracing::info;
#[derive(Default, Clone, Debug, Serialize)]
pub struct TcFqCodel {
@ -81,7 +81,7 @@ impl TcFqCodel {
"options" => result.options = TcFqCodelOptions::from_json(value)?,
"kind" => {}
_ => {
info_once!("Unknown entry in tc-codel json decoder: {key}");
info!("Unknown entry in tc-codel json decoder: {key}");
}
}
}
@ -109,7 +109,7 @@ impl TcFqCodelOptions {
result.drop_batch = value.as_u64().unwrap_or(0) as u16
}
_ => {
info_once!(
info!(
"Unknown entry in tc-codel-options json decoder: {key}"
);
}

View File

@ -5,10 +5,10 @@
use super::QDiscError;
use crate::parse_tc_handle;
use log_once::info_once;
use lqos_bus::TcHandle;
use serde::Serialize;
use serde_json::Value;
use tracing::info;
#[derive(Default, Clone, Debug, Serialize)]
pub struct TcHtb {
@ -55,7 +55,7 @@ impl TcHtb {
"options" => result.options = TcHtbOptions::from_json(value)?,
"kind" => {}
_ => {
info_once!("Unknown entry in tc-HTB json decoder: {key}");
info!("Unknown entry in tc-HTB json decoder: {key}");
}
}
}
@ -81,7 +81,7 @@ impl TcHtbOptions {
result.direct_qlen = value.as_u64().unwrap_or(0) as u32
}
_ => {
info_once!("Unknown entry in tc-HTB json decoder: {key}");
info!("Unknown entry in tc-HTB json decoder: {key}");
}
}
}

View File

@ -4,10 +4,10 @@
use super::QDiscError;
use crate::parse_tc_handle;
use log_once::info_once;
use lqos_bus::TcHandle;
use serde::Serialize;
use serde_json::Value;
use tracing::info;
#[derive(Default, Clone, Debug, Serialize)]
pub struct TcMultiQueue {
@ -43,7 +43,7 @@ impl TcMultiQueue {
"kind" => {}
"options" => {}
_ => {
info_once!("Unknown entry in tc-MQ json decoder: {key}");
info!("Unknown entry in tc-MQ json decoder: {key}");
}
}
}

View File

@ -63,8 +63,11 @@ impl AllQueueData {
q.marks = DownUpOrder::zeroed();
}
let mut seen_queue_ids = Vec::new();
// Make download markings
for dl in download.into_iter() {
seen_queue_ids.push(dl.circuit_id.clone());
if let Some(q) = lock.get_mut(&dl.circuit_id) {
// We need to update it
q.drops.down = dl.drops;
@ -85,6 +88,7 @@ impl AllQueueData {
// Make upload markings
for ul in upload.into_iter() {
seen_queue_ids.push(ul.circuit_id.clone());
if let Some(q) = lock.get_mut(&ul.circuit_id) {
// We need to update it
q.drops.up = ul.drops;
@ -102,6 +106,9 @@ impl AllQueueData {
lock.insert(ul.circuit_id.clone(), new_record);
}
}
// Remove any queues that were not seen
lock.retain(|k, _| seen_queue_ids.contains(k));
}
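
The `seen_queue_ids` sweep above is what keeps dead circuits from accumulating in the stats map. One hedged observation: `Vec::contains` inside `retain` is O(n²) across the map; a `HashSet` makes each membership test O(1). A sketch of the same sweep (`HashMap` stands in for the project-internal container):

```rust
use std::collections::{HashMap, HashSet};

// Drop every entry that was not observed in the latest pass.
fn prune_unseen<V>(map: &mut HashMap<String, V>, seen: &[String]) {
    let seen: HashSet<&str> = seen.iter().map(String::as_str).collect();
    map.retain(|k, _| seen.contains(k.as_str()));
}
```

For typical queue counts the difference is unlikely to matter; it would only show up on very large shapers.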
pub fn iterate_queues(&self, mut f: impl FnMut(&str, &DownUpOrder<u64>, &DownUpOrder<u64>)) {

View File

@ -3,7 +3,7 @@ use crate::{
circuit_to_queue::CIRCUIT_TO_QUEUE, interval::QUEUE_MONITOR_INTERVAL,
queue_store::QueueStore, tracking::reader::read_named_queue_from_interface,
};
use log::info;
use tracing::{debug, info, warn};
use lqos_utils::fdtimer::periodic;
mod reader;
mod watched_queues;
@ -139,7 +139,7 @@ fn connect_queues_to_circuit_up(structure: &[QueueNode], queues: &[QueueType]) -
fn all_queue_reader() {
let start = Instant::now();
let structure = QUEUE_STRUCTURE.read().unwrap();
let structure = QUEUE_STRUCTURE.load();
if let Some(structure) = &structure.maybe_queues {
if let Ok(config) = lqos_config::load_config() {
// Get all the queues
@ -173,22 +173,24 @@ fn all_queue_reader() {
//println!("{}", download.len() + upload.len());
ALL_QUEUE_SUMMARY.ingest_batch(download, upload);
} else {
log::warn!("(TC monitor) Unable to read configuration");
warn!("(TC monitor) Unable to read configuration");
}
} else {
log::warn!("(TC monitor) Not reading queues due to structure not yet ready");
warn!("(TC monitor) Not reading queues due to structure not yet ready");
}
let elapsed = start.elapsed();
log::debug!("(TC monitor) Completed in {:.5} seconds", elapsed.as_secs_f32());
debug!("(TC monitor) Completed in {:.5} seconds", elapsed.as_secs_f32());
}
/// Spawns a thread that periodically reads the queue statistics from
/// the Linux `tc` shaper, and stores them in a `QueueStore` for later
/// retrieval.
pub fn spawn_queue_monitor() {
std::thread::spawn(|| {
pub fn spawn_queue_monitor() -> anyhow::Result<()> {
std::thread::Builder::new()
.name("Queue Monitor".to_string())
.spawn(|| {
// Setup the queue monitor loop
info!("Starting Queue Monitor Thread.");
debug!("Starting Queue Monitor Thread.");
let interval_ms = if let Ok(config) = lqos_config::load_config() {
config.queue_check_period_ms
} else {
@ -196,18 +198,22 @@ pub fn spawn_queue_monitor() {
};
QUEUE_MONITOR_INTERVAL
.store(interval_ms, std::sync::atomic::Ordering::Relaxed);
info!("Queue check period set to {interval_ms} ms.");
debug!("Queue check period set to {interval_ms} ms.");
// Setup the Linux timer fd system
periodic(interval_ms, "Queue Reader", &mut || {
track_queues();
});
});
})?;
// Set up a 2nd thread to periodically gather ALL the queue stats
std::thread::spawn(|| {
std::thread::Builder::new()
.name("All Queue Monitor".to_string())
.spawn(|| {
periodic(2000, "All Queues", &mut || {
all_queue_reader();
})
});
})?;
Ok(())
}
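
`spawn_queue_monitor` now follows the named-thread pattern used throughout this PR: `std::thread::Builder` instead of bare `std::thread::spawn`. The shape, as a standalone sketch (assuming `anyhow`, as elsewhere in the workspace):

```rust
use std::thread;

fn spawn_worker() -> anyhow::Result<()> {
    // Builder::spawn returns io::Result, so a failed spawn propagates
    // to the caller instead of panicking, and the thread name shows up
    // in `ps -T`, `htop`, and panic backtraces.
    thread::Builder::new()
        .name("Example Worker".to_string())
        .spawn(|| {
            // periodic work would live here
        })?;
    Ok(())
}
```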

View File

@ -1,5 +1,5 @@
use crate::{deserialize_tc_tree, queue_types::QueueType};
use log::{error, info};
use tracing::{debug, error, info};
use lqos_bus::TcHandle;
use std::process::Command;
use thiserror::Error;
@ -33,8 +33,8 @@ pub fn read_all_queues_from_interface(
})?;
let result = deserialize_tc_tree(&raw_json)
.map_err(|e| {
info!("Failed to deserialize TC tree result.");
info!("{:?}", e);
debug!("Failed to deserialize TC tree result.");
debug!("{:?}", e);
QueueReaderError::Deserialization
})?;

View File

@ -1,6 +1,6 @@
use crate::queue_structure::QUEUE_STRUCTURE;
use dashmap::DashMap;
use log::{info, warn};
use tracing::{info, warn};
use lqos_bus::TcHandle;
use lqos_sys::num_possible_cpus;
use lqos_utils::unix_time::unix_now;
@ -55,7 +55,8 @@ pub fn add_watched_queue(circuit_id: &str) {
}
}
if let Some(queues) = &QUEUE_STRUCTURE.read().unwrap().maybe_queues {
let queues = QUEUE_STRUCTURE.load();
if let Some(queues) = &queues.maybe_queues {
if let Some(circuit) = queues.iter().find(|c| {
c.circuit_id.is_some() && c.circuit_id.as_ref().unwrap() == circuit_id
}) {

View File

@ -46,7 +46,7 @@ fn read_line() -> String {
fn get_lts_key() -> String {
if let Ok(cfg) = load_config() {
if let Some(key) = cfg.long_term_stats.license_key {
if let Some(key) = &cfg.long_term_stats.license_key {
return key.clone();
}
}
@ -113,7 +113,7 @@ fn summarize(filename: &str) {
}
fn sanity_checks() {
if let Err(e) = run_sanity_checks() {
if let Err(e) = run_sanity_checks(true) {
println!("Sanity Check Failed: {e:?}");
}
}

View File

@ -20,8 +20,8 @@ pub struct SanityCheck {
pub comments: String,
}
pub fn run_sanity_checks() -> anyhow::Result<SanityChecks> {
println!("Running Sanity Checks");
pub fn run_sanity_checks(echo: bool) -> anyhow::Result<SanityChecks> {
if echo { println!("Running Sanity Checks"); }
let mut results = Vec::new();
// Run the checks
@ -42,15 +42,15 @@ pub fn run_sanity_checks() -> anyhow::Result<SanityChecks> {
let mut any_errors = false;
for s in results.iter() {
if s.success {
success(&format!("{} {}", s.name, s.comments));
if echo { success(&format!("{} {}", s.name, s.comments)); }
} else {
error(&format!("{}: {}", s.name, s.comments));
any_errors = true;
if echo { any_errors = true; }
}
}
if any_errors {
error("ERRORS FOUND DURING SANITY CHECK");
if echo { error("ERRORS FOUND DURING SANITY CHECK"); }
}
Ok(SanityChecks { results })

View File

@ -51,7 +51,7 @@ impl SupportDump {
}
pub fn gather_all_support_info(sender: &str, comments: &str, lts_key: &str) -> anyhow::Result<SupportDump> {
let sanity_checks = run_sanity_checks()?;
let sanity_checks = run_sanity_checks(false)?;
let mut data_targets: Vec<Box<dyn SupportInfo>> = vec![
lqos_config::LqosConfig::boxed(),

View File

@ -10,7 +10,7 @@ libbpf-sys = "1"
anyhow = { workspace = true }
lqos_bus = { path = "../lqos_bus" }
lqos_config = { path = "../lqos_config" }
log = { workspace = true }
tracing = { workspace = true }
lqos_utils = { path = "../lqos_utils" }
once_cell = { workspace = true}
thiserror = { workspace = true }

View File

@ -1,6 +1,6 @@
use crate::{bpf_map::BpfMap, lqos_kernel::interface_name_to_index};
use anyhow::Result;
use log::info;
use tracing::debug;
#[repr(C)]
#[derive(Default, Clone, Debug)]
@ -19,13 +19,13 @@ const INTERFACE_PATH: &str = "/sys/fs/bpf/bifrost_interface_map";
const VLAN_PATH: &str = "/sys/fs/bpf/bifrost_vlan_map";
pub(crate) fn clear_bifrost() -> Result<()> {
info!("Clearing bifrost maps");
debug!("Clearing bifrost maps");
let mut interface_map =
BpfMap::<u32, BifrostInterface>::from_path(INTERFACE_PATH)?;
let mut vlan_map = BpfMap::<u32, BifrostVlan>::from_path(VLAN_PATH)?;
info!("Clearing VLANs");
debug!("Clearing VLANs");
vlan_map.clear_no_repeat()?;
info!("Clearing Interfaces");
debug!("Clearing Interfaces");
interface_map.clear_no_repeat()?;
Ok(())
}
@ -34,7 +34,7 @@ pub(crate) fn map_multi_interface_mode(
to_internet: &str,
to_lan: &str,
) -> Result<()> {
info!("Interface maps (multi-interface)");
debug!("Interface maps (multi-interface)");
let mut interface_map =
BpfMap::<u32, BifrostInterface>::from_path(INTERFACE_PATH)?;
@ -46,7 +46,7 @@ pub(crate) fn map_multi_interface_mode(
scan_vlans: 0,
};
interface_map.insert(&mut from, &mut mapping)?;
info!("Mapped bifrost interface {}->{}", from, redirect_to);
debug!("Mapped bifrost interface {}->{}", from, redirect_to);
// LAN
let mut from = interface_name_to_index(to_lan)?;
@ -56,7 +56,7 @@ pub(crate) fn map_multi_interface_mode(
scan_vlans: 0,
};
interface_map.insert(&mut from, &mut mapping)?;
info!("Mapped bifrost interface {}->{}", from, redirect_to);
debug!("Mapped bifrost interface {}->{}", from, redirect_to);
Ok(())
}
@ -66,7 +66,7 @@ pub(crate) fn map_single_interface_mode(
internet_vlan: u32,
lan_vlan: u32,
) -> Result<()> {
info!("Interface maps (single interface)");
debug!("Interface maps (single interface)");
let mut interface_map =
BpfMap::<u32, BifrostInterface>::from_path(INTERFACE_PATH)?;
@ -80,27 +80,27 @@ pub(crate) fn map_single_interface_mode(
scan_vlans: 1,
};
interface_map.insert(&mut from, &mut mapping)?;
info!("Mapped bifrost interface {}->{}", from, redirect_to);
debug!("Mapped bifrost interface {}->{}", from, redirect_to);
// VLANs - Internet
let mut key: u32 = (interface_name_to_index(&interface)? << 16) | internet_vlan;
let mut val = BifrostVlan { redirect_to: lan_vlan };
vlan_map.insert(&mut key, &mut val)?;
info!(
debug!(
"Mapped bifrost VLAN: {}:{} => {}",
interface, internet_vlan, lan_vlan
);
info!("{key}");
debug!("{key}");
// VLANs - LAN
let mut key: u32 = (interface_name_to_index(&interface)? << 16) | lan_vlan;
let mut val = BifrostVlan { redirect_to: internet_vlan };
vlan_map.insert(&mut key, &mut val)?;
info!(
debug!(
"Mapped bifrost VLAN: {}:{} => {}",
interface, lan_vlan, internet_vlan
);
info!("{key}");
debug!("{key}");
Ok(())
}
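
The VLAN map key above packs two values into one `u32`: the interface index in the high 16 bits and the VLAN id in the low 16. As a standalone sketch (helper names are illustrative):

```rust
// Both halves must fit in 16 bits for the packing to be lossless.
fn vlan_key(ifindex: u32, vlan: u32) -> u32 {
    debug_assert!(ifindex <= 0xFFFF && vlan <= 0xFFFF);
    (ifindex << 16) | vlan
}

// Inverse, handy when debugging map dumps.
fn split_vlan_key(key: u32) -> (u32, u32) {
    (key >> 16, key & 0xFFFF)
}
```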

View File

@ -7,6 +7,7 @@ use std::{
fmt::Debug, fs::File, io::Read, marker::PhantomData, os::fd::FromRawFd,
};
use thiserror::Error;
use tracing::error;
use zerocopy::FromBytes;
/// Represents a link to an eBPF defined iterator. The iterators
@ -65,7 +66,7 @@ where
let link_fd = unsafe { bpf::bpf_link__fd(self.link) };
let iter_fd = unsafe { bpf::bpf_iter_create(link_fd) };
if iter_fd < 0 {
log::error!("Unable to create map file descriptor");
error!("Unable to create map file descriptor");
Err(BpfIteratorError::FailedToCreateFd)
} else {
unsafe { Ok(File::from_raw_fd(iter_fd)) }
@ -85,8 +86,8 @@ where
let bytes_read = file.read_to_end(&mut buf);
match bytes_read {
Err(e) => {
log::error!("Unable to read from kernel map iterator file");
log::error!("{e:?}");
error!("Unable to read from kernel map iterator file");
error!("{e:?}");
Err(BpfIteratorError::UnableToCreateIterator)
}
Ok(bytes) => {
@ -130,8 +131,8 @@ where
let bytes_read = file.read_to_end(&mut buf);
match bytes_read {
Err(e) => {
log::error!("Unable to read from kernel map iterator file");
log::error!("{e:?}");
error!("Unable to read from kernel map iterator file");
error!("{e:?}");
Err(BpfIteratorError::UnableToCreateIterator)
}
Ok(_) => {
@ -151,12 +152,12 @@ where
if !key.is_empty() && !values.is_empty() {
callback(&key[0], &values[0]);
} else {
log::error!("Empty key or value found in iterator");
error!("Empty key or value found in iterator");
if key.is_empty() {
log::error!("Empty key");
error!("Empty key");
}
if values.is_empty() {
log::error!("Empty value");
error!("Empty value");
}
}
@ -255,5 +256,14 @@ pub fn end_flows(flows: &mut [FlowbeeKey]) -> anyhow::Result<()> {
map.delete(flow)?;
}
Ok(())
}
pub(crate) fn expire_throughput(keys: &mut [XdpIpAddress]) -> anyhow::Result<()> {
let mut map = BpfMap::<XdpIpAddress, HostCounter>::from_path("/sys/fs/bpf/map_traffic")?;
for key in keys {
        map.delete(key)?;
}
Ok(())
}
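
The iterator plumbing in this file reduces to a simple std pattern once the kernel hands back a file descriptor: wrap it in a `File` and drain it. A hedged sketch of just that step (creating the fd itself is libbpf territory):

```rust
use std::fs::File;
use std::io::Read;
use std::os::fd::FromRawFd;

/// Drain a kernel-supplied fd into a buffer.
/// Safety: `raw_fd` must be a valid, caller-owned descriptor; the
/// `File` takes ownership and closes it on drop.
unsafe fn read_all(raw_fd: i32) -> std::io::Result<Vec<u8>> {
    let mut file = File::from_raw_fd(raw_fd);
    let mut buf = Vec::new();
    file.read_to_end(&mut buf)?;
    Ok(buf)
}
```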

View File

@ -1,6 +1,6 @@
use anyhow::{Error, Result};
use libbpf_sys::{bpf_map_update_elem, bpf_obj_get};
use log::info;
use tracing::debug;
use std::{ffi::CString, os::raw::c_void};
use crate::{num_possible_cpus, linux::map_txq_config_base_setup};
@ -39,7 +39,7 @@ impl CpuMapping {
let queue_size = 2048u32;
let val_ptr: *const u32 = &queue_size;
for cpu in 0..cpu_count {
info!("Mapping core #{cpu}");
debug!("Mapping core #{cpu}");
// Insert into the cpu map
let cpu_ptr: *const u32 = &cpu;
let error = unsafe {
@ -86,14 +86,14 @@ impl Drop for CpuMapping {
/// Emulates xd_setup from cpumap
pub(crate) fn xps_setup_default_disable(interface: &str) -> Result<()> {
use std::io::Write;
info!("xps_setup");
debug!("xps_setup");
let queues = sorted_txq_xps_cpus(interface)?;
for (cpu, xps_cpu) in queues.iter().enumerate() {
let mask = cpu_to_mask_disabled(cpu);
let mut f = std::fs::OpenOptions::new().write(true).open(xps_cpu)?;
f.write_all(mask.to_string().as_bytes())?;
f.flush()?;
info!("Mapped TX queue for CPU {cpu}");
debug!("Mapped TX queue for CPU {cpu}");
}
Ok(())
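
`sorted_txq_xps_cpus` isn't shown in this hunk; for orientation, enumerating those sysfs entries generally looks like the sketch below (the real helper may differ). Writing an all-zero mask to one of these files disables XPS steering for that queue, which is presumably what `cpu_to_mask_disabled` produces.

```rust
use std::path::PathBuf;

/// List the per-TX-queue xps_cpus control files for an interface,
/// assuming the standard /sys/class/net/<iface>/queues/tx-*/xps_cpus
/// layout.
fn xps_paths(interface: &str) -> std::io::Result<Vec<PathBuf>> {
    let base = PathBuf::from("/sys/class/net").join(interface).join("queues");
    let mut paths: Vec<PathBuf> = std::fs::read_dir(&base)?
        .filter_map(Result::ok)
        .map(|entry| entry.path())
        .filter(|p| {
            p.file_name()
                .and_then(|n| n.to_str())
                .map_or(false, |n| n.starts_with("tx-"))
        })
        .map(|p| p.join("xps_cpus"))
        .collect();
    paths.sort();
    Ok(paths)
}
```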

View File

@ -5,7 +5,7 @@ use zerocopy::FromBytes;
use lqos_utils::units::DownUpOrder;
/// Representation of the eBPF `flow_key_t` type.
#[derive(Debug, Clone, Default, PartialEq, Eq, Hash, FromBytes)]
#[derive(Debug, Clone, Copy, Default, PartialEq, Eq, Hash, FromBytes)]
#[repr(C)]
pub struct FlowbeeKey {
/// Mapped `XdpIpAddress` source for the flow.
@ -25,7 +25,7 @@ pub struct FlowbeeKey {
}
/// Mapped representation of the eBPF `flow_data_t` type.
#[derive(Debug, Clone, Default, FromBytes)]
#[derive(Debug, Clone, Copy, Default, FromBytes)]
#[repr(C)]
pub struct FlowbeeData {
/// Time (nanos) when the connection was established
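
Deriving `Copy` on these `#[repr(C)]` kernel-struct mirrors is free because they are plain old data; `FromBytes` is what lets the raw bytes from the eBPF maps be reinterpreted directly. A sketch with a hypothetical struct (zerocopy 0.6-style API, matching the derives shown):

```rust
use zerocopy::FromBytes;

// #[repr(C)] pins the layout so kernel bytes map onto the fields.
#[derive(Debug, Clone, Copy, Default, FromBytes)]
#[repr(C)]
struct ExampleKey {
    ifindex: u32,
    vlan: u32,
}

fn parse(bytes: &[u8]) -> Option<ExampleKey> {
    // read_from returns None when the slice length does not match.
    ExampleKey::read_from(bytes)
}
```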

View File

@ -0,0 +1,59 @@
use std::time::Duration;
use tracing::{debug, error};
use lqos_utils::unix_time::time_since_boot;
/// Starts a periodic garbage collector that will run every hour.
/// This is used to clean up old eBPF map entries to limit memory usage.
pub fn bpf_garbage_collector() {
const SLEEP_TIME: u64 = 60 * 60; // 1 Hour
//const SLEEP_TIME: u64 = 5 * 60; // 5 Minutes
debug!("Starting BPF garbage collector");
let result = std::thread::Builder::new()
.name("bpf_garbage_collector".to_string())
.spawn(|| {
loop {
std::thread::sleep(Duration::from_secs(SLEEP_TIME));
debug!("Running BPF garbage collector");
throughput_garbage_collect();
}
});
if let Err(e) = result {
error!("Failed to start BPF garbage collector: {:?}", e);
}
}
/// Iterates through all throughput entries, building a list of any that
/// haven't been seen for an hour. These are then bulk deleted.
fn throughput_garbage_collect() {
const EXPIRY_TIME: u64 = 60 * 60; // 1 Hour
//const EXPIRY_TIME: u64 = 5 * 60; // 5 Minutes
let Ok(now) = time_since_boot() else { return };
let now = Duration::from(now).as_nanos() as u64;
let period_nanos = EXPIRY_TIME * 1_000_000_000;
let period_ago = now - period_nanos;
let mut expired = Vec::new();
unsafe {
crate::bpf_iterator::iterate_throughput(&mut |ip, counters| {
let last_seen: u64 = counters
.iter()
.map(|c| c.last_seen)
.collect::<Vec<_>>()
.into_iter()
.max()
.unwrap_or(0);
if last_seen < period_ago {
expired.push(ip.clone());
}
});
}
if !expired.is_empty() {
debug!("Garbage collecting {} throughput entries", expired.len());
if let Err(e) = crate::bpf_iterator::expire_throughput(&mut expired) {
error!("Failed to garbage collect throughput: {:?}", e);
}
}
}
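
Two hedged notes on the collector above. First, `now - period_nanos` underflows a `u64` whenever uptime is shorter than `EXPIRY_TIME` (a panic in debug builds, a silent wrap in release); `saturating_sub` sidesteps that:

```rust
/// Expiry cutoff (nanoseconds since boot) that cannot underflow on a
/// freshly booted system: clamps to 0 instead of wrapping.
fn expiry_cutoff(now_nanos: u64, expiry_secs: u64) -> u64 {
    now_nanos.saturating_sub(expiry_secs * 1_000_000_000)
}
```

Second, the `last_seen` fold does not need the intermediate `collect::<Vec<_>>()`; `counters.iter().map(|c| c.last_seen).max().unwrap_or(0)` is equivalent.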

View File

@ -21,6 +21,7 @@ mod bpf_iterator;
/// Data shared between eBPF and Heimdall that needs local access
/// for map control.
pub mod flowbee_data;
mod garbage_collector;
pub use ip_mapping::{
add_ip_to_tc, clear_ips_from_tc, del_ip_from_tc, list_mapped_ips, clear_hot_cache,
@ -30,4 +31,5 @@ pub use linux::num_possible_cpus;
pub use lqos_kernel::max_tracked_ips;
pub use throughput::{throughput_for_each, HostCounter};
pub use bpf_iterator::{iterate_flows, end_flows};
pub use lqos_kernel::interface_name_to_index;
pub use lqos_kernel::interface_name_to_index;
pub use garbage_collector::bpf_garbage_collector;

View File

@ -1,6 +1,6 @@
use std::{fs::read_to_string, path::Path};
use log::error;
use tracing::error;
use thiserror::Error;
const POSSIBLE_CPUS_PATH: &str = "/sys/devices/system/cpu/possible";

View File

@ -1,6 +1,6 @@
use std::ffi::c_void;
use libbpf_sys::bpf_map_update_elem;
use log::error;
use tracing::error;
use thiserror::Error;
use crate::num_possible_cpus;

View File

@ -10,7 +10,7 @@ use libbpf_sys::{
XDP_FLAGS_DRV_MODE, XDP_FLAGS_HW_MODE, XDP_FLAGS_SKB_MODE,
XDP_FLAGS_UPDATE_IF_NOEXIST,
};
use log::{info, warn};
use tracing::{debug, error, info, warn};
use nix::libc::{geteuid, if_nametoindex};
use std::{ffi::{CString, c_void}, process::Command};
@ -55,7 +55,7 @@ pub fn interface_name_to_index(interface_name: &str) -> Result<u32> {
}
pub fn unload_xdp_from_interface(interface_name: &str) -> Result<()> {
info!("Unloading XDP/TC");
debug!("Unloading XDP/TC");
check_root()?;
let interface_index = interface_name_to_index(interface_name)?.try_into()?;
unsafe {
@ -83,7 +83,7 @@ pub fn unload_xdp_from_interface(interface_name: &str) -> Result<()> {
fn set_strict_mode() -> Result<()> {
let err = unsafe { libbpf_set_strict_mode(LIBBPF_STRICT_ALL) };
#[cfg(not(debug_assertions))]
//#[cfg(not(debug_assertions))]
unsafe {
bpf::do_not_print();
}
@ -175,7 +175,7 @@ pub fn attach_xdp_and_tc_to_interface(
let heimdall_events_map = unsafe { bpf::bpf_object__find_map_by_name((*skeleton).obj, heimdall_events_name.as_ptr()) };
let heimdall_events_fd = unsafe { bpf::bpf_map__fd(heimdall_events_map) };
if heimdall_events_fd < 0 {
log::error!("Unable to load Heimdall Events FD");
error!("Unable to load Heimdall Events FD");
return Err(anyhow::Error::msg("Unable to load Heimdall Events FD"));
}
let opts: *const bpf::ring_buffer_opts = std::ptr::null();
@ -186,18 +186,18 @@ pub fn attach_xdp_and_tc_to_interface(
opts as *mut c_void, opts)
};
if unsafe { bpf::libbpf_get_error(heimdall_perf_buffer as *mut c_void) != 0 } {
log::error!("Failed to create Heimdall event buffer");
error!("Failed to create Heimdall event buffer");
return Err(anyhow::Error::msg("Failed to create Heimdall event buffer"));
}
let handle = PerfBufferHandle(heimdall_perf_buffer);
std::thread::spawn(|| poll_perf_events(handle));
std::thread::Builder::new().name("HeimdallEvents".to_string()).spawn(|| poll_perf_events(handle))?;
// Find and attach the Flowbee handler
let flowbee_events_name = CString::new("flowbee_events").unwrap();
let flowbee_events_map = unsafe { bpf::bpf_object__find_map_by_name((*skeleton).obj, flowbee_events_name.as_ptr()) };
let flowbee_events_fd = unsafe { bpf::bpf_map__fd(flowbee_events_map) };
if flowbee_events_fd < 0 {
log::error!("Unable to load Flowbee Events FD");
error!("Unable to load Flowbee Events FD");
return Err(anyhow::Error::msg("Unable to load Flowbee Events FD"));
}
let opts: *const bpf::ring_buffer_opts = std::ptr::null();
@ -208,11 +208,13 @@ pub fn attach_xdp_and_tc_to_interface(
opts as *mut c_void, opts)
};
if unsafe { bpf::libbpf_get_error(flowbee_perf_buffer as *mut c_void) != 0 } {
log::error!("Failed to create Flowbee event buffer");
error!("Failed to create Flowbee event buffer");
return Err(anyhow::Error::msg("Failed to create Flowbee event buffer"));
}
let handle = PerfBufferHandle(flowbee_perf_buffer);
std::thread::spawn(|| poll_perf_events(handle));
std::thread::Builder::new()
.name(format!("FlowEvents_{}", interface_name))
.spawn(|| poll_perf_events(handle))?;
// Remove any previous entry
let _r = Command::new("tc")
@ -240,11 +242,11 @@ pub fn attach_xdp_and_tc_to_interface(
if let Some(bridge) = &etc.bridge {
if bridge.use_xdp_bridge {
// Enable "promiscuous" mode on interfaces
info!("Enabling promiscuous mode on {}", &bridge.to_internet);
debug!("Enabling promiscuous mode on {}", &bridge.to_internet);
std::process::Command::new("/bin/ip")
.args(["link", "set", &bridge.to_internet, "promisc", "on"])
.output()?;
info!("Enabling promiscuous mode on {}", &bridge.to_network);
debug!("Enabling promiscuous mode on {}", &bridge.to_network);
std::process::Command::new("/bin/ip")
.args(["link", "set", &bridge.to_network, "promisc", "on"])
.output()?;
@ -265,7 +267,7 @@ pub fn attach_xdp_and_tc_to_interface(
if let Some(stick) = &etc.single_interface {
// Enable "promiscuous" mode on interface
info!("Enabling promiscuous mode on {}", &stick.interface);
debug!("Enabling promiscuous mode on {}", &stick.interface);
std::process::Command::new("/bin/ip")
.args(["link", "set", &stick.interface, "promisc", "on"])
.output()?;
@ -350,7 +352,7 @@ fn poll_perf_events(heimdall_perf_buffer: PerfBufferHandle) {
loop {
let err = unsafe { bpf::ring_buffer__poll(heimdall_perf_buffer, 100) };
if err < 0 {
log::error!("Error polling perfbuffer");
error!("Error polling perfbuffer");
}
}
}
}

View File

@ -32,4 +32,4 @@ pub fn throughput_for_each(
unsafe {
crate::bpf_iterator::iterate_throughput(callback);
}
}
}

View File

@ -7,7 +7,7 @@ license = "GPL-2.0-only"
[dependencies]
serde = { workspace = true }
nix = { workspace = true }
log = { workspace = true }
tracing = { workspace = true }
notify = { version = "5.0.0", default-features = false } # Not using crossbeam because of Tokio
thiserror = { workspace = true }
byteorder = { workspace = true }

Some files were not shown because too many files have changed in this diff