Merge pull request #459 from LibreQoE/unifig

Unified Configuration System
This commit is contained in:
Herbert "TheBracket" 2024-02-07 12:55:32 -06:00 committed by GitHub
commit 206ba2641f
77 changed files with 3123 additions and 2039 deletions


@@ -2,7 +2,7 @@
 LibreQoS is a Quality of Experience (QoE) Smart Queue Management (SQM) system designed for Internet Service Providers to optimize the flow of their network traffic and thus reduce bufferbloat, keep the network responsive, and improve the end-user experience.
-Servers running LibreQoS can shape traffic for many thousands of customers.
+Servers running LibreQoS can shape traffic for thousands of customers. On higher-end servers, LibreQoS is capable of shaping 50-80 Gbps of traffic.
 Learn more at [LibreQoS.io](https://libreqos.io/)!
@@ -25,9 +25,9 @@ Please support the continued development of LibreQoS by sponsoring us via [GitHu
 [ReadTheDocs](https://libreqos.readthedocs.io/en/latest/)
-## Matrix Chat
+## LibreQoS Chat
-Our Matrix chat channel is available at [https://matrix.to/#/#libreqos:matrix.org](https://matrix.to/#/#libreqos:matrix.org).
+Our Zulip chat server is available at [https://chat.libreqos.io/join/fvu3cerayyaumo377xwvpev6/](https://chat.libreqos.io/join/fvu3cerayyaumo377xwvpev6/).
 ## Long-Term Stats (LTS)

docs/ChangeNotes/v1.5.md Normal file

@@ -0,0 +1,5 @@
+# LibreQoS v1.4 to v1.5 Change Summary
+## Unified Configuration
+All configuration has been moved into `/etc/lqos.conf`.


@@ -15,27 +15,23 @@ Now edit the file to match your setup with
 ```shell
 sudo nano /etc/lqos.conf
 ```
-Change `enp1s0f1` and `enp1s0f2` to match your network interfaces. It doesn't matter which one is which. Notice, it's pairing the interfaces, so when you first enter enp1s0f**1** in the first line, the `redirect_to` parameter is enp1s0f**2** (replacing with your actual interface names).
-- First Line: `name = "enp1s0f1", redirect_to = "enp1s0f2"`
-- Second Line: `name = "enp1s0f2", redirect_to = "enp1s0f1"`
+Change `eth0` and `eth1` to match your network interfaces. The interface facing the Internet should be specified in `to_internet`. The interface facing your ISP network should be in `to_network`.
 Then, if using Bifrost/XDP set `use_xdp_bridge = true` under that same `[bridge]` section.
-## Configure ispConfig.py
-Copy ispConfig.example.py to ispConfig.py and edit as needed
-```shell
-cd /opt/libreqos/src/
-cp ispConfig.example.py ispConfig.py
-nano ispConfig.py
-```
-- Set upstreamBandwidthCapacityDownloadMbps and upstreamBandwidthCapacityUploadMbps to match the bandwidth in Mbps of your network's upstream / WAN internet connection. The same can be done for generatedPNDownloadMbps and generatedPNUploadMbps.
-- Set interfaceA to the interface facing your core router (or bridged internal network if your network is bridged)
-- Set interfaceB to the interface facing your edge router
-- Set ```enableActualShellCommands = True``` to allow the program to actually run the commands.
+For example:
+```toml
+[bridge]
+use_xdp_bridge = true
+to_internet = "eth0"
+to_network = "eth1"
+```
+## Configure Your Network Settings
+- Set `uplink_bandwidth_mbps` and `downlink_bandwidth_mbps` to match the bandwidth in Mbps of your network's upstream / WAN internet connection. The same can be done for `generated_pn_download_mbps` and `generated_pn_upload_mbps`.
+- Set ```dry_run = false``` to allow the program to actually run the commands.
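Put together, the bandwidth settings above might look like the following fragment of `/etc/lqos.conf`. This is an illustrative sketch only: the values are placeholders, and the `[queues]` section name is an assumption, not taken from this page.

```toml
# Hypothetical /etc/lqos.conf fragment. Key names come from the bullets above;
# the section header and the numeric values are assumptions for illustration.
[queues]
uplink_bandwidth_mbps = 1000
downlink_bandwidth_mbps = 1000
generated_pn_download_mbps = 1000
generated_pn_upload_mbps = 1000
dry_run = false
```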
## Network.json ## Network.json


@@ -27,7 +27,8 @@ There are two options for the bridge to pass data through your two interfaces:
 - Bifrost XDP-Accelerated Bridge
 - Regular Linux Bridge
-The Bifrost Bridge is faster and generally recommended, but may not work perfectly in a VM setup using virtualized NICs.
+The Bifrost Bridge is recommended for Intel NICs with XDP support, such as the X520 and X710.
+The regular Linux bridge is recommended for Nvidia/Mellanox NICs such as the ConnectX-5 series (which have superior bridge performance), and VM setups using virtualized NICs.
 To use the Bifrost bridge, skip the regular Linux bridge section below, and be sure to enable Bifrost/XDP in lqos.conf a few sections below.
 ### Adding a regular Linux bridge (if not using Bifrost XDP bridge)


@@ -14,8 +14,8 @@ Single-thread CPU performance will determine the max throughput of a single HTB
 | 250 Mbps | 1250 |
 | 500 Mbps | 1500 |
 | 1 Gbps | 2000 |
-| 2 Gbps | 3000 |
-| 4 Gbps | 4000 |
+| 3 Gbps | 3000 |
+| 10 Gbps | 4000 |
 Below is a table of approximate aggregate throughput capacity, assuming a CPU with a [single thread](https://www.cpubenchmark.net/singleThread.html#server-thread) performance of 2700 or greater:
@@ -26,8 +26,8 @@ Below is a table of approximate aggregate throughput capacity, assuming a CPU
 | 5 Gbps | 6 |
 | 10 Gbps | 8 |
 | 20 Gbps | 16 |
-| 50 Gbps* | 32 |
+| 50 Gbps | 32 |
+| 100 Gbps * | 64 |
 (* Estimated)
 So for example, an ISP delivering 1Gbps service plans with 10Gbps aggregate throughput would choose a CPU with a 2500+ single-thread score and 8 cores, such as the Intel Xeon E-2388G @ 3.20GHz.
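Reading the aggregate-throughput table above as a step function, picking a core count for a target load can be sketched as follows (row values come from the table; the lookup helper itself is illustrative):

```python
import bisect

# (aggregate Gbps, CPU cores) rows from the table above; the 100 Gbps row is estimated
ROWS = [(5, 6), (10, 8), (20, 16), (50, 32), (100, 64)]

def cores_needed(aggregate_gbps):
    # Pick the first table row whose throughput covers the target load
    idx = bisect.bisect_left([gbps for gbps, _ in ROWS], aggregate_gbps)
    if idx == len(ROWS):
        raise ValueError("beyond the table's range")
    return ROWS[idx][1]

print(cores_needed(10))  # → 8
```

This matches the worked example in the text: a 10 Gbps aggregate calls for an 8-core CPU with sufficient single-thread performance.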


@@ -2,7 +2,7 @@
 ## UISP Integration
-First, set the relevant parameters for UISP (uispAuthToken, UISPbaseURL, etc.) in ispConfig.py.
+First, set the relevant parameters for UISP (uispAuthToken, UISPbaseURL, etc.) in `/etc/lqos.conf`.
 To test the UISP Integration, use
@@ -14,11 +14,11 @@ On the first successful run, it will create a network.json and ShapedDevices.csv
 If a network.json file exists, it will not be overwritten.
 You can modify the network.json file to more accurately reflect bandwidth limits.
 ShapedDevices.csv will be overwritten every time the UISP integration is run.
-You have the option to run integrationUISP.py automatically on boot and every 10 minutes, which is recommended. This can be enabled by setting ```automaticImportUISP = True``` in ispConfig.py
+You have the option to run integrationUISP.py automatically on boot and every 10 minutes, which is recommended. This can be enabled by setting ```enable_uisp = true``` in `/etc/lqos.conf`
 ## Powercode Integration
-First, set the relevant parameters for Sonar (powercode_api_key, powercode_api_url, etc.) in ispConfig.py.
+First, set the relevant parameters for Powercode (powercode_api_key, powercode_api_url, etc.) in `/etc/lqos.conf`.
 To test the Powercode Integration, use
@@ -29,11 +29,11 @@ python3 integrationPowercode.py
 On the first successful run, it will create a ShapedDevices.csv file.
 You can modify the network.json file manually to reflect Site/AP bandwidth limits.
 ShapedDevices.csv will be overwritten every time the Powercode integration is run.
-You have the option to run integrationPowercode.py automatically on boot and every 10 minutes, which is recommended. This can be enabled by setting ```automaticImportPowercode = True``` in ispConfig.py
+You have the option to run integrationPowercode.py automatically on boot and every 10 minutes, which is recommended. This can be enabled by setting ```enable_powercode = true``` in `/etc/lqos.conf`
 ## Sonar Integration
-First, set the relevant parameters for Sonar (sonar_api_key, sonar_api_url, etc.) in ispConfig.py.
+First, set the relevant parameters for Sonar (sonar_api_key, sonar_api_url, etc.) in `/etc/lqos.conf`.
 To test the Sonar Integration, use
@@ -45,11 +45,11 @@ On the first successful run, it will create a ShapedDevices.csv file.
 If a network.json file exists, it will not be overwritten.
 You can modify the network.json file to more accurately reflect bandwidth limits.
 ShapedDevices.csv will be overwritten every time the Sonar integration is run.
-You have the option to run integrationSonar.py automatically on boot and every 10 minutes, which is recommended. This can be enabled by setting ```automaticImportSonar = True``` in ispConfig.py
+You have the option to run integrationSonar.py automatically on boot and every 10 minutes, which is recommended. This can be enabled by setting ```enable_sonar = true``` in `/etc/lqos.conf`
 ## Splynx Integration
-First, set the relevant parameters for Splynx (splynx_api_key, splynx_api_secret, etc.) in ispConfig.py.
+First, set the relevant parameters for Splynx (splynx_api_key, splynx_api_secret, etc.) in `/etc/lqos.conf`.
 The Splynx Integration uses Basic authentication. To use this type of authentication, make sure you enable [Unsecure access](https://splynx.docs.apiary.io/#introduction/authentication) in your Splynx API key settings. The Splynx API key should also be granted access to the necessary permissions.
@@ -62,4 +62,4 @@ python3 integrationSplynx.py
 On the first successful run, it will create a ShapedDevices.csv file.
 You can manually create your network.json file to more accurately reflect bandwidth limits.
 ShapedDevices.csv will be overwritten every time the Splynx integration is run.
-You have the option to run integrationSplynx.py automatically on boot and every 10 minutes, which is recommended. This can be enabled by setting ```automaticImportSplynx = True``` in ispConfig.py
+You have the option to run integrationSplynx.py automatically on boot and every 10 minutes, which is recommended. This can be enabled by setting ```enable_spylnx = true``` in `/etc/lqos.conf`.


@@ -20,21 +20,20 @@ import shutil
 import binpacking
 from deepdiff import DeepDiff
-from ispConfig import sqm, upstreamBandwidthCapacityDownloadMbps, upstreamBandwidthCapacityUploadMbps, \
-	interfaceA, interfaceB, enableActualShellCommands, useBinPackingToBalanceCPU, monitorOnlyMode, \
-	runShellCommandsAsSudo, generatedPNDownloadMbps, generatedPNUploadMbps, queuesAvailableOverride, \
-	OnAStick
 from liblqos_python import is_lqosd_alive, clear_ip_mappings, delete_ip_mapping, validate_shaped_devices, \
-	is_libre_already_running, create_lock_file, free_lock_file, add_ip_mapping, BatchedCommands
+	is_libre_already_running, create_lock_file, free_lock_file, add_ip_mapping, BatchedCommands, \
+	check_config, sqm, upstream_bandwidth_capacity_download_mbps, upstream_bandwidth_capacity_upload_mbps, \
+	interface_a, interface_b, enable_actual_shell_commands, use_bin_packing_to_balance_cpu, monitor_mode_only, \
+	run_shell_commands_as_sudo, generated_pn_download_mbps, generated_pn_upload_mbps, queues_available_override, \
+	on_a_stick
 # Automatically account for TCP overhead of plans. For example a 100Mbps plan needs to be set to 109Mbps for the user to ever see that result on a speed test
 # Does not apply to nodes of any sort, just endpoint devices
 tcpOverheadFactor = 1.09
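The `tcpOverheadFactor` arithmetic described in the comment above, shown standalone:

```python
tcpOverheadFactor = 1.09  # value from the source above

# A 100 Mbps plan is programmed as 109 Mbps so the subscriber can actually
# see 100 Mbps of goodput on a speed test after TCP/IP overhead.
plan_mbps = 100
shaped_mbps = round(plan_mbps * tcpOverheadFactor)
print(shaped_mbps)  # → 109
```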
 def shell(command):
-	if enableActualShellCommands:
-		if runShellCommandsAsSudo:
+	if enable_actual_shell_commands():
+		if run_shell_commands_as_sudo():
 			command = 'sudo ' + command
 		logging.info(command)
 		commands = command.split(' ')
@@ -49,7 +48,7 @@ def shell(command):
 def shellReturn(command):
 	returnableString = ''
-	if enableActualShellCommands:
+	if enable_actual_shell_commands():
 		commands = command.split(' ')
 		proc = subprocess.Popen(commands, stdout=subprocess.PIPE)
 		for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"):  # or another encoding
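The `shellReturn` pattern above (streaming a subprocess's stdout into a string) can be exercised on its own; here `echo` stands in for a real `tc` invocation, and the dry-run guard is omitted:

```python
import io
import subprocess

def shell_return(command):
    # Same shape as shellReturn above: split the command string, spawn the
    # process, and decode its stdout line by line into one string.
    returnable = ''
    proc = subprocess.Popen(command.split(' '), stdout=subprocess.PIPE)
    for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"):
        returnable += line
    return returnable

print(shell_return('echo hello'))  # → hello
```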
@@ -72,11 +71,11 @@ def checkIfFirstRunSinceBoot():
 		return True
 def clearPriorSettings(interfaceA, interfaceB):
-	if enableActualShellCommands:
+	if enable_actual_shell_commands():
 		if 'mq' in shellReturn('tc qdisc show dev ' + interfaceA + ' root'):
 			print('MQ detected. Will delete and recreate mq qdisc.')
 		# Clear tc filter
-		if OnAStick == True:
+		if on_a_stick() == True:
 			shell('tc qdisc delete dev ' + interfaceA + ' root')
 		else:
 			shell('tc qdisc delete dev ' + interfaceA + ' root')
@@ -84,7 +83,7 @@ def clearPriorSettings(interfaceA, interfaceB):
 def tearDown(interfaceA, interfaceB):
 	# Full teardown of everything for exiting LibreQoS
-	if enableActualShellCommands:
+	if enable_actual_shell_commands():
 		# Clear IP filters and remove xdp program from interfaces
 		#result = os.system('./bin/xdp_iphash_to_cpu_cmdline clear')
 		clear_ip_mappings() # Use the bus
@@ -92,8 +91,8 @@ def tearDown(interfaceA, interfaceB):
 def findQueuesAvailable(interfaceName):
 	# Find queues and CPU cores available. Use min between those two as queuesAvailable
-	if enableActualShellCommands:
-		if queuesAvailableOverride == 0:
+	if enable_actual_shell_commands():
+		if queues_available_override() == 0:
 			queuesAvailable = 0
 			path = '/sys/class/net/' + interfaceName + '/queues/'
 			directory_contents = os.listdir(path)
@@ -102,7 +101,7 @@ def findQueuesAvailable(interfaceName):
 				queuesAvailable += 1
 			print(f"Interface {interfaceName} NIC queues:\t\t\t" + str(queuesAvailable))
 		else:
-			queuesAvailable = queuesAvailableOverride
+			queuesAvailable = queues_available_override()
 			print(f"Interface {interfaceName} NIC queues (Override):\t\t\t" + str(queuesAvailable))
 	cpuCount = multiprocessing.cpu_count()
 	print("CPU cores:\t\t\t" + str(cpuCount))
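The hunks above elide the middle of `findQueuesAvailable`; from the surrounding lines it lists `/sys/class/net/<iface>/queues/` and counts queue entries, then takes the minimum against CPU cores. A self-contained sketch of that idea against a fake directory listing (the `rx-` filter is an assumption, not confirmed by the diff):

```python
import multiprocessing

def count_rx_queues(directory_contents):
    # Hypothetical reconstruction of the elided loop: one NIC queue per
    # 'rx-N' entry in /sys/class/net/<iface>/queues/
    queues = 0
    for entry in directory_contents:
        if entry.startswith('rx-'):
            queues += 1
    return queues

# Fake /sys listing standing in for a 2-queue NIC
fake_listing = ['rx-0', 'rx-1', 'tx-0', 'tx-1']
queues_available = min(count_rx_queues(fake_listing), multiprocessing.cpu_count())
```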
@@ -298,7 +297,7 @@ def loadSubscriberCircuits(shapedDevicesFile):
 	for row in commentsRemoved:
 		circuitID, circuitName, deviceID, deviceName, ParentNode, mac, ipv4_input, ipv6_input, downloadMin, uploadMin, downloadMax, uploadMax, comment = row
 		# If in monitorOnlyMode, override bandwidth rates to where no shaping will actually occur
-		if monitorOnlyMode == True:
+		if monitor_mode_only() == True:
 			downloadMin = 10000
 			uploadMin = 10000
 			downloadMax = 10000
@@ -333,7 +332,7 @@ def loadSubscriberCircuits(shapedDevicesFile):
 			errorMessageString = "Device " + deviceName + " with deviceID " + deviceID + " had different Parent Node from other devices of circuit ID #" + circuitID
 			raise ValueError(errorMessageString)
 		# Check if bandwidth parameters match other devices of this same circuit ID, but only check if monitorOnlyMode is Off
-		if monitorOnlyMode == False:
+		if monitor_mode_only() == False:
 			if ((circuit['minDownload'] != round(int(downloadMin)*tcpOverheadFactor))
 				or (circuit['minUpload'] != round(int(uploadMin)*tcpOverheadFactor))
 				or (circuit['maxDownload'] != round(int(downloadMax)*tcpOverheadFactor))
@@ -427,10 +426,10 @@ def refreshShapers():
 	ipMapBatch = BatchedCommands()
 	# Warn user if enableActualShellCommands is False, because that would mean no actual commands are executing
-	if enableActualShellCommands == False:
+	if enable_actual_shell_commands() == False:
 		warnings.warn("enableActualShellCommands is set to False. None of the commands below will actually be executed. Simulated run.", stacklevel=2)
 	# Warn user if monitorOnlyMode is True, because that would mean no actual shaping is happening
-	if monitorOnlyMode == True:
+	if monitor_mode_only() == True:
 		warnings.warn("monitorOnlyMode is set to True. Shaping will not occur.", stacklevel=2)
@@ -474,18 +473,18 @@ def refreshShapers():
 	# Pull rx/tx queues / CPU cores available
 	# Handling the case when the number of queues for interfaces are different
-	InterfaceAQueuesAvailable = findQueuesAvailable(interfaceA)
-	InterfaceBQueuesAvailable = findQueuesAvailable(interfaceB)
+	InterfaceAQueuesAvailable = findQueuesAvailable(interface_a())
+	InterfaceBQueuesAvailable = findQueuesAvailable(interface_b())
 	queuesAvailable = min(InterfaceAQueuesAvailable, InterfaceBQueuesAvailable)
 	stickOffset = 0
-	if OnAStick:
+	if on_a_stick():
 		print("On-a-stick override dividing queues")
 		# The idea here is that download use queues 0 - n/2, upload uses the other half
 		queuesAvailable = math.floor(queuesAvailable / 2)
 		stickOffset = queuesAvailable
 	# If in monitorOnlyMode, override network.json bandwidth rates to where no shaping will actually occur
-	if monitorOnlyMode == True:
+	if monitor_mode_only() == True:
 		def overrideNetworkBandwidths(data):
 			for elem in data:
 				if 'children' in data[elem]:
@@ -499,12 +498,12 @@ def refreshShapers():
 	generatedPNs = []
 	numberOfGeneratedPNs = queuesAvailable
 	# If in monitorOnlyMode, override bandwidth rates to where no shaping will actually occur
-	if monitorOnlyMode == True:
+	if monitor_mode_only() == True:
 		chosenDownloadMbps = 10000
 		chosenUploadMbps = 10000
 	else:
-		chosenDownloadMbps = generatedPNDownloadMbps
-		chosenUploadMbps = generatedPNDownloadMbps
+		chosenDownloadMbps = generated_pn_download_mbps()
+		chosenUploadMbps = generated_pn_upload_mbps()
 	for x in range(numberOfGeneratedPNs):
 		genPNname = "Generated_PN_" + str(x+1)
 		network[genPNname] = {
@@ -512,7 +511,7 @@ def refreshShapers():
 			"uploadBandwidthMbps": chosenUploadMbps
 		}
 		generatedPNs.append(genPNname)
-	if useBinPackingToBalanceCPU:
+	if use_bin_packing_to_balance_cpu():
 		print("Using binpacking module to sort circuits by CPU core")
 		bins = binpacking.to_constant_bin_number(dictForCircuitsWithoutParentNodes, numberOfGeneratedPNs)
 		genPNcounter = 0
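`binpacking.to_constant_bin_number` (a third-party module) splits a dict of weights across a fixed number of bins so each CPU core gets a roughly equal share of circuit load. A stdlib greedy approximation of the idea, for illustration only (not the actual library's algorithm):

```python
def to_constant_bin_number(weights, n_bins):
    # Greedy sketch: place the heaviest circuits first, each into the
    # lightest bin so far, producing roughly balanced bins.
    bins = [{} for _ in range(n_bins)]
    totals = [0] * n_bins
    for key, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
        lightest = totals.index(min(totals))
        bins[lightest][key] = weight
        totals[lightest] += weight
    return bins

# Hypothetical circuit weights (e.g. Mbps of plan bandwidth) split over 2 cores
bins = to_constant_bin_number({'c1': 900, 'c2': 500, 'c3': 400, 'c4': 200}, 2)
```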
@@ -574,7 +573,7 @@ def refreshShapers():
 			inheritBandwidthMaxes(data[node]['children'], data[node]['downloadBandwidthMbps'], data[node]['uploadBandwidthMbps'])
 		#return data
 	# Here is the actual call to the recursive function
-	inheritBandwidthMaxes(network, parentMaxDL=upstreamBandwidthCapacityDownloadMbps, parentMaxUL=upstreamBandwidthCapacityUploadMbps)
+	inheritBandwidthMaxes(network, parentMaxDL=upstream_bandwidth_capacity_download_mbps(), parentMaxUL=upstream_bandwidth_capacity_upload_mbps())
 	# Compress network.json. HTB only supports 8 levels of HTB depth. Compress to 8 layers if beyond 8.
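The recursive `inheritBandwidthMaxes` call above walks the network tree from the upstream capacity downward. Its body is not shown in this hunk; from the call shape, a plausible sketch (assumption: each child is capped at its parent's maxima) is:

```python
def inherit_bandwidth_maxes(data, parent_max_dl, parent_max_ul):
    # Assumed behavior, not confirmed by the diff: no node may advertise
    # more bandwidth than its parent allows, recursively down the tree.
    for node in data:
        data[node]['downloadBandwidthMbps'] = min(data[node]['downloadBandwidthMbps'], parent_max_dl)
        data[node]['uploadBandwidthMbps'] = min(data[node]['uploadBandwidthMbps'], parent_max_ul)
        if 'children' in data[node]:
            inherit_bandwidth_maxes(data[node]['children'],
                                    data[node]['downloadBandwidthMbps'],
                                    data[node]['uploadBandwidthMbps'])

# Toy network.json-style tree: a 10 Gbps-capable AP under a 1 Gbps upstream
network = {'Site': {'downloadBandwidthMbps': 2000, 'uploadBandwidthMbps': 2000,
                    'children': {'AP': {'downloadBandwidthMbps': 5000,
                                        'uploadBandwidthMbps': 500}}}}
inherit_bandwidth_maxes(network, 1000, 1000)
```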
@@ -629,7 +628,7 @@ def refreshShapers():
 		data[node]['parentClassID'] = parentClassID
 		data[node]['up_parentClassID'] = upParentClassID
 		# If in monitorOnlyMode, override bandwidth rates to where no shaping will actually occur
-		if monitorOnlyMode == True:
+		if monitor_mode_only() == True:
 			data[node]['downloadBandwidthMbps'] = 10000
 			data[node]['uploadBandwidthMbps'] = 10000
 		# If not in monitorOnlyMode
@@ -659,7 +658,7 @@ def refreshShapers():
 	for circuit in subscriberCircuits:
 		# If a device from ShapedDevices.csv lists this node as its Parent Node, attach it as a leaf to this node HTB
 		if node == circuit['ParentNode']:
-			if monitorOnlyMode == False:
+			if monitor_mode_only() == False:
 				if circuit['maxDownload'] > data[node]['downloadBandwidthMbps']:
 					logging.info("downloadMax of Circuit ID [" + circuit['circuitID'] + "] exceeded that of its parent node. Reducing to that of its parent node now.", stacklevel=2)
 				if circuit['maxUpload'] > data[node]['uploadBandwidthMbps']:
@ -712,50 +711,50 @@ def refreshShapers():
major += 1 major += 1
return minorByCPU return minorByCPU
# Here is the actual call to the recursive traverseNetwork() function. finalMinor is not used. # Here is the actual call to the recursive traverseNetwork() function. finalMinor is not used.
minorByCPU = traverseNetwork(network, 0, major=1, minorByCPU=minorByCPUpreloaded, queue=1, parentClassID=None, upParentClassID=None, parentMaxDL=upstreamBandwidthCapacityDownloadMbps, parentMaxUL=upstreamBandwidthCapacityUploadMbps) minorByCPU = traverseNetwork(network, 0, major=1, minorByCPU=minorByCPUpreloaded, queue=1, parentClassID=None, upParentClassID=None, parentMaxDL=upstream_bandwidth_capacity_download_mbps(), parentMaxUL=upstream_bandwidth_capacity_upload_mbps())
linuxTCcommands = [] linuxTCcommands = []
devicesShaped = [] devicesShaped = []
# Root HTB Setup # Root HTB Setup
# Create MQ qdisc for each CPU core / rx-tx queue. Generate commands to create corresponding HTB and leaf classes. Prepare commands for execution later # Create MQ qdisc for each CPU core / rx-tx queue. Generate commands to create corresponding HTB and leaf classes. Prepare commands for execution later
thisInterface = interfaceA thisInterface = interface_a()
logging.info("# MQ Setup for " + thisInterface) logging.info("# MQ Setup for " + thisInterface)
command = 'qdisc replace dev ' + thisInterface + ' root handle 7FFF: mq' command = 'qdisc replace dev ' + thisInterface + ' root handle 7FFF: mq'
linuxTCcommands.append(command) linuxTCcommands.append(command)
for queue in range(queuesAvailable): for queue in range(queuesAvailable):
command = 'qdisc add dev ' + thisInterface + ' parent 7FFF:' + hex(queue+1) + ' handle ' + hex(queue+1) + ': htb default 2' command = 'qdisc add dev ' + thisInterface + ' parent 7FFF:' + hex(queue+1) + ' handle ' + hex(queue+1) + ': htb default 2'
linuxTCcommands.append(command) linuxTCcommands.append(command)
command = 'class add dev ' + thisInterface + ' parent ' + hex(queue+1) + ': classid ' + hex(queue+1) + ':1 htb rate '+ str(upstreamBandwidthCapacityDownloadMbps) + 'mbit ceil ' + str(upstreamBandwidthCapacityDownloadMbps) + 'mbit' command = 'class add dev ' + thisInterface + ' parent ' + hex(queue+1) + ': classid ' + hex(queue+1) + ':1 htb rate '+ str(upstream_bandwidth_capacity_download_mbps()) + 'mbit ceil ' + str(upstream_bandwidth_capacity_download_mbps()) + 'mbit'
linuxTCcommands.append(command) linuxTCcommands.append(command)
command = 'qdisc add dev ' + thisInterface + ' parent ' + hex(queue+1) + ':1 ' + sqm command = 'qdisc add dev ' + thisInterface + ' parent ' + hex(queue+1) + ':1 ' + sqm()
linuxTCcommands.append(command) linuxTCcommands.append(command)
# Default class - traffic gets passed through this limiter with lower priority if it enters the top HTB without a specific class. # Default class - traffic gets passed through this limiter with lower priority if it enters the top HTB without a specific class.
# Technically, that should not even happen. So don't expect much if any traffic in this default class. # Technically, that should not even happen. So don't expect much if any traffic in this default class.
# Only 1/4 of defaultClassCapacity is guaranteed (to prevent hitting ceiling of upstream), for the most part it serves as an "up to" ceiling. # Only 1/4 of defaultClassCapacity is guaranteed (to prevent hitting ceiling of upstream), for the most part it serves as an "up to" ceiling.
command = 'class add dev ' + thisInterface + ' parent ' + hex(queue+1) + ':1 classid ' + hex(queue+1) + ':2 htb rate ' + str(round((upstreamBandwidthCapacityDownloadMbps-1)/4)) + 'mbit ceil ' + str(upstreamBandwidthCapacityDownloadMbps-1) + 'mbit prio 5' command = 'class add dev ' + thisInterface + ' parent ' + hex(queue+1) + ':1 classid ' + hex(queue+1) + ':2 htb rate ' + str(round((upstream_bandwidth_capacity_download_mbps()-1)/4)) + 'mbit ceil ' + str(upstream_bandwidth_capacity_download_mbps()-1) + 'mbit prio 5'
linuxTCcommands.append(command) linuxTCcommands.append(command)
command = 'qdisc add dev ' + thisInterface + ' parent ' + hex(queue+1) + ':2 ' + sqm command = 'qdisc add dev ' + thisInterface + ' parent ' + hex(queue+1) + ':2 ' + sqm()
linuxTCcommands.append(command) linuxTCcommands.append(command)
# Note the use of stickOffset, and not replacing the root queue if we're on a stick # Note the use of stickOffset, and not replacing the root queue if we're on a stick
thisInterface = interfaceB thisInterface = interface_b()
logging.info("# MQ Setup for " + thisInterface) logging.info("# MQ Setup for " + thisInterface)
if not OnAStick: if not on_a_stick():
command = 'qdisc replace dev ' + thisInterface + ' root handle 7FFF: mq' command = 'qdisc replace dev ' + thisInterface + ' root handle 7FFF: mq'
linuxTCcommands.append(command) linuxTCcommands.append(command)
for queue in range(queuesAvailable): for queue in range(queuesAvailable):
command = 'qdisc add dev ' + thisInterface + ' parent 7FFF:' + hex(queue+stickOffset+1) + ' handle ' + hex(queue+stickOffset+1) + ': htb default 2' command = 'qdisc add dev ' + thisInterface + ' parent 7FFF:' + hex(queue+stickOffset+1) + ' handle ' + hex(queue+stickOffset+1) + ': htb default 2'
linuxTCcommands.append(command) linuxTCcommands.append(command)
command = 'class add dev ' + thisInterface + ' parent ' + hex(queue+stickOffset+1) + ': classid ' + hex(queue+stickOffset+1) + ':1 htb rate '+ str(upstreamBandwidthCapacityUploadMbps) + 'mbit ceil ' + str(upstreamBandwidthCapacityUploadMbps) + 'mbit' command = 'class add dev ' + thisInterface + ' parent ' + hex(queue+stickOffset+1) + ': classid ' + hex(queue+stickOffset+1) + ':1 htb rate '+ str(upstream_bandwidth_capacity_upload_mbps()) + 'mbit ceil ' + str(upstream_bandwidth_capacity_upload_mbps()) + 'mbit'
linuxTCcommands.append(command) linuxTCcommands.append(command)
command = 'qdisc add dev ' + thisInterface + ' parent ' + hex(queue+stickOffset+1) + ':1 ' + sqm command = 'qdisc add dev ' + thisInterface + ' parent ' + hex(queue+stickOffset+1) + ':1 ' + sqm()
linuxTCcommands.append(command) linuxTCcommands.append(command)
# Default class - traffic gets passed through this limiter with lower priority if it enters the top HTB without a specific class. # Default class - traffic gets passed through this limiter with lower priority if it enters the top HTB without a specific class.
# Technically, that should not even happen. So don't expect much if any traffic in this default class. # Technically, that should not even happen. So don't expect much if any traffic in this default class.
# Only 1/4 of defaultClassCapacity is guaranteed (to prevent hitting ceiling of upstream), for the most part it serves as an "up to" ceiling. # Only 1/4 of defaultClassCapacity is guaranteed (to prevent hitting ceiling of upstream), for the most part it serves as an "up to" ceiling.
command = 'class add dev ' + thisInterface + ' parent ' + hex(queue+stickOffset+1) + ':1 classid ' + hex(queue+stickOffset+1) + ':2 htb rate ' + str(round((upstreamBandwidthCapacityUploadMbps-1)/4)) + 'mbit ceil ' + str(upstreamBandwidthCapacityUploadMbps-1) + 'mbit prio 5' command = 'class add dev ' + thisInterface + ' parent ' + hex(queue+stickOffset+1) + ':1 classid ' + hex(queue+stickOffset+1) + ':2 htb rate ' + str(round((upstream_bandwidth_capacity_upload_mbps()-1)/4)) + 'mbit ceil ' + str(upstream_bandwidth_capacity_upload_mbps()-1) + 'mbit prio 5'
linuxTCcommands.append(command) linuxTCcommands.append(command)
command = 'qdisc add dev ' + thisInterface + ' parent ' + hex(queue+stickOffset+1) + ':2 ' + sqm command = 'qdisc add dev ' + thisInterface + ' parent ' + hex(queue+stickOffset+1) + ':2 ' + sqm()
linuxTCcommands.append(command) linuxTCcommands.append(command)
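The loop above builds one HTB tree per hardware queue: a per-queue root, a rate-limited class `:1`, a lower-priority default class `:2`, and an SQM qdisc under each class. A minimal sketch of that command generation (interface name and capacity are made-up values, and the on-a-stick root-skip special case above is omitted):

```python
def mq_commands(iface: str, queues: int, upload_mbps: int, sqm: str, offset: int = 0):
    # Mirrors the structure of the loop above: hex() handles ("0x1", "0x2", ...)
    # index the per-CPU queues, and the default class gets 1/4 of capacity
    # guaranteed with an "up to" ceiling just under the interface rate.
    cmds = [f"qdisc replace dev {iface} root handle 7FFF: mq"]
    for q in range(queues):
        h = hex(q + offset + 1)
        cmds.append(f"qdisc add dev {iface} parent 7FFF:{h} handle {h}: htb default 2")
        cmds.append(f"class add dev {iface} parent {h}: classid {h}:1 htb rate {upload_mbps}mbit ceil {upload_mbps}mbit")
        cmds.append(f"qdisc add dev {iface} parent {h}:1 {sqm}")
        cmds.append(f"class add dev {iface} parent {h}:1 classid {h}:2 htb rate {round((upload_mbps - 1) / 4)}mbit ceil {upload_mbps - 1}mbit prio 5")
        cmds.append(f"qdisc add dev {iface} parent {h}:2 {sqm}")
    return cmds

commands = mq_commands("eth1", 2, 100, "cake diffserv4")
```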
@ -767,7 +766,7 @@ def refreshShapers():
def sqmFixupRate(rate:int, sqm:str) -> str: def sqmFixupRate(rate:int, sqm:str) -> str:
# If we aren't using cake, just return the sqm string # If we aren't using cake, just return the sqm string
if not sqm.startswith("cake") or "rtt" in sqm: if not sqm.startswith("cake") or "rtt" in sqm:
return sqm return sqm
# If we are using cake, we need to fixup the rate # If we are using cake, we need to fixup the rate
# Based on: 1 MTU is 1500 bytes, or 12,000 bits. # Based on: 1 MTU is 1500 bytes, or 12,000 bits.
# At 1 Mbps, (1,000 bits per ms) transmitting an MTU takes 12ms. Add 3ms for overhead, and we get 15ms. # At 1 Mbps, (1,000 bits per ms) transmitting an MTU takes 12ms. Add 3ms for overhead, and we get 15ms.
@ -783,11 +782,11 @@ def refreshShapers():
case _: return sqm case _: return sqm
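The comments above give the reasoning for the CAKE RTT fixup: one MTU (1500 bytes, 12,000 bits) takes 12 ms to transmit at 1 Mbps, plus roughly 3 ms of overhead, so at low shaped rates CAKE's default 100 ms RTT assumption is too tight. A sketch of the guard-and-fixup shape (the rate thresholds and rtt values here are illustrative assumptions, not the project's exact table):

```python
def sqm_fixup_rate(rate_mbps: int, sqm: str) -> str:
    # Only CAKE needs the fixup, and an explicitly configured "rtt" wins.
    if not sqm.startswith("cake") or "rtt" in sqm:
        return sqm
    # 1 MTU = 12,000 bits; at 1 Mbps that is ~12 ms on the wire, ~15 ms
    # with overhead -- so widen CAKE's rtt window at very low rates.
    # Thresholds below are illustrative, not the source's exact values.
    if rate_mbps <= 1:
        return sqm + " rtt 300"
    if rate_mbps <= 10:
        return sqm + " rtt 200"
    return sqm
```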
for node in data: for node in data:
command = 'class add dev ' + interfaceA + ' parent ' + data[node]['parentClassID'] + ' classid ' + data[node]['classMinor'] + ' htb rate '+ str(data[node]['downloadBandwidthMbpsMin']) + 'mbit ceil '+ str(data[node]['downloadBandwidthMbps']) + 'mbit prio 3' command = 'class add dev ' + interface_a() + ' parent ' + data[node]['parentClassID'] + ' classid ' + data[node]['classMinor'] + ' htb rate '+ str(data[node]['downloadBandwidthMbpsMin']) + 'mbit ceil '+ str(data[node]['downloadBandwidthMbps']) + 'mbit prio 3'
linuxTCcommands.append(command) linuxTCcommands.append(command)
logging.info("Up ParentClassID: " + data[node]['up_parentClassID']) logging.info("Up ParentClassID: " + data[node]['up_parentClassID'])
logging.info("ClassMinor: " + data[node]['classMinor']) logging.info("ClassMinor: " + data[node]['classMinor'])
command = 'class add dev ' + interfaceB + ' parent ' + data[node]['up_parentClassID'] + ' classid ' + data[node]['classMinor'] + ' htb rate '+ str(data[node]['uploadBandwidthMbpsMin']) + 'mbit ceil '+ str(data[node]['uploadBandwidthMbps']) + 'mbit prio 3' command = 'class add dev ' + interface_b() + ' parent ' + data[node]['up_parentClassID'] + ' classid ' + data[node]['classMinor'] + ' htb rate '+ str(data[node]['uploadBandwidthMbpsMin']) + 'mbit ceil '+ str(data[node]['uploadBandwidthMbps']) + 'mbit prio 3'
linuxTCcommands.append(command) linuxTCcommands.append(command)
if 'circuits' in data[node]: if 'circuits' in data[node]:
for circuit in data[node]['circuits']: for circuit in data[node]['circuits']:
@ -799,21 +798,21 @@ def refreshShapers():
if 'comment' in circuit['devices'][0]: if 'comment' in circuit['devices'][0]:
tcComment = tcComment + '| Comment: ' + circuit['devices'][0]['comment'] tcComment = tcComment + '| Comment: ' + circuit['devices'][0]['comment']
tcComment = tcComment.replace("\n", "") tcComment = tcComment.replace("\n", "")
command = 'class add dev ' + interfaceA + ' parent ' + data[node]['classid'] + ' classid ' + circuit['classMinor'] + ' htb rate '+ str(circuit['minDownload']) + 'mbit ceil '+ str(circuit['maxDownload']) + 'mbit prio 3' + tcComment command = 'class add dev ' + interface_a() + ' parent ' + data[node]['classid'] + ' classid ' + circuit['classMinor'] + ' htb rate '+ str(circuit['minDownload']) + 'mbit ceil '+ str(circuit['maxDownload']) + 'mbit prio 3' + tcComment
linuxTCcommands.append(command) linuxTCcommands.append(command)
# Only add CAKE / fq_codel qdisc if monitorOnlyMode is Off # Only add CAKE / fq_codel qdisc if monitorOnlyMode is Off
if monitorOnlyMode == False: if monitor_mode_only() == False:
# SQM Fixup for lower rates # SQM Fixup for lower rates
useSqm = sqmFixupRate(circuit['maxDownload'], sqm) useSqm = sqmFixupRate(circuit['maxDownload'], sqm())
command = 'qdisc add dev ' + interfaceA + ' parent ' + circuit['classMajor'] + ':' + circuit['classMinor'] + ' ' + useSqm command = 'qdisc add dev ' + interface_a() + ' parent ' + circuit['classMajor'] + ':' + circuit['classMinor'] + ' ' + useSqm
linuxTCcommands.append(command) linuxTCcommands.append(command)
command = 'class add dev ' + interfaceB + ' parent ' + data[node]['up_classid'] + ' classid ' + circuit['classMinor'] + ' htb rate '+ str(circuit['minUpload']) + 'mbit ceil '+ str(circuit['maxUpload']) + 'mbit prio 3' command = 'class add dev ' + interface_b() + ' parent ' + data[node]['up_classid'] + ' classid ' + circuit['classMinor'] + ' htb rate '+ str(circuit['minUpload']) + 'mbit ceil '+ str(circuit['maxUpload']) + 'mbit prio 3'
linuxTCcommands.append(command) linuxTCcommands.append(command)
# Only add CAKE / fq_codel qdisc if monitorOnlyMode is Off # Only add CAKE / fq_codel qdisc if monitorOnlyMode is Off
if monitorOnlyMode == False: if monitor_mode_only() == False:
# SQM Fixup for lower rates # SQM Fixup for lower rates
useSqm = sqmFixupRate(circuit['maxUpload'], sqm) useSqm = sqmFixupRate(circuit['maxUpload'], sqm())
command = 'qdisc add dev ' + interfaceB + ' parent ' + circuit['up_classMajor'] + ':' + circuit['classMinor'] + ' ' + useSqm command = 'qdisc add dev ' + interface_b() + ' parent ' + circuit['up_classMajor'] + ':' + circuit['classMinor'] + ' ' + useSqm
linuxTCcommands.append(command) linuxTCcommands.append(command)
pass pass
for device in circuit['devices']: for device in circuit['devices']:
@ -821,14 +820,14 @@ def refreshShapers():
for ipv4 in device['ipv4s']: for ipv4 in device['ipv4s']:
ipMapBatch.add_ip_mapping(str(ipv4), circuit['classid'], data[node]['cpuNum'], False) ipMapBatch.add_ip_mapping(str(ipv4), circuit['classid'], data[node]['cpuNum'], False)
#xdpCPUmapCommands.append('./bin/xdp_iphash_to_cpu_cmdline add --ip ' + str(ipv4) + ' --cpu ' + data[node]['cpuNum'] + ' --classid ' + circuit['classid']) #xdpCPUmapCommands.append('./bin/xdp_iphash_to_cpu_cmdline add --ip ' + str(ipv4) + ' --cpu ' + data[node]['cpuNum'] + ' --classid ' + circuit['classid'])
if OnAStick: if on_a_stick():
ipMapBatch.add_ip_mapping(str(ipv4), circuit['up_classid'], data[node]['up_cpuNum'], True) ipMapBatch.add_ip_mapping(str(ipv4), circuit['up_classid'], data[node]['up_cpuNum'], True)
#xdpCPUmapCommands.append('./bin/xdp_iphash_to_cpu_cmdline add --ip ' + str(ipv4) + ' --cpu ' + data[node]['up_cpuNum'] + ' --classid ' + circuit['up_classid'] + ' --upload 1') #xdpCPUmapCommands.append('./bin/xdp_iphash_to_cpu_cmdline add --ip ' + str(ipv4) + ' --cpu ' + data[node]['up_cpuNum'] + ' --classid ' + circuit['up_classid'] + ' --upload 1')
if device['ipv6s']: if device['ipv6s']:
for ipv6 in device['ipv6s']: for ipv6 in device['ipv6s']:
ipMapBatch.add_ip_mapping(str(ipv6), circuit['classid'], data[node]['cpuNum'], False) ipMapBatch.add_ip_mapping(str(ipv6), circuit['classid'], data[node]['cpuNum'], False)
#xdpCPUmapCommands.append('./bin/xdp_iphash_to_cpu_cmdline add --ip ' + str(ipv6) + ' --cpu ' + data[node]['cpuNum'] + ' --classid ' + circuit['classid']) #xdpCPUmapCommands.append('./bin/xdp_iphash_to_cpu_cmdline add --ip ' + str(ipv6) + ' --cpu ' + data[node]['cpuNum'] + ' --classid ' + circuit['classid'])
if OnAStick: if on_a_stick():
ipMapBatch.add_ip_mapping(str(ipv6), circuit['up_classid'], data[node]['up_cpuNum'], True) ipMapBatch.add_ip_mapping(str(ipv6), circuit['up_classid'], data[node]['up_cpuNum'], True)
#xdpCPUmapCommands.append('./bin/xdp_iphash_to_cpu_cmdline add --ip ' + str(ipv6) + ' --cpu ' + data[node]['up_cpuNum'] + ' --classid ' + circuit['up_classid'] + ' --upload 1') #xdpCPUmapCommands.append('./bin/xdp_iphash_to_cpu_cmdline add --ip ' + str(ipv6) + ' --cpu ' + data[node]['up_cpuNum'] + ' --classid ' + circuit['up_classid'] + ' --upload 1')
if device['deviceName'] not in devicesShaped: if device['deviceName'] not in devicesShaped:
@ -853,12 +852,12 @@ def refreshShapers():
# Clear Prior Settings # Clear Prior Settings
clearPriorSettings(interfaceA, interfaceB) clearPriorSettings(interface_a(), interface_b())
# Setup XDP and disable XPS regardless of whether it is first run or not (necessary to handle cases where systemctl stop was used) # Setup XDP and disable XPS regardless of whether it is first run or not (necessary to handle cases where systemctl stop was used)
xdpStartTime = datetime.now() xdpStartTime = datetime.now()
if enableActualShellCommands: if enable_actual_shell_commands():
# Here we use os.system for the command, because otherwise it sometimes glitches out with Popen in shell() # Here we use os.system for the command, because otherwise it sometimes glitches out with Popen in shell()
#result = os.system('./bin/xdp_iphash_to_cpu_cmdline clear') #result = os.system('./bin/xdp_iphash_to_cpu_cmdline clear')
clear_ip_mappings() # Use the bus clear_ip_mappings() # Use the bus
@ -894,7 +893,7 @@ def refreshShapers():
xdpFilterStartTime = datetime.now() xdpFilterStartTime = datetime.now()
print("Executing XDP-CPUMAP-TC IP filter commands") print("Executing XDP-CPUMAP-TC IP filter commands")
numXdpCommands = ipMapBatch.length() numXdpCommands = ipMapBatch.length()
if enableActualShellCommands: if enable_actual_shell_commands():
ipMapBatch.submit() ipMapBatch.submit()
#for command in xdpCPUmapCommands: #for command in xdpCPUmapCommands:
# logging.info(command) # logging.info(command)
@ -971,7 +970,7 @@ def refreshShapersUpdateOnly():
# Warn user if enableActualShellCommands is False, because that would mean no actual commands are executing # Warn user if enableActualShellCommands is False, because that would mean no actual commands are executing
if enableActualShellCommands == False: if enable_actual_shell_commands() == False:
warnings.warn("enableActualShellCommands is set to False. None of the commands below will actually be executed. Simulated run.", stacklevel=2) warnings.warn("enableActualShellCommands is set to False. None of the commands below will actually be executed. Simulated run.", stacklevel=2)
@ -1052,6 +1051,13 @@ if __name__ == '__main__':
print("ERROR: lqosd is not running. Aborting") print("ERROR: lqosd is not running. Aborting")
os._exit(-1) os._exit(-1)
# Check that the configuration file is usable
if check_config():
print("Configuration from /etc/lqos.conf is usable")
else:
print("ERROR: Unable to load configuration from /etc/lqos.conf")
os._exit(-1)
# Check that we aren't running LibreQoS.py more than once at a time # Check that we aren't running LibreQoS.py more than once at a time
if is_libre_already_running(): if is_libre_already_running():
print("LibreQoS.py is already running in another process. Aborting.") print("LibreQoS.py is already running in another process. Aborting.")
@ -1093,10 +1099,10 @@ if __name__ == '__main__':
if args.validate: if args.validate:
status = validateNetworkAndDevices() status = validateNetworkAndDevices()
elif args.clearrules: elif args.clearrules:
tearDown(interfaceA, interfaceB) tearDown(interface_a(), interface_b())
elif args.updateonly: elif args.updateonly:
# Single-interface updates don't work at all right now. # Single-interface updates don't work at all right now.
if OnAStick: if on_a_stick():
print("--updateonly is not supported for single-interface configurations") print("--updateonly is not supported for single-interface configurations")
os._exit(-1) os._exit(-1)
refreshShapersUpdateOnly() refreshShapersUpdateOnly()

View File

@ -3,8 +3,7 @@ checkPythonVersion()
import os import os
import csv import csv
import json import json
from ispConfig import uispSite, uispStrategy, overwriteNetworkJSONalways from liblqos_python import overwrite_network_json_always
from ispConfig import generatedPNUploadMbps, generatedPNDownloadMbps, upstreamBandwidthCapacityDownloadMbps, upstreamBandwidthCapacityUploadMbps
from integrationCommon import NetworkGraph, NetworkNode, NodeType from integrationCommon import NetworkGraph, NetworkNode, NodeType
def csvToNetworkJSONfile(): def csvToNetworkJSONfile():
@ -46,7 +45,7 @@ def csvToNetworkJSONfile():
net.prepareTree() net.prepareTree()
net.plotNetworkGraph(False) net.plotNetworkGraph(False)
if net.doesNetworkJsonExist(): if net.doesNetworkJsonExist():
if overwriteNetworkJSONalways: if overwrite_network_json_always():
net.createNetworkJson() net.createNetworkJson()
else: else:
print("network.json already exists and overwriteNetworkJSONalways set to False. Leaving in-place.") print("network.json already exists and overwriteNetworkJSONalways set to False. Leaving in-place.")

View File

@ -10,8 +10,7 @@ import psutil
from influxdb_client import InfluxDBClient, Point from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS from influxdb_client.client.write_api import SYNCHRONOUS
from ispConfig import interfaceA, interfaceB, influxDBEnabled, influxDBBucket, influxDBOrg, influxDBtoken, influxDBurl, sqm from liblqos_python import interface_a, interface_b, influx_db_enabled, influx_db_bucket, influx_db_org, influx_db_token, influx_db_url, sqm
def getInterfaceStats(interface): def getInterfaceStats(interface):
command = 'tc -j -s qdisc show dev ' + interface command = 'tc -j -s qdisc show dev ' + interface
@ -29,7 +28,7 @@ def chunk_list(l, n):
yield l[i:i + n] yield l[i:i + n]
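`getInterfaceStats` above shells out to `tc -j -s qdisc show dev <iface>`, which emits a JSON array of qdisc objects. A small sketch of parsing that output without running `tc` (the sample string is a trimmed, hypothetical fragment of tc's JSON, not a real capture):

```python
import json

def index_qdiscs(tc_json: str) -> dict:
    # Key the qdisc objects by their "handle" so per-queue stats
    # lookups (as done in getCircuitBandwidthStats) are O(1).
    return {q.get("handle", ""): q for q in json.loads(tc_json)}

# Hypothetical, trimmed fragment of `tc -j -s qdisc show dev eth0` output.
sample = '[{"kind": "htb", "handle": "0x1:", "bytes": 123456, "packets": 100}]'
qdiscs = index_qdiscs(sample)
```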
def getCircuitBandwidthStats(subscriberCircuits, tinsStats): def getCircuitBandwidthStats(subscriberCircuits, tinsStats):
interfaces = [interfaceA, interfaceB] interfaces = [interface_a(), interface_b()]
ifaceStats = list(map(getInterfaceStats, interfaces)) ifaceStats = list(map(getInterfaceStats, interfaces))
for circuit in subscriberCircuits: for circuit in subscriberCircuits:
@ -79,7 +78,7 @@ def getCircuitBandwidthStats(subscriberCircuits, tinsStats):
else: else:
overloadFactor = 0.0 overloadFactor = 0.0
if 'cake diffserv4' in sqm: if 'cake diffserv4' in sqm():
tinCounter = 1 tinCounter = 1
for tin in element['tins']: for tin in element['tins']:
sent_packets = float(tin['sent_packets']) sent_packets = float(tin['sent_packets'])
@ -106,7 +105,7 @@ def getCircuitBandwidthStats(subscriberCircuits, tinsStats):
circuit['stats']['currentQuery']['packetsSent' + dirSuffix] = packets circuit['stats']['currentQuery']['packetsSent' + dirSuffix] = packets
circuit['stats']['currentQuery']['overloadFactor' + dirSuffix] = overloadFactor circuit['stats']['currentQuery']['overloadFactor' + dirSuffix] = overloadFactor
#if 'cake diffserv4' in sqm: #if 'cake diffserv4' in sqm():
# circuit['stats']['currentQuery']['tins'] = theseTins # circuit['stats']['currentQuery']['tins'] = theseTins
circuit['stats']['currentQuery']['time'] = datetime.now().isoformat() circuit['stats']['currentQuery']['time'] = datetime.now().isoformat()
@ -428,9 +427,9 @@ def refreshBandwidthGraphs():
parentNodes = getParentNodeBandwidthStats(parentNodes, subscriberCircuits) parentNodes = getParentNodeBandwidthStats(parentNodes, subscriberCircuits)
print("Writing data to InfluxDB") print("Writing data to InfluxDB")
client = InfluxDBClient( client = InfluxDBClient(
url=influxDBurl, url=influx_db_url(),
token=influxDBtoken, token=influx_db_token(),
org=influxDBOrg org=influx_db_org()
) )
# Record current timestamp, use for all points added # Record current timestamp, use for all points added
@ -463,7 +462,7 @@ def refreshBandwidthGraphs():
queriesToSend.append(p) queriesToSend.append(p)
if seenSomethingBesides0s: if seenSomethingBesides0s:
write_api.write(bucket=influxDBBucket, record=queriesToSend) write_api.write(bucket=influx_db_bucket(), record=queriesToSend)
# print("Added " + str(len(queriesToSend)) + " points to InfluxDB.") # print("Added " + str(len(queriesToSend)) + " points to InfluxDB.")
queriesToSendCount += len(queriesToSend) queriesToSendCount += len(queriesToSend)
@ -488,11 +487,11 @@ def refreshBandwidthGraphs():
queriesToSend.append(p) queriesToSend.append(p)
if seenSomethingBesides0s: if seenSomethingBesides0s:
write_api.write(bucket=influxDBBucket, record=queriesToSend) write_api.write(bucket=influx_db_bucket(), record=queriesToSend)
# print("Added " + str(len(queriesToSend)) + " points to InfluxDB.") # print("Added " + str(len(queriesToSend)) + " points to InfluxDB.")
queriesToSendCount += len(queriesToSend) queriesToSendCount += len(queriesToSend)
if 'cake diffserv4' in sqm: if 'cake diffserv4' in sqm():
seenSomethingBesides0s = False seenSomethingBesides0s = False
queriesToSend = [] queriesToSend = []
listOfTins = ['Bulk', 'BestEffort', 'Video', 'Voice'] listOfTins = ['Bulk', 'BestEffort', 'Video', 'Voice']
@ -507,7 +506,7 @@ def refreshBandwidthGraphs():
queriesToSend.append(p) queriesToSend.append(p)
if seenSomethingBesides0s: if seenSomethingBesides0s:
write_api.write(bucket=influxDBBucket, record=queriesToSend) write_api.write(bucket=influx_db_bucket(), record=queriesToSend)
# print("Added " + str(len(queriesToSend)) + " points to InfluxDB.") # print("Added " + str(len(queriesToSend)) + " points to InfluxDB.")
queriesToSendCount += len(queriesToSend) queriesToSendCount += len(queriesToSend)
@ -517,7 +516,7 @@ def refreshBandwidthGraphs():
for index, item in enumerate(cpuVals): for index, item in enumerate(cpuVals):
p = Point('CPU').field('CPU_' + str(index), item) p = Point('CPU').field('CPU_' + str(index), item)
queriesToSend.append(p) queriesToSend.append(p)
write_api.write(bucket=influxDBBucket, record=queriesToSend) write_api.write(bucket=influx_db_bucket(), record=queriesToSend)
queriesToSendCount += len(queriesToSend) queriesToSendCount += len(queriesToSend)
print("Added " + str(queriesToSendCount) + " points to InfluxDB.") print("Added " + str(queriesToSendCount) + " points to InfluxDB.")
@ -557,9 +556,9 @@ def refreshLatencyGraphs():
parentNodes = getParentNodeLatencyStats(parentNodes, subscriberCircuits) parentNodes = getParentNodeLatencyStats(parentNodes, subscriberCircuits)
print("Writing data to InfluxDB") print("Writing data to InfluxDB")
client = InfluxDBClient( client = InfluxDBClient(
url=influxDBurl, url=influx_db_url(),
token=influxDBtoken, token=influx_db_token(),
org=influxDBOrg org=influx_db_org()
) )
# Record current timestamp, use for all points added # Record current timestamp, use for all points added
@ -577,7 +576,7 @@ def refreshLatencyGraphs():
tcpLatency = float(circuit['stats']['sinceLastQuery']['tcpLatency']) tcpLatency = float(circuit['stats']['sinceLastQuery']['tcpLatency'])
p = Point('TCP Latency').tag("Circuit", circuit['circuitName']).tag("ParentNode", circuit['ParentNode']).tag("Type", "Circuit").field("TCP Latency", tcpLatency).time(timestamp) p = Point('TCP Latency').tag("Circuit", circuit['circuitName']).tag("ParentNode", circuit['ParentNode']).tag("Type", "Circuit").field("TCP Latency", tcpLatency).time(timestamp)
queriesToSend.append(p) queriesToSend.append(p)
write_api.write(bucket=influxDBBucket, record=queriesToSend) write_api.write(bucket=influx_db_bucket(), record=queriesToSend)
queriesToSendCount += len(queriesToSend) queriesToSendCount += len(queriesToSend)
queriesToSend = [] queriesToSend = []
@ -587,7 +586,7 @@ def refreshLatencyGraphs():
p = Point('TCP Latency').tag("Device", parentNode['parentNodeName']).tag("ParentNode", parentNode['parentNodeName']).tag("Type", "Parent Node").field("TCP Latency", tcpLatency).time(timestamp) p = Point('TCP Latency').tag("Device", parentNode['parentNodeName']).tag("ParentNode", parentNode['parentNodeName']).tag("Type", "Parent Node").field("TCP Latency", tcpLatency).time(timestamp)
queriesToSend.append(p) queriesToSend.append(p)
write_api.write(bucket=influxDBBucket, record=queriesToSend) write_api.write(bucket=influx_db_bucket(), record=queriesToSend)
queriesToSendCount += len(queriesToSend) queriesToSendCount += len(queriesToSend)
listOfAllLatencies = [] listOfAllLatencies = []
@ -597,7 +596,7 @@ def refreshLatencyGraphs():
if len(listOfAllLatencies) > 0: if len(listOfAllLatencies) > 0:
currentNetworkLatency = statistics.median(listOfAllLatencies) currentNetworkLatency = statistics.median(listOfAllLatencies)
p = Point('TCP Latency').tag("Type", "Network").field("TCP Latency", currentNetworkLatency).time(timestamp) p = Point('TCP Latency').tag("Type", "Network").field("TCP Latency", currentNetworkLatency).time(timestamp)
write_api.write(bucket=influxDBBucket, record=p) write_api.write(bucket=influx_db_bucket(), record=p)
queriesToSendCount += 1 queriesToSendCount += 1
print("Added " + str(queriesToSendCount) + " points to InfluxDB.") print("Added " + str(queriesToSendCount) + " points to InfluxDB.")

View File

@ -2,7 +2,10 @@
# integrations. # integrations.
from typing import List, Any from typing import List, Any
from ispConfig import allowedSubnets, ignoreSubnets, generatedPNUploadMbps, generatedPNDownloadMbps, circuitNameUseAddress, upstreamBandwidthCapacityDownloadMbps, upstreamBandwidthCapacityUploadMbps from liblqos_python import allowed_subnets, ignore_subnets, generated_pn_download_mbps, generated_pn_upload_mbps, \
circuit_name_use_address, upstream_bandwidth_capacity_download_mbps, upstream_bandwidth_capacity_upload_mbps, \
find_ipv6_using_mikrotik, exclude_sites, bandwidth_overhead_factor, committed_bandwidth_multiplier, \
exception_cpes
import ipaddress import ipaddress
import enum import enum
import os import os
@ -12,7 +15,7 @@ def isInAllowedSubnets(inputIP):
isAllowed = False isAllowed = False
if '/' in inputIP: if '/' in inputIP:
inputIP = inputIP.split('/')[0] inputIP = inputIP.split('/')[0]
for subnet in allowedSubnets: for subnet in allowed_subnets():
if (ipaddress.ip_address(inputIP) in ipaddress.ip_network(subnet)): if (ipaddress.ip_address(inputIP) in ipaddress.ip_network(subnet)):
isAllowed = True isAllowed = True
return isAllowed return isAllowed
@ -23,7 +26,7 @@ def isInIgnoredSubnets(inputIP):
isIgnored = False isIgnored = False
if '/' in inputIP: if '/' in inputIP:
inputIP = inputIP.split('/')[0] inputIP = inputIP.split('/')[0]
for subnet in ignoreSubnets: for subnet in ignore_subnets():
if (ipaddress.ip_address(inputIP) in ipaddress.ip_network(subnet)): if (ipaddress.ip_address(inputIP) in ipaddress.ip_network(subnet)):
isIgnored = True isIgnored = True
return isIgnored return isIgnored
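Both helpers above follow the same pattern: strip any CIDR suffix from the candidate address, then test membership against each configured subnet with `ipaddress`. A standalone sketch of that shared logic (the subnet lists in the test are examples, not configuration from the source):

```python
import ipaddress

def ip_in_subnets(input_ip: str, subnets) -> bool:
    # Drop a "/len" suffix if the caller passed a CIDR instead of a host IP.
    if '/' in input_ip:
        input_ip = input_ip.split('/')[0]
    # True as soon as the address falls inside any listed subnet.
    return any(
        ipaddress.ip_address(input_ip) in ipaddress.ip_network(subnet)
        for subnet in subnets
    )
```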
@ -98,7 +101,7 @@ class NetworkNode:
address: str address: str
mac: str mac: str
def __init__(self, id: str, displayName: str = "", parentId: str = "", type: NodeType = NodeType.site, download: int = generatedPNDownloadMbps, upload: int = generatedPNUploadMbps, ipv4: List = [], ipv6: List = [], address: str = "", mac: str = "", customerName: str = "") -> None: def __init__(self, id: str, displayName: str = "", parentId: str = "", type: NodeType = NodeType.site, download: int = generated_pn_download_mbps(), upload: int = generated_pn_upload_mbps(), ipv4: List = [], ipv6: List = [], address: str = "", mac: str = "", customerName: str = "") -> None:
self.id = id self.id = id
self.parentIndex = 0 self.parentIndex = 0
self.type = type self.type = type
@ -129,14 +132,13 @@ class NetworkGraph:
exceptionCPEs: Any exceptionCPEs: Any
def __init__(self) -> None: def __init__(self) -> None:
from ispConfig import findIPv6usingMikrotik, excludeSites, exceptionCPEs
self.nodes = [ self.nodes = [
NetworkNode("FakeRoot", type=NodeType.root, NetworkNode("FakeRoot", type=NodeType.root,
parentId="", displayName="Shaper Root") parentId="", displayName="Shaper Root")
] ]
self.excludeSites = excludeSites self.excludeSites = exclude_sites()
self.exceptionCPEs = exceptionCPEs self.exceptionCPEs = exception_cpes()
if findIPv6usingMikrotik: if find_ipv6_using_mikrotik():
from mikrotikFindIPv6 import pullMikrotikIPv6 from mikrotikFindIPv6 import pullMikrotikIPv6
self.ipv4ToIPv6 = pullMikrotikIPv6() self.ipv4ToIPv6 = pullMikrotikIPv6()
else: else:
@ -144,11 +146,13 @@ class NetworkGraph:
def addRawNode(self, node: NetworkNode) -> None: def addRawNode(self, node: NetworkNode) -> None:
# Adds a NetworkNode to the graph, unchanged. # Adds a NetworkNode to the graph, unchanged.
# If a site is excluded (via excludedSites in ispConfig) # If a site is excluded (via excludedSites in lqos.conf)
# it won't be added # it won't be added
if not node.displayName in self.excludeSites: if not node.displayName in self.excludeSites:
if node.displayName in self.exceptionCPEs.keys(): # TODO: Fixup exceptionCPE handling
node.parentId = self.exceptionCPEs[node.displayName] #print(self.excludeSites)
#if node.displayName in self.exceptionCPEs.keys():
# node.parentId = self.exceptionCPEs[node.displayName]
self.nodes.append(node) self.nodes.append(node)
def replaceRootNode(self, node: NetworkNode) -> None: def replaceRootNode(self, node: NetworkNode) -> None:
@ -315,7 +319,7 @@ class NetworkGraph:
data[node]['uploadBandwidthMbps'] = min(int(data[node]['uploadBandwidthMbps']),int(parentMaxUL)) data[node]['uploadBandwidthMbps'] = min(int(data[node]['uploadBandwidthMbps']),int(parentMaxUL))
if 'children' in data[node]: if 'children' in data[node]:
inheritBandwidthMaxes(data[node]['children'], data[node]['downloadBandwidthMbps'], data[node]['uploadBandwidthMbps']) inheritBandwidthMaxes(data[node]['children'], data[node]['downloadBandwidthMbps'], data[node]['uploadBandwidthMbps'])
inheritBandwidthMaxes(topLevelNode, parentMaxDL=upstreamBandwidthCapacityDownloadMbps, parentMaxUL=upstreamBandwidthCapacityUploadMbps) inheritBandwidthMaxes(topLevelNode, parentMaxDL=upstream_bandwidth_capacity_download_mbps(), parentMaxUL=upstream_bandwidth_capacity_upload_mbps())
with open('network.json', 'w') as f: with open('network.json', 'w') as f:
json.dump(topLevelNode, f, indent=4) json.dump(topLevelNode, f, indent=4)
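`inheritBandwidthMaxes` above walks the network tree recursively and clamps every node's rates to its parent's, so no child can advertise more bandwidth than the upstream capacity. A minimal sketch over a dict-of-dicts tree (field names follow the source; the tree itself is a hypothetical example):

```python
def inherit_bandwidth_maxes(nodes: dict, parent_max_dl: int, parent_max_ul: int) -> None:
    # Clamp each node to its parent's ceiling, then recurse so the
    # cap propagates all the way down the tree.
    for node in nodes:
        nodes[node]['downloadBandwidthMbps'] = min(
            int(nodes[node]['downloadBandwidthMbps']), int(parent_max_dl))
        nodes[node]['uploadBandwidthMbps'] = min(
            int(nodes[node]['uploadBandwidthMbps']), int(parent_max_ul))
        if 'children' in nodes[node]:
            inherit_bandwidth_maxes(nodes[node]['children'],
                                    nodes[node]['downloadBandwidthMbps'],
                                    nodes[node]['uploadBandwidthMbps'])

tree = {
    "Site": {
        "downloadBandwidthMbps": 2000, "uploadBandwidthMbps": 2000,
        "children": {
            "AP": {"downloadBandwidthMbps": 500, "uploadBandwidthMbps": 500}
        }
    }
}
# Hypothetical upstream capacity of 1000/1000 Mbps.
inherit_bandwidth_maxes(tree, parent_max_dl=1000, parent_max_ul=1000)
```

The over-provisioned site is clamped to the upstream capacity while the already-smaller access point keeps its own limit.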
@ -355,19 +359,14 @@ class NetworkGraph:
def createShapedDevices(self): def createShapedDevices(self):
import csv import csv
from ispConfig import bandwidthOverheadFactor # Builds ShapedDevices.csv from the network tree.
try:
from ispConfig import committedBandwidthMultiplier
except:
committedBandwidthMultiplier = 0.98
# Builds ShapedDevices.csv from the network tree.
circuits = [] circuits = []
for (i, node) in enumerate(self.nodes): for (i, node) in enumerate(self.nodes):
if node.type == NodeType.client: if node.type == NodeType.client:
parent = self.nodes[node.parentIndex].displayName parent = self.nodes[node.parentIndex].displayName
if parent == "Shaper Root": parent = "" if parent == "Shaper Root": parent = ""
if circuitNameUseAddress: if circuit_name_use_address():
displayNameToUse = node.address displayNameToUse = node.address
else: else:
if node.type == NodeType.client: if node.type == NodeType.client:
@ -420,10 +419,10 @@ class NetworkGraph:
device["mac"], device["mac"],
device["ipv4"], device["ipv4"],
device["ipv6"], device["ipv6"],
int(float(circuit["download"]) * committedBandwidthMultiplier), int(float(circuit["download"]) * committed_bandwidth_multiplier()),
int(float(circuit["upload"]) * committedBandwidthMultiplier), int(float(circuit["upload"]) * committed_bandwidth_multiplier()),
int(float(circuit["download"]) * bandwidthOverheadFactor), int(float(circuit["download"]) * bandwidth_overhead_factor()),
int(float(circuit["upload"]) * bandwidthOverheadFactor), int(float(circuit["upload"]) * bandwidth_overhead_factor()),
"" ""
] ]
wr.writerow(row) wr.writerow(row)
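The row written above derives the circuit's minimum ("committed") rates from the plan speed scaled by `committed_bandwidth_multiplier` (the old ispConfig fell back to 0.98 when unset) and the maximum rates scaled by `bandwidth_overhead_factor`. A sketch of that arithmetic (the factors passed in the test are illustrative, chosen to be exact in floating point):

```python
def shaped_rates(download: float, upload: float,
                 committed_multiplier: float, overhead_factor: float):
    # Mirrors the int(float(...) * factor) truncation used when the
    # ShapedDevices.csv row is written: fractional Mbps are rounded down.
    return (int(float(download) * committed_multiplier),  # Download Min Mbps
            int(float(upload) * committed_multiplier),    # Upload Min Mbps
            int(float(download) * overhead_factor),       # Download Max Mbps
            int(float(upload) * overhead_factor))         # Upload Max Mbps
```

Note that `int()` truncates toward zero, so the committed rate never rounds up past the scaled plan speed.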

View File

@ -2,20 +2,20 @@ from pythonCheck import checkPythonVersion
checkPythonVersion() checkPythonVersion()
import requests import requests
import warnings import warnings
from ispConfig import excludeSites, findIPv6usingMikrotik, bandwidthOverheadFactor, exceptionCPEs, powercode_api_key, powercode_api_url from liblqos_python import find_ipv6_using_mikrotik, powercode_api_key, powercode_api_url
from integrationCommon import isIpv4Permitted from integrationCommon import isIpv4Permitted
import base64 import base64
from requests.auth import HTTPBasicAuth from requests.auth import HTTPBasicAuth
if findIPv6usingMikrotik == True: if find_ipv6_using_mikrotik() == True:
from mikrotikFindIPv6 import pullMikrotikIPv6 from mikrotikFindIPv6 import pullMikrotikIPv6
from integrationCommon import NetworkGraph, NetworkNode, NodeType from integrationCommon import NetworkGraph, NetworkNode, NodeType
from urllib3.exceptions import InsecureRequestWarning from urllib3.exceptions import InsecureRequestWarning
def getCustomerInfo(): def getCustomerInfo():
headers= {'Content-Type': 'application/x-www-form-urlencoded'} headers= {'Content-Type': 'application/x-www-form-urlencoded'}
url = powercode_api_url + ":444/api/preseem/index.php" url = powercode_api_url() + ":444/api/preseem/index.php"
data = {} data = {}
data['apiKey'] = powercode_api_key data['apiKey'] = powercode_api_key()
data['action'] = 'list_customers' data['action'] = 'list_customers'
r = requests.post(url, data=data, headers=headers, verify=False, timeout=10) r = requests.post(url, data=data, headers=headers, verify=False, timeout=10)
@ -23,9 +23,9 @@ def getCustomerInfo():
def getListServices(): def getListServices():
headers= {'Content-Type': 'application/x-www-form-urlencoded'} headers= {'Content-Type': 'application/x-www-form-urlencoded'}
url = powercode_api_url + ":444/api/preseem/index.php" url = powercode_api_url() + ":444/api/preseem/index.php"
data = {} data = {}
data['apiKey'] = powercode_api_key data['apiKey'] = powercode_api_key()
data['action'] = 'list_services' data['action'] = 'list_services'
r = requests.post(url, data=data, headers=headers, verify=False, timeout=10) r = requests.post(url, data=data, headers=headers, verify=False, timeout=10)

View File

@ -1,79 +1,81 @@
import csv print("Deprecated for now.")
import os
import shutil
from datetime import datetime
from requests import get # import csv
# import os
# import shutil
# from datetime import datetime
from ispConfig import automaticImportRestHttp as restconf # from requests import get
from pydash import objects
requestsBaseConfig = { # from ispConfig import automaticImportRestHttp as restconf
'verify': True, # from pydash import objects
'headers': {
'accept': 'application/json' # requestsBaseConfig = {
} # 'verify': True,
} # 'headers': {
# 'accept': 'application/json'
# }
# }
def createShaper(): # def createShaper():
# shutil.copy('Shaper.csv', 'Shaper.csv.bak') # # shutil.copy('Shaper.csv', 'Shaper.csv.bak')
ts = datetime.now().strftime('%Y-%m-%d.%H-%M-%S') # ts = datetime.now().strftime('%Y-%m-%d.%H-%M-%S')
devicesURL = restconf.get('baseURL') + '/' + restconf.get('devicesURI').strip('/') # devicesURL = restconf.get('baseURL') + '/' + restconf.get('devicesURI').strip('/')
requestConfig = objects.defaults_deep({'params': {}}, restconf.get('requestsConfig'), requestsBaseConfig) # requestConfig = objects.defaults_deep({'params': {}}, restconf.get('requestsConfig'), requestsBaseConfig)
raw = get(devicesURL, **requestConfig, timeout=10) # raw = get(devicesURL, **requestConfig, timeout=10)
if raw.status_code != 200: # if raw.status_code != 200:
print('Failed to request ' + devicesURL + ', got ' + str(raw.status_code)) # print('Failed to request ' + devicesURL + ', got ' + str(raw.status_code))
return False # return False
devicesCsvFP = os.path.dirname(os.path.realpath(__file__)) + '/ShapedDevices.csv' # devicesCsvFP = os.path.dirname(os.path.realpath(__file__)) + '/ShapedDevices.csv'
with open(devicesCsvFP, 'w') as csvfile: # with open(devicesCsvFP, 'w') as csvfile:
wr = csv.writer(csvfile, quoting=csv.QUOTE_ALL) # wr = csv.writer(csvfile, quoting=csv.QUOTE_ALL)
wr.writerow( # wr.writerow(
['Circuit ID', 'Circuit Name', 'Device ID', 'Device Name', 'Parent Node', 'MAC', 'IPv4', 'IPv6', # ['Circuit ID', 'Circuit Name', 'Device ID', 'Device Name', 'Parent Node', 'MAC', 'IPv4', 'IPv6',
'Download Min Mbps', 'Upload Min Mbps', 'Download Max Mbps', 'Upload Max Mbps', 'Comment']) # 'Download Min Mbps', 'Upload Min Mbps', 'Download Max Mbps', 'Upload Max Mbps', 'Comment'])
for row in raw.json(): # for row in raw.json():
wr.writerow(row.values()) # wr.writerow(row.values())
if restconf['logChanges']: # if restconf['logChanges']:
devicesBakFilePath = restconf['logChanges'].rstrip('/') + '/ShapedDevices.' + ts + '.csv' # devicesBakFilePath = restconf['logChanges'].rstrip('/') + '/ShapedDevices.' + ts + '.csv'
try: # try:
shutil.copy(devicesCsvFP, devicesBakFilePath) # shutil.copy(devicesCsvFP, devicesBakFilePath)
except: # except:
os.makedirs(restconf['logChanges'], exist_ok=True) # os.makedirs(restconf['logChanges'], exist_ok=True)
shutil.copy(devicesCsvFP, devicesBakFilePath) # shutil.copy(devicesCsvFP, devicesBakFilePath)
networkURL = restconf['baseURL'] + '/' + restconf['networkURI'].strip('/') # networkURL = restconf['baseURL'] + '/' + restconf['networkURI'].strip('/')
raw = get(networkURL, **requestConfig, timeout=10) # raw = get(networkURL, **requestConfig, timeout=10)
if raw.status_code != 200: # if raw.status_code != 200:
print('Failed to request ' + networkURL + ', got ' + str(raw.status_code)) # print('Failed to request ' + networkURL + ', got ' + str(raw.status_code))
return False # return False
networkJsonFP = os.path.dirname(os.path.realpath(__file__)) + '/network.json' # networkJsonFP = os.path.dirname(os.path.realpath(__file__)) + '/network.json'
with open(networkJsonFP, 'w') as handler: # with open(networkJsonFP, 'w') as handler:
handler.write(raw.text) # handler.write(raw.text)
if restconf['logChanges']: # if restconf['logChanges']:
networkBakFilePath = restconf['logChanges'].rstrip('/') + '/network.' + ts + '.json' # networkBakFilePath = restconf['logChanges'].rstrip('/') + '/network.' + ts + '.json'
try: # try:
shutil.copy(networkJsonFP, networkBakFilePath) # shutil.copy(networkJsonFP, networkBakFilePath)
except: # except:
os.makedirs(restconf['logChanges'], exist_ok=True) # os.makedirs(restconf['logChanges'], exist_ok=True)
shutil.copy(networkJsonFP, networkBakFilePath) # shutil.copy(networkJsonFP, networkBakFilePath)
def importFromRestHttp(): # def importFromRestHttp():
createShaper() # createShaper()
if __name__ == '__main__': # if __name__ == '__main__':
importFromRestHttp() # importFromRestHttp()

View File

@ -2,8 +2,9 @@ from pythonCheck import checkPythonVersion
checkPythonVersion() checkPythonVersion()
import requests import requests
import subprocess import subprocess
from ispConfig import sonar_api_url,sonar_api_key,sonar_airmax_ap_model_ids,sonar_active_status_ids,sonar_ltu_ap_model_ids,snmp_community from liblqos_python import sonar_api_key, sonar_api_url, snmp_community, sonar_airmax_ap_model_ids, \
all_models = sonar_airmax_ap_model_ids + sonar_ltu_ap_model_ids sonar_ltu_ap_model_ids, sonar_active_status_ids
all_models = sonar_airmax_ap_model_ids() + sonar_ltu_ap_model_ids()
from integrationCommon import NetworkGraph, NetworkNode, NodeType from integrationCommon import NetworkGraph, NetworkNode, NodeType
from multiprocessing.pool import ThreadPool from multiprocessing.pool import ThreadPool
@ -26,7 +27,7 @@ from multiprocessing.pool import ThreadPool
def sonarRequest(query,variables={}): def sonarRequest(query,variables={}):
r = requests.post(sonar_api_url, json={'query': query, 'variables': variables}, headers={'Authorization': 'Bearer ' + sonar_api_key}, timeout=10) r = requests.post(sonar_api_url(), json={'query': query, 'variables': variables}, headers={'Authorization': 'Bearer ' + sonar_api_key()}, timeout=10)
r_json = r.json() r_json = r.json()
# Sonar responses look like this: {"data": {"accounts": {"entities": [{"id": '1'},{"id": 2}]}}} # Sonar responses look like this: {"data": {"accounts": {"entities": [{"id": '1'},{"id": 2}]}}}
@ -36,7 +37,7 @@ def sonarRequest(query,variables={}):
return sonar_list return sonar_list
def getActiveStatuses(): def getActiveStatuses():
if not sonar_active_status_ids: if not sonar_active_status_ids():
query = """query getActiveStatuses { query = """query getActiveStatuses {
account_statuses (activates_account: true) { account_statuses (activates_account: true) {
entities { entities {
@ -52,7 +53,7 @@ def getActiveStatuses():
status_ids.append(status['id']) status_ids.append(status['id'])
return status_ids return status_ids
else: else:
return sonar_active_status_ids return sonar_active_status_ids()
# Sometimes the IP will be under the field data for an item and sometimes it will be assigned to the inventory item itself. # Sometimes the IP will be under the field data for an item and sometimes it will be assigned to the inventory item itself.
def findIPs(inventory_item): def findIPs(inventory_item):
@ -118,7 +119,7 @@ def getSitesAndAps():
} }
sites_and_aps = sonarRequest(query,variables) sites_and_aps = sonarRequest(query,variables)
# This should only return sites that have equipment on them that is in the list sonar_ubiquiti_ap_model_ids in ispConfig.py # This should only return sites that have equipment on them that is in the list sonar_ubiquiti_ap_model_ids in lqos.conf
sites = [] sites = []
aps = [] aps = []
for site in sites_and_aps: for site in sites_and_aps:
@ -184,7 +185,7 @@ def getAccounts(sonar_active_status_ids):
}""" }"""
active_status_ids = [] active_status_ids = []
for status_id in sonar_active_status_ids: for status_id in sonar_active_status_ids():
active_status_ids.append({ active_status_ids.append({
"attribute": "account_status_id", "attribute": "account_status_id",
"operator": "EQ", "operator": "EQ",
@ -246,12 +247,12 @@ def getAccounts(sonar_active_status_ids):
def mapApCpeMacs(ap): def mapApCpeMacs(ap):
macs = [] macs = []
macs_output = None macs_output = None
if ap['model'] in sonar_airmax_ap_model_ids: #Tested with Prism Gen2AC and Rocket M5. if ap['model'] in sonar_airmax_ap_model_ids(): #Tested with Prism Gen2AC and Rocket M5.
macs_output = subprocess.run(['snmpwalk', '-Os', '-v', '1', '-c', snmp_community, ap['ip'], '.1.3.6.1.4.1.41112.1.4.7.1.1.1'], capture_output=True).stdout.decode('utf8') macs_output = subprocess.run(['snmpwalk', '-Os', '-v', '1', '-c', snmp_community(), ap['ip'], '.1.3.6.1.4.1.41112.1.4.7.1.1.1'], capture_output=True).stdout.decode('utf8')
if ap['model'] in sonar_ltu_ap_model_ids: #Tested with LTU Rocket if ap['model'] in sonar_ltu_ap_model_ids(): #Tested with LTU Rocket
macs_output = subprocess.run(['snmpwalk', '-Os', '-v', '1', '-c', snmp_community, ap['ip'], '.1.3.6.1.4.1.41112.1.10.1.4.1.11'], capture_output=True).stdout.decode('utf8') macs_output = subprocess.run(['snmpwalk', '-Os', '-v', '1', '-c', snmp_community(), ap['ip'], '.1.3.6.1.4.1.41112.1.10.1.4.1.11'], capture_output=True).stdout.decode('utf8')
if macs_output: if macs_output:
name_output = subprocess.run(['snmpwalk', '-Os', '-v', '1', '-c', snmp_community, ap['ip'], '.1.3.6.1.2.1.1.5.0'], capture_output=True).stdout.decode('utf8') name_output = subprocess.run(['snmpwalk', '-Os', '-v', '1', '-c', snmp_community(), ap['ip'], '.1.3.6.1.2.1.1.5.0'], capture_output=True).stdout.decode('utf8')
ap['name'] = name_output[name_output.find('"')+1:name_output.rfind('"')] ap['name'] = name_output[name_output.find('"')+1:name_output.rfind('"')]
for mac_line in macs_output.splitlines(): for mac_line in macs_output.splitlines():
mac = mac_line[mac_line.find(':')+1:] mac = mac_line[mac_line.find(':')+1:]
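The MAC-harvesting loop above keeps everything after the first `:` on each `snmpwalk -Os` output line. A self-contained sketch of that parsing step (the sample input is invented; the real code operates on live snmpwalk output and does not strip whitespace):

```python
def parse_snmp_macs(snmpwalk_output: str) -> list:
    # Sketch of the extraction in mapApCpeMacs(): snmpwalk -Os prints lines
    # like 'iso.3.6...1 = Hex-STRING: AA BB CC DD EE FF'; everything after
    # the first ':' is kept (stripped here for clarity).
    macs = []
    for line in snmpwalk_output.splitlines():
        if ':' not in line:
            continue
        macs.append(line[line.find(':') + 1:].strip())
    return macs

print(parse_snmp_macs("iso.1 = Hex-STRING: AA BB CC DD EE FF"))
```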

View File

@ -2,23 +2,24 @@ from pythonCheck import checkPythonVersion
checkPythonVersion() checkPythonVersion()
import requests import requests
import warnings import warnings
from ispConfig import excludeSites, findIPv6usingMikrotik, bandwidthOverheadFactor, exceptionCPEs, splynx_api_key, splynx_api_secret, splynx_api_url from liblqos_python import exclude_sites, find_ipv6_using_mikrotik, bandwidth_overhead_factor, splynx_api_key, \
splynx_api_secret, splynx_api_url
from integrationCommon import isIpv4Permitted from integrationCommon import isIpv4Permitted
import base64 import base64
from requests.auth import HTTPBasicAuth from requests.auth import HTTPBasicAuth
if findIPv6usingMikrotik == True: if find_ipv6_using_mikrotik() == True:
from mikrotikFindIPv6 import pullMikrotikIPv6 from mikrotikFindIPv6 import pullMikrotikIPv6
from integrationCommon import NetworkGraph, NetworkNode, NodeType from integrationCommon import NetworkGraph, NetworkNode, NodeType
def buildHeaders(): def buildHeaders():
credentials = splynx_api_key + ':' + splynx_api_secret credentials = splynx_api_key() + ':' + splynx_api_secret()
credentials = base64.b64encode(credentials.encode()).decode() credentials = base64.b64encode(credentials.encode()).decode()
return {'Authorization' : "Basic %s" % credentials} return {'Authorization' : "Basic %s" % credentials}
def spylnxRequest(target, headers): def spylnxRequest(target, headers):
# Sends a REST GET request to Spylnx and returns the # Sends a REST GET request to Spylnx and returns the
# result in JSON # result in JSON
url = splynx_api_url + "/api/2.0/" + target url = splynx_api_url() + "/api/2.0/" + target
r = requests.get(url, headers=headers, timeout=10) r = requests.get(url, headers=headers, timeout=10)
return r.json() return r.json()
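`buildHeaders()` above is standard HTTP Basic authentication built from the Splynx key/secret pair returned by the new accessors. A standalone sketch with placeholder credentials:

```python
import base64

def build_basic_auth_header(api_key: str, api_secret: str) -> dict:
    # Same scheme as buildHeaders(): base64("key:secret") wrapped in a
    # Basic Authorization header. Credentials here are placeholders.
    credentials = base64.b64encode(f"{api_key}:{api_secret}".encode()).decode()
    return {"Authorization": "Basic %s" % credentials}

print(build_basic_auth_header("k", "s"))  # → {'Authorization': 'Basic azpz'}
```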

View File

@ -5,35 +5,16 @@ import os
import csv import csv
from datetime import datetime, timedelta from datetime import datetime, timedelta
from integrationCommon import isIpv4Permitted, fixSubnet from integrationCommon import isIpv4Permitted, fixSubnet
try: from liblqos_python import uisp_site, uisp_strategy, overwrite_network_json_always, uisp_suspended_strategy, \
from ispConfig import uispSite, uispStrategy, overwriteNetworkJSONalways airmax_capacity, ltu_capacity, use_ptmp_as_parent, uisp_base_url, uisp_auth_token, \
except: generated_pn_download_mbps, generated_pn_upload_mbps
from ispConfig import uispSite, uispStrategy
overwriteNetworkJSONalways = False
try:
from ispConfig import uispSuspendedStrategy
except:
uispSuspendedStrategy = "none"
try:
from ispConfig import airMax_capacity
except:
airMax_capacity = 0.65
try:
from ispConfig import ltu_capacity
except:
ltu_capacity = 0.90
try:
from ispConfig import usePtMPasParent
except:
usePtMPasParent = False
def uispRequest(target): def uispRequest(target):
# Sends an HTTP request to UISP and returns the # Sends an HTTP request to UISP and returns the
# result in JSON. You only need to specify the # result in JSON. You only need to specify the
# tail end of the URL, e.g. "sites" # tail end of the URL, e.g. "sites"
from ispConfig import UISPbaseURL, uispAuthToken url = uisp_base_url() + "/nms/api/v2.1/" + target
url = UISPbaseURL + "/nms/api/v2.1/" + target headers = {'accept': 'application/json', 'x-auth-token': uisp_auth_token()}
headers = {'accept': 'application/json', 'x-auth-token': uispAuthToken}
r = requests.get(url, headers=headers, timeout=60) r = requests.get(url, headers=headers, timeout=60)
return r.json() return r.json()
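`uispRequest()` now resolves its base URL and token through `uisp_base_url()` and `uisp_auth_token()` instead of importing them from `ispConfig`. The URL and header construction can be sketched as follows (host and token are placeholders):

```python
def build_uisp_request(base_url: str, target: str, token: str):
    # Sketch of uispRequest(): callers pass only the URL tail (e.g. "sites");
    # authentication travels in the x-auth-token header.
    url = base_url + "/nms/api/v2.1/" + target
    headers = {"accept": "application/json", "x-auth-token": token}
    return url, headers

url, headers = build_uisp_request("https://uisp.example.com", "sites", "TOKEN")
print(url)  # → https://uisp.example.com/nms/api/v2.1/sites
```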
@ -41,7 +22,6 @@ def buildFlatGraph():
# Builds a high-performance (but lacking in site or AP bandwidth control) # Builds a high-performance (but lacking in site or AP bandwidth control)
# network. # network.
from integrationCommon import NetworkGraph, NetworkNode, NodeType from integrationCommon import NetworkGraph, NetworkNode, NodeType
from ispConfig import generatedPNUploadMbps, generatedPNDownloadMbps
# Load network sites # Load network sites
print("Loading Data from UISP") print("Loading Data from UISP")
@ -60,8 +40,8 @@ def buildFlatGraph():
customerName = '' customerName = ''
name = site['identification']['name'] name = site['identification']['name']
type = site['identification']['type'] type = site['identification']['type']
download = generatedPNDownloadMbps download = generated_pn_download_mbps()
upload = generatedPNUploadMbps upload = generated_pn_upload_mbps()
if (site['qos']['downloadSpeed']) and (site['qos']['uploadSpeed']): if (site['qos']['downloadSpeed']) and (site['qos']['uploadSpeed']):
download = int(round(site['qos']['downloadSpeed']/1000000)) download = int(round(site['qos']['downloadSpeed']/1000000))
upload = int(round(site['qos']['uploadSpeed']/1000000)) upload = int(round(site['qos']['uploadSpeed']/1000000))
@ -92,7 +72,7 @@ def buildFlatGraph():
net.prepareTree() net.prepareTree()
net.plotNetworkGraph(False) net.plotNetworkGraph(False)
if net.doesNetworkJsonExist(): if net.doesNetworkJsonExist():
if overwriteNetworkJSONalways: if overwrite_network_json_always():
net.createNetworkJson() net.createNetworkJson()
else: else:
print("network.json already exists and overwriteNetworkJSONalways set to False. Leaving in-place.") print("network.json already exists and overwriteNetworkJSONalways set to False. Leaving in-place.")
@ -156,8 +136,8 @@ def findApCapacities(devices, siteBandwidth):
if device['identification']['type'] == 'airMax': if device['identification']['type'] == 'airMax':
download, upload = airMaxCapacityCorrection(device, download, upload) download, upload = airMaxCapacityCorrection(device, download, upload)
elif device['identification']['model'] == 'LTU-Rocket': elif device['identification']['model'] == 'LTU-Rocket':
download = download * ltu_capacity download = download * ltu_capacity()
upload = upload * ltu_capacity upload = upload * ltu_capacity()
if device['identification']['model'] == 'WaveAP': if device['identification']['model'] == 'WaveAP':
if (download < 500) or (upload < 500): if (download < 500) or (upload < 500):
download = 2450 download = 2450
@ -188,8 +168,8 @@ def airMaxCapacityCorrection(device, download, upload):
upload = upload * 0.50 upload = upload * 0.50
# Flexible frame # Flexible frame
elif dlRatio == None: elif dlRatio == None:
download = download * airMax_capacity download = download * airmax_capacity()
upload = upload * airMax_capacity upload = upload * airmax_capacity()
return (download, upload) return (download, upload)
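The flexible-frame branch shown above derates both directions by the configured `airmax_capacity` factor (0.65 by default in the new `lqos.conf`). A sketch of just that branch, assuming the default factor:

```python
def flexible_frame_capacity(download, upload, airmax_capacity=0.65):
    # Sketch of the flexible-frame branch of airMaxCapacityCorrection():
    # when no fixed frame ratio is reported (dlRatio is None), both
    # directions are multiplied by the configured airmax_capacity factor.
    return download * airmax_capacity, upload * airmax_capacity

print(flexible_frame_capacity(1000, 1000))  # → (650.0, 650.0)
```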
def findAirfibers(devices, generatedPNDownloadMbps, generatedPNUploadMbps): def findAirfibers(devices, generatedPNDownloadMbps, generatedPNUploadMbps):
@ -344,7 +324,7 @@ def findNodesBranchedOffPtMP(siteList, dataLinks, sites, rootSite, foundAirFiber
'upload': upload, 'upload': upload,
parent: apID parent: apID
} }
if usePtMPasParent: if use_ptmp_as_parent():
site['parent'] = apID site['parent'] = apID
print('Site ' + name + ' will use PtMP AP as parent.') print('Site ' + name + ' will use PtMP AP as parent.')
return siteList, nodeOffPtMP return siteList, nodeOffPtMP
@ -375,7 +355,7 @@ def buildFullGraph():
# Attempts to build a full network graph, incorporating as much of the UISP # Attempts to build a full network graph, incorporating as much of the UISP
# hierarchy as possible. # hierarchy as possible.
from integrationCommon import NetworkGraph, NetworkNode, NodeType from integrationCommon import NetworkGraph, NetworkNode, NodeType
from ispConfig import uispSite, generatedPNUploadMbps, generatedPNDownloadMbps uispSite = uisp_site()
# Load network sites # Load network sites
print("Loading Data from UISP") print("Loading Data from UISP")
@ -397,7 +377,7 @@ def buildFullGraph():
siteList = buildSiteList(sites, dataLinks) siteList = buildSiteList(sites, dataLinks)
rootSite = findInSiteList(siteList, uispSite) rootSite = findInSiteList(siteList, uispSite)
print("Finding PtP Capacities") print("Finding PtP Capacities")
foundAirFibersBySite = findAirfibers(devices, generatedPNDownloadMbps, generatedPNUploadMbps) foundAirFibersBySite = findAirfibers(devices, generated_pn_download_mbps(), generated_pn_upload_mbps())
print('Creating list of route overrides') print('Creating list of route overrides')
routeOverrides = loadRoutingOverrides() routeOverrides = loadRoutingOverrides()
if rootSite is None: if rootSite is None:
@ -425,8 +405,8 @@ def buildFullGraph():
id = site['identification']['id'] id = site['identification']['id']
name = site['identification']['name'] name = site['identification']['name']
type = site['identification']['type'] type = site['identification']['type']
download = generatedPNDownloadMbps download = generated_pn_download_mbps()
upload = generatedPNUploadMbps upload = generated_pn_upload_mbps()
address = "" address = ""
customerName = "" customerName = ""
parent = findInSiteListById(siteList, id)['parent'] parent = findInSiteListById(siteList, id)['parent']
@ -469,10 +449,10 @@ def buildFullGraph():
download = int(round(site['qos']['downloadSpeed']/1000000)) download = int(round(site['qos']['downloadSpeed']/1000000))
upload = int(round(site['qos']['uploadSpeed']/1000000)) upload = int(round(site['qos']['uploadSpeed']/1000000))
if site['identification'] is not None and site['identification']['suspended'] is not None and site['identification']['suspended'] == True: if site['identification'] is not None and site['identification']['suspended'] is not None and site['identification']['suspended'] == True:
if uispSuspendedStrategy == "ignore": if uisp_suspended_strategy() == "ignore":
print("WARNING: Site " + name + " is suspended") print("WARNING: Site " + name + " is suspended")
continue continue
if uispSuspendedStrategy == "slow": if uisp_suspended_strategy() == "slow":
print("WARNING: Site " + name + " is suspended") print("WARNING: Site " + name + " is suspended")
download = 1 download = 1
upload = 1 upload = 1
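The suspended-site handling above can be summarized in a small sketch (the function name and return convention are illustrative, not from the source): `"ignore"` drops the site from the import, `"slow"` throttles it to 1 Mbps each way, and any other value leaves the plan rates alone.

```python
def apply_suspended_strategy(strategy, download, upload):
    # Sketch of the uisp_suspended_strategy() behavior:
    #   "ignore" -> site is skipped entirely (None)
    #   "slow"   -> site is throttled to 1/1 Mbps
    #   other    -> plan rates pass through unchanged
    if strategy == "ignore":
        return None
    if strategy == "slow":
        return (1, 1)
    return (download, upload)

print(apply_suspended_strategy("slow", 100, 20))  # → (1, 1)
```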
@ -530,13 +510,13 @@ def buildFullGraph():
else: else:
# Add some defaults in case they want to change them # Add some defaults in case they want to change them
siteBandwidth[node.displayName] = { siteBandwidth[node.displayName] = {
"download": generatedPNDownloadMbps, "upload": generatedPNUploadMbps} "download": generated_pn_download_mbps(), "upload": generated_pn_upload_mbps()}
net.prepareTree() net.prepareTree()
print('Plotting network graph') print('Plotting network graph')
net.plotNetworkGraph(False) net.plotNetworkGraph(False)
if net.doesNetworkJsonExist(): if net.doesNetworkJsonExist():
if overwriteNetworkJSONalways: if overwrite_network_json_always():
net.createNetworkJson() net.createNetworkJson()
else: else:
print("network.json already exists and overwriteNetworkJSONalways set to False. Leaving in-place.") print("network.json already exists and overwriteNetworkJSONalways set to False. Leaving in-place.")
@ -558,7 +538,7 @@ def buildFullGraph():
def importFromUISP(): def importFromUISP():
startTime = datetime.now() startTime = datetime.now()
match uispStrategy: match uisp_strategy():
case "full": buildFullGraph() case "full": buildFullGraph()
case default: buildFlatGraph() case default: buildFlatGraph()
endTime = datetime.now() endTime = datetime.now()

View File

@ -10,10 +10,11 @@ import subprocess
import warnings import warnings
import argparse import argparse
import logging import logging
from ispConfig import interfaceA, interfaceB, enableActualShellCommands, upstreamBandwidthCapacityDownloadMbps, upstreamBandwidthCapacityUploadMbps, generatedPNDownloadMbps, generatedPNUploadMbps from liblqos_python import interface_a, interface_b, enable_actual_shell_commands, upstream_bandwidth_capacity_download_mbps, \
upstream_bandwidth_capacity_upload_mbps, generated_pn_download_mbps, generated_pn_upload_mbps
def shell(command): def shell(command):
if enableActualShellCommands: if enable_actual_shell_commands():
logging.info(command) logging.info(command)
commands = command.split(' ') commands = command.split(' ')
proc = subprocess.Popen(commands, stdout=subprocess.PIPE) proc = subprocess.Popen(commands, stdout=subprocess.PIPE)
@ -24,7 +25,7 @@ def shell(command):
def safeShell(command): def safeShell(command):
safelyRan = True safelyRan = True
if enableActualShellCommands: if enable_actual_shell_commands():
commands = command.split(' ') commands = command.split(' ')
proc = subprocess.Popen(commands, stdout=subprocess.PIPE) proc = subprocess.Popen(commands, stdout=subprocess.PIPE)
for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"): # or another encoding for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"): # or another encoding
@ -61,7 +62,7 @@ def getQdiscForIPaddress(ipAddress):
def printStatsFromIP(ipAddress): def printStatsFromIP(ipAddress):
qDiscID = getQdiscForIPaddress(ipAddress) qDiscID = getQdiscForIPaddress(ipAddress)
if qDiscID != None: if qDiscID != None:
interfaces = [interfaceA, interfaceB] interfaces = [interface_a(), interface_b()]
for interface in interfaces: for interface in interfaces:
command = 'tc -s qdisc show dev ' + interface + ' parent ' + qDiscID command = 'tc -s qdisc show dev ' + interface + ' parent ' + qDiscID
commands = command.split(' ') commands = command.split(' ')
@ -77,7 +78,7 @@ def printCircuitClassInfo(ipAddress):
print("IP: " + ipAddress + " | Class ID: " + qDiscID) print("IP: " + ipAddress + " | Class ID: " + qDiscID)
print() print()
theClassID = '' theClassID = ''
interfaces = [interfaceA, interfaceB] interfaces = [interface_a(), interface_b()]
downloadMin = '' downloadMin = ''
downloadMax = '' downloadMax = ''
uploadMin = '' uploadMin = ''
@ -91,7 +92,7 @@ def printCircuitClassInfo(ipAddress):
for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"): # or another encoding for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"): # or another encoding
if "htb" in line: if "htb" in line:
listOfThings = line.split(" ") listOfThings = line.split(" ")
if interface == interfaceA: if interface == interface_a():
downloadMin = line.split(' rate ')[1].split(' ')[0] downloadMin = line.split(' rate ')[1].split(' ')[0]
downloadMax = line.split(' ceil ')[1].split(' ')[0] downloadMax = line.split(' ceil ')[1].split(' ')[0]
burst = line.split(' burst ')[1].split(' ')[0] burst = line.split(' burst ')[1].split(' ')[0]
@ -103,8 +104,8 @@ def printCircuitClassInfo(ipAddress):
print("Upload rate/ceil: " + uploadMin + "/" + uploadMax) print("Upload rate/ceil: " + uploadMin + "/" + uploadMax)
print("burst/cburst: " + burst + "/" + cburst) print("burst/cburst: " + burst + "/" + cburst)
else: else:
download = min(upstreamBandwidthCapacityDownloadMbps, generatedPNDownloadMbps) download = min(upstream_bandwidth_capacity_download_mbps(), generated_pn_download_mbps())
upload = min(upstreamBandwidthCapacityUploadMbps, generatedPNUploadMbps) upload = min(upstream_bandwidth_capacity_upload_mbps(), generated_pn_upload_mbps())
bwString = str(download) + '/' + str(upload) bwString = str(download) + '/' + str(upload)
print("Invalid IP address provided (default queue limit is " + bwString + " Mbps)") print("Invalid IP address provided (default queue limit is " + bwString + " Mbps)")

View File

@ -1,226 +0,0 @@
[main]
lqos_directory = '/etc/lqos/' # /etc/lqos seems saner
lqos_bus = '/run/lqos'
[perms]
max_users = 0 # limiting connects is sane
group = 'lqos'
umask = 0770 # Restrict access to the bus to lqos group and root
[stats]
queue_check_period_us = 1000000 # 1/2 rx_usecs would be nice
[tuning]
stop_irq_balance = true
netdev_budget_usecs = 8000
netdev_budget_packets = 300
rx_usecs = 8
tx_usecs = 8
disable_rxvlan = true
disable_txvlan = true
disable_offload = [ "gso", "tso", "lro", "sg", "gro" ]
# For a two interface setup, use the following - and replace
# "enp1s0f1" and "enp1s0f2" with your network card names (obtained
# from `ip link`):
[bridge]
use_xdp_bridge = true
interface_mapping = [
{ name = "enp1s0f1", redirect_to = "enp1s0f2", scan_vlans = false },
{ name = "enp1s0f2", redirect_to = "enp1s0f1", scan_vlans = false }
]
vlan_mapping = []
# For "on a stick" (single interface mode):
# [bridge]
# use_xdp_bridge = true
# interface_mapping = [
# { name = "enp1s0f1", redirect_to = "enp1s0f1", scan_vlans = true }
# ]
# vlan_mapping = [
# { parent = "enp1s0f1", tag = 3, redirect_to = 4 },
# { parent = "enp1s0f1", tag = 4, redirect_to = 3 }
# ]
# Does the linux bridge still work? How do you set it up? It seems
# as hot as we are on all this new stuff the lowest barrier to entry
# is a default of the linux bridge.
# How does one setup a Proxmox VM? Everyone except the testbed is on a vm.
# NMS/CRM Integration
[NMS]
# If a device shows a WAN IP within these subnets...
# assume they are behind NAT / un-shapable, and ignore them
ignoreSubnets = ['192.168.0.0/16']
allowedSubnets = ['100.64.0.0/10']
# Stuff appearing on the bridge not on these networks is bad
# Spoofed traffic, non BCP38 issues from customers, etc also bad
# I am also not big on caseING variable names
mySubnets = ['x.y.z.x/22']
myTunnels = ['192.168.0.0/16'] # Say we use a subset of 10/8 or ...
[IspConfig]
# 'fq_codel' or 'cake diffserv4'
# 'cake diffserv4' is recommended
# sqm = 'fq_codel'
sqm = 'cake diffserv4'
sqm_in = 'why do we think in and out should be the same?'
sqm_out = 'why do we think in and out should be the same?'
# Used to passively monitor the network for before / after comparisons. Leave as False to
# ensure actual shaping. After changing this value, run "sudo systemctl restart LibreQoS.service"
monitorOnlyMode = False
# How many Mbps are available to the edge of this network
# Does this mean we are ALSO applying this as a shaped rate in or out of the network?
upstreamBandwidthCapacityDownloadMbps = 1000
upstreamBandwidthCapacityUploadMbps = 1000
# Devices in ShapedDevices.csv without a defined ParentNode will be placed under a generated
# parent node, evenly spread out across CPU cores. Here, define the bandwidth limit for each
# of those generated parent nodes.
# and if that is the case, why does this make sense?
generatedPNDownloadMbps = 1000
generatedPNUploadMbps = 1000
# These seem to be duplicate and incomplete from the other stuff above
# How does one (assuming we keep this file) use on a stick here?
# There should be one way only to configure on a stick mode
# We should retire these and just attach to the bridge per the rust
# Interface connected to core router
interfaceA = 'eth1'
# Interface connected to edge router
interfaceB = 'eth2'
# WORK IN PROGRESS. Note that interfaceA determines the "stick" interface
# I could only get scanning to work if I issued ethtool -K enp1s0f1 rxvlan off
OnAStick = False
# VLAN facing the core router
StickVlanA = 0
# VLAN facing the edge router
StickVlanB = 0
# Allow shell commands. False causes commands print to console only without being executed.
# MUST BE ENABLED FOR PROGRAM TO FUNCTION
enableActualShellCommands = True
# Add 'sudo' before execution of any shell commands. May be required depending on distribution and environment.
# what happens when run from systemd, vs the command line?
runShellCommandsAsSudo = False
# Allows overriding queues / CPU cores used. When set to 0, the max possible queues / CPU cores are utilized. Please leave as 0. Why?
queuesAvailableOverride = 0
# Some networks are flat - where there are no Parent Nodes defined in ShapedDevices.csv
# For such flat networks, just define network.json as {} and enable this setting
# By default, it balances the subscribers across CPU cores, factoring in their max bandwidth rates
# Past 25,000 subscribers this algorithm becomes inefficient and is not advised
useBinPackingToBalanceCPU = True
[InfluxDB]
# Bandwidth & Latency Graphing
influxDBEnabled = True
influxDBurl = "http://localhost:8086"
influxDBBucket = "libreqos"
influxDBOrg = "Your ISP Name Here"
influxDBtoken = ""
[Splynx]
# Splynx Integration
automaticImportSplynx = False
splynx_api_key = ''
splynx_api_secret = ''
# Everything before /api/2.0/ on your Splynx instance
splynx_api_url = 'https://YOUR_URL.splynx.app'
# UISP integration
[UISP]
automaticImportUISP = False
uispAuthToken = ''
# Everything before /nms/ on your UISP instance
UISPbaseURL = 'https://examplesite.com'
# UISP Site - enter the name of the root site in your network tree
# to act as the starting point for the tree mapping
uispSite = ''
# Strategy:
# * "flat" - create all client sites directly off the top of the tree,
# provides maximum performance - at the expense of not offering AP,
# or site options.
# * "full" - build a complete network map
uispStrategy = "full"
# List any sites that should not be included, with each site name surrounded by ''
# and separated by commas
excludeSites = []
# If you use IPv6, this can be used to find associated IPv6 prefixes
# for your clients' IPv4 addresses, and match them
# to those devices
findIPv6usingMikrotik = False
# If you want to provide a safe cushion for speed test results to prevent customer complaints,
# you can set this to 1.15 (15% above plan rate). If not, you can leave as 1.0
bandwidthOverheadFactor = 1.0
# For edge cases, set the respective ParentNode for these CPEs
exceptionCPEs = {}
# exceptionCPEs = {
# 'CPE-SomeLocation1': 'AP-SomeLocation1',
# 'CPE-SomeLocation2': 'AP-SomeLocation2',
# }
# API Auth
apiUsername = "testUser"
apiPassword = "changeme8343486806"
apiHostIP = "127.0.0.1"
apiHostPost = 5000
httpRestIntegrationConfig = {
'enabled': False,
'baseURL': 'https://domain',
'networkURI': '/some/path',
'shaperURI': '/some/path/etc',
'requestsConfig': {
'verify': True, # Good for Dev if your dev env doesnt have cert
'params': { # params for query string ie uri?some-arg=some-value
'search': 'hold-my-beer'
},
#'headers': {
# 'Origin': 'SomeHeaderValue',
#},
},
# If you want to store a timestamped copy/backup of both network.json and Shaper.csv each time they are updated,
# provide a path
# 'logChanges': '/var/log/libreqos'
}

View File

@@ -1,17 +1,15 @@
version = "1.5"
lqos_directory = "/opt/libreqos/src"
node_id = "0000-0000-0000"
node_name = "Example Node"
packet_capture_time = 10
queue_check_period_ms = 1000

[usage_stats]
send_anonymous = true
anonymous_server = "stats.libreqos.io:9125"

[tuning]
stop_irq_balance = true
netdev_budget_usecs = 8000
netdev_budget_packets = 300
@@ -19,27 +17,86 @@
rx_usecs = 8
tx_usecs = 8
disable_rxvlan = true
disable_txvlan = true
disable_offload = [ "gso", "tso", "lro", "sg", "gro" ]

# EITHER:
[bridge]
use_xdp_bridge = true
to_internet = "eth0"
to_network = "eth1"

# OR:
#[single_interface]
#interface = "eth0"
#internet_vlan = 2
#network_vlan = 3

[queues]
default_sqm = "cake diffserv4"
monitor_only = false
uplink_bandwidth_mbps = 1000
downlink_bandwidth_mbps = 1000
generated_pn_download_mbps = 1000
generated_pn_upload_mbps = 1000
dry_run = false
sudo = false
#override_available_queues = 12 # This can be omitted and be 0 for Python
use_binpacking = false
[long_term_stats]
gather_stats = true
collation_period_seconds = 10
license_key = "(data)"
uisp_reporting_interval_seconds = 300
[ip_ranges]
ignore_subnets = []
allow_subnets = [ "172.16.0.0/12", "10.0.0.0/8", "100.64.0.0/16", "192.168.0.0/16" ]
[integration_common]
circuit_name_as_address = false
always_overwrite_network_json = false
queue_refresh_interval_mins = 30
[spylnx_integration]
enable_spylnx = false
api_key = ""
api_secret = ""
url = ""
[uisp_integration]
enable_uisp = false
token = ""
url = ""
site = ""
strategy = ""
suspended_strategy = ""
airmax_capacity = 0.65
ltu_capacity = 0.9
exclude_sites = []
ipv6_with_mikrotik = false
bandwidth_overhead_factor = 1.0
commit_bandwidth_multiplier = 0.98
exception_cpes = []
use_ptmp_as_parent = false
[powercode_integration]
enable_powercode = false
powercode_api_key = ""
powercode_api_url = ""
[sonar_integration]
enable_sonar = false
sonar_api_key = ""
sonar_api_url = ""
snmp_community = "public"
airmax_model_ids = [ "" ]
ltu_model_ids = [ "" ]
active_status_ids = [ "" ]
[influxdb]
enable_influxdb = false
url = "http://localhost:8086"
org = "libreqos"
bucket = "Your ISP Name Here"
token = ""

src/rust/Cargo.lock (generated)

File diff suppressed because it is too large


@@ -16,3 +16,6 @@ sha2 = "0"
uuid = { version = "1", features = ["v4", "fast-rng" ] }
log = "0"
dashmap = "5"
pyo3 = "0.20"
toml = "0.8.8"
once_cell = "1.19.0"


@@ -1,15 +1,23 @@
# LQosConfig

`lqos_config` is designed to manage configuration of LibreQoS. Starting in 1.5, all configuration is
centralized into `/etc/lqos.conf`.

The `lqos_python` module contains functions that mirror each of these, using their original Python
names for integration purposes.

You can find the full definitions of each configuration entry in `src/etc/v15`.

## Adding Configuration Items

There are two ways to add a configuration item:

1. Declare a Major Version Break. This is a whole new setup that will require a new configuration and migration. We should avoid doing this very often.
   1. You need to create a new folder, e.g. `src/etc/v16`.
   2. You need to port as much of the old config as you are keeping into the new version.
   3. You need to update `src/etc/migration.rs` to include code to read a "v15" file and create a "v16" configuration.
   4. *This is a lot of work and should be a planned effort!*
2. Declare an optional new item. This is how you handle "oh, I needed to snoo the foo": add an *optional* configuration item, so nothing will snarl up because it isn't there.
   1. Find the section you want to include it in, in `src/etc/v15`. If there isn't one, create it using one of the others as a template, and be sure to include the defaults. Add it into `top_config` as the type `Option<MySnooFoo>`.
   2. Update `example.toml` to include what *should* go there.
   3. Go into `lqos_python` and, in `lib.rs`, add a Python "getter" for the field. Remember to use `if let` to read the `Option` and return a default if it isn't present.


@@ -59,7 +59,7 @@ pub struct WebUsers {
impl WebUsers {
fn path() -> Result<PathBuf, AuthenticationError> {
let base_path = crate::load_config()
.map_err(|_| AuthenticationError::UnableToLoadEtcLqos)?
.lqos_directory;
let filename = Path::new(&base_path).join("lqusers.toml");


@@ -163,7 +163,16 @@ impl EtcLqos {
return Err(EtcLqosError::ConfigDoesNotExist);
}
if let Ok(raw) = std::fs::read_to_string("/etc/lqos.conf") {
Self::load_from_string(&raw)
} else {
error!("Unable to read contents of /etc/lqos.conf");
Err(EtcLqosError::CannotReadFile)
}
}
pub(crate) fn load_from_string(raw: &str) -> Result<Self, EtcLqosError> {
log::info!("Trying to load old TOML version from /etc/lqos.conf");
let document = raw.parse::<Document>();
match document {
Err(e) => {
error!("Unable to parse TOML from /etc/lqos.conf");
@@ -180,15 +189,12 @@ impl EtcLqos {
Err(e) => {
error!("Unable to parse TOML from /etc/lqos.conf");
error!("Full error: {:?}", e);
panic!();
//Err(EtcLqosError::CannotParseToml)
}
}
}
}
}
/// Saves changes made to /etc/lqos.conf
@@ -214,6 +220,7 @@ impl EtcLqos {
/// Run this if you've received the OK from the licensing server, and been
/// sent a license key. This appends a [long_term_stats] section to your
/// config file - ONLY if one doesn't already exist.
#[allow(dead_code)]
pub fn enable_long_term_stats(license_key: String) {
if let Ok(raw) = std::fs::read_to_string("/etc/lqos.conf") {
let document = raw.parse::<Document>();
@@ -228,14 +235,14 @@ pub fn enable_long_term_stats(license_key: String) {
match cfg {
Ok(cfg) => {
// Now we enable LTS if its not present
if let Ok(isp_config) = crate::load_config() {
if cfg.long_term_stats.is_none() {
let mut new_section = toml_edit::table();
new_section["gather_stats"] = value(true);
new_section["collation_period_seconds"] = value(60);
new_section["license_key"] = value(license_key);
if isp_config.uisp_integration.enable_uisp {
new_section["uisp_reporting_interval_seconds"] = value(300);
}
config_doc["long_term_stats"] = new_section;
@@ -290,21 +297,19 @@ pub enum EtcLqosError {
CannotParseToml,
#[error("Unable to backup /etc/lqos.conf to /etc/lqos.conf.backup")]
BackupFail,
#[error("Unable to write to /etc/lqos.conf")]
WriteFail,
}
#[cfg(test)]
mod test {
const EXAMPLE_LQOS_CONF: &str = include_str!("../../../../lqos.example");
#[test]
fn round_trip_toml() {
let doc = EXAMPLE_LQOS_CONF.parse::<toml_edit::Document>().unwrap();
let reserialized = doc.to_string();
assert_eq!(EXAMPLE_LQOS_CONF.trim(), reserialized.trim());
}
#[test]


@@ -0,0 +1,300 @@
/// Provides support for migration from older versions of the configuration file.
use std::path::Path;
use super::{
python_migration::{PythonMigration, PythonMigrationError},
v15::{BridgeConfig, Config, SingleInterfaceConfig},
EtcLqosError, EtcLqos,
};
use thiserror::Error;
use toml_edit::Document;
#[derive(Debug, Error)]
pub enum MigrationError {
#[error("Failed to read configuration file: {0}")]
ReadError(#[from] std::io::Error),
#[error("Failed to parse configuration file: {0}")]
ParseError(#[from] toml_edit::TomlError),
#[error("Unknown Version: {0}")]
UnknownVersion(String),
#[error("Unable to load old version: {0}")]
LoadError(#[from] EtcLqosError),
#[error("Unable to load python version: {0}")]
PythonLoadError(#[from] PythonMigrationError),
}
pub fn migrate_if_needed() -> Result<(), MigrationError> {
log::info!("Checking config file version");
let raw =
std::fs::read_to_string("/etc/lqos.conf").map_err(|e| MigrationError::ReadError(e))?;
let doc = raw
.parse::<Document>()
.map_err(|e| MigrationError::ParseError(e))?;
if let Some((_key, version)) = doc.get_key_value("version") {
log::info!("Configuration file is at version {}", version.as_str().unwrap());
if version.as_str().unwrap().trim() == "1.5" {
log::info!("Configuration file is already at version 1.5, no migration needed");
return Ok(());
} else {
log::error!("Configuration file is at version {}, but this version of lqos only supports version 1.5", version.as_str().unwrap());
return Err(MigrationError::UnknownVersion(
version.as_str().unwrap().to_string(),
));
}
} else {
log::info!("No version found in configuration file, assuming 1.4x and migration is needed");
let new_config = migrate_14_to_15()?;
// Backup the old configuration
std::fs::rename("/etc/lqos.conf", "/etc/lqos.conf.backup14")
.map_err(|e| MigrationError::ReadError(e))?;
// Rename the old Python configuration
let from = Path::new(new_config.lqos_directory.as_str()).join("ispConfig.py");
let to = Path::new(new_config.lqos_directory.as_str()).join("ispConfig.py.backup14");
std::fs::rename(from, to).map_err(|e| MigrationError::ReadError(e))?;
// Save the configuration
let raw = toml::to_string_pretty(&new_config).unwrap();
std::fs::write("/etc/lqos.conf", raw).map_err(|e| MigrationError::ReadError(e))?;
}
Ok(())
}
fn migrate_14_to_15() -> Result<Config, MigrationError> {
// Load the 1.4 config file
let old_config = EtcLqos::load().map_err(|e| MigrationError::LoadError(e))?;
let python_config = PythonMigration::load().map_err(|e| MigrationError::PythonLoadError(e))?;
let new_config = do_migration_14_to_15(&old_config, &python_config)?;
Ok(new_config)
}
fn do_migration_14_to_15(
old_config: &EtcLqos,
python_config: &PythonMigration,
) -> Result<Config, MigrationError> {
// This is separated out to make unit testing easier
let mut new_config = Config::default();
migrate_top_level(old_config, &mut new_config)?;
migrate_usage_stats(old_config, &mut new_config)?;
migrate_tunables(old_config, &mut new_config)?;
migrate_bridge(old_config, &python_config, &mut new_config)?;
migrate_lts(old_config, &mut new_config)?;
migrate_ip_ranges(python_config, &mut new_config)?;
migrate_integration_common(python_config, &mut new_config)?;
migrate_spylnx(python_config, &mut new_config)?;
migrate_uisp(python_config, &mut new_config)?;
migrate_powercode(python_config, &mut new_config)?;
migrate_sonar(python_config, &mut new_config)?;
migrate_queues( python_config, &mut new_config)?;
migrate_influx(python_config, &mut new_config)?;
new_config.validate().unwrap(); // Left as an unwrap because this should *never* happen
Ok(new_config)
}
fn migrate_top_level(old_config: &EtcLqos, new_config: &mut Config) -> Result<(), MigrationError> {
new_config.version = "1.5".to_string();
new_config.lqos_directory = old_config.lqos_directory.clone();
new_config.packet_capture_time = old_config.packet_capture_time.unwrap_or(10);
if let Some(node_id) = &old_config.node_id {
new_config.node_id = node_id.clone();
} else {
new_config.node_id = Config::calculate_node_id();
}
if let Some(node_name) = &old_config.node_name {
new_config.node_name = node_name.clone();
} else {
new_config.node_name = "Set my name in /etc/lqos.conf".to_string();
}
Ok(())
}
fn migrate_usage_stats(
old_config: &EtcLqos,
new_config: &mut Config,
) -> Result<(), MigrationError> {
if let Some(usage_stats) = &old_config.usage_stats {
new_config.usage_stats.send_anonymous = usage_stats.send_anonymous;
new_config.usage_stats.anonymous_server = usage_stats.anonymous_server.clone();
} else {
new_config.usage_stats = Default::default();
}
Ok(())
}
fn migrate_tunables(old_config: &EtcLqos, new_config: &mut Config) -> Result<(), MigrationError> {
if let Some(tunables) = &old_config.tuning {
new_config.tuning.stop_irq_balance = tunables.stop_irq_balance;
new_config.tuning.netdev_budget_packets = tunables.netdev_budget_packets;
new_config.tuning.netdev_budget_usecs = tunables.netdev_budget_usecs;
new_config.tuning.rx_usecs = tunables.rx_usecs;
new_config.tuning.tx_usecs = tunables.tx_usecs;
new_config.tuning.disable_txvlan = tunables.disable_txvlan;
new_config.tuning.disable_rxvlan = tunables.disable_rxvlan;
new_config.tuning.disable_offload = tunables.disable_offload.clone();
} else {
new_config.tuning = Default::default();
}
Ok(())
}
fn migrate_bridge(
old_config: &EtcLqos,
python_config: &PythonMigration,
new_config: &mut Config,
) -> Result<(), MigrationError> {
if python_config.on_a_stick {
new_config.bridge = None;
new_config.single_interface = Some(SingleInterfaceConfig {
interface: python_config.interface_a.clone(),
internet_vlan: python_config.stick_vlan_a,
network_vlan: python_config.stick_vlan_b,
});
} else {
new_config.single_interface = None;
new_config.bridge = Some(BridgeConfig {
use_xdp_bridge: old_config.bridge.as_ref().unwrap().use_xdp_bridge,
to_internet: python_config.interface_b.clone(),
to_network: python_config.interface_a.clone(),
});
}
Ok(())
}
fn migrate_queues(
python_config: &PythonMigration,
new_config: &mut Config,
) -> Result<(), MigrationError> {
new_config.queues.default_sqm = python_config.sqm.clone();
new_config.queues.monitor_only = python_config.monitor_only_mode;
new_config.queues.uplink_bandwidth_mbps = python_config.upstream_bandwidth_capacity_upload_mbps;
new_config.queues.downlink_bandwidth_mbps =
python_config.upstream_bandwidth_capacity_download_mbps;
new_config.queues.generated_pn_upload_mbps = python_config.generated_pn_upload_mbps;
new_config.queues.generated_pn_download_mbps = python_config.generated_pn_download_mbps;
new_config.queues.dry_run = !python_config.enable_actual_shell_commands;
new_config.queues.sudo = python_config.run_shell_commands_as_sudo;
if python_config.queues_available_override == 0 {
new_config.queues.override_available_queues = None;
} else {
new_config.queues.override_available_queues = Some(python_config.queues_available_override);
}
new_config.queues.use_binpacking = python_config.use_bin_packing_to_balance_cpu;
Ok(())
}
fn migrate_lts(old_config: &EtcLqos, new_config: &mut Config) -> Result<(), MigrationError> {
if let Some(lts) = &old_config.long_term_stats {
new_config.long_term_stats.gather_stats = lts.gather_stats;
new_config.long_term_stats.collation_period_seconds = lts.collation_period_seconds;
new_config.long_term_stats.license_key = lts.license_key.clone();
new_config.long_term_stats.uisp_reporting_interval_seconds =
lts.uisp_reporting_interval_seconds;
} else {
new_config.long_term_stats = super::v15::LongTermStats::default();
}
Ok(())
}
fn migrate_ip_ranges(
python_config: &PythonMigration,
new_config: &mut Config,
) -> Result<(), MigrationError> {
new_config.ip_ranges.ignore_subnets = python_config.ignore_subnets.clone();
new_config.ip_ranges.allow_subnets = python_config.allowed_subnets.clone();
Ok(())
}
fn migrate_integration_common(
python_config: &PythonMigration,
new_config: &mut Config,
) -> Result<(), MigrationError> {
new_config.integration_common.circuit_name_as_address = python_config.circuit_name_use_address;
new_config.integration_common.always_overwrite_network_json =
python_config.overwrite_network_json_always;
new_config.integration_common.queue_refresh_interval_mins =
python_config.queue_refresh_interval_mins;
Ok(())
}
fn migrate_spylnx(
python_config: &PythonMigration,
new_config: &mut Config,
) -> Result<(), MigrationError> {
new_config.spylnx_integration.enable_spylnx = python_config.automatic_import_splynx;
new_config.spylnx_integration.api_key = python_config.splynx_api_key.clone();
new_config.spylnx_integration.api_secret = python_config.spylnx_api_secret.clone();
new_config.spylnx_integration.url = python_config.spylnx_api_url.clone();
Ok(())
}
fn migrate_powercode(
python_config: &PythonMigration,
new_config: &mut Config,
) -> Result<(), MigrationError> {
new_config.powercode_integration.enable_powercode = python_config.automatic_import_powercode;
new_config.powercode_integration.powercode_api_url = python_config.powercode_api_url.clone();
new_config.powercode_integration.powercode_api_key = python_config.powercode_api_key.clone();
Ok(())
}
fn migrate_sonar(
python_config: &PythonMigration,
new_config: &mut Config,
) -> Result<(), MigrationError> {
new_config.sonar_integration.enable_sonar = python_config.automatic_import_sonar;
new_config.sonar_integration.sonar_api_url = python_config.sonar_api_url.clone();
new_config.sonar_integration.sonar_api_key = python_config.sonar_api_key.clone();
new_config.sonar_integration.snmp_community = python_config.snmp_community.clone();
Ok(())
}
fn migrate_uisp(
python_config: &PythonMigration,
new_config: &mut Config,
) -> Result<(), MigrationError> {
new_config.uisp_integration.enable_uisp = python_config.automatic_import_uisp;
new_config.uisp_integration.token = python_config.uisp_auth_token.clone();
new_config.uisp_integration.url = python_config.uisp_base_url.clone();
new_config.uisp_integration.site = python_config.uisp_site.clone();
new_config.uisp_integration.strategy = python_config.uisp_strategy.clone();
new_config.uisp_integration.suspended_strategy = python_config.uisp_suspended_strategy.clone();
new_config.uisp_integration.airmax_capacity = python_config.airmax_capacity;
new_config.uisp_integration.ltu_capacity = python_config.ltu_capacity;
new_config.uisp_integration.exclude_sites = python_config.exclude_sites.clone();
new_config.uisp_integration.ipv6_with_mikrotik = python_config.find_ipv6_using_mikrotik;
new_config.uisp_integration.bandwidth_overhead_factor = python_config.bandwidth_overhead_factor;
new_config.uisp_integration.commit_bandwidth_multiplier =
python_config.committed_bandwidth_multiplier;
// TODO: ExceptionCPEs is going to require some real work
Ok(())
}
fn migrate_influx(
python_config: &PythonMigration,
new_config: &mut Config,
) -> Result<(), MigrationError> {
new_config.influxdb.enable_influxdb = python_config.influx_db_enabled;
new_config.influxdb.url = python_config.influx_db_url.clone();
new_config.influxdb.bucket = python_config.infux_db_bucket.clone();
new_config.influxdb.org = python_config.influx_db_org.clone();
new_config.influxdb.token = python_config.influx_db_token.clone();
Ok(())
}
#[cfg(test)]
mod test {
use super::*;
use crate::etc::test_data::{OLD_CONFIG, PYTHON_CONFIG};
#[test]
fn test_migration() {
let old_config = EtcLqos::load_from_string(OLD_CONFIG).unwrap();
let python_config = PythonMigration::load_from_string(PYTHON_CONFIG).unwrap();
let new_config = do_migration_14_to_15(&old_config, &python_config).unwrap();
assert_eq!(new_config.version, "1.5");
}
}


@@ -0,0 +1,94 @@
//! Manages the `/etc/lqos.conf` file.
mod etclqos_migration;
use self::migration::migrate_if_needed;
pub use self::v15::Config;
pub use etclqos_migration::*;
use std::sync::Mutex;
use thiserror::Error;
mod migration;
mod python_migration;
#[cfg(test)]
pub mod test_data;
mod v15;
pub use v15::{Tunables, BridgeConfig};
static CONFIG: Mutex<Option<Config>> = Mutex::new(None);
/// Load the configuration from `/etc/lqos.conf`.
pub fn load_config() -> Result<Config, LibreQoSConfigError> {
let mut lock = CONFIG.lock().unwrap();
if lock.is_none() {
log::info!("Loading configuration file /etc/lqos.conf");
migrate_if_needed().map_err(|e| {
log::error!("Unable to migrate configuration: {:?}", e);
LibreQoSConfigError::FileNotFoud
})?;
let file_result = std::fs::read_to_string("/etc/lqos.conf");
if file_result.is_err() {
log::error!("Unable to open /etc/lqos.conf");
return Err(LibreQoSConfigError::FileNotFoud);
}
let raw = file_result.unwrap();
let config_result = Config::load_from_string(&raw);
if config_result.is_err() {
log::error!("Unable to parse /etc/lqos.conf");
log::error!("Error: {:?}", config_result);
return Err(LibreQoSConfigError::ParseError(format!(
"{:?}",
config_result
)));
}
log::info!("Set cached version of config file");
*lock = Some(config_result.unwrap());
}
log::info!("Returning cached config");
Ok(lock.as_ref().unwrap().clone())
}
/// Enables LTS reporting in the configuration file.
pub fn enable_long_term_stats(license_key: String) -> Result<(), LibreQoSConfigError> {
let mut config = load_config()?;
let mut lock = CONFIG.lock().unwrap();
config.long_term_stats.gather_stats = true;
config.long_term_stats.collation_period_seconds = 60;
config.long_term_stats.license_key = Some(license_key);
if config.uisp_integration.enable_uisp {
config.long_term_stats.uisp_reporting_interval_seconds = Some(300);
}
// Write the file
let raw = toml::to_string_pretty(&config).unwrap();
std::fs::write("/etc/lqos.conf", raw).map_err(|_| LibreQoSConfigError::CannotWrite)?;
// Write the lock
*lock = Some(config);
Ok(())
}
#[derive(Debug, Error)]
pub enum LibreQoSConfigError {
#[error("Unable to read /etc/lqos.conf. See other errors for details.")]
CannotOpenEtcLqos,
#[error("Unable to locate (path to LibreQoS)/ispConfig.py. Check your path and that you have configured it.")]
FileNotFoud,
#[error("Unable to read the contents of ispConfig.py. Check file permissions.")]
CannotReadFile,
#[error("Unable to parse ispConfig.py")]
ParseError(String),
#[error("Could not backup configuration")]
CannotCopy,
#[error("Could not remove the previous configuration.")]
CannotRemove,
#[error("Could not open ispConfig.py for write")]
CannotOpenForWrite,
#[error("Unable to write to ispConfig.py")]
CannotWrite,
#[error("Unable to read IP")]
CannotReadIP,
}


@@ -0,0 +1,254 @@
//! This module utilizes PyO3 to read an existing ispConfig.py file, and
//! provide conversion services for the new, unified configuration target
//! for version 1.5.
use super::EtcLqos;
use pyo3::{prepare_freethreaded_python, Python};
use std::{
collections::HashMap,
fs::read_to_string,
path::{Path, PathBuf},
};
use thiserror::Error;
#[derive(Debug, Error)]
pub enum PythonMigrationError {
#[error("The ispConfig.py file does not exist.")]
ConfigFileNotFound,
#[error("Unable to parse variable")]
ParseError,
#[error("Variable not found")]
VariableNotFound(String),
}
fn isp_config_py_path(cfg: &EtcLqos) -> PathBuf {
let base_path = Path::new(&cfg.lqos_directory);
let final_path = base_path.join("ispConfig.py");
final_path
}
/// Does the ispConfig.py file exist?
fn config_exists(cfg: &EtcLqos) -> bool {
isp_config_py_path(&cfg).exists()
}
fn from_python<'a, T>(py: &'a Python, variable_name: &str) -> Result<T, PythonMigrationError>
where
T: pyo3::FromPyObject<'a>,
{
let result = py
.eval(variable_name, None, None)
.map_err(|_| PythonMigrationError::VariableNotFound(variable_name.to_string()))?
.extract::<T>()
.map_err(|_| PythonMigrationError::ParseError)?;
Ok(result)
}
#[derive(Default, Debug)]
pub struct PythonMigration {
pub sqm: String,
pub monitor_only_mode: bool,
pub upstream_bandwidth_capacity_download_mbps: u32,
pub upstream_bandwidth_capacity_upload_mbps: u32,
pub generated_pn_download_mbps: u32,
pub generated_pn_upload_mbps: u32,
pub interface_a: String,
pub interface_b: String,
pub queue_refresh_interval_mins: u32,
pub on_a_stick: bool,
pub stick_vlan_a: u32,
pub stick_vlan_b: u32,
pub enable_actual_shell_commands: bool,
pub run_shell_commands_as_sudo: bool,
pub queues_available_override: u32,
pub use_bin_packing_to_balance_cpu: bool,
pub influx_db_enabled: bool,
pub influx_db_url: String,
pub infux_db_bucket: String,
pub influx_db_org: String,
pub influx_db_token: String,
pub circuit_name_use_address: bool,
pub overwrite_network_json_always: bool,
pub ignore_subnets: Vec<String>,
pub allowed_subnets: Vec<String>,
pub automatic_import_splynx: bool,
pub splynx_api_key: String,
pub spylnx_api_secret: String,
pub spylnx_api_url: String,
pub automatic_import_uisp: bool,
pub uisp_auth_token: String,
pub uisp_base_url: String,
pub uisp_site: String,
pub uisp_strategy: String,
pub uisp_suspended_strategy: String,
pub airmax_capacity: f32,
pub ltu_capacity: f32,
pub exclude_sites: Vec<String>,
pub find_ipv6_using_mikrotik: bool,
pub bandwidth_overhead_factor: f32,
pub committed_bandwidth_multiplier: f32,
pub exception_cpes: HashMap<String, String>,
pub api_username: String,
pub api_password: String,
pub api_host_ip: String,
pub api_host_port: u32,
pub automatic_import_powercode: bool,
pub powercode_api_key: String,
pub powercode_api_url: String,
pub automatic_import_sonar: bool,
pub sonar_api_url: String,
pub sonar_api_key: String,
pub snmp_community: String,
pub sonar_airmax_ap_model_ids: Vec<String>,
pub sonar_ltu_ap_model_ids: Vec<String>,
pub sonar_active_status_ids: Vec<String>,
// TODO: httpRestIntegrationConfig
}
impl PythonMigration {
fn parse(cfg: &mut Self, py: &Python) -> Result<(), PythonMigrationError> {
cfg.sqm = from_python(&py, "sqm").unwrap_or("cake diffserv4".to_string());
cfg.monitor_only_mode = from_python(&py, "monitorOnlyMode").unwrap_or(false);
cfg.upstream_bandwidth_capacity_download_mbps =
from_python(&py, "upstreamBandwidthCapacityDownloadMbps").unwrap_or(1000);
cfg.upstream_bandwidth_capacity_upload_mbps =
from_python(&py, "upstreamBandwidthCapacityUploadMbps").unwrap_or(1000);
cfg.generated_pn_download_mbps = from_python(&py, "generatedPNDownloadMbps").unwrap_or(1000);
cfg.generated_pn_upload_mbps = from_python(&py, "generatedPNUploadMbps").unwrap_or(1000);
cfg.interface_a = from_python(&py, "interfaceA").unwrap_or("eth1".to_string());
cfg.interface_b = from_python(&py, "interfaceB").unwrap_or("eth2".to_string());
cfg.queue_refresh_interval_mins = from_python(&py, "queueRefreshIntervalMins").unwrap_or(15);
cfg.on_a_stick = from_python(&py, "OnAStick").unwrap_or(false);
cfg.stick_vlan_a = from_python(&py, "StickVlanA").unwrap_or(0);
cfg.stick_vlan_b = from_python(&py, "StickVlanB").unwrap_or(0);
cfg.enable_actual_shell_commands = from_python(&py, "enableActualShellCommands").unwrap_or(true);
cfg.run_shell_commands_as_sudo = from_python(&py, "runShellCommandsAsSudo").unwrap_or(false);
cfg.queues_available_override = from_python(&py, "queuesAvailableOverride").unwrap_or(0);
cfg.use_bin_packing_to_balance_cpu = from_python(&py, "useBinPackingToBalanceCPU").unwrap_or(false);
// Influx
cfg.influx_db_enabled = from_python(&py, "influxDBEnabled").unwrap_or(false);
cfg.influx_db_url = from_python(&py, "influxDBurl").unwrap_or("http://localhost:8086".to_string());
cfg.infux_db_bucket = from_python(&py, "influxDBBucket").unwrap_or("libreqos".to_string());
cfg.influx_db_org = from_python(&py, "influxDBOrg").unwrap_or("Your ISP Name Here".to_string());
cfg.influx_db_token = from_python(&py, "influxDBtoken").unwrap_or("".to_string());
// Common
cfg.circuit_name_use_address = from_python(&py, "circuitNameUseAddress").unwrap_or(true);
cfg.overwrite_network_json_always = from_python(&py, "overwriteNetworkJSONalways").unwrap_or(false);
cfg.ignore_subnets = from_python(&py, "ignoreSubnets").unwrap_or(vec!["192.168.0.0/16".to_string()]);
cfg.allowed_subnets = from_python(&py, "allowedSubnets").unwrap_or(vec!["100.64.0.0/10".to_string()]);
cfg.exclude_sites = from_python(&py, "excludeSites").unwrap_or(vec![]);
cfg.find_ipv6_using_mikrotik = from_python(&py, "findIPv6usingMikrotik").unwrap_or(false);
// Spylnx
cfg.automatic_import_splynx = from_python(&py, "automaticImportSplynx").unwrap_or(false);
cfg.splynx_api_key = from_python(&py, "splynx_api_key").unwrap_or("Your API Key Here".to_string());
cfg.spylnx_api_secret = from_python(&py, "splynx_api_secret").unwrap_or("Your API Secret Here".to_string());
cfg.spylnx_api_url = from_python(&py, "splynx_api_url").unwrap_or("https://your.splynx.url/api/v1".to_string());
// UISP
cfg.automatic_import_uisp = from_python(&py, "automaticImportUISP").unwrap_or(false);
cfg.uisp_auth_token = from_python(&py, "uispAuthToken").unwrap_or("Your API Token Here".to_string());
cfg.uisp_base_url = from_python(&py, "UISPbaseURL").unwrap_or("https://your.uisp.url".to_string());
cfg.uisp_site = from_python(&py, "uispSite").unwrap_or("Your parent site name here".to_string());
cfg.uisp_strategy = from_python(&py, "uispStrategy").unwrap_or("full".to_string());
cfg.uisp_suspended_strategy = from_python(&py, "uispSuspendedStrategy").unwrap_or("none".to_string());
cfg.airmax_capacity = from_python(&py, "airMax_capacity").unwrap_or(0.65);
cfg.ltu_capacity = from_python(&py, "ltu_capacity").unwrap_or(0.9);
cfg.bandwidth_overhead_factor = from_python(&py, "bandwidthOverheadFactor").unwrap_or(1.0);
cfg.committed_bandwidth_multiplier = from_python(&py, "committedBandwidthMultiplier").unwrap_or(0.98);
cfg.exception_cpes = from_python(&py, "exceptionCPEs").unwrap_or(HashMap::new());
// API
cfg.api_username = from_python(&py, "apiUsername").unwrap_or("testUser".to_string());
cfg.api_password = from_python(&py, "apiPassword").unwrap_or("testPassword".to_string());
cfg.api_host_ip = from_python(&py, "apiHostIP").unwrap_or("127.0.0.1".to_string());
cfg.api_host_port = from_python(&py, "apiHostPost").unwrap_or(5000);
// Powercode
cfg.automatic_import_powercode = from_python(&py, "automaticImportPowercode").unwrap_or(false);
cfg.powercode_api_key = from_python(&py,"powercode_api_key").unwrap_or("".to_string());
cfg.powercode_api_url = from_python(&py,"powercode_api_url").unwrap_or("".to_string());
// Sonar
cfg.automatic_import_sonar = from_python(&py, "automaticImportSonar").unwrap_or(false);
cfg.sonar_api_key = from_python(&py, "sonar_api_key").unwrap_or("".to_string());
cfg.sonar_api_url = from_python(&py, "sonar_api_url").unwrap_or("".to_string());
cfg.snmp_community = from_python(&py, "snmp_community").unwrap_or("public".to_string());
cfg.sonar_active_status_ids = from_python(&py, "sonar_active_status_ids").unwrap_or(vec![]);
cfg.sonar_airmax_ap_model_ids = from_python(&py, "sonar_airmax_ap_model_ids").unwrap_or(vec![]);
cfg.sonar_ltu_ap_model_ids = from_python(&py, "sonar_ltu_ap_model_ids").unwrap_or(vec![]);
// InfluxDB
cfg.influx_db_enabled = from_python(&py, "influxDBEnabled").unwrap_or(false);
cfg.influx_db_url = from_python(&py, "influxDBurl").unwrap_or("http://localhost:8086".to_string());
cfg.infux_db_bucket = from_python(&py, "influxDBBucket").unwrap_or("libreqos".to_string());
cfg.influx_db_org = from_python(&py, "influxDBOrg").unwrap_or("Your ISP Name Here".to_string());
cfg.influx_db_token = from_python(&py, "influxDBtoken").unwrap_or("".to_string());
Ok(())
}
pub fn load() -> Result<Self, PythonMigrationError> {
let mut old_config = Self::default();
if let Ok(cfg) = crate::etc::EtcLqos::load() {
if !config_exists(&cfg) {
return Err(PythonMigrationError::ConfigFileNotFound);
}
let code = read_to_string(isp_config_py_path(&cfg)).unwrap();
prepare_freethreaded_python();
Python::with_gil(|py| {
py.run(&code, None, None).unwrap();
let result = Self::parse(&mut old_config, &py);
if result.is_err() {
println!("Error parsing Python config: {:?}", result);
}
});
} else {
return Err(PythonMigrationError::ConfigFileNotFound);
}
Ok(old_config)
}
#[allow(dead_code)]
pub(crate) fn load_from_string(s: &str) -> Result<Self, PythonMigrationError> {
let mut old_config = Self::default();
prepare_freethreaded_python();
Python::with_gil(|py| {
py.run(s, None, None).unwrap();
let result = Self::parse(&mut old_config, &py);
if result.is_err() {
println!("Error parsing Python config: {:?}", result);
}
});
Ok(old_config)
}
}
#[cfg(test)]
mod test {
use super::super::test_data::*;
use super::*;
#[test]
fn test_parsing_the_default() {
let mut cfg = PythonMigration::default();
prepare_freethreaded_python();
let mut worked = true;
Python::with_gil(|py| {
py.run(PYTHON_CONFIG, None, None).unwrap();
let result = PythonMigration::parse(&mut cfg, &py);
if result.is_err() {
println!("Error parsing Python config: {:?}", result);
worked = false;
}
});
assert!(worked)
}
}

View File

@ -1,10 +1,60 @@
pub const OLD_CONFIG: &str = "
# This file *must* be installed in `/etc/lqos.conf`.
# Change the values to match your setup.
# Where is LibreQoS installed?
lqos_directory = '/home/herbert/Rust/LibreQoS/libreqos/LibreQoS/src'
queue_check_period_ms = 1000
packet_capture_time = 10 # Number of seconds to capture packets in an analysis session
node_id = \"aee0eb53606ef621d386ef5fcfa0d72a5ba64fccd36df2997695dfb6d418c64b\"
[usage_stats]
send_anonymous = true
anonymous_server = \"127.0.0.1:9125\"
[tuning]
# IRQ balance breaks XDP_Redirect, which we use. Recommended to leave as true.
stop_irq_balance = false
netdev_budget_usecs = 8000
netdev_budget_packets = 300
rx_usecs = 8
tx_usecs = 8
disable_rxvlan = true
disable_txvlan = true
# What offload types should be disabled on the NIC. The defaults are recommended here.
disable_offload = [ \"gso\", \"tso\", \"lro\", \"sg\", \"gro\" ]
# For a two interface setup, use the following - and replace
# \"enp1s0f1\" and \"enp1s0f2\" with your network card names (obtained
# from `ip link`):
[bridge]
use_xdp_bridge = false
interface_mapping = [
{ name = \"veth_toexternal\", redirect_to = \"veth_tointernal\", scan_vlans = false },
{ name = \"veth_tointernal\", redirect_to = \"veth_toexternal\", scan_vlans = false }
]
vlan_mapping = []
# For \"on a stick\" (single interface mode):
# [bridge]
# use_xdp_bridge = true
# interface_mapping = [
# { name = \"enp1s0f1\", redirect_to = \"enp1s0f1\", scan_vlans = true }
# ]
# vlan_mapping = [
# { parent = \"enp1s0f1\", tag = 3, redirect_to = 4 },
# { parent = \"enp1s0f1\", tag = 4, redirect_to = 3 }
# ]
";
pub const PYTHON_CONFIG : &str = "
# 'fq_codel' or 'cake diffserv4'
# 'cake diffserv4' is recommended
# sqm = 'fq_codel'
sqm = 'cake diffserv4'
# Used to passively monitor the network for before / after comparisons. Leave as False to
# ensure actual shaping. After changing this value, run \"sudo systemctl restart LibreQoS.service\"
monitorOnlyMode = False
# How many Mbps are available to the edge of this network.
@ -22,15 +72,15 @@ generatedPNDownloadMbps = 1000
generatedPNUploadMbps = 1000
# Interface connected to core router
interfaceA = 'veth_tointernal'
# Interface connected to edge router
interfaceB = 'veth_toexternal'
# Queue refresh scheduler (lqos_scheduler). Minutes between reloads.
queueRefreshIntervalMins = 30
# WORK IN PROGRESS. Note that interfaceA determines the \"stick\" interface
# I could only get scanning to work if I issued ethtool -K enp1s0f1 rxvlan off
OnAStick = False
# VLAN facing the core router
@ -60,10 +110,10 @@ useBinPackingToBalanceCPU = False
# Bandwidth & Latency Graphing
influxDBEnabled = True
influxDBurl = \"http://localhost:8086\"
influxDBBucket = \"libreqos\"
influxDBOrg = \"Your ISP Name Here\"
influxDBtoken = \"\"
# NMS/CRM Integration
@ -77,23 +127,6 @@ overwriteNetworkJSONalways = False
ignoreSubnets = ['192.168.0.0/16']
allowedSubnets = ['100.64.0.0/10']
# Powercode Integration
automaticImportPowercode = False
powercode_api_key = ''
# Everything before :444/api/ in your Powercode instance URL
powercode_api_url = ''
# Sonar Integration
automaticImportSonar = False
sonar_api_key = ''
sonar_api_url = '' # ex 'https://company.sonar.software/api/graphql'
# If there are radios in these lists, we will try to get the clients using snmp. This requires snmpwalk to be installed on the server. You can use "sudo apt-get install snmp" for that. You will also need to fill in the snmp_community.
sonar_airmax_ap_model_ids = [] # ex ['29','43']
sonar_ltu_ap_model_ids = [] # ex ['4']
snmp_community = ''
# This is for all account statuses where we should be applying QoS. If you leave it blank, we'll use any status in account marked with "Activates Account" in Sonar.
sonar_active_status_ids = []
# Splynx Integration
automaticImportSplynx = False
splynx_api_key = ''
@ -101,18 +134,6 @@ splynx_api_secret = ''
# Everything before /api/2.0/ on your Splynx instance
splynx_api_url = 'https://YOUR_URL.splynx.app'
#Sonar Integration
automaticImportSonar = False
sonar_api_key = ''
sonar_api_url = '' # ex 'https://company.sonar.software/api/graphql'
# If there are radios in these lists, we will try to get the clients using snmp. This requires snmpwalk to be installed on the server. You can use "sudo apt-get install snmp" for that. You will also need to fill in the snmp_community.
sonar_airmax_ap_model_ids = [] # ex ['29','43']
sonar_ltu_ap_model_ids = [] # ex ['4']
snmp_community = ''
# This is for all account statuses where we should be applying QoS. If you leave it blank, we'll use any status in account marked with "Activates Account" in Sonar.
sonar_active_status_ids = []
# UISP integration
automaticImportUISP = False
uispAuthToken = ''
@ -122,16 +143,16 @@ UISPbaseURL = 'https://examplesite.com'
# to act as the starting point for the tree mapping
uispSite = ''
# Strategy:
# * \"flat\" - create all client sites directly off the top of the tree,
# provides maximum performance - at the expense of not offering AP,
# or site options.
# * \"full\" - build a complete network map
uispStrategy = \"full\"
# Handling of UISP suspensions:
# * \"none\" - do not handle suspensions
# * \"ignore\" - do not add suspended customers to the network map
# * \"slow\" - limit suspended customers to 1mbps
uispSuspendedStrategy = \"none\"
# Assumed capacity of AirMax and LTU radios vs reported capacity by UISP. For example, 65% would be 0.65.
# For AirMax, this applies to flexible frame only. AirMax fixed frame will have capacity based on ratio.
airMax_capacity = 0.65
@ -154,9 +175,9 @@ exceptionCPEs = {}
# }
# API Auth
apiUsername = \"testUser\"
apiPassword = \"changeme8343486806\"
apiHostIP = \"127.0.0.1\"
apiHostPost = 5000
@ -167,14 +188,15 @@ httpRestIntegrationConfig = {
'shaperURI': '/some/path/etc',
'requestsConfig': {
'verify': True, # Good for Dev if your dev env doesn't have cert
'params': { # params for query string ie uri?some-arg=some-value
'search': 'hold-my-beer'
},
#'headers': {
# 'Origin': 'SomeHeaderValue',
#},
},
# If you want to store a timestamped copy/backup of both network.json and Shaper.csv each time they are updated,
# provide a path
# 'logChanges': '/var/log/libreqos'
}
";

View File

@ -0,0 +1,22 @@
//! Anonymous statistics section of the configuration
//! file.
use serde::{Deserialize, Serialize};
#[derive(Clone, Serialize, Deserialize, Debug)]
pub struct UsageStats {
/// Are we allowed to send stats at all?
pub send_anonymous: bool,
/// Where do we send them?
pub anonymous_server: String,
}
impl Default for UsageStats {
fn default() -> Self {
Self {
send_anonymous: true,
anonymous_server: "stats.libreqos.io:9125".to_string(),
}
}
}

View File

@ -0,0 +1,50 @@
//! Defines a two-interface bridge configuration.
//! A config file must contain EITHER this, or a `single_interface`
//! section, but not both.
use serde::{Deserialize, Serialize};
/// Represents a two-interface bridge configuration.
#[derive(Clone, Serialize, Deserialize, Debug)]
pub struct BridgeConfig {
/// Use the XDP-accelerated bridge?
pub use_xdp_bridge: bool,
/// The name of the first interface, facing the Internet
pub to_internet: String,
/// The name of the second interface, facing the LAN
pub to_network: String,
}
impl Default for BridgeConfig {
fn default() -> Self {
Self {
use_xdp_bridge: true,
to_internet: "eth0".to_string(),
to_network: "eth1".to_string(),
}
}
}
#[derive(Clone, Serialize, Deserialize, Debug)]
pub struct SingleInterfaceConfig {
/// The name of the interface
pub interface: String,
/// The VLAN ID facing the Internet
pub internet_vlan: u32,
/// The VLAN ID facing the LAN
pub network_vlan: u32,
}
impl Default for SingleInterfaceConfig {
fn default() -> Self {
Self {
interface: "eth0".to_string(),
internet_vlan: 2,
network_vlan: 3,
}
}
}
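
In `/etc/lqos.conf` terms, the two mutually exclusive alternatives look like this (interface names and VLAN IDs here are the struct defaults above, not recommendations):

```toml
# EITHER a two-interface bridge:
[bridge]
use_xdp_bridge = true
to_internet = "eth0"
to_network = "eth1"

# OR single-interface ("on a stick") mode, but never both:
#[single_interface]
#interface = "eth0"
#internet_vlan = 2
#network_vlan = 3
```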

View File

@ -0,0 +1,102 @@
version = "1.5"
lqos_directory = "/opt/libreqos/src"
node_id = "0000-0000-0000"
node_name = "Example Node"
packet_capture_time = 10
queue_check_period_ms = 1000
[usage_stats]
send_anonymous = true
anonymous_server = "stats.libreqos.io:9125"
[tuning]
stop_irq_balance = true
netdev_budget_usecs = 8000
netdev_budget_packets = 300
rx_usecs = 8
tx_usecs = 8
disable_rxvlan = true
disable_txvlan = true
disable_offload = [ "gso", "tso", "lro", "sg", "gro" ]
# EITHER:
[bridge]
use_xdp_bridge = true
to_internet = "eth0"
to_network = "eth1"
# OR:
#[single_interface]
#interface = "eth0"
#internet_vlan = 2
#network_vlan = 3
[queues]
default_sqm = "cake diffserv4"
monitor_only = false
uplink_bandwidth_mbps = 1000
downlink_bandwidth_mbps = 1000
generated_pn_download_mbps = 1000
generated_pn_upload_mbps = 1000
dry_run = false
sudo = false
#override_available_queues = 12 # This can be omitted and be 0 for Python
use_binpacking = false
[long_term_stats]
gather_stats = true
collation_period_seconds = 10
license_key = "(data)"
uisp_reporting_interval_seconds = 300
[ip_ranges]
ignore_subnets = []
allow_subnets = [ "172.16.0.0/12", "10.0.0.0/8", "100.64.0.0/10", "192.168.0.0/16" ]

[integration_common]
circuit_name_as_address = false
always_overwrite_network_json = false
queue_refresh_interval_mins = 30
[spylnx_integration]
enable_spylnx = false
api_key = ""
api_secret = ""
url = ""
[uisp_integration]
enable_uisp = false
token = ""
url = ""
site = ""
strategy = ""
suspended_strategy = ""
airmax_capacity = 0.65
ltu_capacity = 0.9
exclude_sites = []
ipv6_with_mikrotik = false
bandwidth_overhead_factor = 1.0
commit_bandwidth_multiplier = 0.98
exception_cpes = []
use_ptmp_as_parent = false
[powercode_integration]
enable_powercode = false
powercode_api_key = ""
powercode_api_url = ""
[sonar_integration]
enable_sonar = false
sonar_api_key = ""
sonar_api_url = ""
snmp_community = "public"
airmax_model_ids = [ "" ]
ltu_model_ids = [ "" ]
active_status_ids = [ "" ]
[influxdb]
enable_influxdb = false
url = "http://localhost:8086"
org = "Your ISP Name Here"
bucket = "libreqos"
token = ""
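
As a minimal sketch of the checks that `Config::validate()` applies to a file like this one. The `validate` function below is illustrative only, not the crate's actual API:

```rust
// Sketch: the invariants enforced on /etc/lqos.conf (illustrative names).
fn validate(version: &str, has_bridge: bool, has_single_interface: bool) -> Result<(), String> {
    // A config may define [bridge] or [single_interface], never both.
    if has_bridge && has_single_interface {
        return Err("config may not contain both [bridge] and [single_interface]".to_string());
    }
    // Only the "1.5" schema version is accepted.
    if version.trim() != "1.5" {
        return Err(format!("unsupported config version [{}]", version));
    }
    Ok(())
}

fn main() {
    // The example file above: version "1.5", [bridge] present,
    // [single_interface] commented out.
    assert!(validate("1.5", true, false).is_ok());
    assert!(validate("1.5", true, true).is_err());
    assert!(validate("1.4", true, false).is_err());
}
```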

View File

@ -0,0 +1,22 @@
use serde::{Serialize, Deserialize};
#[derive(Clone, Serialize, Deserialize, Debug)]
pub struct InfluxDbConfig {
pub enable_influxdb: bool,
pub url: String,
pub bucket: String,
pub org: String,
pub token: String,
}
impl Default for InfluxDbConfig {
fn default() -> Self {
Self {
enable_influxdb: false,
url: "http://localhost:8086".to_string(),
bucket: "libreqos".to_string(),
org: "Your ISP Name".to_string(),
token: "".to_string()
}
}
}

View File

@ -0,0 +1,25 @@
//! Common integration variables, shared between integrations
use serde::{Deserialize, Serialize};
#[derive(Clone, Serialize, Deserialize, Debug)]
pub struct IntegrationConfig {
/// Replace names with addresses?
pub circuit_name_as_address: bool,
/// Always overwrite network.json?
pub always_overwrite_network_json: bool,
/// Queue refresh interval in minutes
pub queue_refresh_interval_mins: u32,
}
impl Default for IntegrationConfig {
fn default() -> Self {
Self {
circuit_name_as_address: false,
always_overwrite_network_json: false,
queue_refresh_interval_mins: 30,
}
}
}

View File

@ -0,0 +1,21 @@
use serde::{Serialize, Deserialize};
#[derive(Clone, Serialize, Deserialize, Debug)]
pub struct IpRanges {
pub ignore_subnets: Vec<String>,
pub allow_subnets: Vec<String>,
}
impl Default for IpRanges {
fn default() -> Self {
Self {
ignore_subnets: vec![],
allow_subnets: vec![
"172.16.0.0/12".to_string(),
"10.0.0.0/8".to_string(),
"100.64.0.0/10".to_string(),
"192.168.0.0/16".to_string(),
],
}
}
}

View File

@ -0,0 +1,34 @@
//! Defines configuration for the LTS project
use serde::{Serialize, Deserialize};
#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct LongTermStats {
/// Should we store long-term stats at all?
pub gather_stats: bool,
/// How frequently should stats be accumulated into a long-term
/// min/max/avg format per tick?
pub collation_period_seconds: u32,
/// The license key for submitting stats to a LibreQoS hosted
/// statistics server
pub license_key: Option<String>,
/// UISP reporting period (in seconds). UISP queries can be slow,
/// so hitting it every second or 10 seconds is going to cause problems
/// for some people. A good default may be 5 minutes. Not specifying this
/// disables UISP integration.
pub uisp_reporting_interval_seconds: Option<u64>,
}
impl Default for LongTermStats {
fn default() -> Self {
Self {
gather_stats: false,
collation_period_seconds: 10,
license_key: None,
uisp_reporting_interval_seconds: None,
}
}
}

View File

@ -0,0 +1,19 @@
//! Handles the 1.5.0 configuration file format.
mod top_config;
pub use top_config::Config;
mod anonymous_stats;
mod tuning;
mod bridge;
mod long_term_stats;
mod queues;
mod integration_common;
mod ip_ranges;
mod spylnx_integration;
mod uisp_integration;
mod powercode_integration;
mod sonar_integration;
mod influxdb;
pub use bridge::*;
pub use long_term_stats::LongTermStats;
pub use tuning::Tunables;

View File

@ -0,0 +1,18 @@
use serde::{Serialize, Deserialize};
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct PowercodeIntegration {
pub enable_powercode: bool,
pub powercode_api_key: String,
pub powercode_api_url: String,
}
impl Default for PowercodeIntegration {
fn default() -> Self {
PowercodeIntegration {
enable_powercode: false,
powercode_api_key: "".to_string(),
powercode_api_url: "".to_string(),
}
}
}

View File

@ -0,0 +1,54 @@
//! Queue Generation definitions (originally from ispConfig.py)
use serde::{Serialize, Deserialize};
#[derive(Clone, Serialize, Deserialize, Debug)]
pub struct QueueConfig {
/// Which SQM to use by default
pub default_sqm: String,
/// Should we monitor only, and not shape traffic?
pub monitor_only: bool,
/// Upstream bandwidth total - upload
pub uplink_bandwidth_mbps: u32,
/// Downstream bandwidth total - download
pub downlink_bandwidth_mbps: u32,
/// Download bandwidth offered by a generated parent node
pub generated_pn_download_mbps: u32,
/// Upload bandwidth offered by a generated parent node
pub generated_pn_upload_mbps: u32,
/// Should shell commands actually execute, or just be printed?
pub dry_run: bool,
/// Should `sudo` be prefixed on commands?
pub sudo: bool,
/// Should we override the number of available queues?
pub override_available_queues: Option<u32>,
/// Should we invoke the binpacking algorithm to optimize flat
/// networks?
pub use_binpacking: bool,
}
impl Default for QueueConfig {
fn default() -> Self {
Self {
default_sqm: "cake diffserv4".to_string(),
monitor_only: false,
uplink_bandwidth_mbps: 1_000,
downlink_bandwidth_mbps: 1_000,
generated_pn_download_mbps: 1_000,
generated_pn_upload_mbps: 1_000,
dry_run: false,
sudo: false,
override_available_queues: None,
use_binpacking: false,
}
}
}

View File

@ -0,0 +1,26 @@
use serde::{Serialize, Deserialize};
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct SonarIntegration {
pub enable_sonar: bool,
pub sonar_api_url: String,
pub sonar_api_key: String,
pub snmp_community: String,
pub airmax_model_ids: Vec<String>,
pub ltu_model_ids: Vec<String>,
pub active_status_ids: Vec<String>,
}
impl Default for SonarIntegration {
fn default() -> Self {
SonarIntegration {
enable_sonar: false,
sonar_api_url: "".to_string(),
sonar_api_key: "".to_string(),
snmp_community: "public".to_string(),
airmax_model_ids: vec![],
ltu_model_ids: vec![],
active_status_ids: vec![],
}
}
}

View File

@ -0,0 +1,20 @@
use serde::{Serialize, Deserialize};
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct SplynxIntegration {
pub enable_spylnx: bool,
pub api_key: String,
pub api_secret: String,
pub url: String,
}
impl Default for SplynxIntegration {
fn default() -> Self {
SplynxIntegration {
enable_spylnx: false,
api_key: "".to_string(),
api_secret: "".to_string(),
url: "".to_string(),
}
}
}

View File

@ -0,0 +1,187 @@
//! Top-level configuration file for LibreQoS.
use super::anonymous_stats::UsageStats;
use super::tuning::Tunables;
use serde::{Deserialize, Serialize};
use sha2::digest::Update;
use sha2::Digest;
use uuid::Uuid;
/// Top-level configuration file for LibreQoS.
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct Config {
/// Version number for the configuration file.
/// This will be set to "1.5". Versioning will make
/// it easier to handle schema updates moving forward.
pub version: String,
/// Directory in which LibreQoS is installed
pub lqos_directory: String,
/// Node ID - uniquely identifies this shaper.
pub node_id: String,
/// Node name - human-readable name for this shaper.
pub node_name: String,
/// Packet capture time
pub packet_capture_time: usize,
/// Queue refresh interval
pub queue_check_period_ms: u64,
/// Anonymous usage statistics
pub usage_stats: UsageStats,
/// Tuning instructions
pub tuning: Tunables,
/// Bridge configuration
pub bridge: Option<super::bridge::BridgeConfig>,
/// Single-interface configuration
pub single_interface: Option<super::bridge::SingleInterfaceConfig>,
/// Queue Definition data (originally from ispConfig.py)
pub queues: super::queues::QueueConfig,
/// Long-term stats configuration
pub long_term_stats: super::long_term_stats::LongTermStats,
/// IP Range definitions
pub ip_ranges: super::ip_ranges::IpRanges,
/// Integration Common Variables
pub integration_common: super::integration_common::IntegrationConfig,
/// Splynx Integration
pub spylnx_integration: super::spylnx_integration::SplynxIntegration,
/// UISP Integration
pub uisp_integration: super::uisp_integration::UispIntegration,
/// Powercode Integration
pub powercode_integration: super::powercode_integration::PowercodeIntegration,
/// Sonar Integration
pub sonar_integration: super::sonar_integration::SonarIntegration,
/// InfluxDB Configuration
pub influxdb: super::influxdb::InfluxDbConfig,
}
impl Config {
/// Calculate a node ID based on the machine ID. If the machine ID is unavailable,
/// generate a random UUID.
pub fn calculate_node_id() -> String {
if let Ok(machine_id) = std::fs::read_to_string("/etc/machine-id") {
let hash = sha2::Sha256::new().chain(machine_id).finalize();
format!("{:x}", hash)
} else {
Uuid::new_v4().to_string()
}
}
/// Test whether a configuration is valid.
pub fn validate(&self) -> Result<(), String> {
if self.bridge.is_some() && self.single_interface.is_some() {
return Err(
"Configuration file may not contain both a bridge and a single-interface section."
.to_string(),
);
}
if self.version.trim() != "1.5" {
return Err(format!(
"Configuration file is at version [{}], but this version of lqos only supports version 1.5",
self.version
));
}
if self.node_id.is_empty() {
return Err("Node ID must be set".to_string());
}
Ok(())
}
/// Loads a config file from a string (used for testing only)
#[allow(dead_code)]
pub fn load_from_string(s: &str) -> Result<Self, String> {
let config: Config = toml::from_str(s).map_err(|e| format!("Error parsing config: {}", e))?;
config.validate()?;
Ok(config)
}
}
impl Default for Config {
fn default() -> Self {
Self {
version: "1.5".to_string(),
lqos_directory: "/opt/libreqos/src".to_string(),
node_id: Self::calculate_node_id(),
node_name: "LibreQoS".to_string(),
usage_stats: UsageStats::default(),
tuning: Tunables::default(),
bridge: Some(super::bridge::BridgeConfig::default()),
single_interface: None,
queues: super::queues::QueueConfig::default(),
long_term_stats: super::long_term_stats::LongTermStats::default(),
ip_ranges: super::ip_ranges::IpRanges::default(),
integration_common: super::integration_common::IntegrationConfig::default(),
spylnx_integration: super::spylnx_integration::SplynxIntegration::default(),
uisp_integration: super::uisp_integration::UispIntegration::default(),
powercode_integration: super::powercode_integration::PowercodeIntegration::default(),
sonar_integration: super::sonar_integration::SonarIntegration::default(),
influxdb: super::influxdb::InfluxDbConfig::default(),
packet_capture_time: 10,
queue_check_period_ms: 1000,
}
}
}
impl Config {
/// Calculate the interface facing the Internet
pub fn internet_interface(&self) -> String {
if let Some(bridge) = &self.bridge {
bridge.to_internet.clone()
} else if let Some(single_interface) = &self.single_interface {
single_interface.interface.clone()
} else {
panic!("No internet interface configured")
}
}
/// Calculate the interface facing the ISP
pub fn isp_interface(&self) -> String {
if let Some(bridge) = &self.bridge {
bridge.to_network.clone()
} else if let Some(single_interface) = &self.single_interface {
single_interface.interface.clone()
} else {
panic!("No ISP interface configured")
}
}
/// Are we in single-interface mode?
pub fn on_a_stick_mode(&self) -> bool {
self.bridge.is_none()
}
/// Get the VLANs for the stick interface
pub fn stick_vlans(&self) -> (u32, u32) {
if let Some(stick) = &self.single_interface {
(stick.network_vlan, stick.internet_vlan)
} else {
(0, 0)
}
}
}
#[cfg(test)]
mod test {
use super::Config;
#[test]
fn load_example() {
let config = Config::load_from_string(include_str!("example.toml")).unwrap();
assert_eq!(config.version, "1.5");
}
}
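
A standalone sketch of the interface-resolution logic above, using simplified stand-in types rather than the crate's real structs: the bridge interface wins, with single-interface mode as the fallback, and `None` here stands in for the case where the real method panics.

```rust
// Simplified stand-ins for BridgeConfig and SingleInterfaceConfig.
struct Bridge { to_internet: String }
struct Stick { interface: String }

// Mirrors Config::internet_interface(): bridge first, then single-interface.
fn internet_interface(bridge: Option<&Bridge>, stick: Option<&Stick>) -> Option<String> {
    bridge
        .map(|b| b.to_internet.clone())
        .or_else(|| stick.map(|s| s.interface.clone()))
}

fn main() {
    let bridge = Bridge { to_internet: "eth0".to_string() };
    let stick = Stick { interface: "eth2".to_string() };
    assert_eq!(internet_interface(Some(&bridge), None).as_deref(), Some("eth0"));
    assert_eq!(internet_interface(None, Some(&stick)).as_deref(), Some("eth2"));
    assert_eq!(internet_interface(None, None), None);
}
```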

View File

@ -0,0 +1,54 @@
//! Interface tuning instructions
use serde::{Deserialize, Serialize};
/// Represents a set of `sysctl` and `ethtool` tweaks that may be
/// applied (in place of the previous version's offload service)
#[derive(Clone, Serialize, Deserialize, Debug, PartialEq, Eq)]
pub struct Tunables {
/// Should the `irq_balance` system service be stopped?
pub stop_irq_balance: bool,
/// Set the netdev budget (usecs)
pub netdev_budget_usecs: u32,
/// Set the netdev budget (packets)
pub netdev_budget_packets: u32,
/// Set the RX side polling frequency
pub rx_usecs: u32,
/// Set the TX side polling frequency
pub tx_usecs: u32,
/// Disable RXVLAN offloading? You generally want to do this.
pub disable_rxvlan: bool,
/// Disable TXVLAN offloading? You generally want to do this.
pub disable_txvlan: bool,
/// A list of `ethtool` offloads to be disabled.
/// The default list is: [ "gso", "tso", "lro", "sg", "gro" ]
pub disable_offload: Vec<String>,
}
impl Default for Tunables {
fn default() -> Self {
Self {
stop_irq_balance: true,
netdev_budget_usecs: 8000,
netdev_budget_packets: 300,
rx_usecs: 8,
tx_usecs: 8,
disable_rxvlan: true,
disable_txvlan: true,
disable_offload: vec![
"gso".to_string(),
"tso".to_string(),
"lro".to_string(),
"sg".to_string(),
"gro".to_string(),
],
}
}
}
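
Serialized with these defaults, the struct corresponds to the `[tuning]` section of `/etc/lqos.conf` shown in the example file earlier in this PR; field names map one-to-one:

```toml
[tuning]
stop_irq_balance = true
netdev_budget_usecs = 8000
netdev_budget_packets = 300
rx_usecs = 8
tx_usecs = 8
disable_rxvlan = true
disable_txvlan = true
disable_offload = [ "gso", "tso", "lro", "sg", "gro" ]
```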

View File

@ -0,0 +1,46 @@
use serde::{Serialize, Deserialize};
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct UispIntegration {
pub enable_uisp: bool,
pub token: String,
pub url: String,
pub site: String,
pub strategy: String,
pub suspended_strategy: String,
pub airmax_capacity: f32,
pub ltu_capacity: f32,
pub exclude_sites: Vec<String>,
pub ipv6_with_mikrotik: bool,
pub bandwidth_overhead_factor: f32,
pub commit_bandwidth_multiplier: f32,
pub exception_cpes: Vec<ExceptionCpe>,
pub use_ptmp_as_parent: bool,
}
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct ExceptionCpe {
pub cpe: String,
pub parent: String,
}
impl Default for UispIntegration {
fn default() -> Self {
UispIntegration {
enable_uisp: false,
token: "".to_string(),
url: "".to_string(),
site: "".to_string(),
strategy: "".to_string(),
suspended_strategy: "".to_string(),
airmax_capacity: 0.0,
ltu_capacity: 0.0,
exclude_sites: vec![],
ipv6_with_mikrotik: false,
bandwidth_overhead_factor: 1.0,
commit_bandwidth_multiplier: 1.0,
exception_cpes: vec![],
use_ptmp_as_parent: false,
}
}
}

View File

@ -8,14 +8,12 @@
#![warn(missing_docs)]
mod authentication;
mod etc;
mod libre_qos_config;
mod network_json;
mod program_control;
mod shaped_devices;
pub use authentication::{UserRole, WebUsers};
pub use etc::{load_config, Config, enable_long_term_stats, Tunables, BridgeConfig};
pub use libre_qos_config::LibreQoSConfig;
pub use network_json::{NetworkJson, NetworkJsonNode, NetworkJsonTransport};
pub use program_control::load_libreqos;
pub use shaped_devices::{ConfigShapedDevices, ShapedDevice};

View File

@ -1,541 +0,0 @@
//! `ispConfig.py` is part of the Python side of LibreQoS. This module
//! reads, writes and maps values from the Python file.
use crate::etc;
use ip_network::IpNetwork;
use log::error;
use serde::{Deserialize, Serialize};
use std::{
fs::{self, read_to_string, remove_file, OpenOptions},
io::Write,
net::IpAddr,
path::{Path, PathBuf},
};
use thiserror::Error;
/// Represents the contents of an `ispConfig.py` file.
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct LibreQoSConfig {
/// Interface facing the Internet
pub internet_interface: String,
/// Interface facing the ISP Core Router
pub isp_interface: String,
/// Are we in "on a stick" (single interface) mode?
pub on_a_stick_mode: bool,
/// If we are, which VLAN represents which direction?
/// In (internet, ISP) order.
pub stick_vlans: (u16, u16),
/// The value of the SQM field from `ispConfig.py`
pub sqm: String,
/// Are we in monitor-only mode (not shaping)?
pub monitor_mode: bool,
/// Total available download (in Mbps)
pub total_download_mbps: u32,
/// Total available upload (in Mbps)
pub total_upload_mbps: u32,
/// If a node is generated, how much download (Mbps) should it offer?
pub generated_download_mbps: u32,
/// If a node is generated, how much upload (Mbps) should it offer?
pub generated_upload_mbps: u32,
/// Should the Python queue builder use the bin packing strategy to
/// try to optimize CPU assignment?
pub use_binpacking: bool,
/// Should the Python program use actual shell commands (and execute)
/// them?
pub enable_shell_commands: bool,
/// Should every issued command be prefixed with `sudo`?
pub run_as_sudo: bool,
/// WARNING: generally don't touch this.
pub override_queue_count: u32,
/// Is UISP integration enabled?
pub automatic_import_uisp: bool,
/// UISP Authentication Token
pub uisp_auth_token: String,
/// UISP Base URL (e.g. billing.myisp.com)
pub uisp_base_url: String,
/// Root site for UISP tree generation
pub uisp_root_site: String,
/// Circuit names use address?
pub circuit_name_use_address: bool,
/// UISP Strategy
pub uisp_strategy: String,
/// UISP Suspension Strategy
pub uisp_suspended_strategy: String,
/// Bandwidth Overhead Factor
pub bandwidth_overhead_factor: f32,
/// Subnets allowed to be included in device lists
pub allowed_subnets: String,
/// Subnets explicitly ignored from device lists
pub ignored_subnets: String,
/// Overwrite network.json even if it exists
pub overwrite_network_json_always: bool,
}
impl LibreQoSConfig {
/// Does the ispConfig.py file exist?
pub fn config_exists() -> bool {
if let Ok(cfg) = etc::EtcLqos::load() {
let base_path = Path::new(&cfg.lqos_directory);
let final_path = base_path.join("ispConfig.py");
final_path.exists()
} else {
false
}
}
/// Loads `ispConfig.py` into a management object.
pub fn load() -> Result<Self, LibreQoSConfigError> {
if let Ok(cfg) = etc::EtcLqos::load() {
let base_path = Path::new(&cfg.lqos_directory);
let final_path = base_path.join("ispConfig.py");
Ok(Self::load_from_path(&final_path)?)
} else {
error!("Unable to read LibreQoS config from /etc/lqos.conf");
Err(LibreQoSConfigError::CannotOpenEtcLqos)
}
}
fn load_from_path(path: &PathBuf) -> Result<Self, LibreQoSConfigError> {
let path = Path::new(path);
if !path.exists() {
error!("Unable to find ispConfig.py");
return Err(LibreQoSConfigError::FileNotFoud);
}
// Read the config
let mut result = Self {
internet_interface: String::new(),
isp_interface: String::new(),
on_a_stick_mode: false,
stick_vlans: (0, 0),
sqm: String::new(),
monitor_mode: false,
total_download_mbps: 0,
total_upload_mbps: 0,
generated_download_mbps: 0,
generated_upload_mbps: 0,
use_binpacking: false,
enable_shell_commands: true,
run_as_sudo: false,
override_queue_count: 0,
automatic_import_uisp: false,
uisp_auth_token: "".to_string(),
uisp_base_url: "".to_string(),
uisp_root_site: "".to_string(),
circuit_name_use_address: false,
uisp_strategy: "".to_string(),
uisp_suspended_strategy: "".to_string(),
bandwidth_overhead_factor: 1.0,
allowed_subnets: "".to_string(),
ignored_subnets: "".to_string(),
overwrite_network_json_always: false,
};
result.parse_isp_config(path)?;
Ok(result)
}
fn parse_isp_config(
&mut self,
path: &Path,
) -> Result<(), LibreQoSConfigError> {
let read_result = fs::read_to_string(path);
match read_result {
Err(e) => {
error!("Unable to read contents of ispConfig.py. Check permissions.");
error!("{:?}", e);
return Err(LibreQoSConfigError::CannotReadFile);
}
Ok(content) => {
for line in content.split('\n') {
if line.starts_with("interfaceA") {
self.isp_interface = split_at_equals(line);
}
if line.starts_with("interfaceB") {
self.internet_interface = split_at_equals(line);
}
if line.starts_with("OnAStick") {
let mode = split_at_equals(line);
if mode == "True" {
self.on_a_stick_mode = true;
}
}
if line.starts_with("StickVlanA") {
let vlan_string = split_at_equals(line);
if let Ok(vlan) = vlan_string.parse() {
self.stick_vlans.0 = vlan;
} else {
error!(
"Unable to parse contents of StickVlanA from ispConfig.py"
);
error!("{line}");
return Err(LibreQoSConfigError::ParseError(line.to_string()));
}
}
if line.starts_with("StickVlanB") {
let vlan_string = split_at_equals(line);
if let Ok(vlan) = vlan_string.parse() {
self.stick_vlans.1 = vlan;
} else {
error!(
"Unable to parse contents of StickVlanB from ispConfig.py"
);
error!("{line}");
return Err(LibreQoSConfigError::ParseError(line.to_string()));
}
}
if line.starts_with("sqm") {
self.sqm = split_at_equals(line);
}
if line.starts_with("upstreamBandwidthCapacityDownloadMbps") {
if let Ok(mbps) = split_at_equals(line).parse() {
self.total_download_mbps = mbps;
} else {
error!("Unable to parse contents of upstreamBandwidthCapacityDownloadMbps from ispConfig.py");
error!("{line}");
return Err(LibreQoSConfigError::ParseError(line.to_string()));
}
}
if line.starts_with("upstreamBandwidthCapacityUploadMbps") {
if let Ok(mbps) = split_at_equals(line).parse() {
self.total_upload_mbps = mbps;
} else {
error!("Unable to parse contents of upstreamBandwidthCapacityUploadMbps from ispConfig.py");
error!("{line}");
return Err(LibreQoSConfigError::ParseError(line.to_string()));
}
}
if line.starts_with("monitorOnlyMode ") {
let mode = split_at_equals(line);
if mode == "True" {
self.monitor_mode = true;
}
}
if line.starts_with("generatedPNDownloadMbps") {
if let Ok(mbps) = split_at_equals(line).parse() {
self.generated_download_mbps = mbps;
} else {
error!("Unable to parse contents of generatedPNDownloadMbps from ispConfig.py");
error!("{line}");
return Err(LibreQoSConfigError::ParseError(line.to_string()));
}
}
if line.starts_with("generatedPNUploadMbps") {
if let Ok(mbps) = split_at_equals(line).parse() {
self.generated_upload_mbps = mbps;
} else {
error!("Unable to parse contents of generatedPNUploadMbps from ispConfig.py");
error!("{line}");
return Err(LibreQoSConfigError::ParseError(line.to_string()));
}
}
if line.starts_with("useBinPackingToBalanceCPU") {
let mode = split_at_equals(line);
if mode == "True" {
self.use_binpacking = true;
}
}
if line.starts_with("enableActualShellCommands") {
let mode = split_at_equals(line);
if mode == "True" {
self.enable_shell_commands = true;
}
}
if line.starts_with("runShellCommandsAsSudo") {
let mode = split_at_equals(line);
if mode == "True" {
self.run_as_sudo = true;
}
}
if line.starts_with("queuesAvailableOverride") {
self.override_queue_count =
split_at_equals(line).parse().unwrap_or(0);
}
if line.starts_with("automaticImportUISP") {
let mode = split_at_equals(line);
if mode == "True" {
self.automatic_import_uisp = true;
}
}
if line.starts_with("uispAuthToken") {
self.uisp_auth_token = split_at_equals(line);
}
if line.starts_with("UISPbaseURL") {
self.uisp_base_url = split_at_equals(line);
}
if line.starts_with("uispSite") {
self.uisp_root_site = split_at_equals(line);
}
if line.starts_with("circuitNameUseAddress") {
let mode = split_at_equals(line);
if mode == "True" {
self.circuit_name_use_address = true;
}
}
if line.starts_with("uispStrategy") {
self.uisp_strategy = split_at_equals(line);
}
if line.starts_with("uispSuspendedStrategy") {
self.uisp_suspended_strategy = split_at_equals(line);
}
if line.starts_with("bandwidthOverheadFactor") {
self.bandwidth_overhead_factor =
split_at_equals(line).parse().unwrap_or(1.0);
}
if line.starts_with("allowedSubnets") {
self.allowed_subnets = split_at_equals(line);
}
if line.starts_with("ignoreSubnets") {
self.ignored_subnets = split_at_equals(line);
}
if line.starts_with("overwriteNetworkJSONalways") {
let mode = split_at_equals(line);
if mode == "True" {
self.overwrite_network_json_always = true;
}
}
}
}
}
Ok(())
}
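For context, `parse_isp_config()` above scans a Python-syntax configuration file line by line, keying on variable names. A hypothetical `ispConfig.py` excerpt using the keys the parser recognizes (values are illustrative, not defaults):

```python
# Hypothetical ispConfig.py excerpt; the keys match what parse_isp_config() scans for.
interfaceA = 'eth1'        # parsed into isp_interface
interfaceB = 'eth2'        # parsed into internet_interface
OnAStick = False
StickVlanA = 0
StickVlanB = 0
sqm = 'cake diffserv4'
upstreamBandwidthCapacityDownloadMbps = 1000
upstreamBandwidthCapacityUploadMbps = 1000
monitorOnlyMode = False
```

Boolean keys are only honored when the value is exactly `True`, matching the `mode == "True"` checks in the parser.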
/// Saves the current values to `ispConfig.py` and stores the
/// previous settings in `ispConfig.py.backup`.
pub fn save(&self) -> Result<(), LibreQoSConfigError> {
// Find the config
let cfg = etc::EtcLqos::load().map_err(|_| {
crate::libre_qos_config::LibreQoSConfigError::CannotOpenEtcLqos
})?;
let base_path = Path::new(&cfg.lqos_directory);
let final_path = base_path.join("ispConfig.py");
let backup_path = base_path.join("ispConfig.py.backup");
if std::fs::copy(&final_path, &backup_path).is_err() {
error!(
"Unable to copy {} to {}.",
final_path.display(),
backup_path.display()
);
return Err(LibreQoSConfigError::CannotCopy);
}
// Load existing file
let original = read_to_string(&final_path);
if original.is_err() {
error!("Unable to read ispConfig.py");
return Err(LibreQoSConfigError::CannotReadFile);
}
let original = original.unwrap();
// Temporary
//let final_path = base_path.join("ispConfig.py.test");
// Update config entries line by line
let mut config = String::new();
for line in original.split('\n') {
let mut line = line.to_string();
if line.starts_with("interfaceA") {
line = format!("interfaceA = '{}'", self.isp_interface);
}
if line.starts_with("interfaceB") {
line = format!("interfaceB = '{}'", self.internet_interface);
}
if line.starts_with("OnAStick") {
line = format!(
"OnAStick = {}",
if self.on_a_stick_mode { "True" } else { "False" }
);
}
if line.starts_with("StickVlanA") {
line = format!("StickVlanA = {}", self.stick_vlans.0);
}
if line.starts_with("StickVlanB") {
line = format!("StickVlanB = {}", self.stick_vlans.1);
}
if line.starts_with("sqm") {
line = format!("sqm = '{}'", self.sqm);
}
if line.starts_with("upstreamBandwidthCapacityDownloadMbps") {
line = format!(
"upstreamBandwidthCapacityDownloadMbps = {}",
self.total_download_mbps
);
}
if line.starts_with("upstreamBandwidthCapacityUploadMbps") {
line = format!(
"upstreamBandwidthCapacityUploadMbps = {}",
self.total_upload_mbps
);
}
if line.starts_with("monitorOnlyMode") {
line = format!(
"monitorOnlyMode = {}",
if self.monitor_mode { "True" } else { "False" }
);
}
if line.starts_with("generatedPNDownloadMbps") {
line = format!(
"generatedPNDownloadMbps = {}",
self.generated_download_mbps
);
}
if line.starts_with("generatedPNUploadMbps") {
line =
format!("generatedPNUploadMbps = {}", self.generated_upload_mbps);
}
if line.starts_with("useBinPackingToBalanceCPU") {
line = format!(
"useBinPackingToBalanceCPU = {}",
if self.use_binpacking { "True" } else { "False" }
);
}
if line.starts_with("enableActualShellCommands") {
line = format!(
"enableActualShellCommands = {}",
if self.enable_shell_commands { "True" } else { "False" }
);
}
if line.starts_with("runShellCommandsAsSudo") {
line = format!(
"runShellCommandsAsSudo = {}",
if self.run_as_sudo { "True" } else { "False" }
);
}
if line.starts_with("queuesAvailableOverride") {
line =
format!("queuesAvailableOverride = {}", self.override_queue_count);
}
config += &format!("{line}\n");
}
// Actually save to disk
if final_path.exists() {
remove_file(&final_path)
.map_err(|_| LibreQoSConfigError::CannotRemove)?;
}
if let Ok(mut file) =
OpenOptions::new().write(true).create_new(true).open(&final_path)
{
if file.write_all(config.as_bytes()).is_err() {
error!("Unable to write to ispConfig.py");
return Err(LibreQoSConfigError::CannotWrite);
}
} else {
error!("Unable to open ispConfig.py for writing.");
return Err(LibreQoSConfigError::CannotOpenForWrite);
}
Ok(())
}
/// Convert the Allowed Subnets list into a Trie for fast search
pub fn allowed_subnets_trie(&self) -> ip_network_table::IpNetworkTable<usize> {
let ip_list = ip_list_to_ips(&self.allowed_subnets).unwrap();
//println!("{ip_list:#?}");
ip_list_to_trie(&ip_list)
}
/// Convert the Ignored Subnets list into a Trie for fast search
pub fn ignored_subnets_trie(&self) -> ip_network_table::IpNetworkTable<usize> {
let ip_list = ip_list_to_ips(&self.ignored_subnets).unwrap();
//println!("{ip_list:#?}");
ip_list_to_trie(&ip_list)
}
}
/// Returns the text after the first `=`, trimmed and with single or
/// double quotes stripped; yields an empty string when no `=` is present.
fn split_at_equals(line: &str) -> String {
line.split('=').nth(1).unwrap_or("").trim().replace(['\"', '\''], "")
}
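The quote-stripping rule used by `split_at_equals` is small enough to sketch in Python for illustration (this is a mirror of the Rust helper, not part of the codebase):

```python
def split_at_equals(line: str) -> str:
    # Mirror of the Rust helper: take the segment after the FIRST '=',
    # trim whitespace, and strip single/double quotes.
    parts = line.split('=')
    value = parts[1] if len(parts) > 1 else ''
    return value.strip().replace('"', '').replace("'", '')
```

Note the `nth(1)` semantics: for a line containing multiple `=` signs, only the segment between the first and second `=` is returned.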
#[derive(Debug, Error)]
pub enum LibreQoSConfigError {
#[error("Unable to read /etc/lqos.conf. See other errors for details.")]
CannotOpenEtcLqos,
#[error("Unable to locate (path to LibreQoS)/ispConfig.py. Check your path and that you have configured it.")]
FileNotFoud,
#[error(
"Unable to read the contents of ispConfig.py. Check file permissions."
)]
CannotReadFile,
#[error("Unable to parse ispConfig.py")]
ParseError(String),
#[error("Could not backup configuration")]
CannotCopy,
#[error("Could not remove the previous configuration.")]
CannotRemove,
#[error("Could not open ispConfig.py for write")]
CannotOpenForWrite,
#[error("Unable to write to ispConfig.py")]
CannotWrite,
#[error("Unable to read IP")]
CannotReadIP,
}
/// Parses a comma-separated list of `ip/cidr` entries (optionally wrapped
/// in square brackets) into `(IpAddr, u8)` pairs, reporting malformed
/// entries as `CannotReadIP` instead of panicking.
fn ip_list_to_ips(
source: &str,
) -> Result<Vec<(IpAddr, u8)>, LibreQoSConfigError> {
// Remove any square brackets, spaces
let source = source.replace(['[', ']', ' '], "");
// Split at commas, propagating parse failures instead of unwrapping
source
.split(',')
.map(|raw| {
let (addr, cidr) =
raw.split_once('/').ok_or(LibreQoSConfigError::CannotReadIP)?;
let cidr =
cidr.parse::<u8>().map_err(|_| LibreQoSConfigError::CannotReadIP)?;
let addr =
addr.parse::<IpAddr>().map_err(|_| LibreQoSConfigError::CannotReadIP)?;
Ok((addr, cidr))
})
.collect()
}
fn ip_list_to_trie(
source: &[(IpAddr, u8)],
) -> ip_network_table::IpNetworkTable<usize> {
let mut table = ip_network_table::IpNetworkTable::new();
source
.iter()
.map(|(ip, subnet)| {
(
match ip {
IpAddr::V4(ip) => ip.to_ipv6_mapped(),
IpAddr::V6(ip) => *ip,
},
match ip {
IpAddr::V4(..) => *subnet + 96,
IpAddr::V6(..) => *subnet
},
)
})
.map(|(ip, cidr)| IpNetwork::new(ip, cidr).unwrap())
.enumerate()
.for_each(|(id, net)| {
table.insert(net, id);
});
table
}

View File

@@ -1,4 +1,3 @@
-use crate::etc;
 use dashmap::DashSet;
 use log::{error, info, warn};
 use serde::{Deserialize, Serialize};
@@ -105,7 +104,7 @@ impl NetworkJson {
   /// file.
   pub fn path() -> Result<PathBuf, NetworkJsonError> {
     let cfg =
-      etc::EtcLqos::load().map_err(|_| NetworkJsonError::ConfigLoadError)?;
+      crate::load_config().map_err(|_| NetworkJsonError::ConfigLoadError)?;
     let base_path = Path::new(&cfg.lqos_directory);
     Ok(base_path.join("network.json"))
   }

View File

@@ -1,7 +1,5 @@
 use log::error;
 use thiserror::Error;
-use crate::etc;
 use std::{
   path::{Path, PathBuf},
   process::Command,
@@ -11,14 +9,14 @@ const PYTHON_PATH: &str = "/usr/bin/python3";
 fn path_to_libreqos() -> Result<PathBuf, ProgramControlError> {
   let cfg =
-    etc::EtcLqos::load().map_err(|_| ProgramControlError::ConfigLoadError)?;
+    crate::load_config().map_err(|_| ProgramControlError::ConfigLoadError)?;
   let base_path = Path::new(&cfg.lqos_directory);
   Ok(base_path.join("LibreQoS.py"))
 }

 fn working_directory() -> Result<PathBuf, ProgramControlError> {
   let cfg =
-    etc::EtcLqos::load().map_err(|_| ProgramControlError::ConfigLoadError)?;
+    crate::load_config().map_err(|_| ProgramControlError::ConfigLoadError)?;
   let base_path = Path::new(&cfg.lqos_directory);
   Ok(base_path.to_path_buf())
 }

View File

@@ -1,6 +1,6 @@
 mod serializable;
 mod shaped_device;
-use crate::{etc, SUPPORTED_CUSTOMERS};
+use crate::SUPPORTED_CUSTOMERS;
 use csv::{QuoteStyle, ReaderBuilder, WriterBuilder};
 use log::error;
 use serializable::SerializableShapedDevice;
@@ -34,9 +34,11 @@ impl ConfigShapedDevices {
   /// file.
   pub fn path() -> Result<PathBuf, ShapedDevicesError> {
     let cfg =
-      etc::EtcLqos::load().map_err(|_| ShapedDevicesError::ConfigLoadError)?;
+      crate::load_config().map_err(|_| ShapedDevicesError::ConfigLoadError)?;
     let base_path = Path::new(&cfg.lqos_directory);
-    Ok(base_path.join("ShapedDevices.csv"))
+    let full_path = base_path.join("ShapedDevices.csv");
+    log::info!("ShapedDevices.csv path: {:?}", full_path);
+    Ok(full_path)
   }

   /// Does ShapedDevices.csv exist?
@@ -146,7 +148,7 @@ impl ConfigShapedDevices {
   /// Saves the current shaped devices list to `ShapedDevices.csv`
   pub fn write_csv(&self, filename: &str) -> Result<(), ShapedDevicesError> {
     let cfg =
-      etc::EtcLqos::load().map_err(|_| ShapedDevicesError::ConfigLoadError)?;
+      crate::load_config().map_err(|_| ShapedDevicesError::ConfigLoadError)?;
     let base_path = Path::new(&cfg.lqos_directory);
     let path = base_path.join(filename);
     let csv = self.to_csv_string()?;

View File

@@ -7,7 +7,6 @@ use crate::{
 };
 use dashmap::{DashMap, DashSet};
 use lqos_bus::{tos_parser, PacketHeader};
-use lqos_config::EtcLqos;
 use lqos_utils::{unix_time::time_since_boot, XdpIpAddress};
 use once_cell::sync::Lazy;
 use std::{
@@ -110,8 +109,8 @@ pub fn hyperfocus_on_target(ip: XdpIpAddress) -> Option<(usize, usize)> {
   {
     // If explicitly set, obtain the capture time. Otherwise, default to
     // a reasonable 10 seconds.
-    let capture_time = if let Ok(cfg) = EtcLqos::load() {
-      cfg.packet_capture_time.unwrap_or(10)
+    let capture_time = if let Ok(cfg) = lqos_config::load_config() {
+      cfg.packet_capture_time
     } else {
       10
     };

View File

@@ -46,7 +46,7 @@ impl<'r> FromRequest<'r> for AuthGuard {
           return Outcome::Success(AuthGuard::ReadOnly)
         }
         _ => {
-          return Outcome::Failure((
+          return Outcome::Error((
             Status::Unauthorized,
             Error::msg("Invalid token"),
           ))
@@ -60,7 +60,7 @@ impl<'r> FromRequest<'r> for AuthGuard {
       }
     }
-    Outcome::Failure((Status::Unauthorized, Error::msg("Access Denied")))
+    Outcome::Error((Status::Unauthorized, Error::msg("Access Denied")))
  }
 }

View File

@@ -1,7 +1,7 @@
 use crate::{auth_guard::AuthGuard, cache_control::NoCache};
 use default_net::get_interfaces;
 use lqos_bus::{bus_request, BusRequest, BusResponse};
-use lqos_config::{EtcLqos, LibreQoSConfig, Tunables};
+use lqos_config::{Tunables, Config};
 use rocket::{fs::NamedFile, serde::{json::Json, Serialize}};

 // Note that NoCache can be replaced with a cache option
@@ -30,33 +30,15 @@ pub async fn get_nic_list<'a>(
   NoCache::new(Json(result))
 }

-#[get("/api/python_config")]
-pub async fn get_current_python_config(
-  _auth: AuthGuard,
-) -> NoCache<Json<LibreQoSConfig>> {
-  let config = lqos_config::LibreQoSConfig::load().unwrap();
-  println!("{config:#?}");
-  NoCache::new(Json(config))
-}
-
-#[get("/api/lqosd_config")]
+#[get("/api/config")]
 pub async fn get_current_lqosd_config(
   _auth: AuthGuard,
-) -> NoCache<Json<EtcLqos>> {
-  let config = lqos_config::EtcLqos::load().unwrap();
+) -> NoCache<Json<Config>> {
+  let config = lqos_config::load_config().unwrap();
   println!("{config:#?}");
   NoCache::new(Json(config))
 }

-#[post("/api/python_config", data = "<config>")]
-pub async fn update_python_config(
-  _auth: AuthGuard,
-  config: Json<LibreQoSConfig>,
-) -> Json<String> {
-  config.save().unwrap();
-  Json("OK".to_string())
-}
-
 #[post("/api/lqos_tuning/<period>", data = "<tuning>")]
 pub async fn update_lqos_tuning(
   auth: AuthGuard,

View File

@@ -80,9 +80,9 @@ fn rocket() -> _ {
       queue_info::request_analysis,
       queue_info::dns_query,
       config_control::get_nic_list,
-      config_control::get_current_python_config,
+      //config_control::get_current_python_config,
       config_control::get_current_lqosd_config,
-      config_control::update_python_config,
+      //config_control::update_python_config,
       config_control::update_lqos_tuning,
       auth_guard::create_first_user,
       auth_guard::login,

View File

@@ -1,4 +1,4 @@
-use lqos_config::EtcLqos;
+use lqos_config::load_config;
 use lqos_utils::unix_time::unix_now;
 use rocket::serde::json::Json;
 use rocket::serde::{Deserialize, Serialize};
@@ -22,12 +22,12 @@ pub struct VersionCheckResponse {
 }

 async fn send_version_check() -> anyhow::Result<VersionCheckResponse> {
-  if let Ok(cfg) = EtcLqos::load() {
+  if let Ok(cfg) = load_config() {
     let current_hash = env!("GIT_HASH");
     let request = VersionCheckRequest {
       current_git_hash: current_hash.to_string(),
       version_string: VERSION_STRING.to_string(),
-      node_id: cfg.node_id.unwrap_or("(not configured)".to_string()),
+      node_id: cfg.node_id.to_string(),
     };
     let response = reqwest::Client::new()
       .post("https://stats.libreqos.io/api/version_check")
@@ -87,24 +87,17 @@ pub async fn stats_check() -> Json<StatsCheckAction> {
     node_id: String::new(),
   };

-  if let Ok(cfg) = EtcLqos::load() {
-    if let Some(lts) = &cfg.long_term_stats {
-      if !lts.gather_stats {
-        response = StatsCheckAction {
-          action: StatsCheckResponse::Disabled,
-          node_id: cfg.node_id.unwrap_or("(not configured)".to_string()),
-        };
-      } else {
-        // Stats are enabled
-        response = StatsCheckAction {
-          action: StatsCheckResponse::GoodToGo,
-          node_id: cfg.node_id.unwrap_or("(not configured)".to_string()),
-        };
-      }
-    } else {
-      response = StatsCheckAction {
-        action: StatsCheckResponse::NotSetup,
-        node_id: cfg.node_id.unwrap_or("(not configured)".to_string()),
-      };
+  if let Ok(cfg) = load_config() {
+    if !cfg.long_term_stats.gather_stats {
+      response = StatsCheckAction {
+        action: StatsCheckResponse::Disabled,
+        node_id: cfg.node_id.to_string(),
+      };
+    } else {
+      // Stats are enabled
+      response = StatsCheckAction {
+        action: StatsCheckResponse::GoodToGo,
+        node_id: cfg.node_id.to_string(),
+      };
     }
   }

View File

@@ -18,9 +18,7 @@ use std::{sync::atomic::AtomicBool, time::Duration};
 /// it runs as part of start-up - and keeps running.
 /// Designed to never return or fail on error.
 pub async fn update_tracking() {
-  use sysinfo::CpuExt;
   use sysinfo::System;
-  use sysinfo::SystemExt;
   let mut sys = System::new_all();

   spawn_blocking(|| {

View File

@@ -12,6 +12,7 @@ crate-type = ["cdylib"]
 pyo3 = "0"
 lqos_bus = { path = "../lqos_bus" }
 lqos_utils = { path = "../lqos_utils" }
+lqos_config = { path = "../lqos_config" }
 tokio = { version = "1", features = [ "full" ] }
 anyhow = "1"
 sysinfo = "0"

View File

@@ -13,7 +13,7 @@ use std::{
 mod blocking;
 use anyhow::{Error, Result};
 use blocking::run_query;
-use sysinfo::{ProcessExt, System, SystemExt};
+use sysinfo::System;

 const LOCK_FILE: &str = "/run/lqos/libreqos.lock";
@@ -23,6 +23,7 @@ const LOCK_FILE: &str = "/run/lqos/libreqos.lock";
 fn liblqos_python(_py: Python, m: &PyModule) -> PyResult<()> {
   m.add_class::<PyIpMapping>()?;
   m.add_class::<BatchedCommands>()?;
+  m.add_class::<PyExceptionCpe>()?;
   m.add_wrapped(wrap_pyfunction!(is_lqosd_alive))?;
   m.add_wrapped(wrap_pyfunction!(list_ip_mappings))?;
   m.add_wrapped(wrap_pyfunction!(clear_ip_mappings))?;
@@ -32,6 +33,60 @@ fn liblqos_python(_py: Python, m: &PyModule) -> PyResult<()> {
   m.add_wrapped(wrap_pyfunction!(is_libre_already_running))?;
   m.add_wrapped(wrap_pyfunction!(create_lock_file))?;
   m.add_wrapped(wrap_pyfunction!(free_lock_file))?;
// Unified configuration items
m.add_wrapped(wrap_pyfunction!(check_config))?;
m.add_wrapped(wrap_pyfunction!(sqm))?;
m.add_wrapped(wrap_pyfunction!(upstream_bandwidth_capacity_download_mbps))?;
m.add_wrapped(wrap_pyfunction!(upstream_bandwidth_capacity_upload_mbps))?;
m.add_wrapped(wrap_pyfunction!(interface_a))?;
m.add_wrapped(wrap_pyfunction!(interface_b))?;
m.add_wrapped(wrap_pyfunction!(enable_actual_shell_commands))?;
m.add_wrapped(wrap_pyfunction!(use_bin_packing_to_balance_cpu))?;
m.add_wrapped(wrap_pyfunction!(monitor_mode_only))?;
m.add_wrapped(wrap_pyfunction!(run_shell_commands_as_sudo))?;
m.add_wrapped(wrap_pyfunction!(generated_pn_download_mbps))?;
m.add_wrapped(wrap_pyfunction!(generated_pn_upload_mbps))?;
m.add_wrapped(wrap_pyfunction!(queues_available_override))?;
m.add_wrapped(wrap_pyfunction!(on_a_stick))?;
m.add_wrapped(wrap_pyfunction!(overwrite_network_json_always))?;
m.add_wrapped(wrap_pyfunction!(allowed_subnets))?;
m.add_wrapped(wrap_pyfunction!(ignore_subnets))?;
m.add_wrapped(wrap_pyfunction!(circuit_name_use_address))?;
m.add_wrapped(wrap_pyfunction!(find_ipv6_using_mikrotik))?;
m.add_wrapped(wrap_pyfunction!(exclude_sites))?;
m.add_wrapped(wrap_pyfunction!(bandwidth_overhead_factor))?;
m.add_wrapped(wrap_pyfunction!(committed_bandwidth_multiplier))?;
m.add_wrapped(wrap_pyfunction!(exception_cpes))?;
m.add_wrapped(wrap_pyfunction!(uisp_site))?;
m.add_wrapped(wrap_pyfunction!(uisp_strategy))?;
m.add_wrapped(wrap_pyfunction!(uisp_suspended_strategy))?;
m.add_wrapped(wrap_pyfunction!(airmax_capacity))?;
m.add_wrapped(wrap_pyfunction!(ltu_capacity))?;
m.add_wrapped(wrap_pyfunction!(use_ptmp_as_parent))?;
m.add_wrapped(wrap_pyfunction!(uisp_base_url))?;
m.add_wrapped(wrap_pyfunction!(uisp_auth_token))?;
m.add_wrapped(wrap_pyfunction!(splynx_api_key))?;
m.add_wrapped(wrap_pyfunction!(splynx_api_secret))?;
m.add_wrapped(wrap_pyfunction!(splynx_api_url))?;
m.add_wrapped(wrap_pyfunction!(automatic_import_uisp))?;
m.add_wrapped(wrap_pyfunction!(automatic_import_splynx))?;
m.add_wrapped(wrap_pyfunction!(queue_refresh_interval_mins))?;
m.add_wrapped(wrap_pyfunction!(automatic_import_powercode))?;
m.add_wrapped(wrap_pyfunction!(powercode_api_key))?;
m.add_wrapped(wrap_pyfunction!(powercode_api_url))?;
m.add_wrapped(wrap_pyfunction!(automatic_import_sonar))?;
m.add_wrapped(wrap_pyfunction!(sonar_api_url))?;
m.add_wrapped(wrap_pyfunction!(sonar_api_key))?;
m.add_wrapped(wrap_pyfunction!(snmp_community))?;
m.add_wrapped(wrap_pyfunction!(sonar_airmax_ap_model_ids))?;
m.add_wrapped(wrap_pyfunction!(sonar_ltu_ap_model_ids))?;
m.add_wrapped(wrap_pyfunction!(sonar_active_status_ids))?;
m.add_wrapped(wrap_pyfunction!(influx_db_enabled))?;
m.add_wrapped(wrap_pyfunction!(influx_db_bucket))?;
m.add_wrapped(wrap_pyfunction!(influx_db_org))?;
m.add_wrapped(wrap_pyfunction!(influx_db_token))?;
m.add_wrapped(wrap_pyfunction!(influx_db_url))?;
  Ok(())
}

@@ -254,3 +309,332 @@ fn free_lock_file() -> PyResult<()> {
   let _ = remove_file(LOCK_FILE); // Ignore result
   Ok(())
 }
#[pyfunction]
fn check_config() -> PyResult<bool> {
let config = lqos_config::load_config();
if let Err(e) = config {
println!("Error loading config: {e}");
return Ok(false);
}
Ok(true)
}
#[pyfunction]
fn sqm() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.queues.default_sqm.clone())
}
#[pyfunction]
fn upstream_bandwidth_capacity_download_mbps() -> PyResult<u32> {
let config = lqos_config::load_config().unwrap();
Ok(config.queues.downlink_bandwidth_mbps)
}
#[pyfunction]
fn upstream_bandwidth_capacity_upload_mbps() -> PyResult<u32> {
let config = lqos_config::load_config().unwrap();
Ok(config.queues.uplink_bandwidth_mbps)
}
#[pyfunction]
fn interface_a() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.isp_interface())
}
#[pyfunction]
fn interface_b() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.internet_interface())
}
#[pyfunction]
fn enable_actual_shell_commands() -> PyResult<bool> {
let config = lqos_config::load_config().unwrap();
Ok(!config.queues.dry_run)
}
#[pyfunction]
fn use_bin_packing_to_balance_cpu() -> PyResult<bool> {
let config = lqos_config::load_config().unwrap();
Ok(config.queues.use_binpacking)
}
#[pyfunction]
fn monitor_mode_only() -> PyResult<bool> {
let config = lqos_config::load_config().unwrap();
Ok(config.queues.monitor_only)
}
#[pyfunction]
fn run_shell_commands_as_sudo() -> PyResult<bool> {
let config = lqos_config::load_config().unwrap();
Ok(config.queues.sudo)
}
#[pyfunction]
fn generated_pn_download_mbps() -> PyResult<u32> {
let config = lqos_config::load_config().unwrap();
Ok(config.queues.generated_pn_download_mbps)
}
#[pyfunction]
fn generated_pn_upload_mbps() -> PyResult<u32> {
let config = lqos_config::load_config().unwrap();
Ok(config.queues.generated_pn_upload_mbps)
}
#[pyfunction]
fn queues_available_override() -> PyResult<u32> {
let config = lqos_config::load_config().unwrap();
Ok(config.queues.override_available_queues.unwrap_or(0))
}
#[pyfunction]
fn on_a_stick() -> PyResult<bool> {
let config = lqos_config::load_config().unwrap();
Ok(config.on_a_stick_mode())
}
#[pyfunction]
fn overwrite_network_json_always() -> PyResult<bool> {
let config = lqos_config::load_config().unwrap();
Ok(config.integration_common.always_overwrite_network_json)
}
#[pyfunction]
fn allowed_subnets() -> PyResult<Vec<String>> {
let config = lqos_config::load_config().unwrap();
Ok(config.ip_ranges.allow_subnets.clone())
}
#[pyfunction]
fn ignore_subnets() -> PyResult<Vec<String>> {
let config = lqos_config::load_config().unwrap();
Ok(config.ip_ranges.ignore_subnets.clone())
}
#[pyfunction]
fn circuit_name_use_address() -> PyResult<bool> {
let config = lqos_config::load_config().unwrap();
Ok(config.integration_common.circuit_name_as_address)
}
#[pyfunction]
fn find_ipv6_using_mikrotik() -> PyResult<bool> {
let config = lqos_config::load_config().unwrap();
Ok(config.uisp_integration.ipv6_with_mikrotik)
}
#[pyfunction]
fn exclude_sites() -> PyResult<Vec<String>> {
let config = lqos_config::load_config().unwrap();
Ok(config.uisp_integration.exclude_sites.clone())
}
#[pyfunction]
fn bandwidth_overhead_factor() -> PyResult<f32> {
let config = lqos_config::load_config().unwrap();
Ok(config.uisp_integration.bandwidth_overhead_factor)
}
#[pyfunction]
fn committed_bandwidth_multiplier() -> PyResult<f32> {
let config = lqos_config::load_config().unwrap();
Ok(config.uisp_integration.commit_bandwidth_multiplier)
}
#[pyclass]
pub struct PyExceptionCpe {
pub cpe: String,
pub parent: String,
}
#[pyfunction]
fn exception_cpes() -> PyResult<Vec<PyExceptionCpe>> {
let config = lqos_config::load_config().unwrap();
let mut result = Vec::new();
for cpe in config.uisp_integration.exception_cpes.iter() {
result.push(PyExceptionCpe {
cpe: cpe.cpe.clone(),
parent: cpe.parent.clone(),
});
}
Ok(result)
}
#[pyfunction]
fn uisp_site() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.uisp_integration.site)
}
#[pyfunction]
fn uisp_strategy() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.uisp_integration.strategy)
}
#[pyfunction]
fn uisp_suspended_strategy() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.uisp_integration.suspended_strategy)
}
#[pyfunction]
fn airmax_capacity() -> PyResult<f32> {
let config = lqos_config::load_config().unwrap();
Ok(config.uisp_integration.airmax_capacity)
}
#[pyfunction]
fn ltu_capacity() -> PyResult<f32> {
let config = lqos_config::load_config().unwrap();
Ok(config.uisp_integration.ltu_capacity)
}
#[pyfunction]
fn use_ptmp_as_parent() -> PyResult<bool> {
let config = lqos_config::load_config().unwrap();
Ok(config.uisp_integration.use_ptmp_as_parent)
}
#[pyfunction]
fn uisp_base_url() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.uisp_integration.url)
}
#[pyfunction]
fn uisp_auth_token() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.uisp_integration.token)
}
#[pyfunction]
fn splynx_api_key() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.spylnx_integration.api_key)
}
#[pyfunction]
fn splynx_api_secret() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.spylnx_integration.api_secret)
}
#[pyfunction]
fn splynx_api_url() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.spylnx_integration.url)
}
#[pyfunction]
fn automatic_import_uisp() -> PyResult<bool> {
let config = lqos_config::load_config().unwrap();
Ok(config.uisp_integration.enable_uisp)
}
#[pyfunction]
fn automatic_import_splynx() -> PyResult<bool> {
let config = lqos_config::load_config().unwrap();
Ok(config.spylnx_integration.enable_spylnx)
}
#[pyfunction]
fn queue_refresh_interval_mins() -> PyResult<u32> {
let config = lqos_config::load_config().unwrap();
Ok(config.integration_common.queue_refresh_interval_mins)
}
#[pyfunction]
fn automatic_import_powercode() -> PyResult<bool> {
let config = lqos_config::load_config().unwrap();
Ok(config.powercode_integration.enable_powercode)
}
#[pyfunction]
fn powercode_api_key() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.powercode_integration.powercode_api_key)
}
#[pyfunction]
fn powercode_api_url() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.powercode_integration.powercode_api_url)
}
#[pyfunction]
fn automatic_import_sonar() -> PyResult<bool> {
let config = lqos_config::load_config().unwrap();
Ok(config.sonar_integration.enable_sonar)
}
#[pyfunction]
fn sonar_api_url() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.sonar_integration.sonar_api_url)
}
#[pyfunction]
fn sonar_api_key() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.sonar_integration.sonar_api_key)
}
#[pyfunction]
fn snmp_community() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.sonar_integration.snmp_community)
}
#[pyfunction]
fn sonar_airmax_ap_model_ids() -> PyResult<Vec<String>> {
let config = lqos_config::load_config().unwrap();
Ok(config.sonar_integration.airmax_model_ids)
}
#[pyfunction]
fn sonar_ltu_ap_model_ids() -> PyResult<Vec<String>> {
let config = lqos_config::load_config().unwrap();
Ok(config.sonar_integration.ltu_model_ids)
}
#[pyfunction]
fn sonar_active_status_ids() -> PyResult<Vec<String>> {
let config = lqos_config::load_config().unwrap();
Ok(config.sonar_integration.active_status_ids)
}
#[pyfunction]
fn influx_db_enabled() -> PyResult<bool> {
let config = lqos_config::load_config().unwrap();
Ok(config.influxdb.enable_influxdb)
}
#[pyfunction]
fn influx_db_bucket() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.influxdb.bucket)
}
#[pyfunction]
fn influx_db_org() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.influxdb.org)
}
#[pyfunction]
fn influx_db_token() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.influxdb.token)
}
#[pyfunction]
fn influx_db_url() -> PyResult<String> {
let config = lqos_config::load_config().unwrap();
Ok(config.influxdb.url)
}


@ -1,6 +1,5 @@
use super::{queue_node::QueueNode, QueueStructureError}; use super::{queue_node::QueueNode, QueueStructureError};
use log::error; use log::error;
use lqos_config::EtcLqos;
use serde_json::Value; use serde_json::Value;
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
@ -10,7 +9,7 @@ pub struct QueueNetwork {
impl QueueNetwork { impl QueueNetwork {
pub fn path() -> Result<PathBuf, QueueStructureError> { pub fn path() -> Result<PathBuf, QueueStructureError> {
let cfg = EtcLqos::load(); let cfg = lqos_config::load_config();
if cfg.is_err() { if cfg.is_err() {
error!("unable to read /etc/lqos.conf"); error!("unable to read /etc/lqos.conf");
return Err(QueueStructureError::LqosConf); return Err(QueueStructureError::LqosConf);


@ -3,7 +3,6 @@ use crate::{
queue_store::QueueStore, tracking::reader::read_named_queue_from_interface, queue_store::QueueStore, tracking::reader::read_named_queue_from_interface,
}; };
use log::info; use log::info;
use lqos_config::LibreQoSConfig;
use lqos_utils::fdtimer::periodic; use lqos_utils::fdtimer::periodic;
mod reader; mod reader;
mod watched_queues; mod watched_queues;
@ -16,7 +15,7 @@ fn track_queues() {
//info!("No queues marked for read."); //info!("No queues marked for read.");
return; // There's nothing to do - bail out fast return; // There's nothing to do - bail out fast
} }
let config = LibreQoSConfig::load(); let config = lqos_config::load_config();
if config.is_err() { if config.is_err() {
//warn!("Unable to read LibreQoS config. Skipping queue collection cycle."); //warn!("Unable to read LibreQoS config. Skipping queue collection cycle.");
return; return;
@ -25,22 +24,22 @@ fn track_queues() {
WATCHED_QUEUES.iter_mut().for_each(|q| { WATCHED_QUEUES.iter_mut().for_each(|q| {
let (circuit_id, download_class, upload_class) = q.get(); let (circuit_id, download_class, upload_class) = q.get();
let (download, upload) = if config.on_a_stick_mode { let (download, upload) = if config.on_a_stick_mode() {
( (
read_named_queue_from_interface( read_named_queue_from_interface(
&config.internet_interface, &config.internet_interface(),
download_class, download_class,
), ),
read_named_queue_from_interface( read_named_queue_from_interface(
&config.internet_interface, &config.internet_interface(),
upload_class, upload_class,
), ),
) )
} else { } else {
( (
read_named_queue_from_interface(&config.isp_interface, download_class), read_named_queue_from_interface(&config.isp_interface(), download_class),
read_named_queue_from_interface( read_named_queue_from_interface(
&config.internet_interface, &config.internet_interface(),
download_class, download_class,
), ),
) )
@ -83,7 +82,7 @@ pub fn spawn_queue_monitor() {
std::thread::spawn(|| { std::thread::spawn(|| {
// Setup the queue monitor loop // Setup the queue monitor loop
info!("Starting Queue Monitor Thread."); info!("Starting Queue Monitor Thread.");
let interval_ms = if let Ok(config) = lqos_config::EtcLqos::load() { let interval_ms = if let Ok(config) = lqos_config::load_config() {
config.queue_check_period_ms config.queue_check_period_ms
} else { } else {
1000 1000


@ -8,3 +8,5 @@ license = "GPL-2.0-only"
colored = "2" colored = "2"
default-net = "0" # For obtaining an easy-to-use NIC list default-net = "0" # For obtaining an easy-to-use NIC list
uuid = { version = "1", features = ["v4", "fast-rng" ] } uuid = { version = "1", features = ["v4", "fast-rng" ] }
lqos_config = { path = "../lqos_config" }
toml = "0.8.8"


@ -52,7 +52,6 @@ pub fn read_line_as_number() -> u32 {
} }
const LQOS_CONF: &str = "/etc/lqos.conf"; const LQOS_CONF: &str = "/etc/lqos.conf";
const ISP_CONF: &str = "/opt/libreqos/src/ispConfig.py";
const NETWORK_JSON: &str = "/opt/libreqos/src/network.json"; const NETWORK_JSON: &str = "/opt/libreqos/src/network.json";
const SHAPED_DEVICES: &str = "/opt/libreqos/src/ShapedDevices.csv"; const SHAPED_DEVICES: &str = "/opt/libreqos/src/ShapedDevices.csv";
const LQUSERS: &str = "/opt/libreqos/src/lqusers.toml"; const LQUSERS: &str = "/opt/libreqos/src/lqusers.toml";
@ -116,82 +115,6 @@ fn get_bandwidth(up: bool) -> u32 {
} }
} }
const ETC_LQOS_CONF: &str = "lqos_directory = '/opt/libreqos/src'
queue_check_period_ms = 1000
node_id = \"{NODE_ID}\"
[tuning]
stop_irq_balance = true
netdev_budget_usecs = 8000
netdev_budget_packets = 300
rx_usecs = 8
tx_usecs = 8
disable_rxvlan = true
disable_txvlan = true
disable_offload = [ \"gso\", \"tso\", \"lro\", \"sg\", \"gro\" ]
[bridge]
use_xdp_bridge = true
interface_mapping = [
{ name = \"{INTERNET}\", redirect_to = \"{ISP}\", scan_vlans = false },
{ name = \"{ISP}\", redirect_to = \"{INTERNET}\", scan_vlans = false }
]
vlan_mapping = []
[usage_stats]
send_anonymous = {ALLOW_ANONYMOUS}
anonymous_server = \"stats.libreqos.io:9125\"
";
fn write_etc_lqos_conf(internet: &str, isp: &str, allow_anonymous: bool) {
let new_id = Uuid::new_v4().to_string();
let output =
ETC_LQOS_CONF.replace("{INTERNET}", internet).replace("{ISP}", isp)
.replace("{NODE_ID}", &new_id)
.replace("{ALLOW_ANONYMOUS}", &allow_anonymous.to_string());
fs::write(LQOS_CONF, output).expect("Unable to write file");
}
pub fn write_isp_config_py(
dir: &str,
download: u32,
upload: u32,
lan: &str,
internet: &str,
) {
// Copy ispConfig.example.py to ispConfig.py
let orig = format!("{dir}ispConfig.example.py");
let dest = format!("{dir}ispConfig.py");
std::fs::copy(orig, &dest).unwrap();
let config_file = std::fs::read_to_string(&dest).unwrap();
let mut new_config_file = String::new();
config_file.split('\n').for_each(|line| {
if line.starts_with('#') {
new_config_file += line;
new_config_file += "\n";
} else if line.contains("upstreamBandwidthCapacityDownloadMbps") {
new_config_file +=
&format!("upstreamBandwidthCapacityDownloadMbps = {download}\n");
} else if line.contains("upstreamBandwidthCapacityUploadMbps") {
new_config_file +=
&format!("upstreamBandwidthCapacityUploadMbps = {upload}\n");
} else if line.contains("interfaceA") {
new_config_file += &format!("interfaceA = \"{lan}\"\n");
} else if line.contains("interfaceB") {
new_config_file += &format!("interfaceB = \"{internet}\"\n");
} else if line.contains("generatedPNDownloadMbps") {
new_config_file += &format!("generatedPNDownloadMbps = {download}\n");
} else if line.contains("generatedPNUploadMbps") {
new_config_file += &format!("generatedPNUploadMbps = {upload}\n");
} else {
new_config_file += line;
new_config_file += "\n";
}
});
std::fs::write(&dest, new_config_file).unwrap();
}
fn write_network_json() { fn write_network_json() {
let output = "{}\n"; let output = "{}\n";
fs::write(NETWORK_JSON, output).expect("Unable to write file"); fs::write(NETWORK_JSON, output).expect("Unable to write file");
@ -223,6 +146,26 @@ fn anonymous() -> bool {
} }
} }
fn write_combined_config(
to_internet: &str,
to_network: &str,
download: u32,
upload: u32,
allow_anonymous: bool,
) {
let mut config = lqos_config::Config::default();
config.node_id = lqos_config::Config::calculate_node_id();
config.single_interface = None;
config.bridge = Some(lqos_config::BridgeConfig { use_xdp_bridge: true, to_internet: to_internet.to_string(), to_network: to_network.to_string() });
config.queues.downlink_bandwidth_mbps = download;
config.queues.uplink_bandwidth_mbps = upload;
config.queues.generated_pn_download_mbps = download;
config.queues.generated_pn_upload_mbps = upload;
config.usage_stats.send_anonymous = allow_anonymous;
let raw = toml::to_string_pretty(&config).unwrap();
std::fs::write("/etc/lqos.conf", raw).unwrap();
}
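`write_combined_config` serializes a default `Config` with the chosen interfaces and bandwidths filled in. A sketch of the resulting `/etc/lqos.conf` (section and key names taken from the fields set above; values illustrative):

```toml
node_id = "generated-by-calculate_node_id"

[bridge]
use_xdp_bridge = true
to_internet = "eth0"
to_network = "eth1"

[queues]
downlink_bandwidth_mbps = 1000
uplink_bandwidth_mbps = 1000
generated_pn_download_mbps = 1000
generated_pn_upload_mbps = 1000

[usage_stats]
send_anonymous = true
```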
fn main() { fn main() {
println!("{:^80}", "LibreQoS 1.4 Setup Assistant".yellow().on_blue()); println!("{:^80}", "LibreQoS 1.4 Setup Assistant".yellow().on_blue());
println!(); println!();
@ -237,26 +180,11 @@ fn main() {
); );
get_internet_interface(&interfaces, &mut if_internet); get_internet_interface(&interfaces, &mut if_internet);
get_isp_interface(&interfaces, &mut if_isp); get_isp_interface(&interfaces, &mut if_isp);
let allow_anonymous = anonymous();
if let (Some(internet), Some(isp)) = (&if_internet, &if_isp) {
write_etc_lqos_conf(internet, isp, allow_anonymous);
}
}
if should_build(ISP_CONF) {
println!("{}{}", ISP_CONF.cyan(), "does not exist, building one.".white());
get_internet_interface(&interfaces, &mut if_internet);
get_isp_interface(&interfaces, &mut if_isp);
let upload = get_bandwidth(true); let upload = get_bandwidth(true);
let download = get_bandwidth(false); let download = get_bandwidth(false);
let allow_anonymous = anonymous();
if let (Some(internet), Some(isp)) = (&if_internet, &if_isp) { if let (Some(internet), Some(isp)) = (&if_internet, &if_isp) {
write_isp_config_py( write_combined_config(internet, isp, download, upload, allow_anonymous);
"/opt/libreqos/src/",
download,
upload,
isp,
internet,
)
} }
} }


@ -1,7 +1,6 @@
use crate::{bpf_map::BpfMap, lqos_kernel::interface_name_to_index}; use crate::{bpf_map::BpfMap, lqos_kernel::interface_name_to_index};
use anyhow::Result; use anyhow::Result;
use log::info; use log::info;
use lqos_config::{BridgeInterface, BridgeVlan};
#[repr(C)] #[repr(C)]
#[derive(Default, Clone, Debug)] #[derive(Default, Clone, Debug)]
@ -31,10 +30,86 @@ pub(crate) fn clear_bifrost() -> Result<()> {
Ok(()) Ok(())
} }
pub(crate) fn map_interfaces(mappings: &[BridgeInterface]) -> Result<()> { pub(crate) fn map_multi_interface_mode(
to_internet: &str,
to_lan: &str,
) -> Result<()> {
info!("Interface maps (multi-interface)");
let mut interface_map =
BpfMap::<u32, BifrostInterface>::from_path(INTERFACE_PATH)?;
// Internet
let mut from = interface_name_to_index(to_internet)?;
let redirect_to = interface_name_to_index(to_lan)?;
let mut mapping = BifrostInterface {
redirect_to,
scan_vlans: 0,
};
interface_map.insert(&mut from, &mut mapping)?;
info!("Mapped bifrost interface {}->{}", from, redirect_to);
// LAN
let mut from = interface_name_to_index(to_lan)?;
let redirect_to = interface_name_to_index(to_internet)?;
let mut mapping = BifrostInterface {
redirect_to,
scan_vlans: 0,
};
interface_map.insert(&mut from, &mut mapping)?;
info!("Mapped bifrost interface {}->{}", from, redirect_to);
Ok(())
}
pub(crate) fn map_single_interface_mode(
interface: &str,
internet_vlan: u32,
lan_vlan: u32,
) -> Result<()> {
info!("Interface maps (single interface)");
let mut interface_map =
BpfMap::<u32, BifrostInterface>::from_path(INTERFACE_PATH)?;
let mut vlan_map = BpfMap::<u32, BifrostVlan>::from_path(VLAN_PATH)?;
// Internet
let mut from = interface_name_to_index(interface)?;
let redirect_to = interface_name_to_index(interface)?;
let mut mapping = BifrostInterface {
redirect_to,
scan_vlans: 1,
};
interface_map.insert(&mut from, &mut mapping)?;
info!("Mapped bifrost interface {}->{}", from, redirect_to);
// VLANs - Internet
let mut key: u32 = (interface_name_to_index(&interface)? << 16) | internet_vlan;
let mut val = BifrostVlan { redirect_to: mapping.redirect_to };
vlan_map.insert(&mut key, &mut val)?;
info!(
"Mapped bifrost VLAN: {}:{} => {}",
interface, internet_vlan, lan_vlan
);
info!("{key}");
// VLANs - LAN
let mut key: u32 = (interface_name_to_index(&interface)? << 16) | lan_vlan;
let mut val = BifrostVlan { redirect_to: mapping.redirect_to };
vlan_map.insert(&mut key, &mut val)?;
info!(
"Mapped bifrost VLAN: {}:{} => {}",
interface, lan_vlan, internet_vlan
);
info!("{key}");
Ok(())
}
/*pub(crate) fn map_interfaces(mappings: &[&str]) -> Result<()> {
info!("Interface maps"); info!("Interface maps");
let mut interface_map = let mut interface_map =
BpfMap::<u32, BifrostInterface>::from_path(INTERFACE_PATH)?; BpfMap::<u32, BifrostInterface>::from_path(INTERFACE_PATH)?;
for mapping in mappings.iter() { for mapping in mappings.iter() {
// Key is the parent interface // Key is the parent interface
let mut from = interface_name_to_index(&mapping.name)?; let mut from = interface_name_to_index(&mapping.name)?;
@ -67,4 +142,4 @@ pub(crate) fn map_vlans(mappings: &[BridgeVlan]) -> Result<()> {
info!("{key}"); info!("{key}");
} }
Ok(()) Ok(())
} }*/


@ -205,21 +205,22 @@ pub fn attach_xdp_and_tc_to_interface(
} }
// Attach to the ingress IF it is configured // Attach to the ingress IF it is configured
if let Ok(etc) = lqos_config::EtcLqos::load() { if let Ok(etc) = lqos_config::load_config() {
if let Some(bridge) = &etc.bridge { if let Some(bridge) = &etc.bridge {
if bridge.use_xdp_bridge { if bridge.use_xdp_bridge {
// Enable "promiscuous" mode on interfaces // Enable "promiscuous" mode on interfaces
for mapping in bridge.interface_mapping.iter() { info!("Enabling promiscuous mode on {}", &bridge.to_internet);
info!("Enabling promiscuous mode on {}", &mapping.name); std::process::Command::new("/bin/ip")
std::process::Command::new("/bin/ip") .args(["link", "set", &bridge.to_internet, "promisc", "on"])
.args(["link", "set", &mapping.name, "promisc", "on"]) .output()?;
.output()?; info!("Enabling promiscuous mode on {}", &bridge.to_network);
} std::process::Command::new("/bin/ip")
.args(["link", "set", &bridge.to_network, "promisc", "on"])
.output()?;
// Build the interface and vlan map entries // Build the interface and vlan map entries
crate::bifrost_maps::clear_bifrost()?; crate::bifrost_maps::clear_bifrost()?;
crate::bifrost_maps::map_interfaces(&bridge.interface_mapping)?; crate::bifrost_maps::map_multi_interface_mode(&bridge.to_internet, &bridge.to_network)?;
crate::bifrost_maps::map_vlans(&bridge.vlan_mapping)?;
// Actually attach the TC ingress program // Actually attach the TC ingress program
let error = unsafe { let error = unsafe {
@ -230,6 +231,26 @@ pub fn attach_xdp_and_tc_to_interface(
} }
} }
} }
if let Some(stick) = &etc.single_interface {
// Enable "promiscuous" mode on interface
info!("Enabling promiscuous mode on {}", &stick.interface);
std::process::Command::new("/bin/ip")
.args(["link", "set", &stick.interface, "promisc", "on"])
.output()?;
// Build the interface and vlan map entries
crate::bifrost_maps::clear_bifrost()?;
crate::bifrost_maps::map_single_interface_mode(&stick.interface, stick.internet_vlan as u32, stick.network_vlan as u32)?;
// Actually attach the TC ingress program
let error = unsafe {
bpf::tc_attach_ingress(interface_index as i32, false, skeleton)
};
if error != 0 {
return Err(Error::msg("Unable to attach TC Ingress to interface"));
}
}
} }


@ -3,7 +3,7 @@
**The LibreQoS Daemon** is designed to run as a `systemd` service at all times. It provides: **The LibreQoS Daemon** is designed to run as a `systemd` service at all times. It provides:
* Load/Unload the XDP/TC programs (they unload when the program exits) * Load/Unload the XDP/TC programs (they unload when the program exits)
* Configure XDP/TC, based on the content of `ispConfig.py`. * Configure XDP/TC, based on the content of `/etc/lqos.conf`.
* Includes support for "on a stick" mode, via the `[single_interface]` section of `/etc/lqos.conf`. * Includes support for "on a stick" mode, via the `[single_interface]` section of `/etc/lqos.conf`.
* Hosts a lightweight server offering "bus" queries for clients (such as `lqtop` and `xdp_iphash_to_cpu_cmdline`). * Hosts a lightweight server offering "bus" queries for clients (such as `lqtop` and `xdp_iphash_to_cpu_cmdline`).
* See the `lqos_bus` sub-project for bus details. * See the `lqos_bus` sub-project for bus details.


@ -2,26 +2,23 @@ mod lshw;
mod version; mod version;
use std::{time::Duration, net::TcpStream, io::Write}; use std::{time::Duration, net::TcpStream, io::Write};
use lqos_bus::anonymous::{AnonymousUsageV1, build_stats}; use lqos_bus::anonymous::{AnonymousUsageV1, build_stats};
use lqos_config::{EtcLqos, LibreQoSConfig};
use lqos_sys::num_possible_cpus; use lqos_sys::num_possible_cpus;
use sysinfo::{System, SystemExt, CpuExt}; use sysinfo::System;
use crate::{shaped_devices_tracker::{SHAPED_DEVICES, NETWORK_JSON}, stats::{HIGH_WATERMARK_DOWN, HIGH_WATERMARK_UP}}; use crate::{shaped_devices_tracker::{SHAPED_DEVICES, NETWORK_JSON}, stats::{HIGH_WATERMARK_DOWN, HIGH_WATERMARK_UP}};
const SLOW_START_SECS: u64 = 1; const SLOW_START_SECS: u64 = 1;
const INTERVAL_SECS: u64 = 60 * 60 * 24; const INTERVAL_SECS: u64 = 60 * 60 * 24;
pub async fn start_anonymous_usage() { pub async fn start_anonymous_usage() {
if let Ok(cfg) = EtcLqos::load() { if let Ok(cfg) = lqos_config::load_config() {
if let Some(usage) = cfg.usage_stats { if cfg.usage_stats.send_anonymous {
if usage.send_anonymous { std::thread::spawn(|| {
std::thread::spawn(|| { std::thread::sleep(Duration::from_secs(SLOW_START_SECS));
std::thread::sleep(Duration::from_secs(SLOW_START_SECS)); loop {
loop { let _ = anonymous_usage_dump();
let _ = anonymous_usage_dump(); std::thread::sleep(Duration::from_secs(INTERVAL_SECS));
std::thread::sleep(Duration::from_secs(INTERVAL_SECS)); }
} });
});
}
} }
} }
} }
@ -33,7 +30,7 @@ fn anonymous_usage_dump() -> anyhow::Result<()> {
sys.refresh_all(); sys.refresh_all();
data.total_memory = sys.total_memory(); data.total_memory = sys.total_memory();
data.available_memory = sys.available_memory(); data.available_memory = sys.available_memory();
if let Some(kernel) = sys.kernel_version() { if let Some(kernel) = sysinfo::System::kernel_version() {
data.kernel_version = kernel; data.kernel_version = kernel;
} }
data.usable_cores = num_possible_cpus().unwrap_or(0); data.usable_cores = num_possible_cpus().unwrap_or(0);
@ -52,30 +49,24 @@ fn anonymous_usage_dump() -> anyhow::Result<()> {
data.distro = pv.trim().to_string(); data.distro = pv.trim().to_string();
} }
if let Ok(cfg) = LibreQoSConfig::load() { if let Ok(cfg) = lqos_config::load_config() {
data.sqm = cfg.sqm; data.sqm = cfg.queues.default_sqm.clone();
data.monitor_mode = cfg.monitor_mode; data.monitor_mode = cfg.queues.monitor_only;
data.total_capacity = ( data.total_capacity = (
cfg.total_download_mbps, cfg.queues.downlink_bandwidth_mbps,
cfg.total_upload_mbps, cfg.queues.uplink_bandwidth_mbps,
); );
data.generated_pdn_capacity = ( data.generated_pdn_capacity = (
cfg.generated_download_mbps, cfg.queues.generated_pn_download_mbps,
cfg.generated_upload_mbps, cfg.queues.generated_pn_upload_mbps,
); );
data.on_a_stick = cfg.on_a_stick_mode; data.on_a_stick = cfg.on_a_stick_mode();
}
if let Ok(cfg) = EtcLqos::load() { data.node_id = cfg.node_id.clone();
if let Some(node_id) = cfg.node_id { if let Some(bridge) = cfg.bridge {
data.node_id = node_id; data.using_xdp_bridge = bridge.use_xdp_bridge;
if let Some(bridge) = cfg.bridge {
data.using_xdp_bridge = bridge.use_xdp_bridge;
}
}
if let Some(anon) = cfg.usage_stats {
server = anon.anonymous_server;
} }
server = cfg.usage_stats.anonymous_server;
} }
data.git_hash = env!("GIT_HASH").to_string(); data.git_hash = env!("GIT_HASH").to_string();


@ -6,7 +6,7 @@ use std::{
io::{Read, Write}, io::{Read, Write},
path::Path, path::Path,
}; };
use sysinfo::{ProcessExt, System, SystemExt}; use sysinfo::System;
const LOCK_PATH: &str = "/run/lqos/lqosd.lock"; const LOCK_PATH: &str = "/run/lqos/lqosd.lock";
const LOCK_DIR: &str = "/run/lqos"; const LOCK_DIR: &str = "/run/lqos";


@ -17,7 +17,6 @@ use crate::{
use anyhow::Result; use anyhow::Result;
use log::{info, warn}; use log::{info, warn};
use lqos_bus::{BusRequest, BusResponse, UnixSocketServer, StatsRequest}; use lqos_bus::{BusRequest, BusResponse, UnixSocketServer, StatsRequest};
use lqos_config::LibreQoSConfig;
use lqos_heimdall::{n_second_packet_dump, perf_interface::heimdall_handle_events, start_heimdall}; use lqos_heimdall::{n_second_packet_dump, perf_interface::heimdall_handle_events, start_heimdall};
use lqos_queue_tracker::{ use lqos_queue_tracker::{
add_watched_queue, get_raw_circuit_data, spawn_queue_monitor, add_watched_queue, get_raw_circuit_data, spawn_queue_monitor,
@ -57,19 +56,19 @@ async fn main() -> Result<()> {
} }
info!("LibreQoS Daemon Starting"); info!("LibreQoS Daemon Starting");
let config = LibreQoSConfig::load()?; let config = lqos_config::load_config()?;
tuning::tune_lqosd_from_config_file(&config)?; tuning::tune_lqosd_from_config_file()?;
// Start the XDP/TC kernels // Start the XDP/TC kernels
let kernels = if config.on_a_stick_mode { let kernels = if config.on_a_stick_mode() {
LibreQoSKernels::on_a_stick_mode( LibreQoSKernels::on_a_stick_mode(
&config.internet_interface, &config.internet_interface(),
config.stick_vlans.1, config.stick_vlans().1 as u16,
config.stick_vlans.0, config.stick_vlans().0 as u16,
Some(heimdall_handle_events), Some(heimdall_handle_events),
)? )?
} else { } else {
LibreQoSKernels::new(&config.internet_interface, &config.isp_interface, Some(heimdall_handle_events))? LibreQoSKernels::new(&config.internet_interface(), &config.isp_interface(), Some(heimdall_handle_events))?
}; };
// Spawn tracking sub-systems // Spawn tracking sub-systems
@ -107,13 +106,9 @@ async fn main() -> Result<()> {
} }
SIGHUP => { SIGHUP => {
warn!("Reloading configuration because of SIGHUP"); warn!("Reloading configuration because of SIGHUP");
if let Ok(config) = LibreQoSConfig::load() { let result = tuning::tune_lqosd_from_config_file();
let result = tuning::tune_lqosd_from_config_file(&config); if let Err(err) = result {
if let Err(err) = result { warn!("Unable to HUP tunables: {:?}", err)
warn!("Unable to HUP tunables: {:?}", err)
}
} else {
warn!("Unable to reload configuration");
} }
} }
_ => warn!("No handler for signal: {sig}"), _ => warn!("No handler for signal: {sig}"),


@ -1,26 +1,23 @@
mod offloads; mod offloads;
use anyhow::Result; use anyhow::Result;
use lqos_bus::{BusRequest, BusResponse}; use lqos_bus::{BusRequest, BusResponse};
use lqos_config::{EtcLqos, LibreQoSConfig};
use lqos_queue_tracker::set_queue_refresh_interval; use lqos_queue_tracker::set_queue_refresh_interval;
pub fn tune_lqosd_from_config_file(config: &LibreQoSConfig) -> Result<()> { pub fn tune_lqosd_from_config_file() -> Result<()> {
let etc_lqos = EtcLqos::load()?; let config = lqos_config::load_config()?;
// Disable offloading // Disable offloading
if let Some(tuning) = &etc_lqos.tuning { offloads::bpf_sysctls();
offloads::bpf_sysctls(); if config.tuning.stop_irq_balance {
if tuning.stop_irq_balance { offloads::stop_irq_balance();
offloads::stop_irq_balance();
}
offloads::netdev_budget(
tuning.netdev_budget_usecs,
tuning.netdev_budget_packets,
);
offloads::ethtool_tweaks(&config.internet_interface, tuning);
offloads::ethtool_tweaks(&config.isp_interface, tuning);
} }
let interval = etc_lqos.queue_check_period_ms; offloads::netdev_budget(
config.tuning.netdev_budget_usecs,
config.tuning.netdev_budget_packets,
);
offloads::ethtool_tweaks(&config.internet_interface(), &config.tuning);
offloads::ethtool_tweaks(&config.isp_interface(), &config.tuning);
let interval = config.queue_check_period_ms;
set_queue_refresh_interval(interval); set_queue_refresh_interval(interval);
Ok(()) Ok(())
} }
@ -29,7 +26,7 @@ pub fn tune_lqosd_from_bus(request: &BusRequest) -> BusResponse {
match request { match request {
BusRequest::UpdateLqosDTuning(interval, tuning) => { BusRequest::UpdateLqosDTuning(interval, tuning) => {
// Real-time tuning changes. Probably dangerous. // Real-time tuning changes. Probably dangerous.
if let Ok(config) = LibreQoSConfig::load() { if let Ok(config) = lqos_config::load_config() {
if tuning.stop_irq_balance { if tuning.stop_irq_balance {
offloads::stop_irq_balance(); offloads::stop_irq_balance();
} }
@ -37,8 +34,8 @@ pub fn tune_lqosd_from_bus(request: &BusRequest) -> BusResponse {
tuning.netdev_budget_usecs, tuning.netdev_budget_usecs,
tuning.netdev_budget_packets, tuning.netdev_budget_packets,
); );
offloads::ethtool_tweaks(&config.internet_interface, tuning); offloads::ethtool_tweaks(&config.internet_interface(), &config.tuning);
offloads::ethtool_tweaks(&config.isp_interface, tuning); offloads::ethtool_tweaks(&config.isp_interface(), &config.tuning);
} }
set_queue_refresh_interval(*interval); set_queue_refresh_interval(*interval);
lqos_bus::BusResponse::Ack lqos_bus::BusResponse::Ack


@ -1,11 +1,10 @@
use once_cell::sync::Lazy; use once_cell::sync::Lazy;
use sysinfo::{System, SystemExt}; use sysinfo::System;
use tokio::sync::Mutex; use tokio::sync::Mutex;
static SYS: Lazy<Mutex<System>> = Lazy::new(|| Mutex::new(System::new_all())); static SYS: Lazy<Mutex<System>> = Lazy::new(|| Mutex::new(System::new_all()));
pub(crate) async fn get_cpu_ram() -> (Vec<u32>, u32) { pub(crate) async fn get_cpu_ram() -> (Vec<u32>, u32) {
use sysinfo::CpuExt;
let mut lock = SYS.lock().await; let mut lock = SYS.lock().await;
lock.refresh_cpu(); lock.refresh_cpu();
lock.refresh_memory(); lock.refresh_memory();


@ -7,12 +7,22 @@
//! lose min/max values. //! lose min/max values.
use super::StatsUpdateMessage; use super::StatsUpdateMessage;
use crate::{collector::{collation::{collate_stats, StatsSession}, SESSION_BUFFER, uisp_ext::gather_uisp_data}, submission_queue::{enqueue_shaped_devices_if_allowed, comm_channel::{SenderChannelMessage, start_communication_channel}}}; use crate::{
use lqos_config::EtcLqos; collector::{
collation::{collate_stats, StatsSession},
uisp_ext::gather_uisp_data,
SESSION_BUFFER,
},
submission_queue::{
comm_channel::{start_communication_channel, SenderChannelMessage},
enqueue_shaped_devices_if_allowed,
},
};
use dashmap::DashSet;
use lqos_config::load_config;
use once_cell::sync::Lazy; use once_cell::sync::Lazy;
use std::{sync::atomic::AtomicU64, time::Duration}; use std::{sync::atomic::AtomicU64, time::Duration};
use tokio::sync::mpsc::{self, Receiver, Sender}; use tokio::sync::mpsc::{self, Receiver, Sender};
use dashmap::DashSet;
static STATS_COUNTER: AtomicU64 = AtomicU64::new(0); static STATS_COUNTER: AtomicU64 = AtomicU64::new(0);
pub(crate) static DEVICE_ID_LIST: Lazy<DashSet<String>> = Lazy::new(DashSet::new); pub(crate) static DEVICE_ID_LIST: Lazy<DashSet<String>> = Lazy::new(DashSet::new);
@ -22,21 +32,21 @@ pub(crate) static DEVICE_ID_LIST: Lazy<DashSet<String>> = Lazy::new(DashSet::new
/// ///
/// Returns a channel that may be used to notify of data availability. /// Returns a channel that may be used to notify of data availability.
pub async fn start_long_term_stats() -> Sender<StatsUpdateMessage> { pub async fn start_long_term_stats() -> Sender<StatsUpdateMessage> {
let (update_tx, mut update_rx): (Sender<StatsUpdateMessage>, Receiver<StatsUpdateMessage>) = mpsc::channel(10); let (update_tx, mut update_rx): (Sender<StatsUpdateMessage>, Receiver<StatsUpdateMessage>) =
let (comm_tx, comm_rx): (Sender<SenderChannelMessage>, Receiver<SenderChannelMessage>) = mpsc::channel(10); mpsc::channel(10);
let (comm_tx, comm_rx): (Sender<SenderChannelMessage>, Receiver<SenderChannelMessage>) =
mpsc::channel(10);
if let Ok(cfg) = lqos_config::EtcLqos::load() { if let Ok(cfg) = load_config() {
if let Some(lts) = cfg.long_term_stats { if !cfg.long_term_stats.gather_stats {
if !lts.gather_stats { // Wire up a null recipient to the channel, so it receives messages
// Wire up a null recipient to the channel, so it receives messages // but doesn't do anything with them.
// but doesn't do anything with them. tokio::spawn(async move {
tokio::spawn(async move { while let Some(_msg) = update_rx.recv().await {
while let Some(_msg) = update_rx.recv().await { // Do nothing
// Do nothing }
} });
}); return update_tx;
return update_tx;
}
} }
} }
@ -85,7 +95,10 @@ async fn lts_manager(mut rx: Receiver<StatsUpdateMessage>, comm_tx: Sender<Sende
shaped_devices.iter().for_each(|d| { shaped_devices.iter().for_each(|d| {
DEVICE_ID_LIST.insert(d.device_id.clone()); DEVICE_ID_LIST.insert(d.device_id.clone());
}); });
tokio::spawn(enqueue_shaped_devices_if_allowed(shaped_devices, comm_tx.clone())); tokio::spawn(enqueue_shaped_devices_if_allowed(
shaped_devices,
comm_tx.clone(),
));
} }
Some(StatsUpdateMessage::CollationTime) => { Some(StatsUpdateMessage::CollationTime) => {
log::info!("Collation time reached"); log::info!("Collation time reached");
@ -108,20 +121,18 @@ async fn lts_manager(mut rx: Receiver<StatsUpdateMessage>, comm_tx: Sender<Sende
} }
fn get_collation_period() -> Duration { fn get_collation_period() -> Duration {
if let Ok(cfg) = EtcLqos::load() { if let Ok(cfg) = load_config() {
if let Some(lts) = &cfg.long_term_stats { return Duration::from_secs(cfg.long_term_stats.collation_period_seconds.into());
return Duration::from_secs(lts.collation_period_seconds.into());
}
} }
Duration::from_secs(60) Duration::from_secs(60)
} }
fn get_uisp_collation_period() -> Option<Duration> { fn get_uisp_collation_period() -> Option<Duration> {
if let Ok(cfg) = EtcLqos::load() { if let Ok(cfg) = load_config() {
if let Some(lts) = &cfg.long_term_stats { return Some(Duration::from_secs(
return Some(Duration::from_secs(lts.uisp_reporting_interval_seconds.unwrap_or(300))); cfg.long_term_stats.uisp_reporting_interval_seconds.unwrap_or(300),
} ));
} }
None None
@ -136,7 +147,10 @@ async fn uisp_collection_manager(control_tx: Sender<StatsUpdateMessage>) {
if let Some(period) = get_uisp_collation_period() { if let Some(period) = get_uisp_collation_period() {
log::info!("Starting UISP poller with period {:?}", period); log::info!("Starting UISP poller with period {:?}", period);
loop { loop {
control_tx.send(StatsUpdateMessage::UispCollationTime).await.unwrap(); control_tx
.send(StatsUpdateMessage::UispCollationTime)
.await
.unwrap();
tokio::time::sleep(period).await; tokio::time::sleep(period).await;
} }
} else { } else {


@ -1,6 +1,6 @@
use super::{queue_node::QueueNode, QueueStructureError}; use super::{queue_node::QueueNode, QueueStructureError};
use log::error; use log::error;
use lqos_config::EtcLqos; use lqos_config::load_config;
use serde_json::Value; use serde_json::Value;
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
@ -10,7 +10,7 @@ pub struct QueueNetwork {
impl QueueNetwork { impl QueueNetwork {
pub fn path() -> Result<PathBuf, QueueStructureError> { pub fn path() -> Result<PathBuf, QueueStructureError> {
let cfg = EtcLqos::load(); let cfg = load_config();
if cfg.is_err() { if cfg.is_err() {
error!("unable to read /etc/lqos.conf"); error!("unable to read /etc/lqos.conf");
return Err(QueueStructureError::LqosConf); return Err(QueueStructureError::LqosConf);


```diff
@@ -1,3 +1,4 @@
+use lqos_config::load_config;
 use tokio::sync::Mutex;
 use once_cell::sync::Lazy;
 use super::CakeStats;
@@ -23,10 +24,10 @@ impl CakeTracker {
     }

     pub(crate) async fn update(&mut self) -> Option<(Vec<CakeStats>, Vec<CakeStats>)> {
-        if let Ok(cfg) = lqos_config::LibreQoSConfig::load() {
-            let outbound = &cfg.internet_interface;
-            let inbound = &cfg.isp_interface;
-            if cfg.on_a_stick_mode {
+        if let Ok(cfg) = load_config() {
+            let outbound = &cfg.internet_interface();
+            let inbound = &cfg.isp_interface();
+            if cfg.on_a_stick_mode() {
                 let reader = super::AsyncQueueReader::new(outbound);
                 if let Ok((Some(up), Some(down))) = reader.run_on_a_stick().await {
                     return self.read_up_down(up, down);
```


```diff
@@ -1,3 +1,4 @@
+use lqos_config::load_config;
 use lqos_utils::unix_time::unix_now;
 use tokio::sync::mpsc::Sender;
 use crate::{submission_queue::{comm_channel::SenderChannelMessage, new_submission}, transport_data::{StatsSubmission, UispExtDevice}, collector::collection_manager::DEVICE_ID_LIST};
@@ -9,7 +10,7 @@ pub(crate) async fn gather_uisp_data(comm_tx: Sender<SenderChannelMessage>) {
         return; // We're not ready
     }
-    if let Ok(config) = lqos_config::LibreQoSConfig::load() {
+    if let Ok(config) = load_config() {
         if let Ok(devices) = uisp::load_all_devices_with_interfaces(config).await {
             log::info!("Loaded {} UISP devices", devices.len());
```


```diff
@@ -1,5 +1,5 @@
 use dryoc::{dryocbox::{Nonce, DryocBox}, types::{NewByteArray, ByteArray}};
-use lqos_config::EtcLqos;
+use lqos_config::load_config;
 use thiserror::Error;
 use crate::{transport_data::{LtsCommand, NodeIdAndLicense, HelloVersion2}, submission_queue::queue::QueueError};
 use super::keys::{SERVER_PUBLIC_KEY, KEYPAIR};
@@ -104,17 +104,13 @@ pub(crate) async fn encode_submission(submission: &LtsCommand) -> Result<Vec<u8>
 }

 fn get_license_key_and_node_id(nonce: &Nonce) -> Result<NodeIdAndLicense, QueueError> {
-    let cfg = EtcLqos::load().map_err(|_| QueueError::SendFail)?;
-    if let Some(node_id) = cfg.node_id {
-        if let Some(lts) = &cfg.long_term_stats {
-            if let Some(license_key) = &lts.license_key {
-                return Ok(NodeIdAndLicense {
-                    node_id,
-                    license_key: license_key.clone(),
-                    nonce: *nonce.as_array(),
-                });
-            }
-        }
+    let cfg = load_config().map_err(|_| QueueError::SendFail)?;
+    if let Some(license_key) = &cfg.long_term_stats.license_key {
+        return Ok(NodeIdAndLicense {
+            node_id: cfg.node_id.clone(),
+            license_key: license_key.clone(),
+            nonce: *nonce.as_array(),
+        });
     }
     Err(QueueError::SendFail)
 }
```


```diff
@@ -1,5 +1,5 @@
 use crate::{pki::generate_new_keypair, dryoc::dryocbox::{KeyPair, PublicKey}, transport_data::{exchange_keys_with_license_server, LicenseReply}};
-use lqos_config::EtcLqos;
+use lqos_config::load_config;
 use once_cell::sync::Lazy;
 use tokio::sync::RwLock;
@@ -11,14 +11,14 @@ pub(crate) async fn store_server_public_key(key: &PublicKey) {
 }

 pub(crate) async fn key_exchange() -> bool {
-    let cfg = EtcLqos::load().unwrap();
-    let node_id = cfg.node_id.unwrap();
-    let node_name = if let Some(node_name) = cfg.node_name {
-        node_name
+    let cfg = load_config().unwrap();
+    let node_id = cfg.node_id.clone();
+    let node_name = if !cfg.node_name.is_empty() {
+        cfg.node_name
     } else {
         node_id.clone()
     };
-    let license_key = cfg.long_term_stats.unwrap().license_key.unwrap();
+    let license_key = cfg.long_term_stats.license_key.unwrap();
     let keypair = (KEYPAIR.read().await).clone();
     match exchange_keys_with_license_server(node_id, node_name, license_key, keypair.public_key.clone()).await {
         Ok(LicenseReply::MyPublicKey { public_key }) => {
```


```diff
@@ -1,5 +1,5 @@
 use std::time::Duration;
-use lqos_config::EtcLqos;
+use lqos_config::load_config;
 use tokio::{sync::mpsc::Receiver, time::sleep, net::TcpStream, io::{AsyncWriteExt, AsyncReadExt}};
 use crate::submission_queue::comm_channel::keys::store_server_public_key;
 use self::encode::encode_submission_hello;
@@ -49,24 +49,17 @@ pub(crate) async fn start_communication_channel(mut rx: Receiver<SenderChannelMessage>) {
 async fn connect_if_permitted() -> Result<TcpStream, QueueError> {
     log::info!("Connecting to stats.libreqos.io");
     // Check that we have a local license key and are enabled
-    let cfg = EtcLqos::load().map_err(|_| {
+    let cfg = load_config().map_err(|_| {
         log::error!("Unable to load config file.");
         QueueError::NoLocalLicenseKey
     })?;
-    let node_id = cfg.node_id.ok_or_else(|| {
-        log::warn!("No node ID configured.");
-        QueueError::NoLocalLicenseKey
-    })?;
-    let node_name = cfg.node_name.unwrap_or(node_id.clone());
-    let usage_cfg = cfg.long_term_stats.ok_or_else(|| {
-        log::warn!("Long-term stats are not configured.");
-        QueueError::NoLocalLicenseKey
-    })?;
-    if !usage_cfg.gather_stats {
+    let node_id = cfg.node_id.clone();
+    let node_name = cfg.node_name.clone();
+    if !cfg.long_term_stats.gather_stats {
         log::warn!("Gathering long-term stats is disabled.");
         return Err(QueueError::StatsDisabled);
     }
-    let license_key = usage_cfg.license_key.ok_or_else(|| {
+    let license_key = cfg.long_term_stats.license_key.ok_or_else(|| {
         log::warn!("No license key configured.");
         QueueError::NoLocalLicenseKey
     })?;
```
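The simplified `connect_if_permitted` keeps the `ok_or_else` idiom only for the field that is genuinely still optional, the license key, while node ID and name are now plain fields. As a general sketch (the error and struct definitions here are illustrative stand-ins, not the real ones), `ok_or_else` plus an early return converts a missing `Option` value into a typed error:

```rust
#[derive(Debug, PartialEq)]
enum QueueError {
    NoLocalLicenseKey,
    StatsDisabled,
}

// Illustrative stand-in for the long_term_stats config section.
struct LongTermStats {
    gather_stats: bool,
    license_key: Option<String>,
}

fn license_or_error(lts: &LongTermStats) -> Result<String, QueueError> {
    if !lts.gather_stats {
        // Disabled stats is a distinct, expected outcome.
        return Err(QueueError::StatsDisabled);
    }
    // ok_or_else converts Option<String> into Result<String, QueueError>,
    // running the closure (a good place to log a warning) only on None.
    lts.license_key
        .clone()
        .ok_or_else(|| QueueError::NoLocalLicenseKey)
}

fn main() {
    let lts = LongTermStats { gather_stats: true, license_key: Some("key".into()) };
    assert_eq!(license_or_error(&lts), Ok("key".to_string()));
}
```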


```diff
@@ -1,5 +1,5 @@
 use crate::transport_data::{ask_license_server, LicenseReply, ask_license_server_for_new_account};
-use lqos_config::EtcLqos;
+use lqos_config::load_config;
 use lqos_utils::unix_time::unix_now;
 use once_cell::sync::Lazy;
 use tokio::sync::RwLock;
@@ -45,12 +45,12 @@ const MISERLY_NO_KEY: &str = "IDontSupportDevelopersAndShouldFeelBad";

 async fn check_license(unix_time: u64) -> LicenseState {
     log::info!("Checking LTS stats license");
-    if let Ok(cfg) = EtcLqos::load() {
+    if let Ok(cfg) = load_config() {
         // The config file is good. Is LTS enabled?
         // If it isn't, we need to try very gently to see if a pending
         // request has been submitted.
-        if let Some(cfg) = cfg.long_term_stats {
-            if let Some(key) = cfg.license_key {
+        if cfg.long_term_stats.gather_stats {
+            if let Some(key) = cfg.long_term_stats.license_key {
                 if key == MISERLY_NO_KEY {
                     log::warn!("You are using the self-hosting license key. We'd be happy to sell you a real one.");
                     return LicenseState::Valid { expiry: 0, stats_host: "192.168.100.11:9127".to_string() }
@@ -90,20 +90,15 @@ async fn check_license(unix_time: u64) -> LicenseState {
                 // So we need to check if we have a pending request.
                 // If a license key has been assigned, then we'll setup
                 // LTS. If it hasn't, we'll just return Unknown.
-                if let Some(node_id) = &cfg.node_id {
-                    if let Ok(result) = ask_license_server_for_new_account(node_id.to_string()).await {
-                        if let LicenseReply::NewActivation { license_key } = result {
-                            // We have a new license!
-                            let _ = lqos_config::enable_long_term_stats(license_key);
-                            // Note that we're not doing anything beyond this - the next cycle
-                            // will pick up on there actually being a license
-                        } else {
-                            log::info!("No pending LTS license found");
-                        }
-                    }
-                } else {
-                    // There's no node ID either - we can't talk to this
-                    log::warn!("No NodeID is configured. No online services are possible.");
-                }
+                if let Ok(result) = ask_license_server_for_new_account(cfg.node_id.to_string()).await {
+                    if let LicenseReply::NewActivation { license_key } = result {
+                        // We have a new license!
+                        let _ = lqos_config::enable_long_term_stats(license_key);
+                        // Note that we're not doing anything beyond this - the next cycle
+                        // will pick up on there actually being a license
+                    } else {
+                        log::info!("No pending LTS license found");
+                    }
+                }
             }
         } else {
```


```diff
@@ -1,36 +1,46 @@
+mod data_link;
+mod device; // UISP data definition for a device, including interfaces
 /// UISP Data Structures
 ///
 /// Strong-typed implementation of the UISP API system. Used by long-term
 /// stats to attach device information, possibly in the future used to
 /// accelerate the UISP integration.
 mod rest; // REST HTTP services
 mod site; // UISP data definition for a site, pulled from the JSON
-mod device; // UISP data definition for a device, including interfaces
-mod data_link; // UISP data link definitions
-use lqos_config::LibreQoSConfig;
-pub use site::Site;
-pub use device::Device;
-pub use data_link::DataLink;
+use lqos_config::Config;
+// UISP data link definitions
 use self::rest::nms_request_get_vec;
 use anyhow::Result;
+pub use data_link::DataLink;
+pub use device::Device;
+pub use site::Site;

 /// Loads a complete list of all sites from UISP
-pub async fn load_all_sites(config: LibreQoSConfig) -> Result<Vec<Site>> {
-    Ok(nms_request_get_vec("sites", &config.uisp_auth_token, &config.uisp_base_url).await?)
+pub async fn load_all_sites(config: Config) -> Result<Vec<Site>> {
+    Ok(nms_request_get_vec(
+        "sites",
+        &config.uisp_integration.token,
+        &config.uisp_integration.url,
+    )
+    .await?)
 }

 /// Load all devices from UISP that are authorized, and include their full interface definitions
-pub async fn load_all_devices_with_interfaces(config: LibreQoSConfig) -> Result<Vec<Device>> {
+pub async fn load_all_devices_with_interfaces(config: Config) -> Result<Vec<Device>> {
     Ok(nms_request_get_vec(
         "devices?withInterfaces=true&authorized=true",
-        &config.uisp_auth_token,
-        &config.uisp_base_url,
+        &config.uisp_integration.token,
+        &config.uisp_integration.url,
     )
     .await?)
 }

 /// Loads all data links from UISP (including links in client sites)
-pub async fn load_all_data_links(config: LibreQoSConfig) -> Result<Vec<DataLink>> {
-    Ok(nms_request_get_vec("data-links", &config.uisp_auth_token, &config.uisp_base_url).await?)
+pub async fn load_all_data_links(config: Config) -> Result<Vec<DataLink>> {
+    Ok(nms_request_get_vec(
+        "data-links",
+        &config.uisp_integration.token,
+        &config.uisp_integration.url,
+    )
+    .await?)
 }
```
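The UISP helpers now read credentials from a nested `uisp_integration` section rather than top-level `uisp_auth_token` / `uisp_base_url` fields, grouping every UISP setting under one named section of the unified `/etc/lqos.conf`. A sketch of that grouping (hypothetical types, not the real `lqos_config::Config`; the URL assembly is illustrative only, since the real client sends the token separately):

```rust
// Hypothetical mirror of the nested config section.
struct UispIntegration {
    token: String,
    url: String,
}

struct Config {
    uisp_integration: UispIntegration,
}

fn endpoint(cfg: &Config, resource: &str) -> String {
    // Callers reach through the section (cfg.uisp_integration.url), keeping
    // related settings together instead of scattered top-level fields.
    format!("{}/{}?token={}", cfg.uisp_integration.url, resource, cfg.uisp_integration.token)
}

fn main() {
    let cfg = Config {
        uisp_integration: UispIntegration {
            token: "abc".into(),
            url: "https://uisp.example".into(),
        },
    };
    assert_eq!(endpoint(&cfg, "sites"), "https://uisp.example/sites?token=abc");
}
```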


```diff
@@ -1,27 +1,16 @@
 import time
 import datetime
 from LibreQoS import refreshShapers, refreshShapersUpdateOnly
-from graphInfluxDB import refreshBandwidthGraphs, refreshLatencyGraphs
-from ispConfig import influxDBEnabled, automaticImportUISP, automaticImportSplynx
-try:
-    from ispConfig import queueRefreshIntervalMins
-except:
-    queueRefreshIntervalMins = 30
-if automaticImportUISP:
+#from graphInfluxDB import refreshBandwidthGraphs, refreshLatencyGraphs
+from liblqos_python import automatic_import_uisp, automatic_import_splynx, queue_refresh_interval_mins, \
+    automatic_import_powercode, automatic_import_sonar
+if automatic_import_uisp():
     from integrationUISP import importFromUISP
-if automaticImportSplynx:
+if automatic_import_splynx():
     from integrationSplynx import importFromSplynx
-try:
-    from ispConfig import automaticImportPowercode
-except:
-    automaticImportPowercode = False
-if automaticImportPowercode:
+if automatic_import_powercode():
     from integrationPowercode import importFromPowercode
-try:
-    from ispConfig import automaticImportSonar
-except:
-    automaticImportSonar = False
-if automaticImportSonar:
+if automatic_import_sonar():
     from integrationSonar import importFromSonar
 from apscheduler.schedulers.background import BlockingScheduler
 from apscheduler.executors.pool import ThreadPoolExecutor
@@ -29,36 +18,36 @@ from apscheduler.executors.pool import ThreadPoolExecutor
 ads = BlockingScheduler(executors={'default': ThreadPoolExecutor(1)})

 def importFromCRM():
-    if automaticImportUISP:
+    if automatic_import_uisp():
         try:
             importFromUISP()
         except:
             print("Failed to import from UISP")
-    elif automaticImportSplynx:
+    elif automatic_import_splynx():
         try:
             importFromSplynx()
         except:
             print("Failed to import from Splynx")
-    elif automaticImportPowercode:
+    elif automatic_import_powercode():
         try:
             importFromPowercode()
         except:
             print("Failed to import from Powercode")
-    elif automaticImportSonar:
+    elif automatic_import_sonar():
         try:
             importFromSonar()
         except:
             print("Failed to import from Sonar")

-def graphHandler():
-    try:
-        refreshBandwidthGraphs()
-    except:
-        print("Failed to update bandwidth graphs")
-    try:
-        refreshLatencyGraphs()
-    except:
-        print("Failed to update latency graphs")
+#def graphHandler():
+#    try:
+#        refreshBandwidthGraphs()
+#    except:
+#        print("Failed to update bandwidth graphs")
+#    try:
+#        refreshLatencyGraphs()
+#    except:
+#        print("Failed to update latency graphs")

 def importAndShapeFullReload():
     importFromCRM()
@@ -71,9 +60,9 @@ def importAndShapePartialReload():
 if __name__ == '__main__':
     importAndShapeFullReload()
-    ads.add_job(importAndShapePartialReload, 'interval', minutes=queueRefreshIntervalMins, max_instances=1)
-    if influxDBEnabled:
-        ads.add_job(graphHandler, 'interval', seconds=10, max_instances=1)
+    ads.add_job(importAndShapePartialReload, 'interval', minutes=queue_refresh_interval_mins(), max_instances=1)
+    #if influxDBEnabled:
+    #    ads.add_job(graphHandler, 'interval', seconds=10, max_instances=1)
     ads.start()
```
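The scheduler change replaces v1.4's per-setting `try/except` imports (each carrying its own hard-coded fallback) with getter functions from the `liblqos_python` bindings. A self-contained sketch of why getters centralize defaults — the dictionary below is a stub standing in for the compiled bindings, which in v1.5 read `/etc/lqos.conf`:

```python
# Stub standing in for the compiled liblqos_python bindings module.
_UNIFIED_CONFIG = {
    # In v1.5 these values come from /etc/lqos.conf, so defaults live in
    # one place instead of being repeated in every calling script.
    "queue_refresh_interval_mins": 30,
    "automatic_import_uisp": False,
}

def queue_refresh_interval_mins() -> int:
    return _UNIFIED_CONFIG["queue_refresh_interval_mins"]

def automatic_import_uisp() -> bool:
    return _UNIFIED_CONFIG["automatic_import_uisp"]

# The old v1.4 pattern this replaces, repeated per setting per script:
#   try:
#       from ispConfig import queueRefreshIntervalMins
#   except ImportError:
#       queueRefreshIntervalMins = 30
interval = queue_refresh_interval_mins()
```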