Herbert Wolverson
2023-01-16 16:04:05 +00:00
9 changed files with 220 additions and 2446 deletions

View File

@@ -13,7 +13,7 @@ jobs:
flake8-comprehensions isort mypy pytest pyupgrade safety
- run: bandit --recursive --skip B105,B110,B404,B602,B603,B605 .
- run: black --check . || true
- run: codespell --skip="./.*,./old" # --ignore-words-list=""
# - run: codespell --skip="./.*,./old" # --ignore-words-list=""
- run: flake8 . --count --exclude="./.*,./old/" --select=E9,F63,F7,F82 --show-source --statistics
- run: flake8 . --count --exit-zero --max-complexity=10 --max-line-length=88 --show-source --statistics

View File

@@ -528,6 +528,38 @@ def refreshShapers():
minDownload, minUpload = findBandwidthMins(network, 0)
logging.info("Found the bandwidth minimums for each node")
# Compress network.json. HTB only supports 8 levels of depth, so compress the hierarchy to 8 levels if it goes deeper than that.
def flattenB(data):
    newDict = {}
    for node in data:
        if isinstance(node, str):
            if (isinstance(data[node], dict)) and (node != 'children'):
                newDict[node] = dict(data[node])
                if 'children' in data[node]:
                    result = flattenB(data[node]['children'])
                    del newDict[node]['children']
                    newDict.update(result)
    return newDict
def flattenA(data, depth):
    newDict = {}
    for node in data:
        if isinstance(node, str):
            if (isinstance(data[node], dict)) and (node != 'children'):
                newDict[node] = dict(data[node])
                if 'children' in data[node]:
                    result = flattenA(data[node]['children'], depth+2)
                    del newDict[node]['children']
                    if depth <= 8:
                        newDict[node]['children'] = result
                    else:
                        flattened = flattenB(data[node]['children'])
                        if 'children' in newDict[node]:
                            newDict[node]['children'].update(flattened)
                        else:
                            newDict[node]['children'] = flattened
    return newDict
network = flattenA(network, 1)
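# Illustration (hypothetical tree): a branch nested 10 sites deep, e.g.
#   Core > Site1 > Site2 > ... > Site9
# keeps its nesting while depth <= 8 (each network level advances depth by 2);
# once depth exceeds 8, flattenB() collapses the remaining subtree, so the
# deeper sites all become flat children of the deepest node kept in place.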
# Parse network structure and add devices from ShapedDevices.csv
parentNodes = []
minorByCPUpreloaded = {}

View File

@@ -1,46 +1,5 @@
# v1.3 (IPv4 + IPv6)
# v1.4 (Alpha)
![image](https://user-images.githubusercontent.com/22501920/202913336-256b591b-f372-44fe-995c-5e08ec08a925.png)
![image](https://i0.wp.com/libreqos.io/wp-content/uploads/2023/01/v1.4-alpha-2.png?w=3664&ssl=1)
## Features
### Fast TCP Latency Tracking
[@thebracket](https://github.com/thebracket/) has created [cpumap-pping](https://github.com/thebracket/cpumap-pping) which merges the functionality of the [xdp-cpumap-tc](https://github.com/xdp-project/xdp-cpumap-tc) and [ePPing](https://github.com/xdp-project/bpf-examples/tree/master/pping) projects, while keeping CPU use within ~1% of xdp-cpumap-tc.
### Integrations
- Added Splynx integration
- UISP integration overhaul by [@thebracket](https://github.com/thebracket/)
- [LMS integration](https://github.com/interduo/LMSLibreQoS) for Polish ISPs by [@interduo](https://github.com/interduo)
### Partial Queue Reload
In v1.2 and prior, the entire queue structure had to be reloaded to make any changes. This led to a few milliseconds of packet loss for some clients each time that reload happened. The scheduler.py script was set to reload all queues each morning at 4AM to avoid any disruption that reload could theoretically cause.
Starting with v1.3 - LibreQoS tracks the state of the queues, and can make incremental changes without a full reload of all queues. Every 30 minutes - scheduler.py runs the CRM import and then a partial reload affecting just the queues that have changed. It still runs a full reload at 4AM.
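
A minimal sketch of that schedule, assuming the APScheduler-based approach used by schedulerAdvanced.py in this commit (the shipped scheduler also runs the CRM import and adds error handling):

```python
# Minimal sketch: nightly full rebuild at 4 AM, incremental reloads every 30 minutes.
from apscheduler.schedulers.blocking import BlockingScheduler
from LibreQoS import refreshShapers, refreshShapersUpdateOnly

scheduler = BlockingScheduler()
scheduler.add_job(refreshShapers, 'cron', hour=4)                    # full reload of all queues
scheduler.add_job(refreshShapersUpdateOnly, 'interval', minutes=30)  # only queues that changed
scheduler.start()
```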
### v1.3 Improvements to help scale
#### HTB major:minor handle
HTB uses a hex handle for classes. It is two 16-bit hex values joined by a colon - major:minor (<u16>:<u16>). In LibreQoS, each CPU core uses a different major handle.
In v1.2 and prior, the minor handle was unique across all CPUs, meaning only 30k subscribers could be added total.
Starting with LibreQoS v1.3 - minor handles are counted independently by CPU core. With this change, the maximum possible number of subscriber qdiscs/classes goes from a hard limit of 30k to 30k x CPU core count. So for a higher-end system with a 64-core processor such as the AMD EPYC™ 7713P, that would mean ~1.9 million possible subscriber classes. Of course, CPU use will be the bottleneck well before class handles run out in that scenario - but at least that arbitrary 30k limit is out of the way.
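
As a back-of-the-envelope check (illustrative numbers from above, not LibreQoS code):

```python
# Rough capacity math for per-CPU-core minor handles.
minor_handles_per_core = 30_000   # practical per-core class budget
cpu_cores = 64                    # e.g. AMD EPYC 7713P

old_limit = minor_handles_per_core              # v1.2: one shared minor-handle space
new_limit = minor_handles_per_core * cpu_cores  # v1.3+: independent space per core

print(old_limit, new_limit)  # 30000 1920000  (~1.9 million subscriber classes)
```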
#### "Circuit ID" Unique Identifier
In order to improve queue reload time in v1.3, it was necessary to use a unique identifier for each circuit. We went with Circuit ID. It can be a number or string, it just needs to be unique between circuits, and the same for multiple devices in the same circuit. This allows us to avoid costly lookups when sorting through the queue structure.
If you have your own script creating ShapedDevices.csv - you could use your CRM's unique identifier for customer services / circuits to serve as this Circuit ID. The UISP and Splynx integrations already do this automatically.
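
For example, a hypothetical CRM service ID of 1234 could be reused directly as the Circuit ID for both devices on that circuit (column names match what the integrations write; the values below are made up):

```python
# Hypothetical ShapedDevices.csv rows: two devices sharing one Circuit ID.
import csv

header = ['Circuit ID', 'Circuit Name', 'Device ID', 'Device Name', 'Parent Node', 'MAC',
          'IPv4', 'IPv6', 'Download Min Mbps', 'Upload Min Mbps',
          'Download Max Mbps', 'Upload Max Mbps', 'Comment']

rows = [
    ['1234', 'Smith Residence', 'dev-1', 'CPE-Roof',   'AP-SomeLocation1', '', '100.64.0.10', '', '25', '5', '100', '20', ''],
    ['1234', 'Smith Residence', 'dev-2', 'CPE-Router', 'AP-SomeLocation1', '', '100.64.0.11', '', '25', '5', '100', '20', ''],
]

with open('ShapedDevices.csv', 'w', newline='') as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)
    writer.writerow(header)
    writer.writerows(rows)
```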
## Compatibility Notes
The most significant changes are the renaming of the fqOrCAKE variable to "sqm"
and the addition of the Circuit ID field.
Also, after upgrading to LibreQoS v1.3, a reboot is required to clear out the
old eBPF code.
See [wiki here](https://github.com/LibreQoE/LibreQoS/wiki/v1.4)

View File

@@ -2,6 +2,10 @@
Version 1.4 is still undergoing active development, but if you'd like to benefit from it right now (or help us test/develop it!), here's a guide.
## Updating from v1.3
### Remove cron tasks from v1.3
Run ```sudo crontab -e``` and remove any entries pertaining to LibreQoS from v1.3.
## Clone the repo
> My preferred install location is `/opt/libreqos` - but you can put it wherever you want.
@@ -25,7 +29,7 @@ git checkout v1.4-pre-alpha-rust-integration
You need to have a few packages from `apt` installed:
```
apt-get install -y python3-pip clang gcc gcc-multilib llvm libelf-dev git nano graphviz curl screen llvm pkg-config linux-tools-common linux-tools-`uname r` libbpf-dev
apt-get install -y python3-pip clang gcc gcc-multilib llvm libelf-dev git nano graphviz curl screen llvm pkg-config linux-tools-common linux-tools-`uname -r` libbpf-dev
```
Then you need to install some Python dependencies:

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,79 @@
import csv
import os
import shutil
from datetime import datetime
from requests import get
from ispConfig import automaticImportRestHttp as restconf
from pydash import objects
requestsBaseConfig = {
    'verify': True,
    'headers': {
        'accept': 'application/json'
    }
}

def createShaper():
    # shutil.copy('Shaper.csv', 'Shaper.csv.bak')
    ts = datetime.now().strftime('%Y-%m-%d.%H-%M-%S')

    # Pull the device/circuit list from the REST endpoint and write ShapedDevices.csv
    devicesURL = restconf.get('baseURL') + '/' + restconf.get('devicesURI').strip('/')
    requestConfig = objects.defaults_deep({'params': {}}, restconf.get('requestsConfig'), requestsBaseConfig)
    raw = get(devicesURL, **requestConfig)
    if raw.status_code != 200:
        print('Failed to request ' + devicesURL + ', got ' + str(raw.status_code))
        return False
    devicesCsvFP = os.path.dirname(os.path.realpath(__file__)) + '/ShapedDevices.csv'
    with open(devicesCsvFP, 'w') as csvfile:
        wr = csv.writer(csvfile, quoting=csv.QUOTE_ALL)
        wr.writerow(
            ['Circuit ID', 'Circuit Name', 'Device ID', 'Device Name', 'Parent Node', 'MAC', 'IPv4', 'IPv6',
             'Download Min Mbps', 'Upload Min Mbps', 'Download Max Mbps', 'Upload Max Mbps', 'Comment'])
        for row in raw.json():
            wr.writerow(row.values())
    # Optionally keep a timestamped backup copy of ShapedDevices.csv
    if restconf['logChanges']:
        devicesBakFilePath = restconf['logChanges'].rstrip('/') + '/ShapedDevices.' + ts + '.csv'
        try:
            shutil.copy(devicesCsvFP, devicesBakFilePath)
        except:
            os.makedirs(restconf['logChanges'], exist_ok=True)
            shutil.copy(devicesCsvFP, devicesBakFilePath)

    # Pull the network hierarchy and write network.json
    networkURL = restconf['baseURL'] + '/' + restconf['networkURI'].strip('/')
    raw = get(networkURL, **requestConfig)
    if raw.status_code != 200:
        print('Failed to request ' + networkURL + ', got ' + str(raw.status_code))
        return False
    networkJsonFP = os.path.dirname(os.path.realpath(__file__)) + '/network.json'
    with open(networkJsonFP, 'w') as handler:
        handler.write(raw.text)
    # Optionally keep a timestamped backup copy of network.json
    if restconf['logChanges']:
        networkBakFilePath = restconf['logChanges'].rstrip('/') + '/network.' + ts + '.json'
        try:
            shutil.copy(networkJsonFP, networkBakFilePath)
        except:
            os.makedirs(restconf['logChanges'], exist_ok=True)
            shutil.copy(networkJsonFP, networkBakFilePath)

def importFromRestHttp():
    createShaper()

if __name__ == '__main__':
    importFromRestHttp()

View File

@@ -23,8 +23,8 @@ interfaceA = 'eth1'
# Interface connected to edge router
interfaceB = 'eth2'
## WORK IN PROGRESS. Note that interfaceA determines the "stick" interface
## I could only get scanning to work if I issued ethtool -K enp1s0f1 rxvlan off
# WORK IN PROGRESS. Note that interfaceA determines the "stick" interface
# I could only get scanning to work if I issued ethtool -K enp1s0f1 rxvlan off
OnAStick = False
# VLAN facing the core router
StickVlanA = 0
@@ -38,7 +38,8 @@ enableActualShellCommands = True
# Add 'sudo' before execution of any shell commands. May be required depending on distribution and environment.
runShellCommandsAsSudo = False
# Allows overriding queues / CPU cores used. When set to 0, the max possible queues / CPU cores are utilized. Please leave as 0.
# Allows overriding queues / CPU cores used. When set to 0, the max possible queues / CPU cores are utilized. Please
# leave as 0.
queuesAvailableOverride = 0
# Some networks are flat - where there are no Parent Nodes defined in ShapedDevices.csv
@@ -83,19 +84,41 @@ uispSite = ''
uispStrategy = "full"
# List any sites that should not be included, with each site name surrounded by '' and separated by commas
excludeSites = []
# If you use IPv6, this can be used to find associated IPv6 prefixes for your clients' IPv4 addresses, and match them to those devices
# If you use IPv6, this can be used to find associated IPv6 prefixes for your clients' IPv4 addresses, and match them
# to those devices
findIPv6usingMikrotik = False
# If you want to provide a safe cushion for speed test results to prevent customer complaints, you can set this to 1.15 (15% above plan rate).
# If not, you can leave as 1.0
# If you want to provide a safe cushion for speed test results to prevent customer complaints, you can set this to
# 1.15 (15% above plan rate). If not, you can leave as 1.0
bandwidthOverheadFactor = 1.0
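# Example: with a factor of 1.15, a customer on a 100 Mbps plan is shaped to 100 * 1.15 = 115 Mbps.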
# For edge cases, set the respective ParentNode for these CPEs
exceptionCPEs = {}
# 'CPE-SomeLocation1': 'AP-SomeLocation1',
# 'CPE-SomeLocation2': 'AP-SomeLocation2',
#}
# exceptionCPEs = {
# 'CPE-SomeLocation1': 'AP-SomeLocation1',
# 'CPE-SomeLocation2': 'AP-SomeLocation2',
# }
# API Auth
apiUsername = "testUser"
apiPassword = "changeme8343486806"
apiHostIP = "127.0.0.1"
apiHostPost = 5000
httpRestIntegrationConfig = {
    'enabled': False,
    'baseURL': 'https://domain',
    'networkURI': '/some/path',
    'shaperURI': '/some/path/etc',
    'requestsConfig': {
        'verify': True,  # Good for dev if your dev env doesn't have a cert
        'params': {  # params for the query string, i.e. uri?some-arg=some-value
            'search': 'hold-my-beer'
        },
        # 'headers': {
        #     'Origin': 'SomeHeaderValue',
        # },
    },
    # If you want to store a timestamped copy/backup of both network.json and Shaper.csv each time they are updated,
    # provide a path
    # 'logChanges': '/var/log/libreqos'
}

src/schedulerAdvanced.py Normal file
View File

@@ -0,0 +1,67 @@
import time
from LibreQoS import refreshShapers, refreshShapersUpdateOnly
from graphInfluxDB import refreshBandwidthGraphs, refreshLatencyGraphs
from ispConfig import influxDBEnabled, automaticImportUISP, automaticImportSplynx, httpRestIntegrationConfig
if automaticImportUISP:
    from integrationUISP import importFromUISP
if automaticImportSplynx:
    from integrationSplynx import importFromSplynx
if httpRestIntegrationConfig['enabled']:
    from integrationRestHttp import importFromRestHttp
from apscheduler.schedulers.background import BlockingScheduler

ads = BlockingScheduler()

def importFromCRM():
    # Import from whichever CRM integration is enabled in ispConfig
    if automaticImportUISP:
        try:
            importFromUISP()
        except:
            print("Failed to import from UISP")
    elif automaticImportSplynx:
        try:
            importFromSplynx()
        except:
            print("Failed to import from Splynx")
    elif httpRestIntegrationConfig['enabled']:
        try:
            importFromRestHttp()
        except:
            print("Failed to import from RestHttp")

def importAndShapeFullReload():
    importFromCRM()
    refreshShapers()

def importAndShapePartialReload():
    importFromCRM()
    refreshShapersUpdateOnly()

if __name__ == '__main__':
    importAndShapeFullReload()
    # schedule.every().day.at("04:00").do(importAndShapeFullReload)
    ads.add_job(importAndShapeFullReload, 'cron', hour=4)
    # schedule.every(30).minutes.do(importAndShapePartialReload)
    ads.add_job(importAndShapePartialReload, 'interval', minutes=30)
    if influxDBEnabled:
        # schedule.every(10).seconds.do(refreshBandwidthGraphs)
        ads.add_job(refreshBandwidthGraphs, 'interval', seconds=10)
        # schedule.every(30).seconds.do(refreshLatencyGraphs)
        # Commented out until refreshLatencyGraphs works in v1.4
        # ads.add_job(refreshLatencyGraphs, 'interval', seconds=30)
    # while True:
    #     schedule.run_pending()
    #     time.sleep(1)
    ads.start()