Add back v1.3 directory

This commit is contained in:
rchac 2023-02-06 18:41:23 -07:00
parent 9fb0e9d6ca
commit cd147708fd
21 changed files with 5863 additions and 0 deletions

3
.gitmodules vendored

@@ -14,3 +14,6 @@
path = old/v1.2/xdp-cpumap-tc
url = https://github.com/xdp-project/xdp-cpumap-tc.git
[submodule "old/v1.3/cpumap-pping"]
path = old/v1.3/cpumap-pping
url = https://github.com/thebracket/cpumap-pping


@@ -0,0 +1,149 @@
# LibreQoS Integrations
If you need to create an integration for your network, we've tried to give you the tools you need. We currently ship integrations for UISP and Splynx. We'd love to include more.
### Overall Concept
LibreQoS enforces customer bandwidth limits, and applies CAKE-based optimizations at several levels:
* Per-user Cake flows are created. These require the maximum bandwidth permitted per customer.
* Customers can have more than one device that share a pool of bandwidth. Customers are grouped into "circuits".
* *Optional* Access points can have a speed limit/queue, applied to all customers associated with the access point.
* *Optional* Sites can contain access points, and apply a speed limit/queue to all access points (and associated circuits).
* *Optional* Sites can be nested beneath other sites and access points, providing a queue hierarchy that represents the physical limitations of backhaul connections.
Additionally, you might grow to have more than one shaper, and need to express your network topology from the perspective of different parts of your network. For example, if *Site A* and *Site B* both have Internet connections, you want to generate an efficient topology for both sites; it's helpful if you can derive both from the same overall topology.
LibreQoS's network modeling accomplishes this by modeling your network as a *graph*: a series of interconnected nodes, each featuring a "parent". Any "node" (entry) in the graph can be turned into a "root" node, allowing you to generate the `network.json` and `ShapedDevices.csv` files required to manage your customers from the perspective of that root node.
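As a sketch of the re-rooting idea (illustrative only; the real logic lives in `NetworkGraph` within `integrationCommon.py`, and the node names below are hypothetical), storing a parent reference on every node is enough to generate a subtree from any chosen root:

```python
# Hypothetical node table: each node id maps to its parent's id (None = top level).
nodes = {
    "Site_1": None,
    "AP_A": "Site_1",
    "Customer_1": "AP_A",
    "Site_2": None,
    "Customer_2": "Site_2",
}

def subtree(root, nodes):
    """Return all node ids at or beneath `root`, in breadth-first order."""
    children = {}
    for node, parent in nodes.items():
        children.setdefault(parent, []).append(node)
    result, queue = [], [root]
    while queue:
        current = queue.pop(0)
        result.append(current)
        queue.extend(children.get(current, []))
    return result

# Re-rooting at "Site_1" yields only Site_1's branch of the network.
```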
### Flat Shaping
The simplest form of integration produces a "flat" network. This is the highest performance model in terms of raw throughput, but lacks the ability to provide shaping at the access point or site level: every customer site is parented directly off the root.
> For an integration, it's recommended that you fetch the customer/device data from your management system rather than typing it all into Python.
A flat integration is relatively simple. Start by importing the common API:
```python
from integrationCommon import isIpv4Permitted, fixSubnet, NetworkGraph, NetworkNode, NodeType
```
Then create an empty network graph (it will grow to represent your network):
```python
net = NetworkGraph()
```
Once you have your `NetworkGraph` object, you start adding customers and devices. Customers may have any number of devices. You can add a single customer with one device as follows:
```python
# Add the customer
customer = NetworkNode(
    id="Unique Customer ID",
    displayName="The Doe Family",
    type=NodeType.client,
    download=100,  # Download is in Mbit/second
    upload=20,     # Upload is in Mbit/second
    address="1 My Road, My City, My State")
net.addRawNode(customer)  # Insert the customer node

# Give them a device
device = NetworkNode(
    id="Unique Device ID",
    displayName="Doe Family CPE",
    parentId="Unique Customer ID",  # must match the customer's ID
    type=NodeType.device,
    ipv4=["100.64.1.5/32"],    # As many as you need; express networks as the network ID - e.g. 192.168.100.0/24
    ipv6=["feed:beef::12/64"], # Same again. May be [] for none.
    mac="00:00:5e:00:53:af"
)
net.addRawNode(device)
```
If the customer has multiple devices, you can add as many as you want - with each device's `parentId` continuing to match the parent customer's `id`.
Once you have entered all of your customers, you can finish the integration:
```python
net.prepareTree() # This is required, and builds parent-child relationships.
net.createNetworkJson() # Create `network.json`
net.createShapedDevices() # Create the `ShapedDevices.csv` file.
```
### Detailed Hierarchies
Creating a full hierarchy (with as many levels as you want) uses a similar strategy to flat networks, so we recommend that you start by reading the "Flat Shaping" section above.
Start by importing the common API:
```python
from integrationCommon import isIpv4Permitted, fixSubnet, NetworkGraph, NetworkNode, NodeType
```
Then create an empty network graph (it will grow to represent your network):
```python
net = NetworkGraph()
```
Now you can start to insert sites and access points. Sites and access points are inserted like customer or device nodes: they have a unique ID and a `parentId`. Customers can then use a `parentId` of the site or access point beneath which they should be located.
For example, let's create `Site_1` and `Site_2` - at the top of the tree:
```python
net.addRawNode(NetworkNode(id="Site_1", displayName="Site_1", parentId="", type=NodeType.site, download=1000, upload=1000))
net.addRawNode(NetworkNode(id="Site_2", displayName="Site_2", parentId="", type=NodeType.site, download=500, upload=500))
```
Let's attach some access points and point-of-presence sites:
```python
net.addRawNode(NetworkNode(id="AP_A", displayName="AP_A", parentId="Site_1", type=NodeType.ap, download=500, upload=500))
net.addRawNode(NetworkNode(id="Site_3", displayName="Site_3", parentId="Site_1", type=NodeType.site, download=500, upload=500))
net.addRawNode(NetworkNode(id="PoP_5", displayName="PoP_5", parentId="Site_3", type=NodeType.site, download=200, upload=200))
net.addRawNode(NetworkNode(id="AP_9", displayName="AP_9", parentId="PoP_5", type=NodeType.ap, download=120, upload=120))
net.addRawNode(NetworkNode(id="PoP_6", displayName="PoP_6", parentId="PoP_5", type=NodeType.site, download=60, upload=60))
net.addRawNode(NetworkNode(id="AP_11", displayName="AP_11", parentId="PoP_6", type=NodeType.ap, download=30, upload=30))
net.addRawNode(NetworkNode(id="PoP_1", displayName="PoP_1", parentId="Site_2", type=NodeType.site, download=200, upload=200))
net.addRawNode(NetworkNode(id="AP_7", displayName="AP_7", parentId="PoP_1", type=NodeType.ap, download=100, upload=100))
net.addRawNode(NetworkNode(id="AP_1", displayName="AP_1", parentId="Site_2", type=NodeType.ap, download=150, upload=150))
```
When you attach a customer, you can specify a tree entry (e.g. `PoP_5`) as a parent:
```python
# Add the customer
customer = NetworkNode(
    id="Unique Customer ID",
    displayName="The Doe Family",
    parentId="PoP_5",
    type=NodeType.client,
    download=100,  # Download is in Mbit/second
    upload=20,     # Upload is in Mbit/second
    address="1 My Road, My City, My State")
net.addRawNode(customer)  # Insert the customer node

# Give them a device
device = NetworkNode(
    id="Unique Device ID",
    displayName="Doe Family CPE",
    parentId="Unique Customer ID",  # must match the customer's ID
    type=NodeType.device,
    ipv4=["100.64.1.5/32"],    # As many as you need; express networks as the network ID - e.g. 192.168.100.0/24
    ipv6=["feed:beef::12/64"], # Same again. May be [] for none.
    mac="00:00:5e:00:53:af"
)
net.addRawNode(device)
```
Once you have entered all of your network topology and customers, you can finish the integration:
```python
net.prepareTree() # This is required, and builds parent-child relationships.
net.createNetworkJson() # Create `network.json`
net.createShapedDevices() # Create the `ShapedDevices.csv` file.
```
You can also add a call to `net.plotNetworkGraph(False)` (use `True` to also include every customer; this can make for a HUGE file) to create a PDF file (currently named `network.pdf.pdf`) displaying your topology. The example shown here looks like this:
![](testdata/sample_layout.png)

1255
old/v1.3/LibreQoS.py Executable file

File diff suppressed because it is too large

46
old/v1.3/README.md Normal file

@@ -0,0 +1,46 @@
# v1.3 (IPv4 + IPv6)
![image](https://user-images.githubusercontent.com/22501920/202913336-256b591b-f372-44fe-995c-5e08ec08a925.png)
## Features
### Fast TCP Latency Tracking
[@thebracket](https://github.com/thebracket/) has created [cpumap-pping](https://github.com/thebracket/cpumap-pping) which merges the functionality of the [xdp-cpumap-tc](https://github.com/xdp-project/xdp-cpumap-tc) and [ePPing](https://github.com/xdp-project/bpf-examples/tree/master/pping) projects, while keeping CPU use within ~1% of xdp-cpumap-tc.
### Integrations
- Added Splynx integration
- UISP integration overhaul by [@thebracket](https://github.com/thebracket/)
- [LMS integration](https://github.com/interduo/LMSLibreQoS) for Polish ISPs by [@interduo](https://github.com/interduo)
### Partial Queue Reload
In v1.2 and prior, the entire queue structure had to be reloaded to make any changes. This led to a few milliseconds of packet loss for some clients each time that reload happened, so scheduled.py reloaded all queues each morning at 4 AM to minimize the potential disruption.
Starting with v1.3, LibreQoS tracks the state of the queues and can make incremental changes without a full reload of all queues. Every 30 minutes, scheduler.py runs the CRM import and performs a partial reload affecting just the queues that have changed. It still runs a full reload at 4 AM.
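The incremental-reload idea can be sketched as a diff between the previously applied circuit table and the freshly imported one; only circuits whose settings changed need their queues rebuilt. This is a simplified illustration with hypothetical data, not the actual scheduler.py logic:

```python
def changed_circuits(old, new):
    """Compare two {circuit_id: settings} snapshots.
    Returns (changed, added, removed) circuit id lists; only these
    circuits would need their queues touched on a partial reload."""
    changed = [cid for cid in old.keys() & new.keys() if old[cid] != new[cid]]
    added = list(new.keys() - old.keys())
    removed = list(old.keys() - new.keys())
    return changed, added, removed

# Hypothetical snapshots: c1's plan changed, c3 is new, nothing was removed.
old = {"c1": {"downloadMax": 100}, "c2": {"downloadMax": 50}}
new = {"c1": {"downloadMax": 200}, "c2": {"downloadMax": 50}, "c3": {"downloadMax": 25}}
```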
### v1.3 Improvements to help scale
#### HTB major:minor handle
HTB uses a hexadecimal handle for classes: two 16-bit hex values joined by a colon, major:minor (`<u16>:<u16>`). In LibreQoS, each CPU core uses a different major handle.
In v1.2 and prior, the minor handle was unique across all CPUs, meaning only 30k subscribers could be added in total.
Starting with LibreQoS v1.3, minor handles are counted independently per CPU core. With this change, the maximum possible number of subscriber qdiscs/classes goes from a hard limit of 30k to 30k x CPU core count. For a higher-end system with a 64-core processor such as the AMD EPYC™ 7713P, that means roughly 1.9 million possible subscriber classes. Of course, CPU use will be the bottleneck well before class handles are in that scenario, but at least the arbitrary 30k limit is out of the way.
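The arithmetic above can be sketched as follows; `class_handle` and `max_classes` are hypothetical helpers (not LibreQoS functions), and the 30,000-per-core figure comes from the text rather than the raw 16-bit ceiling of 65,535:

```python
def class_handle(major, minor):
    """Format an HTB classid as hex major:minor, each a 16-bit value."""
    if not (0 <= major <= 0xFFFF and 0 <= minor <= 0xFFFF):
        raise ValueError("major and minor must fit in 16 bits")
    return f"{major:x}:{minor:x}"

def max_classes(cores, per_core=30000):
    """Rough upper bound on subscriber classes when each CPU core
    (i.e. each major handle) counts its minor handles independently."""
    return cores * per_core

# e.g. core 2 (major 0x2), 26th subscriber class on that core -> "2:1a"
```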
#### "Circuit ID" Unique Identifier
In order to improve queue reload time in v1.3, it was necessary to use a unique identifier for each circuit: the Circuit ID. It can be a number or a string; it just needs to be unique between circuits, and the same for multiple devices in the same circuit. This allows us to avoid costly lookups when sorting through the queue structure.
If you have your own script creating ShapedDevices.csv, you can use your CRM's unique identifier for customer services/circuits as this Circuit ID. The UISP and Splynx integrations already do this automatically.
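If you generate the file yourself, it is worth checking the constraint before shipping it: devices sharing a Circuit ID must belong to the same circuit. A hypothetical validation helper (not part of LibreQoS) might look like this:

```python
def validate_circuit_ids(rows):
    """rows: (circuit_id, circuit_name) pairs taken from ShapedDevices.csv.
    Devices that share a circuit_id must agree on the circuit name;
    returns the list of circuit_ids that conflict."""
    seen = {}
    conflicts = []
    for cid, name in rows:
        if cid in seen and seen[cid] != name:
            conflicts.append(cid)
        seen.setdefault(cid, name)
    return conflicts
```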
## Compatibility Notes
The biggest changes are the renaming of the fqorCake variable to "sqm"
and the addition of the Circuit ID field.
Also, after upgrading to LibreQoS v1.3, a reboot is required to clear out the
old eBPF code.


@@ -0,0 +1,14 @@
#LibreQoS - autogenerated file - START
Circuit ID,Circuit Name,Device ID,Device Name,Parent Node,MAC,IPv4,IPv6,Download Min Mbps,Upload Min Mbps,Download Max Mbps,Upload Max Mbps,Comment
1,"968 Circle St., Gurnee, IL 60031",1,Device 1,AP_A,,"100.64.0.1, 100.64.0.14",,25,5,155,20,
2,"31 Marconi Street, Lake In The Hills, IL 60156",2,Device 2,AP_A,,100.64.0.2,,25,5,105,18,
3,"255 NW. Newport Ave., Jamestown, NY 14701",3,Device 3,AP_9,,100.64.0.3,,25,5,105,18,
4,"8493 Campfire Street, Peabody, MA 01960",4,Device 4,AP_9,,100.64.0.4,,25,5,105,18,
2794,"6 Littleton Drive, Ringgold, GA 30736",5,Device 5,AP_11,,100.64.0.5,,25,5,105,18,
2794,"6 Littleton Drive, Ringgold, GA 30736",6,Device 6,AP_11,,100.64.0.6,,25,5,105,18,
5,"93 Oklahoma Ave., Parsippany, NJ 07054",7,Device 7,AP_1,,100.64.0.7,,25,5,155,20,
6,"74 Bishop Ave., Bakersfield, CA 93306",8,Device 8,AP_1,,100.64.0.8,,25,5,105,18,
7,"9598 Peg Shop Drive, Lutherville Timonium, MD 21093",9,Device 9,AP_7,,100.64.0.9,,25,5,105,18,
8,"115 Gartner Rd., Gettysburg, PA 17325",10,Device 10,AP_7,,100.64.0.10,,25,5,105,18,
9,"525 Birchpond St., Romulus, MI 48174",11,Device 11,Site_1,,100.64.0.11,,25,5,105,18,
#LibreQoS - autogenerated file - EOF

1
old/v1.3/cpumap-pping Submodule

@@ -0,0 +1 @@
Subproject commit 0d4df7f91885021f49d08b91720a9031be3ac7e7

557
old/v1.3/graphInfluxDB.py Normal file

@@ -0,0 +1,557 @@
import subprocess
import json
from datetime import datetime
from pathlib import Path
import statistics
import time
import psutil

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

from ispConfig import interfaceA, interfaceB, influxDBEnabled, influxDBBucket, influxDBOrg, influxDBtoken, influxDBurl, sqm
def getInterfaceStats(interface):
	command = 'tc -j -s qdisc show dev ' + interface
	jsonAr = json.loads(subprocess.run(command.split(' '), stdout=subprocess.PIPE).stdout.decode('utf-8'))
	jsonDict = {}
	for element in filter(lambda e: 'parent' in e, jsonAr):
		flowID = ':'.join(map(lambda p: f'0x{p}', element['parent'].split(':')[0:2]))
		jsonDict[flowID] = element
	del jsonAr
	return jsonDict

def chunk_list(l, n):
	for i in range(0, len(l), n):
		yield l[i:i + n]
def getCircuitBandwidthStats(subscriberCircuits, tinsStats):
	interfaces = [interfaceA, interfaceB]
	ifaceStats = list(map(getInterfaceStats, interfaces))

	for circuit in subscriberCircuits:
		if 'stats' not in circuit:
			circuit['stats'] = {}
		if 'currentQuery' in circuit['stats']:
			circuit['stats']['priorQuery'] = circuit['stats']['currentQuery']
			circuit['stats']['currentQuery'] = {}
			circuit['stats']['sinceLastQuery'] = {}
		else:
			#circuit['stats']['priorQuery'] = {}
			#circuit['stats']['priorQuery']['time'] = datetime.now().isoformat()
			circuit['stats']['currentQuery'] = {}
			circuit['stats']['sinceLastQuery'] = {}

	#for entry in tinsStats:
	if 'currentQuery' in tinsStats:
		tinsStats['priorQuery'] = tinsStats['currentQuery']
		tinsStats['currentQuery'] = {}
		tinsStats['sinceLastQuery'] = {}
	else:
		tinsStats['currentQuery'] = {}
		tinsStats['sinceLastQuery'] = {}

	tinsStats['currentQuery'] = {
		'Bulk': {'Download': {'sent_packets': 0.0, 'drops': 0.0}, 'Upload': {'sent_packets': 0.0, 'drops': 0.0}},
		'BestEffort': {'Download': {'sent_packets': 0.0, 'drops': 0.0}, 'Upload': {'sent_packets': 0.0, 'drops': 0.0}},
		'Video': {'Download': {'sent_packets': 0.0, 'drops': 0.0}, 'Upload': {'sent_packets': 0.0, 'drops': 0.0}},
		'Voice': {'Download': {'sent_packets': 0.0, 'drops': 0.0}, 'Upload': {'sent_packets': 0.0, 'drops': 0.0}},
	}
	tinsStats['sinceLastQuery'] = {
		'Bulk': {'Download': {'sent_packets': 0.0, 'drops': 0.0}, 'Upload': {'sent_packets': 0.0, 'drops': 0.0}},
		'BestEffort': {'Download': {'sent_packets': 0.0, 'drops': 0.0}, 'Upload': {'sent_packets': 0.0, 'drops': 0.0}},
		'Video': {'Download': {'sent_packets': 0.0, 'drops': 0.0}, 'Upload': {'sent_packets': 0.0, 'drops': 0.0}},
		'Voice': {'Download': {'sent_packets': 0.0, 'drops': 0.0}, 'Upload': {'sent_packets': 0.0, 'drops': 0.0}},
	}

	for circuit in subscriberCircuits:
		for (interface, stats, dirSuffix) in zip(interfaces, ifaceStats, ['Download', 'Upload']):
			element = stats[circuit['classid']] if circuit['classid'] in stats else False
			if element:
				bytesSent = float(element['bytes'])
				drops = float(element['drops'])
				packets = float(element['packets'])
				if (element['drops'] > 0) and (element['packets'] > 0):
					overloadFactor = float(round(element['drops']/element['packets'],3))
				else:
					overloadFactor = 0.0
				if 'cake diffserv4' in sqm:
					tinCounter = 1
					for tin in element['tins']:
						sent_packets = float(tin['sent_packets'])
						ack_drops = float(tin['ack_drops'])
						ecn_mark = float(tin['ecn_mark'])
						tinDrops = float(tin['drops'])
						trueDrops = ecn_mark + tinDrops - ack_drops
						if tinCounter == 1:
							tinsStats['currentQuery']['Bulk'][dirSuffix]['sent_packets'] += sent_packets
							tinsStats['currentQuery']['Bulk'][dirSuffix]['drops'] += trueDrops
						elif tinCounter == 2:
							tinsStats['currentQuery']['BestEffort'][dirSuffix]['sent_packets'] += sent_packets
							tinsStats['currentQuery']['BestEffort'][dirSuffix]['drops'] += trueDrops
						elif tinCounter == 3:
							tinsStats['currentQuery']['Video'][dirSuffix]['sent_packets'] += sent_packets
							tinsStats['currentQuery']['Video'][dirSuffix]['drops'] += trueDrops
						elif tinCounter == 4:
							tinsStats['currentQuery']['Voice'][dirSuffix]['sent_packets'] += sent_packets
							tinsStats['currentQuery']['Voice'][dirSuffix]['drops'] += trueDrops
						tinCounter += 1
				circuit['stats']['currentQuery']['bytesSent' + dirSuffix] = bytesSent
				circuit['stats']['currentQuery']['packetDrops' + dirSuffix] = drops
				circuit['stats']['currentQuery']['packetsSent' + dirSuffix] = packets
				circuit['stats']['currentQuery']['overloadFactor' + dirSuffix] = overloadFactor
				#if 'cake diffserv4' in sqm:
				#	circuit['stats']['currentQuery']['tins'] = theseTins
		circuit['stats']['currentQuery']['time'] = datetime.now().isoformat()

	allPacketsDownload = 0.0
	allPacketsUpload = 0.0
	for circuit in subscriberCircuits:
		circuit['stats']['sinceLastQuery']['bitsDownload'] = circuit['stats']['sinceLastQuery']['bitsUpload'] = 0.0
		circuit['stats']['sinceLastQuery']['bytesSentDownload'] = circuit['stats']['sinceLastQuery']['bytesSentUpload'] = 0.0
		circuit['stats']['sinceLastQuery']['packetDropsDownload'] = circuit['stats']['sinceLastQuery']['packetDropsUpload'] = 0.0
		circuit['stats']['sinceLastQuery']['packetsSentDownload'] = circuit['stats']['sinceLastQuery']['packetsSentUpload'] = 0.0
		try:
			circuit['stats']['sinceLastQuery']['bytesSentDownload'] = circuit['stats']['currentQuery']['bytesSentDownload'] - circuit['stats']['priorQuery']['bytesSentDownload']
			circuit['stats']['sinceLastQuery']['bytesSentUpload'] = circuit['stats']['currentQuery']['bytesSentUpload'] - circuit['stats']['priorQuery']['bytesSentUpload']
		except:
			circuit['stats']['sinceLastQuery']['bytesSentDownload'] = 0.0
			circuit['stats']['sinceLastQuery']['bytesSentUpload'] = 0.0
		try:
			circuit['stats']['sinceLastQuery']['packetDropsDownload'] = circuit['stats']['currentQuery']['packetDropsDownload'] - circuit['stats']['priorQuery']['packetDropsDownload']
			circuit['stats']['sinceLastQuery']['packetDropsUpload'] = circuit['stats']['currentQuery']['packetDropsUpload'] - circuit['stats']['priorQuery']['packetDropsUpload']
		except:
			circuit['stats']['sinceLastQuery']['packetDropsDownload'] = 0.0
			circuit['stats']['sinceLastQuery']['packetDropsUpload'] = 0.0
		try:
			circuit['stats']['sinceLastQuery']['packetsSentDownload'] = circuit['stats']['currentQuery']['packetsSentDownload'] - circuit['stats']['priorQuery']['packetsSentDownload']
			circuit['stats']['sinceLastQuery']['packetsSentUpload'] = circuit['stats']['currentQuery']['packetsSentUpload'] - circuit['stats']['priorQuery']['packetsSentUpload']
		except:
			circuit['stats']['sinceLastQuery']['packetsSentDownload'] = 0.0
			circuit['stats']['sinceLastQuery']['packetsSentUpload'] = 0.0
		allPacketsDownload += circuit['stats']['sinceLastQuery']['packetsSentDownload']
		allPacketsUpload += circuit['stats']['sinceLastQuery']['packetsSentUpload']
		if 'priorQuery' in circuit['stats']:
			if 'time' in circuit['stats']['priorQuery']:
				currentQueryTime = datetime.fromisoformat(circuit['stats']['currentQuery']['time'])
				priorQueryTime = datetime.fromisoformat(circuit['stats']['priorQuery']['time'])
				deltaSeconds = (currentQueryTime - priorQueryTime).total_seconds()
				circuit['stats']['sinceLastQuery']['bitsDownload'] = round(
					((circuit['stats']['sinceLastQuery']['bytesSentDownload'] * 8) / deltaSeconds)) if deltaSeconds > 0 else 0
				circuit['stats']['sinceLastQuery']['bitsUpload'] = round(
					((circuit['stats']['sinceLastQuery']['bytesSentUpload'] * 8) / deltaSeconds)) if deltaSeconds > 0 else 0
		else:
			circuit['stats']['sinceLastQuery']['bitsDownload'] = (circuit['stats']['sinceLastQuery']['bytesSentDownload'] * 8)
			circuit['stats']['sinceLastQuery']['bitsUpload'] = (circuit['stats']['sinceLastQuery']['bytesSentUpload'] * 8)

	tinsStats['sinceLastQuery']['Bulk']['Download']['dropPercentage'] = tinsStats['sinceLastQuery']['Bulk']['Upload']['dropPercentage'] = 0.0
	tinsStats['sinceLastQuery']['BestEffort']['Download']['dropPercentage'] = tinsStats['sinceLastQuery']['BestEffort']['Upload']['dropPercentage'] = 0.0
	tinsStats['sinceLastQuery']['Video']['Download']['dropPercentage'] = tinsStats['sinceLastQuery']['Video']['Upload']['dropPercentage'] = 0.0
	tinsStats['sinceLastQuery']['Voice']['Download']['dropPercentage'] = tinsStats['sinceLastQuery']['Voice']['Upload']['dropPercentage'] = 0.0
	tinsStats['sinceLastQuery']['Bulk']['Download']['percentage'] = tinsStats['sinceLastQuery']['Bulk']['Upload']['percentage'] = 0.0
	tinsStats['sinceLastQuery']['BestEffort']['Download']['percentage'] = tinsStats['sinceLastQuery']['BestEffort']['Upload']['percentage'] = 0.0
	tinsStats['sinceLastQuery']['Video']['Download']['percentage'] = tinsStats['sinceLastQuery']['Video']['Upload']['percentage'] = 0.0
	tinsStats['sinceLastQuery']['Voice']['Download']['percentage'] = tinsStats['sinceLastQuery']['Voice']['Upload']['percentage'] = 0.0
	try:
		tinsStats['sinceLastQuery']['Bulk']['Download']['sent_packets'] = tinsStats['currentQuery']['Bulk']['Download']['sent_packets'] - tinsStats['priorQuery']['Bulk']['Download']['sent_packets']
		tinsStats['sinceLastQuery']['BestEffort']['Download']['sent_packets'] = tinsStats['currentQuery']['BestEffort']['Download']['sent_packets'] - tinsStats['priorQuery']['BestEffort']['Download']['sent_packets']
		tinsStats['sinceLastQuery']['Video']['Download']['sent_packets'] = tinsStats['currentQuery']['Video']['Download']['sent_packets'] - tinsStats['priorQuery']['Video']['Download']['sent_packets']
		tinsStats['sinceLastQuery']['Voice']['Download']['sent_packets'] = tinsStats['currentQuery']['Voice']['Download']['sent_packets'] - tinsStats['priorQuery']['Voice']['Download']['sent_packets']
		tinsStats['sinceLastQuery']['Bulk']['Upload']['sent_packets'] = tinsStats['currentQuery']['Bulk']['Upload']['sent_packets'] - tinsStats['priorQuery']['Bulk']['Upload']['sent_packets']
		tinsStats['sinceLastQuery']['BestEffort']['Upload']['sent_packets'] = tinsStats['currentQuery']['BestEffort']['Upload']['sent_packets'] - tinsStats['priorQuery']['BestEffort']['Upload']['sent_packets']
		tinsStats['sinceLastQuery']['Video']['Upload']['sent_packets'] = tinsStats['currentQuery']['Video']['Upload']['sent_packets'] - tinsStats['priorQuery']['Video']['Upload']['sent_packets']
		tinsStats['sinceLastQuery']['Voice']['Upload']['sent_packets'] = tinsStats['currentQuery']['Voice']['Upload']['sent_packets'] - tinsStats['priorQuery']['Voice']['Upload']['sent_packets']
	except:
		tinsStats['sinceLastQuery']['Bulk']['Download']['sent_packets'] = tinsStats['sinceLastQuery']['BestEffort']['Download']['sent_packets'] = 0.0
		tinsStats['sinceLastQuery']['Video']['Download']['sent_packets'] = tinsStats['sinceLastQuery']['Voice']['Download']['sent_packets'] = 0.0
		tinsStats['sinceLastQuery']['Bulk']['Upload']['sent_packets'] = tinsStats['sinceLastQuery']['BestEffort']['Upload']['sent_packets'] = 0.0
		tinsStats['sinceLastQuery']['Video']['Upload']['sent_packets'] = tinsStats['sinceLastQuery']['Voice']['Upload']['sent_packets'] = 0.0
	try:
		tinsStats['sinceLastQuery']['Bulk']['Download']['drops'] = tinsStats['currentQuery']['Bulk']['Download']['drops'] - tinsStats['priorQuery']['Bulk']['Download']['drops']
		tinsStats['sinceLastQuery']['BestEffort']['Download']['drops'] = tinsStats['currentQuery']['BestEffort']['Download']['drops'] - tinsStats['priorQuery']['BestEffort']['Download']['drops']
		tinsStats['sinceLastQuery']['Video']['Download']['drops'] = tinsStats['currentQuery']['Video']['Download']['drops'] - tinsStats['priorQuery']['Video']['Download']['drops']
		tinsStats['sinceLastQuery']['Voice']['Download']['drops'] = tinsStats['currentQuery']['Voice']['Download']['drops'] - tinsStats['priorQuery']['Voice']['Download']['drops']
		tinsStats['sinceLastQuery']['Bulk']['Upload']['drops'] = tinsStats['currentQuery']['Bulk']['Upload']['drops'] - tinsStats['priorQuery']['Bulk']['Upload']['drops']
		tinsStats['sinceLastQuery']['BestEffort']['Upload']['drops'] = tinsStats['currentQuery']['BestEffort']['Upload']['drops'] - tinsStats['priorQuery']['BestEffort']['Upload']['drops']
		tinsStats['sinceLastQuery']['Video']['Upload']['drops'] = tinsStats['currentQuery']['Video']['Upload']['drops'] - tinsStats['priorQuery']['Video']['Upload']['drops']
		tinsStats['sinceLastQuery']['Voice']['Upload']['drops'] = tinsStats['currentQuery']['Voice']['Upload']['drops'] - tinsStats['priorQuery']['Voice']['Upload']['drops']
	except:
		tinsStats['sinceLastQuery']['Bulk']['Download']['drops'] = tinsStats['sinceLastQuery']['BestEffort']['Download']['drops'] = 0.0
		tinsStats['sinceLastQuery']['Video']['Download']['drops'] = tinsStats['sinceLastQuery']['Voice']['Download']['drops'] = 0.0
		tinsStats['sinceLastQuery']['Bulk']['Upload']['drops'] = tinsStats['sinceLastQuery']['BestEffort']['Upload']['drops'] = 0.0
		tinsStats['sinceLastQuery']['Video']['Upload']['drops'] = tinsStats['sinceLastQuery']['Voice']['Upload']['drops'] = 0.0
	try:
		dlPerc = tinsStats['sinceLastQuery']['Bulk']['Download']['drops'] / tinsStats['sinceLastQuery']['Bulk']['Download']['sent_packets']
		ulPerc = tinsStats['sinceLastQuery']['Bulk']['Upload']['drops'] / tinsStats['sinceLastQuery']['Bulk']['Upload']['sent_packets']
		tinsStats['sinceLastQuery']['Bulk']['Download']['dropPercentage'] = max(round(dlPerc * 100.0, 3),0.0)
		tinsStats['sinceLastQuery']['Bulk']['Upload']['dropPercentage'] = max(round(ulPerc * 100.0, 3),0.0)
		dlPerc = tinsStats['sinceLastQuery']['BestEffort']['Download']['drops'] / tinsStats['sinceLastQuery']['BestEffort']['Download']['sent_packets']
		ulPerc = tinsStats['sinceLastQuery']['BestEffort']['Upload']['drops'] / tinsStats['sinceLastQuery']['BestEffort']['Upload']['sent_packets']
		tinsStats['sinceLastQuery']['BestEffort']['Download']['dropPercentage'] = max(round(dlPerc * 100.0, 3),0.0)
		tinsStats['sinceLastQuery']['BestEffort']['Upload']['dropPercentage'] = max(round(ulPerc * 100.0, 3),0.0)
		dlPerc = tinsStats['sinceLastQuery']['Video']['Download']['drops'] / tinsStats['sinceLastQuery']['Video']['Download']['sent_packets']
		ulPerc = tinsStats['sinceLastQuery']['Video']['Upload']['drops'] / tinsStats['sinceLastQuery']['Video']['Upload']['sent_packets']
		tinsStats['sinceLastQuery']['Video']['Download']['dropPercentage'] = max(round(dlPerc * 100.0, 3),0.0)
		tinsStats['sinceLastQuery']['Video']['Upload']['dropPercentage'] = max(round(ulPerc * 100.0, 3),0.0)
		dlPerc = tinsStats['sinceLastQuery']['Voice']['Download']['drops'] / tinsStats['sinceLastQuery']['Voice']['Download']['sent_packets']
		ulPerc = tinsStats['sinceLastQuery']['Voice']['Upload']['drops'] / tinsStats['sinceLastQuery']['Voice']['Upload']['sent_packets']
		tinsStats['sinceLastQuery']['Voice']['Download']['dropPercentage'] = max(round(dlPerc * 100.0, 3),0.0)
		tinsStats['sinceLastQuery']['Voice']['Upload']['dropPercentage'] = max(round(ulPerc * 100.0, 3),0.0)
	except:
		tinsStats['sinceLastQuery']['Bulk']['Download']['dropPercentage'] = 0.0
		tinsStats['sinceLastQuery']['Bulk']['Upload']['dropPercentage'] = 0.0
		tinsStats['sinceLastQuery']['BestEffort']['Download']['dropPercentage'] = 0.0
		tinsStats['sinceLastQuery']['BestEffort']['Upload']['dropPercentage'] = 0.0
		tinsStats['sinceLastQuery']['Video']['Download']['dropPercentage'] = 0.0
		tinsStats['sinceLastQuery']['Video']['Upload']['dropPercentage'] = 0.0
		tinsStats['sinceLastQuery']['Voice']['Download']['dropPercentage'] = 0.0
		tinsStats['sinceLastQuery']['Voice']['Upload']['dropPercentage'] = 0.0
	try:
		tinsStats['sinceLastQuery']['Bulk']['Download']['percentage'] = min(round((tinsStats['sinceLastQuery']['Bulk']['Download']['sent_packets']/allPacketsDownload)*100.0, 3),100.0)
		tinsStats['sinceLastQuery']['Bulk']['Upload']['percentage'] = min(round((tinsStats['sinceLastQuery']['Bulk']['Upload']['sent_packets']/allPacketsUpload)*100.0, 3),100.0)
		tinsStats['sinceLastQuery']['BestEffort']['Download']['percentage'] = min(round((tinsStats['sinceLastQuery']['BestEffort']['Download']['sent_packets']/allPacketsDownload)*100.0, 3),100.0)
		tinsStats['sinceLastQuery']['BestEffort']['Upload']['percentage'] = min(round((tinsStats['sinceLastQuery']['BestEffort']['Upload']['sent_packets']/allPacketsUpload)*100.0, 3),100.0)
		tinsStats['sinceLastQuery']['Video']['Download']['percentage'] = min(round((tinsStats['sinceLastQuery']['Video']['Download']['sent_packets']/allPacketsDownload)*100.0, 3),100.0)
		tinsStats['sinceLastQuery']['Video']['Upload']['percentage'] = min(round((tinsStats['sinceLastQuery']['Video']['Upload']['sent_packets']/allPacketsUpload)*100.0, 3),100.0)
		tinsStats['sinceLastQuery']['Voice']['Download']['percentage'] = min(round((tinsStats['sinceLastQuery']['Voice']['Download']['sent_packets']/allPacketsDownload)*100.0, 3),100.0)
		tinsStats['sinceLastQuery']['Voice']['Upload']['percentage'] = min(round((tinsStats['sinceLastQuery']['Voice']['Upload']['sent_packets']/allPacketsUpload)*100.0, 3),100.0)
	except:
		# To avoid graphing 0.0 for all categories, which would show unusual graph results upon each queue reload, we just set these to None if the above calculations fail.
		tinsStats['sinceLastQuery']['Bulk']['Download']['percentage'] = tinsStats['sinceLastQuery']['Bulk']['Upload']['percentage'] = None
		tinsStats['sinceLastQuery']['BestEffort']['Download']['percentage'] = tinsStats['sinceLastQuery']['BestEffort']['Upload']['percentage'] = None
		tinsStats['sinceLastQuery']['Video']['Download']['percentage'] = tinsStats['sinceLastQuery']['Video']['Upload']['percentage'] = None
		tinsStats['sinceLastQuery']['Voice']['Download']['percentage'] = tinsStats['sinceLastQuery']['Voice']['Upload']['percentage'] = None
	return subscriberCircuits, tinsStats
def getParentNodeBandwidthStats(parentNodes, subscriberCircuits):
	for parentNode in parentNodes:
		thisNodeDropsDownload = 0
		thisNodeDropsUpload = 0
		thisNodeDropsTotal = 0
		thisNodeBitsDownload = 0
		thisNodeBitsUpload = 0
		packetsSentDownloadAggregate = 0.0
		packetsSentUploadAggregate = 0.0
		packetsSentTotalAggregate = 0.0
		circuitsMatched = 0
		thisParentNodeStats = {'sinceLastQuery': {}}
		for circuit in subscriberCircuits:
			if circuit['ParentNode'] == parentNode['parentNodeName']:
				thisNodeBitsDownload += circuit['stats']['sinceLastQuery']['bitsDownload']
				thisNodeBitsUpload += circuit['stats']['sinceLastQuery']['bitsUpload']
				#thisNodeDropsDownload += circuit['packetDropsDownloadSinceLastQuery']
				#thisNodeDropsUpload += circuit['packetDropsUploadSinceLastQuery']
				thisNodeDropsTotal += (circuit['stats']['sinceLastQuery']['packetDropsDownload'] + circuit['stats']['sinceLastQuery']['packetDropsUpload'])
				packetsSentDownloadAggregate += circuit['stats']['sinceLastQuery']['packetsSentDownload']
				packetsSentUploadAggregate += circuit['stats']['sinceLastQuery']['packetsSentUpload']
				packetsSentTotalAggregate += (circuit['stats']['sinceLastQuery']['packetsSentDownload'] + circuit['stats']['sinceLastQuery']['packetsSentUpload'])
				circuitsMatched += 1
		if (packetsSentDownloadAggregate > 0) and (packetsSentUploadAggregate > 0):
			#overloadFactorDownloadSinceLastQuery = float(round((thisNodeDropsDownload/packetsSentDownloadAggregate)*100.0, 3))
			#overloadFactorUploadSinceLastQuery = float(round((thisNodeDropsUpload/packetsSentUploadAggregate)*100.0, 3))
			overloadFactorTotalSinceLastQuery = float(round((thisNodeDropsTotal/packetsSentTotalAggregate)*100.0, 1))
		else:
			#overloadFactorDownloadSinceLastQuery = 0.0
			#overloadFactorUploadSinceLastQuery = 0.0
			overloadFactorTotalSinceLastQuery = 0.0
		thisParentNodeStats['sinceLastQuery']['bitsDownload'] = thisNodeBitsDownload
		thisParentNodeStats['sinceLastQuery']['bitsUpload'] = thisNodeBitsUpload
		thisParentNodeStats['sinceLastQuery']['packetDropsTotal'] = thisNodeDropsTotal
		thisParentNodeStats['sinceLastQuery']['overloadFactorTotal'] = overloadFactorTotalSinceLastQuery
		parentNode['stats'] = thisParentNodeStats
	return parentNodes
def getParentNodeLatencyStats(parentNodes, subscriberCircuits):
	for parentNode in parentNodes:
		if 'stats' not in parentNode:
			parentNode['stats'] = {}
			parentNode['stats']['sinceLastQuery'] = {}

	for parentNode in parentNodes:
		thisParentNodeStats = {'sinceLastQuery': {}}
		circuitsMatchedLatencies = []
		for circuit in subscriberCircuits:
			if circuit['ParentNode'] == parentNode['parentNodeName']:
				if circuit['stats']['sinceLastQuery']['tcpLatency'] is not None:
					circuitsMatchedLatencies.append(circuit['stats']['sinceLastQuery']['tcpLatency'])
		if len(circuitsMatchedLatencies) > 0:
			thisParentNodeStats['sinceLastQuery']['tcpLatency'] = statistics.median(circuitsMatchedLatencies)
		else:
			thisParentNodeStats['sinceLastQuery']['tcpLatency'] = None
		parentNode['stats'] = thisParentNodeStats
	return parentNodes
def getCircuitLatencyStats(subscriberCircuits):
command = './cpumap-pping/src/xdp_pping'
listOfEntries = json.loads(subprocess.run(command.split(' '), stdout=subprocess.PIPE).stdout.decode('utf-8'))
tcpLatencyForClassID = {}
for entry in listOfEntries:
if 'tc' in entry:
# Normalize the "major:minor" tc handle reported by xdp_pping into the hex classid format used elsewhere
handle = hex(int(entry['tc'].split(':')[0])) + ':' + hex(int(entry['tc'].split(':')[1]))
# To avoid outliers skewing each circuit's average, cap latency at a ceiling of 200ms
ceiling = 200.0
tcpLatencyForClassID[handle] = min(entry['avg'], ceiling)
for circuit in subscriberCircuits:
if 'stats' not in circuit:
circuit['stats'] = {}
circuit['stats']['sinceLastQuery'] = {}
for circuit in subscriberCircuits:
classID = circuit['classid']
if classID in tcpLatencyForClassID:
circuit['stats']['sinceLastQuery']['tcpLatency'] = tcpLatencyForClassID[classID]
else:
circuit['stats']['sinceLastQuery']['tcpLatency'] = None
return subscriberCircuits
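The handle normalisation above can be sketched in isolation. This assumes, as the parsing implies, that xdp_pping reports the tc handle as a decimal major:minor pair:

```python
def normalize_tc_handle(tc: str) -> str:
    # Convert a decimal "major:minor" tc handle into the hex
    # "0xM:0xN" classid format used by the circuit stats
    major, minor = tc.split(':')
    return hex(int(major)) + ':' + hex(int(minor))

print(normalize_tc_handle("7:21"))  # 0x7:0x15
```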
def getParentNodeDict(data, depth, parentNodeNameDict):
if parentNodeNameDict == None:
parentNodeNameDict = {}
for elem in data:
if 'children' in data[elem]:
for child in data[elem]['children']:
parentNodeNameDict[child] = elem
tempDict = getParentNodeDict(data[elem]['children'], depth + 1, parentNodeNameDict)
parentNodeNameDict = dict(parentNodeNameDict, **tempDict)
return parentNodeNameDict
def parentNodeNameDictPull():
# Load network hierarchy
with open('network.json', 'r') as j:
network = json.loads(j.read())
parentNodeNameDict = getParentNodeDict(network, 0, None)
return parentNodeNameDict
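Applied to a hypothetical two-level network.json, the recursive walk in getParentNodeDict flattens the hierarchy into a child-to-parent map. A self-contained sketch of the same traversal:

```python
def parent_map(data, acc=None):
    # Walk nested 'children' dictionaries, recording each
    # child's immediate parent by name
    if acc is None:
        acc = {}
    for name, node in data.items():
        for child in node.get('children', {}):
            acc[child] = name
        parent_map(node.get('children', {}), acc)
    return acc

network = {
    "Site_A": {"children": {
        "AP_1": {"children": {}},
        "AP_2": {"children": {}},
    }},
}
print(parent_map(network))  # {'AP_1': 'Site_A', 'AP_2': 'Site_A'}
```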
def refreshBandwidthGraphs():
startTime = datetime.now()
with open('statsByParentNode.json', 'r') as j:
parentNodes = json.loads(j.read())
with open('statsByCircuit.json', 'r') as j:
subscriberCircuits = json.loads(j.read())
fileLoc = Path("tinsStats.json")
if fileLoc.is_file():
with open(fileLoc, 'r') as j:
tinsStats = json.loads(j.read())
else:
tinsStats = {}
fileLoc = Path("longTermStats.json")
if fileLoc.is_file():
with open(fileLoc, 'r') as j:
longTermStats = json.loads(j.read())
droppedPacketsAllTime = longTermStats['droppedPacketsTotal']
else:
longTermStats = {}
longTermStats['droppedPacketsTotal'] = 0.0
droppedPacketsAllTime = 0.0
parentNodeNameDict = parentNodeNameDictPull()
print("Retrieving circuit statistics")
subscriberCircuits, tinsStats = getCircuitBandwidthStats(subscriberCircuits, tinsStats)
print("Computing parent node statistics")
parentNodes = getParentNodeBandwidthStats(parentNodes, subscriberCircuits)
print("Writing data to InfluxDB")
client = InfluxDBClient(
url=influxDBurl,
token=influxDBtoken,
org=influxDBOrg
)
# Record current timestamp, use for all points added
timestamp = time.time_ns()
write_api = client.write_api(write_options=SYNCHRONOUS)
chunkedsubscriberCircuits = list(chunk_list(subscriberCircuits, 200))
queriesToSendCount = 0
for chunk in chunkedsubscriberCircuits:
queriesToSend = []
for circuit in chunk:
bitsDownload = float(circuit['stats']['sinceLastQuery']['bitsDownload'])
bitsUpload = float(circuit['stats']['sinceLastQuery']['bitsUpload'])
if (bitsDownload > 0) and (bitsUpload > 0):
percentUtilizationDownload = round((bitsDownload / round(circuit['maxDownload'] * 1000000))*100.0, 1)
percentUtilizationUpload = round((bitsUpload / round(circuit['maxUpload'] * 1000000))*100.0, 1)
p = Point('Bandwidth').tag("Circuit", circuit['circuitName']).tag("ParentNode", circuit['ParentNode']).tag("Type", "Circuit").field("Download", bitsDownload).field("Upload", bitsUpload).time(timestamp)
queriesToSend.append(p)
p = Point('Utilization').tag("Circuit", circuit['circuitName']).tag("ParentNode", circuit['ParentNode']).tag("Type", "Circuit").field("Download", percentUtilizationDownload).field("Upload", percentUtilizationUpload).time(timestamp)
queriesToSend.append(p)
write_api.write(bucket=influxDBBucket, record=queriesToSend)
# print("Added " + str(len(queriesToSend)) + " points to InfluxDB.")
queriesToSendCount += len(queriesToSend)
queriesToSend = []
for parentNode in parentNodes:
bitsDownload = float(parentNode['stats']['sinceLastQuery']['bitsDownload'])
bitsUpload = float(parentNode['stats']['sinceLastQuery']['bitsUpload'])
dropsTotal = float(parentNode['stats']['sinceLastQuery']['packetDropsTotal'])
overloadFactor = float(parentNode['stats']['sinceLastQuery']['overloadFactorTotal'])
droppedPacketsAllTime += dropsTotal
if (bitsDownload > 0) and (bitsUpload > 0):
percentUtilizationDownload = round((bitsDownload / round(parentNode['maxDownload'] * 1000000))*100.0, 1)
percentUtilizationUpload = round((bitsUpload / round(parentNode['maxUpload'] * 1000000))*100.0, 1)
p = Point('Bandwidth').tag("Device", parentNode['parentNodeName']).tag("ParentNode", parentNode['parentNodeName']).tag("Type", "Parent Node").field("Download", bitsDownload).field("Upload", bitsUpload).time(timestamp)
queriesToSend.append(p)
p = Point('Utilization').tag("Device", parentNode['parentNodeName']).tag("ParentNode", parentNode['parentNodeName']).tag("Type", "Parent Node").field("Download", percentUtilizationDownload).field("Upload", percentUtilizationUpload).time(timestamp)
queriesToSend.append(p)
p = Point('Overload').tag("Device", parentNode['parentNodeName']).tag("ParentNode", parentNode['parentNodeName']).tag("Type", "Parent Node").field("Overload", overloadFactor).time(timestamp)
queriesToSend.append(p)
write_api.write(bucket=influxDBBucket, record=queriesToSend)
# print("Added " + str(len(queriesToSend)) + " points to InfluxDB.")
queriesToSendCount += len(queriesToSend)
if 'cake diffserv4' in sqm:
queriesToSend = []
listOfTins = ['Bulk', 'BestEffort', 'Video', 'Voice']
for tin in listOfTins:
p = Point('Tin Drop Percentage').tag("Type", "Tin").tag("Tin", tin).field("Download", tinsStats['sinceLastQuery'][tin]['Download']['dropPercentage']).field("Upload", tinsStats['sinceLastQuery'][tin]['Upload']['dropPercentage']).time(timestamp)
queriesToSend.append(p)
# Check to ensure tin percentage has value (!= None) before graphing. During partial or full reload these will have a value of None.
if (tinsStats['sinceLastQuery'][tin]['Download']['percentage'] != None) and (tinsStats['sinceLastQuery'][tin]['Upload']['percentage'] != None):
p = Point('Tins Assigned').tag("Type", "Tin").tag("Tin", tin).field("Download", tinsStats['sinceLastQuery'][tin]['Download']['percentage']).field("Upload", tinsStats['sinceLastQuery'][tin]['Upload']['percentage']).time(timestamp)
queriesToSend.append(p)
write_api.write(bucket=influxDBBucket, record=queriesToSend)
# print("Added " + str(len(queriesToSend)) + " points to InfluxDB.")
queriesToSendCount += len(queriesToSend)
# Graph CPU use
cpuVals = psutil.cpu_percent(percpu=True)
queriesToSend = []
for index, item in enumerate(cpuVals):
p = Point('CPU').field('CPU_' + str(index), item)
queriesToSend.append(p)
write_api.write(bucket=influxDBBucket, record=queriesToSend)
queriesToSendCount += len(queriesToSend)
print("Added " + str(queriesToSendCount) + " points to InfluxDB.")
client.close()
with open('statsByParentNode.json', 'w') as f:
f.write(json.dumps(parentNodes, indent=4))
with open('statsByCircuit.json', 'w') as f:
f.write(json.dumps(subscriberCircuits, indent=4))
longTermStats['droppedPacketsTotal'] = droppedPacketsAllTime
with open('longTermStats.json', 'w') as f:
f.write(json.dumps(longTermStats, indent=4))
with open('tinsStats.json', 'w') as f:
f.write(json.dumps(tinsStats, indent=4))
endTime = datetime.now()
durationSeconds = round((endTime - startTime).total_seconds(), 2)
print("Graphs updated within " + str(durationSeconds) + " seconds.")
def refreshLatencyGraphs():
startTime = datetime.now()
with open('statsByParentNode.json', 'r') as j:
parentNodes = json.loads(j.read())
with open('statsByCircuit.json', 'r') as j:
subscriberCircuits = json.loads(j.read())
parentNodeNameDict = parentNodeNameDictPull()
print("Retrieving circuit statistics")
subscriberCircuits = getCircuitLatencyStats(subscriberCircuits)
print("Computing parent node statistics")
parentNodes = getParentNodeLatencyStats(parentNodes, subscriberCircuits)
print("Writing data to InfluxDB")
client = InfluxDBClient(
url=influxDBurl,
token=influxDBtoken,
org=influxDBOrg
)
# Record current timestamp, use for all points added
timestamp = time.time_ns()
write_api = client.write_api(write_options=SYNCHRONOUS)
chunkedsubscriberCircuits = list(chunk_list(subscriberCircuits, 200))
queriesToSendCount = 0
for chunk in chunkedsubscriberCircuits:
queriesToSend = []
for circuit in chunk:
if circuit['stats']['sinceLastQuery']['tcpLatency'] != None:
tcpLatency = float(circuit['stats']['sinceLastQuery']['tcpLatency'])
p = Point('TCP Latency').tag("Circuit", circuit['circuitName']).tag("ParentNode", circuit['ParentNode']).tag("Type", "Circuit").field("TCP Latency", tcpLatency).time(timestamp)
queriesToSend.append(p)
write_api.write(bucket=influxDBBucket, record=queriesToSend)
queriesToSendCount += len(queriesToSend)
queriesToSend = []
for parentNode in parentNodes:
if parentNode['stats']['sinceLastQuery']['tcpLatency'] != None:
tcpLatency = float(parentNode['stats']['sinceLastQuery']['tcpLatency'])
p = Point('TCP Latency').tag("Device", parentNode['parentNodeName']).tag("ParentNode", parentNode['parentNodeName']).tag("Type", "Parent Node").field("TCP Latency", tcpLatency).time(timestamp)
queriesToSend.append(p)
write_api.write(bucket=influxDBBucket, record=queriesToSend)
queriesToSendCount += len(queriesToSend)
listOfAllLatencies = []
for circuit in subscriberCircuits:
if circuit['stats']['sinceLastQuery']['tcpLatency'] is not None:
listOfAllLatencies.append(circuit['stats']['sinceLastQuery']['tcpLatency'])
# statistics.median() raises StatisticsError on an empty list, so only record a network-wide latency point when at least one circuit reported one
if len(listOfAllLatencies) > 0:
currentNetworkLatency = statistics.median(listOfAllLatencies)
p = Point('TCP Latency').tag("Type", "Network").field("TCP Latency", currentNetworkLatency).time(timestamp)
write_api.write(bucket=influxDBBucket, record=p)
queriesToSendCount += 1
print("Added " + str(queriesToSendCount) + " points to InfluxDB.")
client.close()
with open('statsByParentNode.json', 'w') as f:
f.write(json.dumps(parentNodes, indent=4))
with open('statsByCircuit.json', 'w') as f:
f.write(json.dumps(subscriberCircuits, indent=4))
endTime = datetime.now()
durationSeconds = round((endTime - startTime).total_seconds(), 2)
print("Graphs updated within " + str(durationSeconds) + " seconds.")
if __name__ == '__main__':
refreshBandwidthGraphs()
refreshLatencyGraphs()

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long


@@ -0,0 +1,421 @@
# Provides common functionality shared between
# integrations.
from typing import List, Any
from ispConfig import allowedSubnets, ignoreSubnets, generatedPNUploadMbps, generatedPNDownloadMbps
import ipaddress
import enum
def isInAllowedSubnets(inputIP):
# Check whether an IP address occurs inside the allowedSubnets list
isAllowed = False
if '/' in inputIP:
inputIP = inputIP.split('/')[0]
for subnet in allowedSubnets:
if (ipaddress.ip_address(inputIP) in ipaddress.ip_network(subnet)):
isAllowed = True
return isAllowed
def isInIgnoredSubnets(inputIP):
# Check whether an IP address occurs within the ignoreSubnets list
isIgnored = False
if '/' in inputIP:
inputIP = inputIP.split('/')[0]
for subnet in ignoreSubnets:
if (ipaddress.ip_address(inputIP) in ipaddress.ip_network(subnet)):
isIgnored = True
return isIgnored
def isIpv4Permitted(inputIP):
# Checks whether an IP address is in Allowed Subnets.
# If it is, check that it isn't in Ignored Subnets.
# If it is allowed and not ignored, returns true.
# Otherwise, returns false.
return (not isInIgnoredSubnets(inputIP)) and isInAllowedSubnets(inputIP)
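The allow/ignore checks lean on the standard ipaddress module. A condensed standalone sketch of the same permitted-IP logic (the subnet lists here are made up for illustration):

```python
import ipaddress

allowed = ['100.64.0.0/10']
ignored = ['100.64.1.0/24']

def in_any(ip: str, subnets) -> bool:
    # Strip any CIDR suffix, then test membership against each subnet
    if '/' in ip:
        ip = ip.split('/')[0]
    return any(ipaddress.ip_address(ip) in ipaddress.ip_network(s) for s in subnets)

def permitted(ip: str) -> bool:
    return not in_any(ip, ignored) and in_any(ip, allowed)

print(permitted('100.64.2.5'))  # True
print(permitted('100.64.1.5'))  # False (falls in an ignored subnet)
```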
def fixSubnet(inputIP):
# If an IP address has a CIDR other than /32 (e.g. 192.168.1.1/24),
# but doesn't appear as a network address (e.g. 192.168.1.0/24)
# then it probably isn't actually serving that whole subnet.
# This allows you to specify e.g. 192.168.1.0/24 is "the client
# on port 3" in the device, without falling afoul of UISP's inclusion
# of subnet masks in device IPs.
[rawIp, cidr] = inputIP.split('/')
if cidr != "32":
try:
subnet = ipaddress.ip_network(inputIP)
except ValueError:
# Not a network address
return rawIp + "/32"
return inputIP
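fixSubnet's effect, assuming the UISP-style inputs described in the comment: a host address carrying a non-/32 mask is clamped to /32, while a genuine network address passes through untouched:

```python
import ipaddress

def fix_subnet(ip_cidr: str) -> str:
    # Mirror of fixSubnet above
    raw_ip, cidr = ip_cidr.split('/')
    if cidr != "32":
        try:
            ipaddress.ip_network(ip_cidr)  # raises if host bits are set
        except ValueError:
            return raw_ip + "/32"
    return ip_cidr

print(fix_subnet("192.168.1.1/24"))  # 192.168.1.1/32
print(fix_subnet("192.168.1.0/24"))  # 192.168.1.0/24
```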
class NodeType(enum.IntEnum):
# Enumeration to define what type of node
# a NetworkNode is.
root = 1
site = 2
ap = 3
client = 4
clientWithChildren = 5
device = 6
class NetworkNode:
# Defines a node on a LibreQoS network graph.
# Nodes default to being disconnected, and
# will be mapped to the root of the overall
# graph.
id: str
displayName: str
parentIndex: int
parentId: str
type: NodeType
downloadMbps: int
uploadMbps: int
ipv4: List
ipv6: List
address: str
mac: str
def __init__(self, id: str, displayName: str = "", parentId: str = "", type: NodeType = NodeType.site, download: int = generatedPNDownloadMbps, upload: int = generatedPNUploadMbps, ipv4: List = None, ipv6: List = None, address: str = "", mac: str = "") -> None:
self.id = id
self.parentIndex = 0
self.type = type
self.parentId = parentId
if displayName == "":
self.displayName = id
else:
self.displayName = displayName
self.downloadMbps = download
self.uploadMbps = upload
# Default to None and create fresh lists here; mutable default arguments ([]) would be shared between every NetworkNode instance
self.ipv4 = ipv4 if ipv4 is not None else []
self.ipv6 = ipv6 if ipv6 is not None else []
self.address = address
self.mac = mac
class NetworkGraph:
# Defines a network as a graph topology
# allowing any integration to build the
# graph via a common API, emitting
# ShapedDevices and network.json files
# via a common interface.
nodes: List
ipv4ToIPv6: Any
excludeSites: List # Copied to allow easy in-test patching
exceptionCPEs: Any
def __init__(self) -> None:
from ispConfig import findIPv6usingMikrotik, excludeSites, exceptionCPEs
self.nodes = [
NetworkNode("FakeRoot", type=NodeType.root,
parentId="", displayName="Shaper Root")
]
self.excludeSites = excludeSites
self.exceptionCPEs = exceptionCPEs
if findIPv6usingMikrotik:
from mikrotikFindIPv6 import pullMikrotikIPv6
self.ipv4ToIPv6 = pullMikrotikIPv6()
else:
self.ipv4ToIPv6 = {}
def addRawNode(self, node: NetworkNode) -> None:
# Adds a NetworkNode to the graph, unchanged.
# If a site is excluded (via excludedSites in ispConfig)
# it won't be added
if not node.displayName in self.excludeSites:
if node.displayName in self.exceptionCPEs.keys():
node.parentId = self.exceptionCPEs[node.displayName]
self.nodes.append(node)
def replaceRootNote(self, node: NetworkNode) -> None:
# Replaces the automatically generated root node
# with a new node. Useful when you have a top-level
# node specified (e.g. "uispSite" in the UISP
# integration)
self.nodes[0] = node
def addNodeAsChild(self, parent: str, node: NetworkNode) -> None:
# Searches the existing graph for a named parent,
# adjusts the new node's parentIndex to match the new
# node. The parented node is then inserted.
#
# Exceptions are NOT applied, since we're explicitly
# specifying the parent - we're assuming you really
# meant it.
if node.displayName in self.excludeSites: return
parentIdx = 0
# Use a distinct loop variable; reusing "node" here would shadow the parameter and attach the wrong object to the graph
for (i, existing) in enumerate(self.nodes):
if existing.id == parent:
parentIdx = i
node.parentIndex = parentIdx
self.nodes.append(node)
def __reparentById(self) -> None:
# Scans the entire node tree, searching for parents
# by name. Entries are re-mapped to match the named
# parents. You can use this to build a tree from a
# blob of raw data.
for child in self.nodes:
if child.parentId != "":
for (i, node) in enumerate(self.nodes):
if node.id == child.parentId:
child.parentIndex = i
def findNodeIndexById(self, id: str) -> int:
# Finds a single node by identity(id)
# Return -1 if not found
for (i, node) in enumerate(self.nodes):
if node.id == id:
return i
return -1
def findNodeIndexByName(self, name: str) -> int:
# Finds a single node by identity(name)
# Return -1 if not found
for (i, node) in enumerate(self.nodes):
if node.displayName == name:
return i
return -1
def findChildIndices(self, parentIndex: int) -> List:
# Returns the indices of all nodes with a
# parentIndex equal to the specified parameter
result = []
for (i, node) in enumerate(self.nodes):
if node.parentIndex == parentIndex:
result.append(i)
return result
def __promoteClientsWithChildren(self) -> None:
# Searches for client sites that have children,
# and changes their node type to clientWithChildren
for (i, node) in enumerate(self.nodes):
if node.type == NodeType.client:
for child in self.findChildIndices(i):
if self.nodes[child].type != NodeType.device:
node.type = NodeType.clientWithChildren
def __clientsWithChildrenToSites(self) -> None:
toAdd = []
for (i, node) in enumerate(self.nodes):
if node.type == NodeType.clientWithChildren:
siteNode = NetworkNode(
id=node.id + "_gen",
displayName="(Generated Site) " + node.displayName,
type=NodeType.site
)
siteNode.parentIndex = node.parentIndex
node.parentId = siteNode.id
node.type = NodeType.client
for child in self.findChildIndices(i):
if self.nodes[child].type == NodeType.client or self.nodes[child].type == NodeType.clientWithChildren or self.nodes[child].type == NodeType.site:
self.nodes[child].parentId = siteNode.id
toAdd.append(siteNode)
for n in toAdd:
self.addRawNode(n)
self.__reparentById()
def __findUnconnectedNodes(self) -> List:
# Performs a tree-traversal and finds any nodes that
# aren't connected to the root. This is a "sanity check",
# and also an easy way to handle "flat" topologies and
# ensure that the unconnected nodes are re-connected to
# the root.
visited = []
toVisit = [0]  # avoid shadowing the built-in next()
while len(toVisit) > 0:
nextTraversal = toVisit.pop()
visited.append(nextTraversal)
for idx in self.findChildIndices(nextTraversal):
if idx not in visited:
toVisit.append(idx)
result = []
for i, n in enumerate(self.nodes):
if i not in visited:
result.append(i)
return result
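The reachability pass above is an ordinary graph traversal over parent links. On a hypothetical five-node parent array it flags the island the root cannot reach:

```python
def unconnected(parents):
    # parents[i] is the parent index of node i; node 0 is the root.
    # Returns the indices of nodes unreachable from the root.
    children = {i: [] for i in range(len(parents))}
    for i, p in enumerate(parents):
        if i != 0:
            children[p].append(i)
    visited, stack = set(), [0]
    while stack:
        n = stack.pop()
        visited.add(n)
        stack.extend(c for c in children[n] if c not in visited)
    return [i for i in range(len(parents)) if i not in visited]

# Nodes 3 and 4 are parented to each other, forming a disconnected island
print(unconnected([0, 0, 1, 4, 3]))  # [3, 4]
```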
def __reconnectUnconnected(self):
# Finds any unconnected nodes and reconnects
# them to the root
for idx in self.__findUnconnectedNodes():
if self.nodes[idx].type == NodeType.site:
self.nodes[idx].parentIndex = 0
for idx in self.__findUnconnectedNodes():
if self.nodes[idx].type == NodeType.clientWithChildren:
self.nodes[idx].parentIndex = 0
for idx in self.__findUnconnectedNodes():
if self.nodes[idx].type == NodeType.client:
self.nodes[idx].parentIndex = 0
def prepareTree(self) -> None:
# Helper function that calls all the cleanup and mapping
# functions in the right order. Unless you are doing
# something special, you can use this instead of
# calling the functions individually
self.__reparentById()
self.__promoteClientsWithChildren()
self.__clientsWithChildrenToSites()
self.__reconnectUnconnected()
def doesNetworkJsonExist(self):
# Returns true if "network.json" exists, false otherwise
import os
return os.path.isfile("network.json")
def __isSite(self, index) -> bool:
return self.nodes[index].type == NodeType.ap or self.nodes[index].type == NodeType.site or self.nodes[index].type == NodeType.clientWithChildren
def createNetworkJson(self):
import json
topLevelNode = {}
self.__visited = [] # Protection against loops - never visit twice
for child in self.findChildIndices(0):
if child > 0 and self.__isSite(child):
topLevelNode[self.nodes[child].displayName] = self.__buildNetworkObject(
child)
del self.__visited
with open('network.json', 'w') as f:
json.dump(topLevelNode, f, indent=4)
def __buildNetworkObject(self, idx):
# Private: used to recurse down the network tree while building
# network.json
self.__visited.append(idx)
node = {
"downloadBandwidthMbps": self.nodes[idx].downloadMbps,
"uploadBandwidthMbps": self.nodes[idx].uploadMbps,
}
children = {}
hasChildren = False
for child in self.findChildIndices(idx):
if child > 0 and self.__isSite(child) and child not in self.__visited:
children[self.nodes[child].displayName] = self.__buildNetworkObject(
child)
hasChildren = True
if hasChildren:
node["children"] = children
return node
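The recursion above bottoms out as the nested dictionary shape network.json expects. A self-contained sketch on a hypothetical two-node tree:

```python
import json

def build(nodes, children, idx):
    # nodes[idx] = (display name, download Mbps, upload Mbps);
    # children maps an index to its child indices
    name, down, up = nodes[idx]
    node = {"downloadBandwidthMbps": down, "uploadBandwidthMbps": up}
    kids = {nodes[c][0]: build(nodes, children, c) for c in children.get(idx, [])}
    if kids:
        node["children"] = kids
    return node

nodes = {1: ("Site_A", 1000, 1000), 2: ("AP_1", 500, 500)}
children = {1: [2]}
print(json.dumps(build(nodes, children, 1), indent=4))
```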
def __addIpv6FromMap(self, ipv4, ipv6) -> None:
# Scans each address in ipv4. If its present in the
# IPv4 to Ipv6 map (currently pulled from Mikrotik devices
# if findIPv6usingMikrotik is enabled), then matching
# IPv6 networks are appended to the ipv6 list.
# This is explicitly non-destructive of the existing IPv6
# list, in case you already have some.
for ipCidr in ipv4:
if '/' in ipCidr: ip = ipCidr.split('/')[0]
else: ip = ipCidr
if ip in self.ipv4ToIPv6.keys():
ipv6.append(self.ipv4ToIPv6[ip])
def createShapedDevices(self):
import csv
from ispConfig import bandwidthOverheadFactor
# Builds ShapedDevices.csv from the network tree.
circuits = []
for (i, node) in enumerate(self.nodes):
if node.type == NodeType.client:
parent = self.nodes[node.parentIndex].displayName
if parent == "Shaper Root": parent = ""
circuit = {
"id": node.id,
"name": node.address,
"parent": parent,
"download": node.downloadMbps,
"upload": node.uploadMbps,
"devices": []
}
for child in self.findChildIndices(i):
if self.nodes[child].type == NodeType.device and (len(self.nodes[child].ipv4)+len(self.nodes[child].ipv6)>0):
ipv4 = self.nodes[child].ipv4
ipv6 = self.nodes[child].ipv6
self.__addIpv6FromMap(ipv4, ipv6)
device = {
"id": self.nodes[child].id,
"name": self.nodes[child].displayName,
"mac": self.nodes[child].mac,
"ipv4": ipv4,
"ipv6": ipv6,
}
circuit["devices"].append(device)
if len(circuit["devices"]) > 0:
circuits.append(circuit)
with open('ShapedDevices.csv', 'w', newline='') as csvfile:
wr = csv.writer(csvfile, quoting=csv.QUOTE_ALL)
wr.writerow(['Circuit ID', 'Circuit Name', 'Device ID', 'Device Name', 'Parent Node', 'MAC',
'IPv4', 'IPv6', 'Download Min', 'Upload Min', 'Download Max', 'Upload Max', 'Comment'])
for circuit in circuits:
for device in circuit["devices"]:
#Remove brackets and quotes of list so LibreQoS.py can parse it
device["ipv4"] = str(device["ipv4"]).replace('[','').replace(']','').replace("'",'')
device["ipv6"] = str(device["ipv6"]).replace('[','').replace(']','').replace("'",'')
row = [
circuit["id"],
circuit["name"],
device["id"],
device["name"],
circuit["parent"],
device["mac"],
device["ipv4"],
device["ipv6"],
int(circuit["download"] * 0.98),
int(circuit["upload"] * 0.98),
int(circuit["download"] * bandwidthOverheadFactor),
int(circuit["upload"] * bandwidthOverheadFactor),
""
]
wr.writerow(row)
def plotNetworkGraph(self, showClients=False):
# Requires `pip install graphviz` to function.
# You also need to install graphviz on your PC.
# In Ubuntu, apt install graphviz will do it.
# Plots the network graph to a PDF file, allowing
# visual verification that the graph makes sense.
# Could potentially be useful in a future
# web interface.
import importlib.util
if (spec := importlib.util.find_spec('graphviz')) is None:
return
import graphviz
dot = graphviz.Digraph(
'network', comment="Network Graph", engine="fdp")
for (i, node) in enumerate(self.nodes):
if ((node.type != NodeType.client and node.type != NodeType.device) or showClients):
color = "white"
match node.type:
case NodeType.root: color = "green"
case NodeType.site: color = "red"
case NodeType.ap: color = "blue"
case NodeType.clientWithChildren: color = "magenta"
case NodeType.device: color = "white"
case _: color = "grey"
dot.node("N" + str(i), node.displayName, color=color)
children = self.findChildIndices(i)
for child in children:
if child != i:
if (self.nodes[child].type != NodeType.client and self.nodes[child].type != NodeType.device) or showClients:
dot.edge("N" + str(i), "N" + str(child))
dot.render("network.pdf")


@@ -0,0 +1,125 @@
import requests
from ispConfig import excludeSites, findIPv6usingMikrotik, bandwidthOverheadFactor, exceptionCPEs, splynx_api_key, splynx_api_secret, splynx_api_url
from integrationCommon import isIpv4Permitted
import base64
from requests.auth import HTTPBasicAuth
if findIPv6usingMikrotik:
from mikrotikFindIPv6 import pullMikrotikIPv6
from integrationCommon import NetworkGraph, NetworkNode, NodeType
def buildHeaders():
credentials = splynx_api_key + ':' + splynx_api_secret
credentials = base64.b64encode(credentials.encode()).decode()
return {'Authorization' : "Basic %s" % credentials}
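The header construction is standard HTTP Basic authentication: base64-encode key:secret and prefix with "Basic ". A standalone sketch with a hypothetical key/secret pair:

```python
import base64

def basic_auth_header(key: str, secret: str) -> dict:
    # Equivalent to buildHeaders above, with the credentials passed in
    credentials = base64.b64encode(f"{key}:{secret}".encode()).decode()
    return {'Authorization': "Basic " + credentials}

print(basic_auth_header("key", "secret"))
# {'Authorization': 'Basic a2V5OnNlY3JldA=='}
```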
def spylnxRequest(target, headers):
# Sends a REST GET request to Spylnx and returns the
# result in JSON
url = splynx_api_url + "/api/2.0/" + target
r = requests.get(url, headers=headers)
return r.json()
def getTariffs(headers):
data = spylnxRequest("admin/tariffs/internet", headers)
downloadForTariffID = {}
uploadForTariffID = {}
for tariff in data:
tariffID = tariff['id']
# Splynx reports speeds in kbps; convert to Mbps
speed_download = round((int(tariff['speed_download']) / 1000))
speed_upload = round((int(tariff['speed_upload']) / 1000))
downloadForTariffID[tariffID] = speed_download
uploadForTariffID[tariffID] = speed_upload
# Return the full tariff list rather than the loop variable, which would only hold the final tariff
return (data, downloadForTariffID, uploadForTariffID)
def getCustomers(headers):
data = spylnxRequest("admin/customers/customer", headers)
#addressForCustomerID = {}
#customerIDs = []
#for customer in data:
# customerIDs.append(customer['id'])
# addressForCustomerID[customer['id']] = customer['street_1']
return data
def getRouters(headers):
data = spylnxRequest("admin/networking/routers", headers)
ipForRouter = {}
for router in data:
routerID = router['id']
ipForRouter[routerID] = router['ip']
return ipForRouter
def combineAddress(json):
# Combines address fields into a single string
# The API docs seem to indicate that there isn't a "state" field?
if json["street_1"]=="" and json["city"]=="" and json["zip_code"]=="":
return str(json["id"]) + "/" + json["name"]
else:
return json["street_1"] + " " + json["city"] + " " + json["zip_code"]
def createShaper():
net = NetworkGraph()
print("Fetching data from Spylnx")
headers = buildHeaders()
tariff, downloadForTariffID, uploadForTariffID = getTariffs(headers)
customers = getCustomers(headers)
ipForRouter = getRouters(headers)
# It's not very clear how a service is meant to handle multiple
# devices on a shared tariff. Creating each service as a combined
# entity including the customer, to be on the safe side.
for customerJson in customers:
services = spylnxRequest("admin/customers/customer/" + str(customerJson["id"]) + "/internet-services", headers)
for serviceJson in services:
combinedId = "c_" + str(customerJson["id"]) + "_s_" + str(serviceJson["id"])
tariff_id = serviceJson['tariff_id']
customer = NetworkNode(
type=NodeType.client,
id=combinedId,
displayName=customerJson["name"],
address=combineAddress(customerJson),
download=downloadForTariffID[tariff_id],
upload=uploadForTariffID[tariff_id],
)
net.addRawNode(customer)
ipv4 = ''
ipv6 = ''
routerID = serviceJson['router_id']
# If not "Taking IPv4" (Router will assign IP), then use router's set IP
if serviceJson['taking_ipv4'] == 0:
ipv4 = ipForRouter[routerID]
elif serviceJson['taking_ipv4'] == 1:
ipv4 = serviceJson['ipv4']
# If not "Taking IPv6" (Router will assign IP), then use router's set IP
if serviceJson['taking_ipv6'] == 0:
ipv6 = ''
elif serviceJson['taking_ipv6'] == 1:
ipv6 = serviceJson['ipv6']
device = NetworkNode(
id=combinedId+"_d" + str(serviceJson["id"]),
displayName=serviceJson["description"],
type=NodeType.device,
parentId=combinedId,
mac=serviceJson["mac"],
ipv4=[ipv4],
ipv6=[ipv6]
)
net.addRawNode(device)
net.prepareTree()
net.plotNetworkGraph(False)
if net.doesNetworkJsonExist():
print("network.json already exists. Leaving in-place.")
else:
net.createNetworkJson()
net.createShapedDevices()
def importFromSplynx():
#createNetworkJSON()
createShaper()
if __name__ == '__main__':
importFromSplynx()

old/v1.3/integrationUISP.py Normal file

@@ -0,0 +1,220 @@
import requests
import os
import csv
from ispConfig import uispSite, uispStrategy
from integrationCommon import isIpv4Permitted, fixSubnet
def uispRequest(target):
# Sends an HTTP request to UISP and returns the
# result in JSON. You only need to specify the
# tail end of the URL, e.g. "sites"
from ispConfig import UISPbaseURL, uispAuthToken
url = UISPbaseURL + "/nms/api/v2.1/" + target
headers = {'accept': 'application/json', 'x-auth-token': uispAuthToken}
r = requests.get(url, headers=headers)
return r.json()
def buildFlatGraph():
# Builds a high-performance (but lacking in site or AP bandwidth control)
# network.
from integrationCommon import NetworkGraph, NetworkNode, NodeType
from ispConfig import generatedPNUploadMbps, generatedPNDownloadMbps
# Load network sites
print("Loading Data from UISP")
sites = uispRequest("sites")
devices = uispRequest("devices?withInterfaces=true&authorized=true")
# Build a basic network adding every client to the tree
print("Building Flat Topology")
net = NetworkGraph()
for site in sites:
type = site['identification']['type']
if type == "endpoint":
id = site['identification']['id']
address = site['description']['address']
name = site['identification']['name']
type = site['identification']['type']
download = generatedPNDownloadMbps
upload = generatedPNUploadMbps
if (site['qos']['downloadSpeed']) and (site['qos']['uploadSpeed']):
download = int(round(site['qos']['downloadSpeed']/1000000))
upload = int(round(site['qos']['uploadSpeed']/1000000))
node = NetworkNode(id=id, displayName=name, type=NodeType.client, download=download, upload=upload, address=address)
net.addRawNode(node)
for device in devices:
if device['identification']['site'] is not None and device['identification']['site']['id'] == id:
# The device is at this site, so add it
ipv4 = []
ipv6 = []
for interface in device["interfaces"]:
for ip in interface["addresses"]:
ip = ip["cidr"]
if isIpv4Permitted(ip):
ip = fixSubnet(ip)
if ip not in ipv4:
ipv4.append(ip)
# TODO: Figure out Mikrotik IPv6?
mac = device['identification']['mac']
net.addRawNode(NetworkNode(id=device['identification']['id'], displayName=device['identification']
['name'], parentId=id, type=NodeType.device, ipv4=ipv4, ipv6=ipv6, mac=mac))
# Finish up
net.prepareTree()
net.plotNetworkGraph(False)
if net.doesNetworkJsonExist():
print("network.json already exists. Leaving in-place.")
else:
net.createNetworkJson()
net.createShapedDevices()
def buildFullGraph():
# Attempts to build a full network graph, incorporating as much of the UISP
# hierarchy as possible.
from integrationCommon import NetworkGraph, NetworkNode, NodeType
from ispConfig import generatedPNUploadMbps, generatedPNDownloadMbps
# Load network sites
print("Loading Data from UISP")
sites = uispRequest("sites")
devices = uispRequest("devices?withInterfaces=true&authorized=true")
dataLinks = uispRequest("data-links?siteLinksOnly=true")
# Do we already have a integrationUISPbandwidths.csv file?
siteBandwidth = {}
if os.path.isfile("integrationUISPbandwidths.csv"):
with open('integrationUISPbandwidths.csv') as csv_file:
csv_reader = csv.reader(csv_file, delimiter=',')
next(csv_reader)
for row in csv_reader:
name, download, upload = row
download = int(download)
upload = int(upload)
siteBandwidth[name] = {"download": download, "upload": upload}
# Find AP capacities from UISP
for device in devices:
if device['identification']['role'] == "ap":
name = device['identification']['name']
if not name in siteBandwidth and device['overview']['downlinkCapacity'] and device['overview']['uplinkCapacity']:
download = int(device['overview']
['downlinkCapacity'] / 1000000)
upload = int(device['overview']['uplinkCapacity'] / 1000000)
siteBandwidth[device['identification']['name']] = {
"download": download, "upload": upload}
print("Building Topology")
net = NetworkGraph()
# Add all sites and client sites
for site in sites:
id = site['identification']['id']
name = site['identification']['name']
type = site['identification']['type']
download = generatedPNDownloadMbps
upload = generatedPNUploadMbps
address = ""
if site['identification']['parent'] is None:
parent = ""
else:
parent = site['identification']['parent']['id']
match type:
case "site":
nodeType = NodeType.site
if name in siteBandwidth:
# Use the CSV bandwidth values
download = siteBandwidth[name]["download"]
upload = siteBandwidth[name]["upload"]
else:
# Add them just in case
siteBandwidth[name] = {
"download": download, "upload": upload}
case default:
nodeType = NodeType.client
address = site['description']['address']
if (site['qos']['downloadSpeed']) and (site['qos']['uploadSpeed']):
download = int(round(site['qos']['downloadSpeed']/1000000))
upload = int(round(site['qos']['uploadSpeed']/1000000))
node = NetworkNode(id=id, displayName=name, type=nodeType,
parentId=parent, download=download, upload=upload, address=address)
# If this is the uispSite node, it becomes the root. Otherwise, add it to the
# node soup.
if name == uispSite:
net.replaceRootNote(node)
else:
net.addRawNode(node)
for device in devices:
if device['identification']['site'] is not None and device['identification']['site']['id'] == id:
# The device is at this site, so add it
ipv4 = []
ipv6 = []
for interface in device["interfaces"]:
for ip in interface["addresses"]:
ip = ip["cidr"]
if isIpv4Permitted(ip):
ip = fixSubnet(ip)
if ip not in ipv4:
ipv4.append(ip)
# TODO: Figure out Mikrotik IPv6?
mac = device['identification']['mac']
net.addRawNode(NetworkNode(id=device['identification']['id'], displayName=device['identification']
['name'], parentId=id, type=NodeType.device, ipv4=ipv4, ipv6=ipv6, mac=mac))
# Now iterate access points, and look for connections to sites
for node in net.nodes:
if node.type == NodeType.device:
for dl in dataLinks:
if dl['from']['device'] is not None and dl['from']['device']['identification']['id'] == node.id:
if dl['to']['site'] is not None and dl['from']['site']['identification']['id'] != dl['to']['site']['identification']['id']:
target = net.findNodeIndexById(
dl['to']['site']['identification']['id'])
if target > -1:
# We found the site
if net.nodes[target].type == NodeType.client or net.nodes[target].type == NodeType.clientWithChildren:
net.nodes[target].parentId = node.id
node.type = NodeType.ap
if node.displayName in siteBandwidth:
# Use the bandwidth numbers from the CSV file
node.uploadMbps = siteBandwidth[node.displayName]["upload"]
node.downloadMbps = siteBandwidth[node.displayName]["download"]
else:
# Add some defaults in case they want to change them
siteBandwidth[node.displayName] = {
"download": generatedPNDownloadMbps, "upload": generatedPNUploadMbps}
net.prepareTree()
net.plotNetworkGraph(False)
if net.doesNetworkJsonExist():
print("network.json already exists. Leaving in-place.")
else:
net.createNetworkJson()
net.createShapedDevices()
# Save integrationUISPbandwidths.csv
# (newline='' prevents the csv writer from emitting extra blank lines)
with open('integrationUISPbandwidths.csv', 'w', newline='') as csvfile:
wr = csv.writer(csvfile, quoting=csv.QUOTE_ALL)
wr.writerow(['ParentNode', 'Download Mbps', 'Upload Mbps'])
for device in siteBandwidth:
entry = (
device, siteBandwidth[device]["download"], siteBandwidth[device]["upload"])
wr.writerow(entry)
def importFromUISP():
match uispStrategy:
case "full": buildFullGraph()
case default: buildFlatGraph()
if __name__ == '__main__':
importFromUISP()
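
`buildFullGraph()` above reads `integrationUISPbandwidths.csv` positionally (name, download, upload) after skipping the header. A minimal standalone sketch of that read side, assuming the `ParentNode` / `Download Mbps` / `Upload Mbps` layout the function writes back out (`loadSiteBandwidth` is illustrative, not part of the shipped code):

```python
import csv
import io

def loadSiteBandwidth(csvText):
    # Parse the integrationUISPbandwidths.csv layout written by
    # buildFullGraph(): one header row, then name,download,upload rows.
    siteBandwidth = {}
    reader = csv.reader(io.StringIO(csvText))
    next(reader)  # skip the header row
    for name, download, upload in reader:
        siteBandwidth[name] = {"download": int(download), "upload": int(upload)}
    return siteBandwidth

example = '"ParentNode","Download Mbps","Upload Mbps"\n"AP_A","500","500"\n'
print(loadSiteBandwidth(example))
```

Because the parse is positional, extra or reordered columns would silently mis-assign bandwidths; keeping the header order stable matters.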


@ -0,0 +1,101 @@
# 'fq_codel' or 'cake diffserv4'
# 'cake diffserv4' is recommended
# sqm = 'fq_codel'
sqm = 'cake diffserv4'
# Used to passively monitor the network for before / after comparisons. Leave as False to
# ensure actual shaping. After changing this value, run "sudo systemctl restart LibreQoS.service"
monitorOnlyMode = False
# How many Mbps are available to the edge of this network
upstreamBandwidthCapacityDownloadMbps = 1000
upstreamBandwidthCapacityUploadMbps = 1000
# Devices in ShapedDevices.csv without a defined ParentNode will be placed under a generated
# parent node, evenly spread out across CPU cores. Here, define the bandwidth limit for each
# of those generated parent nodes.
generatedPNDownloadMbps = 1000
generatedPNUploadMbps = 1000
# Interface connected to core router
interfaceA = 'eth1'
# Interface connected to edge router
interfaceB = 'eth2'
## WORK IN PROGRESS. Note that interfaceA determines the "stick" interface
## I could only get scanning to work if I issued ethtool -K enp1s0f1 rxvlan off
OnAStick = False
# VLAN facing the core router
StickVlanA = 0
# VLAN facing the edge router
StickVlanB = 0
# Allow shell commands. False causes commands to print to the console only, without being executed.
# MUST BE ENABLED FOR PROGRAM TO FUNCTION
enableActualShellCommands = True
# Add 'sudo' before execution of any shell commands. May be required depending on distribution and environment.
runShellCommandsAsSudo = False
# Allows overriding queues / CPU cores used. When set to 0, the max possible queues / CPU cores are utilized. Please leave as 0.
queuesAvailableOverride = 0
# Some networks are flat - where there are no Parent Nodes defined in ShapedDevices.csv
# For such flat networks, just define network.json as {} and enable this setting
# By default, it balances the subscribers across CPU cores, factoring in their max bandwidth rates
# Past 25,000 subscribers this algorithm becomes inefficient and is not advised
useBinPackingToBalanceCPU = True
# Bandwidth & Latency Graphing
influxDBEnabled = True
influxDBurl = "http://localhost:8086"
influxDBBucket = "libreqos"
influxDBOrg = "Your ISP Name Here"
influxDBtoken = ""
# NMS/CRM Integration
# If a device shows a WAN IP within these subnets, assume they are behind NAT / un-shapable, and ignore them
ignoreSubnets = ['192.168.0.0/16']
allowedSubnets = ['100.64.0.0/10']
# Splynx Integration
automaticImportSplynx = False
splynx_api_key = ''
splynx_api_secret = ''
# Everything before /api/2.0/ on your Splynx instance
splynx_api_url = 'https://YOUR_URL.splynx.app'
# UISP integration
automaticImportUISP = False
uispAuthToken = ''
# Everything before /nms/ on your UISP instance
UISPbaseURL = 'https://examplesite.com'
# UISP Site - enter the name of the root site in your network tree
# to act as the starting point for the tree mapping
uispSite = ''
# Strategy:
# * "flat" - create all client sites directly off the top of the tree,
# provides maximum performance - at the expense of not offering AP,
# or site options.
# * "full" - build a complete network map
uispStrategy = "full"
# List any sites that should not be included, with each site name surrounded by '' and separated by commas
excludeSites = []
# If you use IPv6, this can be used to find associated IPv6 prefixes for your clients' IPv4 addresses, and match them to those devices
findIPv6usingMikrotik = False
# If you want to provide a safe cushion for speed test results to prevent customer complaints, you can set this to 1.15 (15% above plan rate).
# If not, you can leave it as 1.0
bandwidthOverheadFactor = 1.0
# For edge cases, set the respective ParentNode for these CPEs
exceptionCPEs = {}
# 'CPE-SomeLocation1': 'AP-SomeLocation1',
# 'CPE-SomeLocation2': 'AP-SomeLocation2',
#}
# API Auth
apiUsername = "testUser"
apiPassword = "changeme8343486806"
apiHostIP = "127.0.0.1"
apiHostPost = 5000
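
`bandwidthOverheadFactor` above scales each subscriber's plan rate before it is applied to their queue. A hypothetical helper (`applyOverhead` is not part of the shipped code) illustrates the arithmetic:

```python
def applyOverhead(planMbps, bandwidthOverheadFactor=1.15):
    # Hypothetical helper: give the queue slightly more than the plan
    # rate so customer speed tests comfortably reach the advertised figure.
    return int(round(planMbps * bandwidthOverheadFactor))

print(applyOverhead(100))       # 100 Mbps plan with a 15% cushion
print(applyOverhead(100, 1.0))  # factor 1.0 leaves the rate unchanged
```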


@ -0,0 +1,2 @@
Router Name / ID,IP,API Username,API Password,API Port
main,100.64.0.1,admin,password,8728


@ -0,0 +1,52 @@
#!/usr/bin/python3
import routeros_api
import csv
def pullMikrotikIPv6():
ipv4ToIPv6 = {}
routerList = []
with open('mikrotikDHCPRouterList.csv') as csv_file:
csv_reader = csv.reader(csv_file, delimiter=',')
next(csv_reader)
for row in csv_reader:
RouterName, IP, Username, Password, apiPort = row
routerList.append((RouterName, IP, Username, Password, apiPort))
for router in routerList:
        RouterName, IP, inputUsername, inputPassword, apiPort = router
        connection = routeros_api.RouterOsApiPool(IP, username=inputUsername, password=inputPassword, port=int(apiPort), use_ssl=False, ssl_verify=False, ssl_verify_hostname=False, plaintext_login=True)
api = connection.get_api()
macToIPv4 = {}
macToIPv6 = {}
clientAddressToIPv6 = {}
list_dhcp = api.get_resource('/ip/dhcp-server/lease')
entries = list_dhcp.get()
for entry in entries:
try:
macToIPv4[entry['mac-address']] = entry['address']
except:
pass
list_dhcp = api.get_resource('/ipv6/dhcp-server/binding')
entries = list_dhcp.get()
for entry in entries:
try:
clientAddressToIPv6[entry['client-address']] = entry['address']
except:
pass
list_dhcp = api.get_resource('/ipv6/neighbor')
entries = list_dhcp.get()
for entry in entries:
try:
realIPv6 = clientAddressToIPv6[entry['address']]
macToIPv6[entry['mac-address']] = realIPv6
except:
pass
for mac, ipv6 in macToIPv6.items():
try:
ipv4 = macToIPv4[mac]
ipv4ToIPv6[ipv4] = ipv6
except:
print('Failed to find associated IPv4 for ' + ipv6)
return ipv4ToIPv6
if __name__ == '__main__':
print(pullMikrotikIPv6())
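
`pullMikrotikIPv6()` returns an IPv4-to-IPv6 dict; downstream, the graph code uses a map like this to attach IPv6 prefixes to devices that were discovered by their IPv4 addresses. A minimal sketch of that lookup step (the shipped implementation lives in `integrationCommon.py` and may differ):

```python
def addIpv6FromMap(ipv4List, ipv6List, ipv4ToIPv6):
    # For each IPv4 a device already has, append the matching IPv6
    # prefix from the Mikrotik-derived map to the device's IPv6 list.
    for ip in ipv4List:
        bare = ip.split('/')[0]  # tolerate "a.b.c.d/32" style entries
        if bare in ipv4ToIPv6:
            ipv6List.append(ipv4ToIPv6[bare])

ipv6 = []
addIpv6FromMap(["100.64.1.1"], ipv6, {"100.64.1.1": "dead::beef/64"})
print(ipv6)
```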


@ -0,0 +1,75 @@
{
"Site_1":
{
"downloadBandwidthMbps":1000,
"uploadBandwidthMbps":1000,
"children":
{
"AP_A":
{
"downloadBandwidthMbps":500,
"uploadBandwidthMbps":500
},
"Site_3":
{
"downloadBandwidthMbps":500,
"uploadBandwidthMbps":500,
"children":
{
"PoP_5":
{
"downloadBandwidthMbps":200,
"uploadBandwidthMbps":200,
"children":
{
"AP_9":
{
"downloadBandwidthMbps":120,
"uploadBandwidthMbps":120
},
"PoP_6":
{
"downloadBandwidthMbps":60,
"uploadBandwidthMbps":60,
"children":
{
"AP_11":
{
"downloadBandwidthMbps":30,
"uploadBandwidthMbps":30
}
}
}
}
}
}
}
}
},
"Site_2":
{
"downloadBandwidthMbps":500,
"uploadBandwidthMbps":500,
"children":
{
"PoP_1":
{
"downloadBandwidthMbps":200,
"uploadBandwidthMbps":200,
"children":
{
"AP_7":
{
"downloadBandwidthMbps":100,
"uploadBandwidthMbps":100
}
}
},
"AP_1":
{
"downloadBandwidthMbps":150,
"uploadBandwidthMbps":150
}
}
}
}
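
The nested structure above (each key a node, each node optionally holding a `children` dict) is straightforward to traverse recursively. A small sketch that counts nodes in a fragment of this example (the traversal helper is illustrative, not part of the shipped code):

```python
network = {
    "Site_1": {
        "downloadBandwidthMbps": 1000,
        "uploadBandwidthMbps": 1000,
        "children": {
            "AP_A": {"downloadBandwidthMbps": 500, "uploadBandwidthMbps": 500},
            "Site_3": {"downloadBandwidthMbps": 500, "uploadBandwidthMbps": 500},
        },
    }
}

def countNodes(tree):
    # Walk the nested dict: every key is a node, and "children"
    # (when present) recurses one level deeper.
    total = 0
    for node in tree.values():
        total += 1 + countNodes(node.get("children", {}))
    return total

print(countNodes(network))
```

The same walk can be extended to compare each node's bandwidth against the sum of its children, a quick sanity check for hand-written `network.json` files.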

62
old/v1.3/scheduler.py Normal file

@ -0,0 +1,62 @@
import time
import datetime
from LibreQoS import refreshShapers, refreshShapersUpdateOnly
from graphInfluxDB import refreshBandwidthGraphs, refreshLatencyGraphs
from ispConfig import influxDBEnabled, automaticImportUISP, automaticImportSplynx
if automaticImportUISP:
from integrationUISP import importFromUISP
if automaticImportSplynx:
from integrationSplynx import importFromSplynx
def importFromCRM():
if automaticImportUISP:
try:
importFromUISP()
except:
print("Failed to import from UISP")
elif automaticImportSplynx:
try:
importFromSplynx()
except:
print("Failed to import from Splynx")
def importAndShapeFullReload():
importFromCRM()
refreshShapers()
def importAndShapePartialReload():
importFromCRM()
refreshShapersUpdateOnly()
def graph():
time.sleep(10)
try:
refreshBandwidthGraphs()
except:
print("Failed to run refreshBandwidthGraphs()")
time.sleep(10)
try:
refreshBandwidthGraphs()
except:
print("Failed to run refreshBandwidthGraphs()")
time.sleep(10)
try:
refreshBandwidthGraphs()
except:
print("Failed to run refreshBandwidthGraphs()")
time.sleep(1)
try:
refreshLatencyGraphs()
except:
print("Failed to run refreshLatencyGraphs()")
if __name__ == '__main__':
importAndShapeFullReload()
while True:
finish_time = datetime.datetime.now() + datetime.timedelta(minutes=30)
while datetime.datetime.now() < finish_time:
if influxDBEnabled:
graph()
else:
time.sleep(1)
importAndShapePartialReload()

333
old/v1.3/testGraph.py Normal file

@ -0,0 +1,333 @@
import unittest
class TestGraph(unittest.TestCase):
def test_empty_graph(self):
"""
Test instantiation of the graph type
"""
from integrationCommon import NetworkGraph
graph = NetworkGraph()
self.assertEqual(len(graph.nodes), 1) # There is an automatic root entry
self.assertEqual(graph.nodes[0].id, "FakeRoot")
def test_empty_node(self):
"""
Test instantiation of the GraphNode type
"""
from integrationCommon import NetworkNode, NodeType
node = NetworkNode("test")
self.assertEqual(node.type.value, NodeType.site.value)
self.assertEqual(node.id, "test")
self.assertEqual(node.parentIndex, 0)
def test_node_types(self):
"""
Test that the NodeType enum is working
"""
from integrationCommon import NetworkNode, NodeType
node = NetworkNode("Test", type = NodeType.root)
self.assertEqual(node.type.value, NodeType.root.value)
node = NetworkNode("Test", type = NodeType.site)
self.assertEqual(node.type.value, NodeType.site.value)
node = NetworkNode("Test", type = NodeType.ap)
self.assertEqual(node.type.value, NodeType.ap.value)
node = NetworkNode("Test", type = NodeType.client)
self.assertEqual(node.type.value, NodeType.client.value)
def test_add_raw_node(self):
"""
Adds a single node to a graph to ensure add works
"""
from integrationCommon import NetworkGraph, NetworkNode, NodeType
graph = NetworkGraph()
graph.addRawNode(NetworkNode("Site"))
self.assertEqual(len(graph.nodes), 2)
self.assertEqual(graph.nodes[1].type.value, NodeType.site.value)
self.assertEqual(graph.nodes[1].parentIndex, 0)
self.assertEqual(graph.nodes[1].id, "Site")
def test_replace_root(self):
"""
Test replacing the default root node with a specified node
"""
from integrationCommon import NetworkGraph, NetworkNode, NodeType
graph = NetworkGraph()
node = NetworkNode("Test", type = NodeType.site)
graph.replaceRootNote(node)
self.assertEqual(graph.nodes[0].id, "Test")
def add_child_by_named_parent(self):
"""
Tests inserting a node with a named parent
"""
from integrationCommon import NetworkGraph, NetworkNode, NodeType
graph = NetworkGraph()
graph.addRawNode(NetworkNode("Site"))
graph.addNodeAsChild("site", NetworkNode("Client", type = NodeType.client))
self.assertEqual(len(graph.nodes), 3)
self.assertEqual(graph.nodes[2].parentIndex, 1)
self.assertEqual(graph.nodes[0].parentIndex, 0)
def test_reparent_by_name(self):
"""
Tests that re-parenting a tree by name is functional
"""
from integrationCommon import NetworkGraph, NetworkNode, NodeType
graph = NetworkGraph()
graph.addRawNode(NetworkNode("Site 1"))
graph.addRawNode(NetworkNode("Site 2"))
graph.addRawNode(NetworkNode("Client 1", parentId="Site 1", type=NodeType.client))
graph.addRawNode(NetworkNode("Client 2", parentId="Site 1", type=NodeType.client))
graph.addRawNode(NetworkNode("Client 3", parentId="Site 2", type=NodeType.client))
graph.addRawNode(NetworkNode("Client 4", parentId="Missing Site", type=NodeType.client))
graph._NetworkGraph__reparentById()
self.assertEqual(len(graph.nodes), 7) # Includes 1 for the fake root
self.assertEqual(graph.nodes[1].parentIndex, 0) # Site 1 is off root
self.assertEqual(graph.nodes[2].parentIndex, 0) # Site 2 is off root
self.assertEqual(graph.nodes[3].parentIndex, 1) # Client 1 found Site 1
self.assertEqual(graph.nodes[4].parentIndex, 1) # Client 2 found Site 1
self.assertEqual(graph.nodes[5].parentIndex, 2) # Client 3 found Site 2
self.assertEqual(graph.nodes[6].parentIndex, 0) # Client 4 didn't find "Missing Site" and goes to root
def test_find_by_id(self):
"""
Tests that finding a node by name succeeds or fails
as expected.
"""
from integrationCommon import NetworkGraph, NetworkNode, NodeType
graph = NetworkGraph()
self.assertEqual(graph.findNodeIndexById("Site 1"), -1) # Test failure
graph.addRawNode(NetworkNode("Site 1"))
self.assertEqual(graph.findNodeIndexById("Site 1"), 1) # Test success
def test_find_by_name(self):
"""
Tests that finding a node by name succeeds or fails
as expected.
"""
from integrationCommon import NetworkGraph, NetworkNode, NodeType
graph = NetworkGraph()
self.assertEqual(graph.findNodeIndexByName("Site 1"), -1) # Test failure
graph.addRawNode(NetworkNode("Site 1", "Site X"))
self.assertEqual(graph.findNodeIndexByName("Site X"), 1) # Test success
def test_find_children(self):
"""
Tests that finding children in the tree works,
both for full and empty cases.
"""
from integrationCommon import NetworkGraph, NetworkNode, NodeType
graph = NetworkGraph()
graph.addRawNode(NetworkNode("Site 1"))
graph.addRawNode(NetworkNode("Site 2"))
graph.addRawNode(NetworkNode("Client 1", parentId="Site 1", type=NodeType.client))
graph.addRawNode(NetworkNode("Client 2", parentId="Site 1", type=NodeType.client))
graph.addRawNode(NetworkNode("Client 3", parentId="Site 2", type=NodeType.client))
graph.addRawNode(NetworkNode("Client 4", parentId="Missing Site", type=NodeType.client))
graph._NetworkGraph__reparentById()
self.assertEqual(graph.findChildIndices(1), [3, 4])
self.assertEqual(graph.findChildIndices(2), [5])
self.assertEqual(graph.findChildIndices(3), [])
def test_clients_with_children(self):
"""
Tests handling cases where a client site
itself has children. This is only useful for
relays where a site hasn't been created in the
middle - but it allows us to graph the more
pathological designs people come up with.
"""
from integrationCommon import NetworkGraph, NetworkNode, NodeType
graph = NetworkGraph()
graph.addRawNode(NetworkNode("Site 1"))
graph.addRawNode(NetworkNode("Site 2"))
graph.addRawNode(NetworkNode("Client 1", parentId="Site 1", type=NodeType.client))
graph.addRawNode(NetworkNode("Client 2", parentId="Site 1", type=NodeType.client))
graph.addRawNode(NetworkNode("Client 3", parentId="Site 2", type=NodeType.client))
graph.addRawNode(NetworkNode("Client 4", parentId="Client 3", type=NodeType.client))
graph._NetworkGraph__reparentById()
graph._NetworkGraph__promoteClientsWithChildren()
self.assertEqual(graph.nodes[5].type, NodeType.clientWithChildren)
self.assertEqual(graph.nodes[6].type, NodeType.client) # Test that a client is still a client
def test_client_with_children_promotion(self):
"""
Test locating a client site with children, and then promoting it to
create a generated site
"""
from integrationCommon import NetworkGraph, NetworkNode, NodeType
graph = NetworkGraph()
graph.addRawNode(NetworkNode("Site 1"))
graph.addRawNode(NetworkNode("Site 2"))
graph.addRawNode(NetworkNode("Client 1", parentId="Site 1", type=NodeType.client))
graph.addRawNode(NetworkNode("Client 2", parentId="Site 1", type=NodeType.client))
graph.addRawNode(NetworkNode("Client 3", parentId="Site 2", type=NodeType.client))
graph.addRawNode(NetworkNode("Client 4", parentId="Client 3", type=NodeType.client))
graph._NetworkGraph__reparentById()
graph._NetworkGraph__promoteClientsWithChildren()
graph._NetworkGraph__clientsWithChildrenToSites()
self.assertEqual(graph.nodes[5].type, NodeType.client)
self.assertEqual(graph.nodes[6].type, NodeType.client) # Test that a client is still a client
self.assertEqual(graph.nodes[7].type, NodeType.site)
self.assertEqual(graph.nodes[7].id, "Client 3_gen")
def test_find_unconnected(self):
"""
Tests traversing a tree and finding nodes that
have no connection to the rest of the tree.
"""
from integrationCommon import NetworkGraph, NetworkNode, NodeType
graph = NetworkGraph()
graph.addRawNode(NetworkNode("Site 1"))
graph.addRawNode(NetworkNode("Site 2"))
graph.addRawNode(NetworkNode("Client 1", parentId="Site 1", type=NodeType.client))
graph.addRawNode(NetworkNode("Client 2", parentId="Site 1", type=NodeType.client))
graph.addRawNode(NetworkNode("Client 3", parentId="Site 2", type=NodeType.client))
graph.addRawNode(NetworkNode("Client 4", parentId="Client 3", type=NodeType.client))
graph._NetworkGraph__reparentById()
graph._NetworkGraph__promoteClientsWithChildren()
graph.nodes[6].parentIndex = 6 # Create a circle
unconnected = graph._NetworkGraph__findUnconnectedNodes()
self.assertEqual(len(unconnected), 1)
self.assertEqual(unconnected[0], 6)
self.assertEqual(graph.nodes[unconnected[0]].id, "Client 4")
def test_reconnect_unconnected(self):
"""
Tests traversing a tree and finding nodes that
have no connection to the rest of the tree.
Reconnects them and ensures that the orphan is now
parented.
"""
from integrationCommon import NetworkGraph, NetworkNode, NodeType
graph = NetworkGraph()
graph.addRawNode(NetworkNode("Site 1"))
graph.addRawNode(NetworkNode("Site 2"))
graph.addRawNode(NetworkNode("Client 1", parentId="Site 1", type=NodeType.client))
graph.addRawNode(NetworkNode("Client 2", parentId="Site 1", type=NodeType.client))
graph.addRawNode(NetworkNode("Client 3", parentId="Site 2", type=NodeType.client))
graph.addRawNode(NetworkNode("Client 4", parentId="Client 3", type=NodeType.client))
graph._NetworkGraph__reparentById()
graph._NetworkGraph__promoteClientsWithChildren()
graph.nodes[6].parentIndex = 6 # Create a circle
graph._NetworkGraph__reconnectUnconnected()
unconnected = graph._NetworkGraph__findUnconnectedNodes()
self.assertEqual(len(unconnected), 0)
self.assertEqual(graph.nodes[6].parentIndex, 0)
def test_network_json_exists(self):
from integrationCommon import NetworkGraph
import os
if os.path.exists("network.json"):
os.remove("network.json")
graph = NetworkGraph()
self.assertEqual(graph.doesNetworkJsonExist(), False)
with open('network.json', 'w') as f:
f.write('Dummy')
self.assertEqual(graph.doesNetworkJsonExist(), True)
os.remove("network.json")
def test_network_json_example(self):
"""
Rebuilds the network in network.example.json
and makes sure that it matches.
Should serve as an example for how an integration
can build a functional tree.
"""
from integrationCommon import NetworkGraph, NetworkNode, NodeType
import json
net = NetworkGraph()
net.addRawNode(NetworkNode("Site_1", "Site_1", "", NodeType.site, 1000, 1000))
net.addRawNode(NetworkNode("Site_2", "Site_2", "", NodeType.site, 500, 500))
net.addRawNode(NetworkNode("AP_A", "AP_A", "Site_1", NodeType.ap, 500, 500))
net.addRawNode(NetworkNode("Site_3", "Site_3", "Site_1", NodeType.site, 500, 500))
net.addRawNode(NetworkNode("PoP_5", "PoP_5", "Site_3", NodeType.site, 200, 200))
net.addRawNode(NetworkNode("AP_9", "AP_9", "PoP_5", NodeType.ap, 120, 120))
net.addRawNode(NetworkNode("PoP_6", "PoP_6", "PoP_5", NodeType.site, 60, 60))
net.addRawNode(NetworkNode("AP_11", "AP_11", "PoP_6", NodeType.ap, 30, 30))
net.addRawNode(NetworkNode("PoP_1", "PoP_1", "Site_2", NodeType.site, 200, 200))
net.addRawNode(NetworkNode("AP_7", "AP_7", "PoP_1", NodeType.ap, 100, 100))
net.addRawNode(NetworkNode("AP_1", "AP_1", "Site_2", NodeType.ap, 150, 150))
net.prepareTree()
net.createNetworkJson()
with open('network.json') as file:
newFile = json.load(file)
with open('src/network.example.json') as file:
exampleFile = json.load(file)
self.assertEqual(newFile, exampleFile)
def test_ipv4_to_ipv6_map(self):
"""
Tests the underlying functionality of finding an IPv6 address from an IPv4 mapping
"""
from integrationCommon import NetworkGraph
net = NetworkGraph()
ipv4 = [ "100.64.1.1" ]
ipv6 = []
# Test that it doesn't cause issues without any mappings
net._NetworkGraph__addIpv6FromMap(ipv4, ipv6)
self.assertEqual(len(ipv4), 1)
self.assertEqual(len(ipv6), 0)
# Test a mapping
net.ipv4ToIPv6 = {
"100.64.1.1":"dead::beef/64"
}
net._NetworkGraph__addIpv6FromMap(ipv4, ipv6)
self.assertEqual(len(ipv4), 1)
self.assertEqual(len(ipv6), 1)
self.assertEqual(ipv6[0], "dead::beef/64")
def test_site_exclusion(self):
from integrationCommon import NetworkGraph, NetworkNode, NodeType
net = NetworkGraph()
net.excludeSites = ['Site_2']
net.addRawNode(NetworkNode("Site_1", "Site_1", "", NodeType.site, 1000, 1000))
net.addRawNode(NetworkNode("Site_2", "Site_2", "", NodeType.site, 500, 500))
self.assertEqual(len(net.nodes), 2)
def test_site_exception(self):
from integrationCommon import NetworkGraph, NetworkNode, NodeType
net = NetworkGraph()
net.exceptionCPEs = {
"Site_2": "Site_1"
}
net.addRawNode(NetworkNode("Site_1", "Site_1", "", NodeType.site, 1000, 1000))
net.addRawNode(NetworkNode("Site_2", "Site_2", "", NodeType.site, 500, 500))
self.assertEqual(net.nodes[2].parentId, "Site_1")
net.prepareTree()
self.assertEqual(net.nodes[2].parentIndex, 1)
def test_graph_render_to_pdf(self):
"""
Requires that graphviz be installed with
pip install graphviz
And also the associated graphviz package for
your platform.
See: https://www.graphviz.org/download/
Test that it creates a graphic
"""
import importlib.util
if (spec := importlib.util.find_spec('graphviz')) is None:
return
from integrationCommon import NetworkGraph, NetworkNode, NodeType
net = NetworkGraph()
net.addRawNode(NetworkNode("Site_1", "Site_1", "", NodeType.site, 1000, 1000))
net.addRawNode(NetworkNode("Site_2", "Site_2", "", NodeType.site, 500, 500))
net.addRawNode(NetworkNode("AP_A", "AP_A", "Site_1", NodeType.ap, 500, 500))
net.addRawNode(NetworkNode("Site_3", "Site_3", "Site_1", NodeType.site, 500, 500))
net.addRawNode(NetworkNode("PoP_5", "PoP_5", "Site_3", NodeType.site, 200, 200))
net.addRawNode(NetworkNode("AP_9", "AP_9", "PoP_5", NodeType.ap, 120, 120))
net.addRawNode(NetworkNode("PoP_6", "PoP_6", "PoP_5", NodeType.site, 60, 60))
net.addRawNode(NetworkNode("AP_11", "AP_11", "PoP_6", NodeType.ap, 30, 30))
net.addRawNode(NetworkNode("PoP_1", "PoP_1", "Site_2", NodeType.site, 200, 200))
net.addRawNode(NetworkNode("AP_7", "AP_7", "PoP_1", NodeType.ap, 100, 100))
net.addRawNode(NetworkNode("AP_1", "AP_1", "Site_2", NodeType.ap, 150, 150))
net.prepareTree()
net.plotNetworkGraph(False)
from os.path import exists
self.assertEqual(exists("network.pdf.pdf"), True)
if __name__ == '__main__':
unittest.main()

54
old/v1.3/testIP.py Normal file

@ -0,0 +1,54 @@
import unittest
import sys
class TestIP(unittest.TestCase):
def test_ignore(self):
"""
Test that we are correctly ignoring an IP address
"""
sys.path.append('testdata/')
from integrationCommon import isInIgnoredSubnets
self.assertEqual(isInIgnoredSubnets("192.168.1.1"),True)
def test_not_ignore(self):
"""
Test that we are not ignoring an IP address
"""
sys.path.append('testdata/')
from integrationCommon import isInIgnoredSubnets
self.assertEqual(isInIgnoredSubnets("10.0.0.1"),False)
def test_allowed(self):
"""
Test that we are correctly permitting an IP address
"""
sys.path.append('testdata/')
from integrationCommon import isInAllowedSubnets
self.assertEqual(isInAllowedSubnets("100.64.1.1"),True)
def test_not_allowed(self):
"""
Test that we are correctly not permitting an IP address
"""
sys.path.append('testdata/')
from integrationCommon import isInAllowedSubnets
self.assertEqual(isInAllowedSubnets("101.64.1.1"),False)
def test_is_permitted(self):
"""
Test the combined isIpv4Permitted function for true
"""
sys.path.append('testdata/')
from integrationCommon import isIpv4Permitted
self.assertEqual(isIpv4Permitted("100.64.1.1"),True)
def test_is_not_permitted(self):
"""
Test the combined isIpv4Permitted function for false
"""
sys.path.append('testdata/')
from integrationCommon import isIpv4Permitted
self.assertEqual(isIpv4Permitted("101.64.1.1"),False)
if __name__ == '__main__':
unittest.main()

2
old/v1.3/testdata/ispConfig.py vendored Normal file

@ -0,0 +1,2 @@
ignoreSubnets = ['192.168.0.0/16']
allowedSubnets = ['100.64.0.0/10']
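
These two lists drive the subnet checks exercised by `testIP.py` above. A minimal sketch of how such membership tests can be written with the standard `ipaddress` module (the shipped versions live in `integrationCommon.py` and may differ in detail):

```python
import ipaddress

ignoreSubnets = ['192.168.0.0/16']
allowedSubnets = ['100.64.0.0/10']

def isInSubnets(ip, subnets):
    # True if the address falls inside any of the listed CIDR blocks.
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(net) for net in subnets)

def isIpv4Permitted(ip):
    # Shapable if inside an allowed subnet and not behind an ignored one.
    return isInSubnets(ip, allowedSubnets) and not isInSubnets(ip, ignoreSubnets)

print(isIpv4Permitted("100.64.1.1"), isIpv4Permitted("101.64.1.1"))
```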

BIN
old/v1.3/testdata/sample_layout.png vendored Normal file (binary image, 62 KiB, not shown)