Mirror of https://github.com/LibreQoE/LibreQoS.git
Synced 2025-02-25 18:55:32 -06:00

Commit 67c2ef7a24 — Merge branch 'main' of https://github.com/LibreQoE/LibreQoS
This commit is contained in:

Changed files:
  old/v0.7/LICENSE  (339 deletions)
@@ -1,339 +0,0 @@
                    GNU GENERAL PUBLIC LICENSE
                       Version 2, June 1991

 Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The licenses for most software are designed to take away your
freedom to share and change it.  By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users.  This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it.  (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.)  You can apply it to
your programs, too.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.

  To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.

  For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have.  You must make sure that they, too, receive or can get the
source code.  And you must show them these terms so they know their
rights.

  We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.

  Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software.  If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.

  Finally, any free program is threatened constantly by software
patents.  We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary.  To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.

  The precise terms and conditions for copying, distribution and
modification follow.

                    GNU GENERAL PUBLIC LICENSE
   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

  0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License.  The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language.  (Hereinafter, translation is included without limitation in
the term "modification".)  Each licensee is addressed as "you".

Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope.  The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.

  1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.

You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.

  2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:

    a) You must cause the modified files to carry prominent notices
    stating that you changed the files and the date of any change.

    b) You must cause any work that you distribute or publish, that in
    whole or in part contains or is derived from the Program or any
    part thereof, to be licensed as a whole at no charge to all third
    parties under the terms of this License.

    c) If the modified program normally reads commands interactively
    when run, you must cause it, when started running for such
    interactive use in the most ordinary way, to print or display an
    announcement including an appropriate copyright notice and a
    notice that there is no warranty (or else, saying that you provide
    a warranty) and that users may redistribute the program under
    these conditions, and telling the user how to view a copy of this
    License.  (Exception: if the Program itself is interactive but
    does not normally print such an announcement, your work based on
    the Program is not required to print an announcement.)

These requirements apply to the modified work as a whole.  If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works.  But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.

Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.

In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.

  3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:

    a) Accompany it with the complete corresponding machine-readable
    source code, which must be distributed under the terms of Sections
    1 and 2 above on a medium customarily used for software interchange; or,

    b) Accompany it with a written offer, valid for at least three
    years, to give any third party, for a charge no more than your
    cost of physically performing source distribution, a complete
    machine-readable copy of the corresponding source code, to be
    distributed under the terms of Sections 1 and 2 above on a medium
    customarily used for software interchange; or,

    c) Accompany it with the information you received as to the offer
    to distribute corresponding source code.  (This alternative is
    allowed only for noncommercial distribution and only if you
    received the program in object code or executable form with such
    an offer, in accord with Subsection b above.)

The source code for a work means the preferred form of the work for
making modifications to it.  For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable.  However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.

If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.

  4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License.  Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.

  5. You are not required to accept this License, since you have not
signed it.  However, nothing else grants you permission to modify or
distribute the Program or its derivative works.  These actions are
prohibited by law if you do not accept this License.  Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.

  6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions.  You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.

  7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License.  If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all.  For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.

If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.

It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices.  Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.

This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.

  8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded.  In such case, this License incorporates
the limitation as if written in the body of this License.

  9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time.  Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

Each version is given a distinguishing version number.  If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation.  If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.

  10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission.  For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this.  Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.

                            NO WARRANTY

  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.

  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.

                     END OF TERMS AND CONDITIONS

            How to Apply These Terms to Your New Programs

  If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

  To do so, attach the following notices to the program.  It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year>  <name of author>

    This program is free software; you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation; either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License along
    with this program; if not, write to the Free Software Foundation, Inc.,
    51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

Also add information on how to contact you by electronic and paper mail.

If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:

    Gnomovision version 69, Copyright (C) year name of author
    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it
    under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License.  Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.

You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary.  Here is a sample; alter the names:

  Yoyodyne, Inc., hereby disclaims all copyright interest in the program
  `Gnomovision' (which makes passes at compilers) written by James Hacker.

  <signature of Ty Coon>, 1 April 1989
  Ty Coon, President of Vice

This General Public License does not permit incorporating your program into
proprietary programs.  If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library.  If this is what you want to do, use the GNU Lesser General
Public License instead of this License.
@ -1,264 +0,0 @@
|
||||
# Copyright (C) 2020 Robert Chacón
|
||||
# This file is part of LibreQoS.
|
||||
#
|
||||
# LibreQoS is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 2 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# LibreQoS is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with LibreQoS. If not, see <http://www.gnu.org/licenses/>.
|
||||
#
|
||||
# _ _ _ ___ ____
|
||||
# | | (_) |__ _ __ ___ / _ \ ___/ ___|
|
||||
# | | | | '_ \| '__/ _ \ | | |/ _ \___ \
|
||||
# | |___| | |_) | | | __/ |_| | (_) |__) |
|
||||
# |_____|_|_.__/|_| \___|\__\_\\___/____/
|
||||
# v.0.78-beta
|
||||
#
|
||||
import random
|
||||
import logging
|
||||
import os
|
||||
import io
|
||||
import json
|
||||
import csv
|
||||
import subprocess
|
||||
from subprocess import PIPE
|
||||
import ipaddress
|
||||
from ipaddress import IPv4Address, IPv6Address
|
||||
import time
|
||||
from datetime import date, datetime
|
||||
from ispConfig import fqOrCAKE, pipeBandwidthCapacityMbps, defaultClassCapacityMbps,interfaceA, interfaceB, enableActualShellCommands, runShellCommandsAsSudo
|
||||
|
||||
def shell(command):
|
||||
if enableActualShellCommands:
|
||||
if runShellCommandsAsSudo:
|
||||
command = 'sudo ' + command
|
||||
commands = command.split(' ')
|
||||
print(command)
|
||||
proc = subprocess.Popen(commands, stdout=subprocess.PIPE)
|
||||
for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"): # or another encoding
|
||||
print(line)
|
||||
else:
|
||||
print(command)
|
||||
|
||||
def clearMemoryCache():
|
||||
command = "sudo sh -c 'echo 1 >/proc/sys/vm/drop_caches'"
|
||||
commands = command.split(' ')
|
||||
proc = subprocess.Popen(commands, stdout=subprocess.PIPE)
|
||||
for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"):
|
||||
print(line)
|
||||
|
||||
def clearPriorSettings(interfaceA, interfaceB):
|
||||
shell('tc filter delete dev ' + interfaceA)
|
||||
shell('tc filter delete dev ' + interfaceA + ' root')
|
||||
shell('tc qdisc delete dev ' + interfaceA + ' root')
|
||||
shell('tc qdisc delete dev ' + interfaceA)
|
||||
shell('tc filter delete dev ' + interfaceB)
|
||||
shell('tc filter delete dev ' + interfaceB + ' root')
|
||||
shell('tc qdisc delete dev ' + interfaceB + ' root')
|
||||
shell('tc qdisc delete dev ' + interfaceB)
|
||||
if runShellCommandsAsSudo:
|
||||
clearMemoryCache()
|
||||
|
||||
def refreshShapers():
|
||||
devices = []
|
||||
filterHandleCounter = 101
|
||||
#Load Devices
|
||||
with open('Shaper.csv') as csv_file:
|
||||
csv_reader = csv.reader(csv_file, delimiter=',')
|
||||
next(csv_reader)
|
||||
for row in csv_reader:
|
||||
deviceID, AP, mac, hostname,ipv4, ipv6, download, upload = row
|
||||
ipv4 = ipv4.strip()
|
||||
ipv6 = ipv6.strip()
|
||||
thisDevice = {
|
||||
"id": deviceID,
|
||||
"mac": mac,
|
||||
"AP": AP,
|
||||
"hostname": hostname,
|
||||
"ipv4": ipv4,
|
||||
"ipv6": ipv6,
|
||||
"download": int(download),
|
||||
"upload": int(upload),
|
||||
"qdiscSrc": '',
|
||||
"qdiscDst": '',
|
||||
}
|
||||
devices.append(thisDevice)
|
||||
|
||||
#Clear Prior Configs
|
||||
clearPriorSettings(interfaceA, interfaceB)
|
||||
|
||||
shell('tc filter delete dev ' + interfaceA + ' parent 1: u32')
|
||||
shell('tc filter delete dev ' + interfaceB + ' parent 2: u32')
|
||||
|
||||
ipv4FiltersSrc = []
|
||||
ipv4FiltersDst = []
|
||||
ipv6FiltersSrc = []
|
||||
ipv6FiltersDst = []
|
||||
|
||||
#InterfaceA
|
||||
parentIDFirstPart = 1
|
||||
srcOrDst = 'dst'
|
||||
thisInterface = interfaceA
|
||||
classIDCounter = 101
|
||||
hashIDCounter = parentIDFirstPart + 1
|
||||
shell('tc qdisc replace dev ' + thisInterface + ' root handle 1: htb default 15 r2q 1514')
|
||||
shell('tc class add dev ' + thisInterface + ' parent 1: classid 1:1 htb rate '+ str(pipeBandwidthCapacityMbps) + 'mbit ceil ' + str(pipeBandwidthCapacityMbps) + 'mbit')
|
||||
shell('tc qdisc add dev ' + thisInterface + ' parent 1:1 ' + fqOrCAKE)
|
||||
#Default class - traffic gets passed through this limiter if not otherwise classified by the Shaper.csv
|
||||
shell('tc class add dev ' + thisInterface + ' parent 1:1 classid 1:15 htb rate ' + str(defaultClassCapacityMbps) + 'mbit ceil ' + str(defaultClassCapacityMbps) + 'mbit prio 5')
|
||||
shell('tc qdisc add dev ' + thisInterface + ' parent 1:15 ' + fqOrCAKE)
|
||||
handleIDSecond = 1
|
||||
for device in devices:
|
||||
speedcap = 0
|
||||
if srcOrDst == 'dst':
|
||||
speedcap = device['download']
|
||||
elif srcOrDst == 'src':
|
||||
speedcap = device['upload']
|
||||
#Create Hash Table
|
||||
shell('tc class add dev ' + thisInterface + ' parent 1:1 classid 1:' + str(classIDCounter) + ' htb rate '+ str(speedcap) + 'mbit ceil '+ str(round(speedcap*1.05)) + 'mbit prio 3')
|
||||
shell('tc qdisc add dev ' + thisInterface + ' parent 1:' + str(classIDCounter) + ' ' + fqOrCAKE)
|
||||
if device['ipv4']:
|
||||
parentString = '1:'
|
||||
flowIDstring = str(parentIDFirstPart) + ':' + str(classIDCounter)
|
||||
ipv4FiltersDst.append((device['ipv4'], parentString, flowIDstring))
|
||||
if device['ipv6']:
|
||||
parentString = '1:'
|
||||
flowIDstring = str(parentIDFirstPart) + ':' + str(classIDCounter)
|
||||
ipv6FiltersDst.append((device['ipv6'], parentString, flowIDstring))
|
||||
deviceQDiscID = '1:' + str(classIDCounter)
|
||||
device['qdiscDst'] = deviceQDiscID
|
||||
if srcOrDst == 'src':
|
||||
device['qdiscSrc'] = deviceQDiscID
|
||||
elif srcOrDst == 'dst':
|
||||
device['qdiscDst'] = deviceQDiscID
|
||||
classIDCounter += 1
|
||||
hashIDCounter += 1
|
||||
|
||||
#InterfaceB
|
||||
parentIDFirstPart = 2
|
||||
srcOrDst = 'src'
|
||||
thisInterface = interfaceB
|
||||
classIDCounter = 101
|
||||
hashIDCounter = parentIDFirstPart + 1
|
||||
shell('tc qdisc replace dev ' + thisInterface + ' root handle 2: htb default 15 r2q 1514')
|
||||
shell('tc class add dev ' + thisInterface + ' parent 2: classid 2:1 htb rate '+ str(pipeBandwidthCapacityMbps) + 'mbit ceil ' + str(pipeBandwidthCapacityMbps) + 'mbit')
|
||||
shell('tc qdisc add dev ' + thisInterface + ' parent 2:1 ' + fqOrCAKE)
|
||||
#Default class - traffic gets passed through this limiter if not otherwise classified by the Shaper.csv
|
||||
shell('tc class add dev ' + thisInterface + ' parent 2:1 classid 2:15 htb rate ' + str(defaultClassCapacityMbps) + 'mbit ceil ' + str(defaultClassCapacityMbps) + 'mbit prio 5')
|
||||
shell('tc qdisc add dev ' + thisInterface + ' parent 2:15 ' + fqOrCAKE)
|
||||
handleIDSecond = 1
|
||||
for device in devices:
|
||||
speedcap = 0
|
||||
if srcOrDst == 'dst':
|
||||
speedcap = device['download']
|
||||
elif srcOrDst == 'src':
|
||||
speedcap = device['upload']
|
||||
#Create Hash Table
|
||||
shell('tc class add dev ' + thisInterface + ' parent 2:1 classid 2:' + str(classIDCounter) + ' htb rate '+ str(speedcap) + 'mbit ceil '+ str(round(speedcap*1.05)) + 'mbit prio 3')
|
||||
shell('tc qdisc add dev ' + thisInterface + ' parent 2:' + str(classIDCounter) + ' ' + fqOrCAKE)
|
||||
if device['ipv4']:
|
||||
parentString = '2:'
|
||||
flowIDstring = str(parentIDFirstPart) + ':' + str(classIDCounter)
|
||||
ipv4FiltersSrc.append((device['ipv4'], parentString, flowIDstring))
|
||||
if device['ipv6']:
|
||||
parentString = '2:'
|
||||
flowIDstring = str(parentIDFirstPart) + ':' + str(classIDCounter)
|
||||
ipv6FiltersSrc.append((device['ipv6'], parentString, flowIDstring))
|
||||
deviceQDiscID = '2:' + str(classIDCounter)
|
||||
device['qdiscSrc'] = deviceQDiscID
|
||||
if srcOrDst == 'src':
|
||||
device['qdiscSrc'] = deviceQDiscID
|
||||
elif srcOrDst == 'dst':
|
||||
device['qdiscDst'] = deviceQDiscID
|
||||
classIDCounter += 1
|
||||
hashIDCounter += 1
|
||||
|
||||
shell('tc filter add dev ' + interfaceA + ' parent 1: protocol all u32')
|
||||
shell('tc filter add dev ' + interfaceB + ' parent 2: protocol all u32')
|
||||
|
||||
#IPv4 Hash Filters
|
||||
#Dst
|
||||
interface = interfaceA
|
||||
shell('tc filter add dev ' + interface + ' parent 1: protocol ip handle 3: u32 divisor 256')
|
||||
|
||||
for i in range (256):
|
||||
hexID = str(hex(i)).replace('0x','')
|
||||
for ipv4Filter in ipv4FiltersDst:
|
||||
ipv4, parent, classid = ipv4Filter
|
||||
if '/' in ipv4:
|
||||
ipv4 = ipv4.split('/')[0]
|
||||
if (ipv4.split('.', 3)[3]) == str(i):
|
||||
filterHandle = hex(filterHandleCounter)
|
||||
shell('tc filter add dev ' + interface + ' handle ' + filterHandle + ' protocol ip parent 1: u32 ht 3:' + hexID + ': match ip dst ' + ipv4 + ' flowid ' + classid)
|
||||
filterHandleCounter += 1
|
||||
shell('tc filter add dev ' + interface + ' protocol ip parent 1: u32 ht 800: match ip dst 0.0.0.0/0 hashkey mask 0x000000ff at 16 link 3:')
|
||||
|
||||
#Src
|
||||
interface = interfaceB
|
||||
shell('tc filter add dev ' + interface + ' parent 2: protocol ip handle 4: u32 divisor 256')
|
||||
|
||||
for i in range (256):
|
||||
hexID = str(hex(i)).replace('0x','')
|
||||
for ipv4Filter in ipv4FiltersSrc:
|
||||
ipv4, parent, classid = ipv4Filter
|
||||
if '/' in ipv4:
|
||||
ipv4 = ipv4.split('/')[0]
|
||||
if (ipv4.split('.', 3)[3]) == str(i):
|
||||
filterHandle = hex(filterHandleCounter)
|
||||
shell('tc filter add dev ' + interface + ' handle ' + filterHandle + ' protocol ip parent 2: u32 ht 4:' + hexID + ': match ip src ' + ipv4 + ' flowid ' + classid)
|
||||
filterHandleCounter += 1
|
||||
    shell('tc filter add dev ' + interface + ' protocol ip parent 2: u32 ht 800: match ip src 0.0.0.0/0 hashkey mask 0x000000ff at 12 link 4:')

    #IPv6 Hash Filters

    #Dst
    interface = interfaceA
    shell('tc filter add dev ' + interface + ' parent 1: handle 5: protocol ipv6 u32 divisor 256')

    for ipv6Filter in ipv6FiltersDst:
        ipv6, parent, classid = ipv6Filter
        withoutCIDR = ipv6.split('/')[0]
        third = str(IPv6Address(withoutCIDR).exploded).split(':',5)[3]
        usefulPart = third[:2]
        hexID = usefulPart
        filterHandle = hex(filterHandleCounter)
        shell('tc filter add dev ' + interface + ' handle ' + filterHandle + ' protocol ipv6 parent 1: u32 ht 5:' + hexID + ': match ip6 dst ' + ipv6 + ' flowid ' + classid)
        filterHandleCounter += 1
    filterHandle = hex(filterHandleCounter)
    shell('tc filter add dev ' + interface + ' protocol ipv6 parent 1: u32 ht 800:: match ip6 dst ::/0 hashkey mask 0x0000ff00 at 28 link 5:')
    filterHandleCounter += 1

    #Src
    interface = interfaceB
    shell('tc filter add dev ' + interface + ' parent 2: handle 6: protocol ipv6 u32 divisor 256')

    for ipv6Filter in ipv6FiltersSrc:
        ipv6, parent, classid = ipv6Filter
        withoutCIDR = ipv6.split('/')[0]
        third = str(IPv6Address(withoutCIDR).exploded).split(':',5)[3]
        usefulPart = third[:2]
        hexID = usefulPart
        filterHandle = hex(filterHandleCounter)
        shell('tc filter add dev ' + interface + ' handle ' + filterHandle + ' protocol ipv6 parent 2: u32 ht 6:' + hexID + ': match ip6 src ' + ipv6 + ' flowid ' + classid)
        filterHandleCounter += 1
    filterHandle = hex(filterHandleCounter)
    shell('tc filter add dev ' + interface + ' protocol ipv6 parent 2: u32 ht 800:: match ip6 src ::/0 hashkey mask 0x0000ff00 at 12 link 6:')
    filterHandleCounter += 1

    #Save devices to file to allow for statistics runs
    with open('devices.json', 'w') as outfile:
        json.dump(devices, outfile)

    #Done
    currentTimeString = datetime.now().strftime("%d/%m/%Y %H:%M:%S")
    print("Successful run completed on " + currentTimeString)

if __name__ == '__main__':
    refreshShapers()
    print("Program complete")
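As a side note, the IPv6 bucket selection in the loops above can be sketched in isolation. This is a minimal standalone sketch (the helper name `ipv6_hash_bucket` is mine, not from the source): it extracts the first byte of the fourth hextet of the exploded address, which is the same byte the kernel-side u32 hashkey (mask 0x0000ff00) hashes on.

```python
from ipaddress import IPv6Address

def ipv6_hash_bucket(prefix):
    # Drop any CIDR suffix, explode to full 8-group form, then take the
    # first two hex digits (the high byte) of the fourth hextet.
    without_cidr = prefix.split('/')[0]
    fourth_hextet = IPv6Address(without_cidr).exploded.split(':', 5)[3]
    return fourth_hextet[:2]

# '2001:495:1f0f:58a::4/64' explodes to 2001:0495:1f0f:058a:...,
# so its bucket is '05'
```

Clients whose addresses share that byte land in the same hash slot, which is why per-bucket rule counts stay small even with many filters.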
@ -1,6 +0,0 @@
ID,AP,MAC,Hostname,IPv4,IPv6,Download,Upload
3001,A,32:3B:FE:B0:92:C1,CPE-Customer1,100.126.0.77,2001:495:1f0f:58a::4/64,25,5
3002,C,AE:EC:D3:70:DD:36,CPE-Customer2,100.126.0.78,2001:495:1f0f:58a::8/64,50,10
3003,F,1C:1E:60:69:88:9A,CPE-Customer3,100.126.0.79,2001:495:1f0f:58a::12/64,100,15
3004,R,11:B1:63:C4:DA:4C,CPE-Customer4,100.126.0.80,2001:495:1f0f:58a::16/64,200,30
3005,X,46:2F:B5:C2:0B:15,CPE-Customer5,100.126.0.81,2001:495:1f0f:58a::20/64,300,45
@ -1,22 +0,0 @@
#'fq_codel' or 'cake'
# Cake requires many specific packages and kernel changes:
# https://www.bufferbloat.net/projects/codel/wiki/Cake/
# https://github.com/dtaht/tc-adv
fqOrCAKE = 'fq_codel'

# How many symmetrical Mbps are available to the edge of this network
pipeBandwidthCapacityMbps = 1000

defaultClassCapacityMbps = 750

# Interface connected to core router
interfaceA = 'eth1'

# Interface connected to edge router
interfaceB = 'eth2'

# Allow shell commands. False causes commands to print to the console without being executed. MUST BE ENABLED FOR PROGRAM TO FUNCTION
enableActualShellCommands = True

# Add 'sudo' before execution of any shell commands. May be required depending on distribution and environment.
runShellCommandsAsSudo = False
@ -1,11 +0,0 @@
import time
import schedule
from datetime import date
from LibreQoS import refreshShapers

if __name__ == '__main__':
    refreshShapers()
    schedule.every().day.at("04:00").do(refreshShapers)
    while True:
        schedule.run_pending()
        time.sleep(60) # wait one minute
@ -1,176 +0,0 @@
import os
import subprocess
from subprocess import PIPE
import io
import decimal
import json
from operator import itemgetter
from prettytable import PrettyTable
from ispConfig import fqOrCAKE

def getStatistics():
    tcShowResults = []
    command = 'tc -s qdisc show'
    commands = command.split(' ')
    proc = subprocess.Popen(commands, stdout=subprocess.PIPE)
    for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"):  # or another encoding
        tcShowResults.append(line)
    allQDiscStats = []
    thisFlow = {}
    thisFlowStats = {}
    withinCorrectChunk = False
    for line in tcShowResults:
        expecting = "qdisc " + fqOrCAKE
        if expecting in line:
            thisFlow['qDiscID'] = line.split(' ')[6]
            withinCorrectChunk = True
        elif ("Sent " in line) and withinCorrectChunk:
            items = line.split(' ')
            thisFlowStats['GigabytesSent'] = str(round((int(items[2]) * 0.000000001), 1))
            thisFlowStats['PacketsSent'] = int(items[4])
            thisFlowStats['droppedPackets'] = int(items[7].replace(',',''))
            thisFlowStats['overlimitsPackets'] = int(items[9])
            thisFlowStats['requeuedPackets'] = int(items[11].replace(')',''))
            if thisFlowStats['PacketsSent'] > 0:
                overlimitsFreq = (thisFlowStats['overlimitsPackets']/thisFlowStats['PacketsSent'])
            else:
                overlimitsFreq = -1
        elif ('backlog' in line) and withinCorrectChunk:
            items = line.split(' ')
            thisFlowStats['backlogBytes'] = int(items[2].replace('b',''))
            thisFlowStats['backlogPackets'] = int(items[3].replace('p',''))
            thisFlowStats['requeues'] = int(items[5])
        elif ('maxpacket' in line) and withinCorrectChunk:
            items = line.split(' ')
            thisFlowStats['maxPacket'] = int(items[3])
            thisFlowStats['dropOverlimit'] = int(items[5])
            thisFlowStats['newFlowCount'] = int(items[7])
            thisFlowStats['ecnMark'] = int(items[9])
        elif ("new_flows_len" in line) and withinCorrectChunk:
            items = line.split(' ')
            thisFlowStats['newFlowsLen'] = int(items[3])
            thisFlowStats['oldFlowsLen'] = int(items[5])
            if thisFlowStats['PacketsSent'] == 0:
                thisFlowStats['percentageDropped'] = 0
            else:
                thisFlowStats['percentageDropped'] = thisFlowStats['droppedPackets']/thisFlowStats['PacketsSent']
            withinCorrectChunk = False
            thisFlow['stats'] = thisFlowStats
            allQDiscStats.append(thisFlow)
            thisFlowStats = {}
            thisFlow = {}
    #Load shapableDevices
    updatedFlowStats = []
    with open('devices.json', 'r') as infile:
        devices = json.load(infile)
    for shapableDevice in devices:
        shapableDeviceqdiscSrc = shapableDevice['qdiscSrc']
        shapableDeviceqdiscDst = shapableDevice['qdiscDst']
        for device in allQDiscStats:
            deviceFlowID = device['qDiscID']
            if shapableDeviceqdiscSrc == deviceFlowID:
                name = shapableDevice['hostname']
                AP = shapableDevice['AP']
                ipv4 = shapableDevice['ipv4']
                ipv6 = shapableDevice['ipv6']
                srcOrDst = 'src'
                tempDict = {'name': name, 'AP': AP, 'ipv4': ipv4, 'ipv6': ipv6, 'srcOrDst': srcOrDst}
                device['identification'] = tempDict
                updatedFlowStats.append(device)
            if shapableDeviceqdiscDst == deviceFlowID:
                name = shapableDevice['hostname']
                AP = shapableDevice['AP']
                ipv4 = shapableDevice['ipv4']
                ipv6 = shapableDevice['ipv6']
                srcOrDst = 'dst'
                tempDict = {'name': name, 'AP': AP, 'ipv4': ipv4, 'ipv6': ipv6, 'srcOrDst': srcOrDst}
                device['identification'] = tempDict
                updatedFlowStats.append(device)
    mergedStats = []
    for item in updatedFlowStats:
        if item['identification']['srcOrDst'] == 'src':
            newStat = {
                'identification': {
                    'name': item['identification']['name'],
                    'AP': item['identification']['AP'],
                    'ipv4': item['identification']['ipv4'],
                    'ipv6': item['identification']['ipv6']
                },
                'src': {
                    'GigabytesSent': item['stats']['GigabytesSent'],
                    'PacketsSent': item['stats']['PacketsSent'],
                    'droppedPackets': item['stats']['droppedPackets'],
                    'overlimitsPackets': item['stats']['overlimitsPackets'],
                    'requeuedPackets': item['stats']['requeuedPackets'],
                    'backlogBytes': item['stats']['backlogBytes'],
                    'backlogPackets': item['stats']['backlogPackets'],
                    'requeues': item['stats']['requeues'],
                    'maxPacket': item['stats']['maxPacket'],
                    'dropOverlimit': item['stats']['dropOverlimit'],
                    'newFlowCount': item['stats']['newFlowCount'],
                    'ecnMark': item['stats']['ecnMark'],
                    'newFlowsLen': item['stats']['newFlowsLen'],
                    'oldFlowsLen': item['stats']['oldFlowsLen'],
                    'percentageDropped': item['stats']['percentageDropped'],
                }
            }
            mergedStats.append(newStat)
    for item in updatedFlowStats:
        if item['identification']['srcOrDst'] == 'dst':
            ipv4 = item['identification']['ipv4']
            ipv6 = item['identification']['ipv6']
            newStat = {
                'dst': {
                    'GigabytesSent': item['stats']['GigabytesSent'],
                    'PacketsSent': item['stats']['PacketsSent'],
                    'droppedPackets': item['stats']['droppedPackets'],
                    'overlimitsPackets': item['stats']['overlimitsPackets'],
                    'requeuedPackets': item['stats']['requeuedPackets'],
                    'backlogBytes': item['stats']['backlogBytes'],
                    'backlogPackets': item['stats']['backlogPackets'],
                    'requeues': item['stats']['requeues'],
                    'maxPacket': item['stats']['maxPacket'],
                    'dropOverlimit': item['stats']['dropOverlimit'],
                    'newFlowCount': item['stats']['newFlowCount'],
                    'ecnMark': item['stats']['ecnMark'],
                    'newFlowsLen': item['stats']['newFlowsLen'],
                    'oldFlowsLen': item['stats']['oldFlowsLen'],
                    'percentageDropped': item['stats']['percentageDropped']
                }
            }
            for item2 in mergedStats:
                if ipv4 in item2['identification']['ipv4']:
                    item2.update(newStat)
                elif ipv6 in item2['identification']['ipv6']:
                    item2.update(newStat)
    return mergedStats

if __name__ == '__main__':
    mergedStats = getStatistics()

    # Display table of Customer CPEs with most packets dropped
    x = PrettyTable()
    x.field_names = ["Device", "AP", "IPv4", "IPv6", "UL Dropped", "DL Dropped", "GB Down/Up"]
    sortableList = []
    pickTop = 30
    for stat in mergedStats:
        name = stat['identification']['name']
        AP = stat['identification']['AP']
        ipv4 = stat['identification']['ipv4']
        ipv6 = stat['identification']['ipv6']
        srcDropped = stat['src']['percentageDropped']
        dstDropped = stat['dst']['percentageDropped']
        GBuploadedString = stat['src']['GigabytesSent']
        GBdownloadedString = stat['dst']['GigabytesSent']
        GBstring = GBuploadedString + '/' + GBdownloadedString
        avgDropped = (srcDropped + dstDropped)/2
        sortableList.append((name, AP, ipv4, ipv6, srcDropped, dstDropped, avgDropped, GBstring))
    res = sorted(sortableList, key = itemgetter(4), reverse = True)[:pickTop]
    for stat in res:
        name, AP, ipv4, ipv6, srcDropped, dstDropped, avgDropped, GBstring = stat
        if not name:
            name = ipv4
        srcDroppedString = "{0:.4%}".format(srcDropped)
        dstDroppedString = "{0:.4%}".format(dstDropped)
        x.add_row([name, AP, ipv4, ipv6, srcDroppedString, dstDroppedString, GBstring])
    print(x)
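The positional parsing of the `Sent ...` counter line in `getStatistics()` above can be isolated into a small sketch (the helper name `parse_sent_line` is mine). It assumes the stock single-space formatting of `tc -s qdisc show` output, with one leading space, exactly as the indices in the source rely on:

```python
def parse_sent_line(line):
    # Same positional split the statistics script uses: split on single
    # spaces, then read fixed indices and strip trailing punctuation.
    items = line.split(' ')
    return {
        'bytes': int(items[2]),
        'packets': int(items[4]),
        'dropped': int(items[7].replace(',', '')),
        'overlimits': int(items[9]),
        'requeues': int(items[11].replace(')', '')),
    }

sample = ' Sent 987654321 bytes 12345 pkt (dropped 17, overlimits 42 requeues 3)'
stats = parse_sent_line(sample)
# stats['dropped'] is 17 and stats['requeues'] is 3 for this sample
```

Because the parsing is purely positional, any change to the kernel's output formatting would break it; that fragility is inherent to the approach taken here.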
@ -1,6 +0,0 @@
AP,Max Download,Max Upload
A,500,100
C,225,50
F,500,100
R,225,50
X,500,100
@ -1,291 +0,0 @@
# Copyright (C) 2020 Robert Chacón
# This file is part of LibreQoS.
#
# LibreQoS is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
#
# LibreQoS is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with LibreQoS. If not, see <http://www.gnu.org/licenses/>.
#
#   _     _ _                ___       ____
#  | |   (_) |__  _ __ ___  / _ \  ___/ ___|
#  | |   | | '_ \| '__/ _ \| | | |/ _ \___ \
#  | |___| | |_) | | |  __/| |_| | (_) |__) |
#  |_____|_|_.__/|_|  \___| \__\_\\___/____/
#                               v.0.81-stable
#
import random
import logging
import os
import io
import json
import csv
import subprocess
from subprocess import PIPE
import ipaddress
from ipaddress import IPv4Address, IPv6Address
import time
from datetime import date, datetime
from ispConfig import fqOrCAKE, upstreamBandwidthCapacityDownloadMbps, upstreamBandwidthCapacityUploadMbps, defaultClassCapacityDownloadMbps, defaultClassCapacityUploadMbps, interfaceA, interfaceB, enableActualShellCommands, runShellCommandsAsSudo
import collections

def shell(command):
    if enableActualShellCommands:
        if runShellCommandsAsSudo:
            command = 'sudo ' + command
        commands = command.split(' ')
        print(command)
        proc = subprocess.Popen(commands, stdout=subprocess.PIPE)
        for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"):  # or another encoding
            print(line)
    else:
        print(command)

def clearPriorSettings(interfaceA, interfaceB):
    shell('tc filter delete dev ' + interfaceA)
    shell('tc filter delete dev ' + interfaceA + ' root')
    shell('tc qdisc delete dev ' + interfaceA + ' root')
    shell('tc qdisc delete dev ' + interfaceA)
    shell('tc filter delete dev ' + interfaceB)
    shell('tc filter delete dev ' + interfaceB + ' root')
    shell('tc qdisc delete dev ' + interfaceB + ' root')
    shell('tc qdisc delete dev ' + interfaceB)
    if runShellCommandsAsSudo:
        clearMemoryCache()

def refreshShapers():
    devices = []
    accessPointDownloadMbps = {}
    accessPointUploadMbps = {}
    filterHandleCounter = 101
    #Load Access Points
    with open('AccessPoints.csv') as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        next(csv_reader)
        for row in csv_reader:
            AP, download, upload = row
            accessPointDownloadMbps[AP] = int(download)
            accessPointUploadMbps[AP] = int(upload)
    #Load Devices
    with open('Shaper.csv') as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        next(csv_reader)
        for row in csv_reader:
            deviceID, AP, mac, hostname, ipv4, ipv6, downloadMin, uploadMin, downloadMax, uploadMax = row
            ipv4 = ipv4.strip()
            ipv6 = ipv6.strip()
            if AP == "":
                AP = "none"
            thisDevice = {
                "id": deviceID,
                "mac": mac,
                "AP": AP,
                "hostname": hostname,
                "ipv4": ipv4,
                "ipv6": ipv6,
                "downloadMin": int(downloadMin),
                "uploadMin": int(uploadMin),
                "downloadMax": int(downloadMax),
                "uploadMax": int(uploadMax),
                "qdiscSrc": '',
                "qdiscDst": '',
            }
            # If an AP is specified for a device in Shaper.csv, but that AP is not listed in AccessPoints.csv, raise an exception
            if (AP != "none") and (AP not in accessPointDownloadMbps):
                raise ValueError('AP ' + AP + ' not listed in AccessPoints.csv')
            devices.append(thisDevice)

    # If no AP is specified for a device in Shaper.csv, it is placed under this 'default' AP shaper, set to the bandwidth max at the edge
    accessPointDownloadMbps['none'] = upstreamBandwidthCapacityDownloadMbps
    accessPointUploadMbps['none'] = upstreamBandwidthCapacityUploadMbps

    #Sort into bins by AP
    result = collections.defaultdict(list)

    for d in devices:
        result[d['AP']].append(d)

    devicesByAP = list(result.values())

    #Clear Prior Configs
    clearPriorSettings(interfaceA, interfaceB)
    shell('tc filter delete dev ' + interfaceA + ' parent 1: u32')
    shell('tc filter delete dev ' + interfaceB + ' parent 2: u32')

    ipv4FiltersSrc = []
    ipv4FiltersDst = []
    ipv6FiltersSrc = []
    ipv6FiltersDst = []

    #InterfaceA
    parentIDFirstPart = 1
    srcOrDst = 'dst'
    thisInterface = interfaceA
    classIDcounter = 3
    shell('tc qdisc replace dev ' + thisInterface + ' root handle 1: htb default 2 r2q 1514')
    shell('tc class add dev ' + thisInterface + ' parent 1: classid 1:1 htb rate '+ str(upstreamBandwidthCapacityDownloadMbps) + 'mbit ceil ' + str(upstreamBandwidthCapacityDownloadMbps) + 'mbit')
    shell('tc qdisc add dev ' + thisInterface + ' parent 1:1 ' + fqOrCAKE)
    #Default class - traffic gets passed through this limiter if not otherwise classified by the Shaper.csv
    shell('tc class add dev ' + thisInterface + ' parent 1:1 classid 1:2 htb rate ' + str(defaultClassCapacityDownloadMbps) + 'mbit ceil ' + str(defaultClassCapacityDownloadMbps) + 'mbit prio 5')
    shell('tc qdisc add dev ' + thisInterface + ' parent 1:2 ' + fqOrCAKE)
    #Create HTBs by AP
    for AP in devicesByAP:
        currentAPname = AP[0]['AP']
        thisAPdownload = accessPointDownloadMbps[currentAPname]
        thisAPupload = accessPointUploadMbps[currentAPname]
        thisHTBrate = thisAPdownload
        #HTBs for each AP
        thisHTBclassID = '1:' + str(classIDcounter)
        # Guarantee the AP gets at least 1/2 of its radio capacity; allow up to its max radio capacity when the network is not at peak load
        shell('tc class add dev ' + thisInterface + ' parent 1:1 classid ' + str(classIDcounter) + ' htb rate '+ str(round(thisHTBrate/2)) + 'mbit ceil '+ str(round(thisHTBrate)) + 'mbit prio 3')
        shell('tc qdisc add dev ' + thisInterface + ' parent 1:' + str(classIDcounter) + ' ' + fqOrCAKE)
        classIDcounter += 1
        for device in AP:
            #QDiscs for each device
            speedcap = 0
            downloadMin = device['downloadMin']
            downloadMax = device['downloadMax']
            shell('tc class add dev ' + thisInterface + ' parent ' + thisHTBclassID + ' classid ' + str(classIDcounter) + ' htb rate '+ str(downloadMin) + 'mbit ceil '+ str(downloadMax) + 'mbit prio 3')
            shell('tc qdisc add dev ' + thisInterface + ' parent 1:' + str(classIDcounter) + ' ' + fqOrCAKE)
            if device['ipv4']:
                parentString = '1:'
                flowIDstring = '1:' + str(classIDcounter)
                ipv4FiltersDst.append((device['ipv4'], parentString, flowIDstring))
            if device['ipv6']:
                parentString = '1:'
                flowIDstring = '1:' + str(classIDcounter)
                ipv6FiltersDst.append((device['ipv6'], parentString, flowIDstring))
            deviceQDiscID = '1:' + str(classIDcounter)
            device['qdiscDst'] = '1:' + str(classIDcounter)
            classIDcounter += 1

    #InterfaceB
    parentIDFirstPart = 2
    srcOrDst = 'src'
    thisInterface = interfaceB
    classIDcounter = 3
    shell('tc qdisc replace dev ' + thisInterface + ' root handle 2: htb default 2 r2q 1514')
    shell('tc class add dev ' + thisInterface + ' parent 2: classid 2:1 htb rate '+ str(upstreamBandwidthCapacityUploadMbps) + 'mbit ceil ' + str(upstreamBandwidthCapacityUploadMbps) + 'mbit')
    shell('tc qdisc add dev ' + thisInterface + ' parent 2:1 ' + fqOrCAKE)
    #Default class - traffic gets passed through this limiter if not otherwise classified by the Shaper.csv
    shell('tc class add dev ' + thisInterface + ' parent 2:1 classid 2:2 htb rate ' + str(defaultClassCapacityUploadMbps) + 'mbit ceil ' + str(defaultClassCapacityUploadMbps) + 'mbit prio 5')
    shell('tc qdisc add dev ' + thisInterface + ' parent 2:2 ' + fqOrCAKE)
    #Create HTBs by AP
    for AP in devicesByAP:
        currentAPname = AP[0]['AP']
        thisAPdownload = accessPointDownloadMbps[currentAPname]
        thisAPupload = accessPointUploadMbps[currentAPname]
        thisHTBrate = thisAPupload
        #HTBs for each AP
        thisHTBclassID = '2:' + str(classIDcounter)
        # Guarantee the AP gets at least 1/2 of its radio capacity; allow up to its max radio capacity when the network is not at peak load
        shell('tc class add dev ' + thisInterface + ' parent 2:1 classid ' + str(classIDcounter) + ' htb rate '+ str(round(thisHTBrate/2)) + 'mbit ceil '+ str(round(thisHTBrate)) + 'mbit prio 3')
        shell('tc qdisc add dev ' + thisInterface + ' parent 2:' + str(classIDcounter) + ' ' + fqOrCAKE)
        classIDcounter += 1
        for device in AP:
            #QDiscs for each device
            speedcap = 0
            uploadMin = device['uploadMin']
            uploadMax = device['uploadMax']
            shell('tc class add dev ' + thisInterface + ' parent ' + thisHTBclassID + ' classid ' + str(classIDcounter) + ' htb rate '+ str(uploadMin) + 'mbit ceil '+ str(uploadMax) + 'mbit prio 3')
            shell('tc qdisc add dev ' + thisInterface + ' parent 2:' + str(classIDcounter) + ' ' + fqOrCAKE)
            if device['ipv4']:
                parentString = '2:'
                flowIDstring = '2:' + str(classIDcounter)
                ipv4FiltersSrc.append((device['ipv4'], parentString, flowIDstring))
            if device['ipv6']:
                parentString = '2:'
                flowIDstring = '2:' + str(classIDcounter)
                ipv6FiltersSrc.append((device['ipv6'], parentString, flowIDstring))
            device['qdiscSrc'] = '2:' + str(classIDcounter)
            classIDcounter += 1

    #IPv4 Hash Filters
    shell('tc filter add dev ' + interfaceA + ' parent 1: protocol all u32')
    shell('tc filter add dev ' + interfaceB + ' parent 2: protocol all u32')

    #Dst
    interface = interfaceA
    shell('tc filter add dev ' + interface + ' parent 1: protocol ip handle 3: u32 divisor 256')

    for i in range(256):
        hexID = str(hex(i)).replace('0x','')
        for ipv4Filter in ipv4FiltersDst:
            ipv4, parent, classid = ipv4Filter
            if '/' in ipv4:
                ipv4 = ipv4.split('/')[0]
            if (ipv4.split('.', 3)[3]) == str(i):
                filterHandle = hex(filterHandleCounter)
                shell('tc filter add dev ' + interface + ' handle ' + filterHandle + ' protocol ip parent 1: u32 ht 3:' + hexID + ': match ip dst ' + ipv4 + ' flowid ' + classid)
                filterHandleCounter += 1
    shell('tc filter add dev ' + interface + ' protocol ip parent 1: u32 ht 800: match ip dst 0.0.0.0/0 hashkey mask 0x000000ff at 16 link 3:')

    #Src
    interface = interfaceB
    shell('tc filter add dev ' + interface + ' parent 2: protocol ip handle 4: u32 divisor 256')

    for i in range(256):
        hexID = str(hex(i)).replace('0x','')
        for ipv4Filter in ipv4FiltersSrc:
            ipv4, parent, classid = ipv4Filter
            if '/' in ipv4:
                ipv4 = ipv4.split('/')[0]
            if (ipv4.split('.', 3)[3]) == str(i):
                filterHandle = hex(filterHandleCounter)
                shell('tc filter add dev ' + interface + ' handle ' + filterHandle + ' protocol ip parent 2: u32 ht 4:' + hexID + ': match ip src ' + ipv4 + ' flowid ' + classid)
                filterHandleCounter += 1
    shell('tc filter add dev ' + interface + ' protocol ip parent 2: u32 ht 800: match ip src 0.0.0.0/0 hashkey mask 0x000000ff at 12 link 4:')

    #IPv6 Hash Filters
    #Dst
    interface = interfaceA
    shell('tc filter add dev ' + interface + ' parent 1: handle 5: protocol ipv6 u32 divisor 256')

    for ipv6Filter in ipv6FiltersDst:
        ipv6, parent, classid = ipv6Filter
        withoutCIDR = ipv6.split('/')[0]
        third = str(IPv6Address(withoutCIDR).exploded).split(':',5)[3]
        usefulPart = third[:2]
        hexID = usefulPart
        filterHandle = hex(filterHandleCounter)
        shell('tc filter add dev ' + interface + ' handle ' + filterHandle + ' protocol ipv6 parent 1: u32 ht 5:' + hexID + ': match ip6 dst ' + ipv6 + ' flowid ' + classid)
        filterHandleCounter += 1
    filterHandle = hex(filterHandleCounter)
    shell('tc filter add dev ' + interface + ' protocol ipv6 parent 1: u32 ht 800:: match ip6 dst ::/0 hashkey mask 0x0000ff00 at 28 link 5:')
    filterHandleCounter += 1

    #Src
    interface = interfaceB
    shell('tc filter add dev ' + interface + ' parent 2: handle 6: protocol ipv6 u32 divisor 256')

    for ipv6Filter in ipv6FiltersSrc:
        ipv6, parent, classid = ipv6Filter
        withoutCIDR = ipv6.split('/')[0]
        third = str(IPv6Address(withoutCIDR).exploded).split(':',5)[3]
        usefulPart = third[:2]
        hexID = usefulPart
        filterHandle = hex(filterHandleCounter)
        shell('tc filter add dev ' + interface + ' handle ' + filterHandle + ' protocol ipv6 parent 2: u32 ht 6:' + hexID + ': match ip6 src ' + ipv6 + ' flowid ' + classid)
        filterHandleCounter += 1
    filterHandle = hex(filterHandleCounter)
    shell('tc filter add dev ' + interface + ' protocol ipv6 parent 2: u32 ht 800:: match ip6 src ::/0 hashkey mask 0x0000ff00 at 12 link 6:')
    filterHandleCounter += 1

    #Save devices to file to allow for statistics runs
    with open('devices.json', 'w') as outfile:
        json.dump(devices, outfile)

    #Done
    currentTimeString = datetime.now().strftime("%d/%m/%Y %H:%M:%S")
    print("Successful run completed on " + currentTimeString)

if __name__ == '__main__':
    refreshShapers()
    print("Program complete")
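The "Sort into bins by AP" step in `refreshShapers()` above is the pivot that lets one HTB class be created per access point with the per-device classes nested under it. A minimal standalone sketch of that grouping (the helper name `bin_devices_by_ap` is mine, not from the source):

```python
import collections

def bin_devices_by_ap(devices):
    # Group device dicts by their 'AP' key; each group becomes one
    # AP-level HTB class with the member devices as child classes.
    result = collections.defaultdict(list)
    for d in devices:
        result[d['AP']].append(d)
    return list(result.values())

devices = [
    {'id': '3001', 'AP': 'A'},
    {'id': '3002', 'AP': 'C'},
    {'id': '3003', 'AP': 'A'},
]
groups = bin_devices_by_ap(devices)
# groups has two bins: devices 3001 and 3003 under AP 'A', 3002 under 'C'
```

Since Python dicts preserve insertion order, the bins come out in the order each AP first appears in Shaper.csv, which keeps the generated class IDs stable between runs with an unchanged CSV.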
@ -1,30 +0,0 @@
# v0.8 (IPv4 & IPv6) (Stable)

- Released: 2 July 2021

## Installation Guide
- 📄 [LibreQoS 0.8 Installation and Usage Guide - Proxmox and Ubuntu 20.04 LTS](https://github.com/rchac/LibreQoS/wiki/LibreQoS-v0.8-Installation-&-Usage-Guide----Proxmox-and-Ubuntu-20.04)

## Features

- Dual stack: a client can be shaped by the same qdisc for both IPv4 and IPv6
- Up to 1000 clients (IPv4/IPv6)
- Real-world asymmetrical throughput: between 2Gbps and 4.5Gbps depending on CPU single-thread performance
- HTB+fq_codel or HTB+cake
- Shape clients by Access Point / Node capacity
- TC filters split into groups through hashing filters to increase throughput
- Simple client management via csv file
- Simple statistics - table shows top 20 subscribers by packet loss, with APs listed

## Limitations

- The qdisc locking problem limits the throughput of the HTB used in v0.8 (solved in v0.9). Tested up to 4Gbps/500Mbps asymmetrical throughput using Microsoft Ethr with n=500 streams. High quantities of small packets will reduce max throughput in practice.
- Linux tc hash tables can only handle ~4000 rules each. This limits the total possible clients to 1000 in v0.8.
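The "hashing filters" feature listed above works by bucketing IPv4 clients on the last octet of their address, so each of the 256 hash slots holds only a handful of rules. A minimal sketch of the user-space side of that bucket computation (the helper name `ipv4_hash_bucket` is mine, not from the source):

```python
def ipv4_hash_bucket(ipv4):
    # The shaper places each client's filter in the u32 hash-table slot
    # named by the hex value of the address's last octet; the kernel-side
    # hashkey (mask 0x000000ff on the address word) selects the same byte.
    host = ipv4.split('/')[0]            # drop any CIDR suffix
    last_octet = int(host.split('.')[3])
    return hex(last_octet).replace('0x', '')

# 100.126.0.77 has last octet 77, so its bucket is '4d'
```

With 256 buckets and up to 1000 clients, a lookup touches only a few rules instead of scanning a single flat filter list, which is what keeps the kernel's ~4000-rules-per-table limit from becoming a per-packet cost.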
@ -1,6 +0,0 @@
ID,AP,MAC,Hostname,IPv4,IPv6,Download Min,Upload Min,Download Max,Upload Max
3001,A,32:3B:FE:B0:92:C1,CPE-Customer1,100.126.0.77,2001:495:1f0f:58a::4/64,25,8,115,18
3002,C,AE:EC:D3:70:DD:36,CPE-Customer2,100.126.0.78,2001:495:1f0f:58a::8/64,25,8,115,18
3003,F,1C:1E:60:69:88:9A,CPE-Customer3,100.126.0.79,2001:495:1f0f:58a::12/64,25,8,115,18
3004,R,11:B1:63:C4:DA:4C,CPE-Customer4,100.126.0.80,2001:495:1f0f:58a::16/64,25,8,115,18
3005,X,46:2F:B5:C2:0B:15,CPE-Customer5,100.126.0.81,2001:495:1f0f:58a::20/64,25,8,115,18
@ -1,25 +0,0 @@
#'fq_codel' or 'cake'
# Cake requires many specific packages and kernel changes:
# https://www.bufferbloat.net/projects/codel/wiki/Cake/
# https://github.com/dtaht/tc-adv
fqOrCAKE = 'fq_codel'

# How many Mbps are available to the edge of this network
upstreamBandwidthCapacityDownloadMbps = 1000
upstreamBandwidthCapacityUploadMbps = 1000

# Traffic from devices not specified in Shaper.csv will be rate limited by an HTB of this many Mbps
defaultClassCapacityDownloadMbps = 1000
defaultClassCapacityUploadMbps = 1000

# Interface connected to core router
interfaceA = 'eth1'

# Interface connected to edge router
interfaceB = 'eth2'

# Allow shell commands. False causes commands to print to the console without being executed. MUST BE ENABLED FOR PROGRAM TO FUNCTION
enableActualShellCommands = True

# Add 'sudo' before execution of any shell commands. May be required depending on distribution and environment.
runShellCommandsAsSudo = False
@ -1,11 +0,0 @@
import time
import schedule
from datetime import date
from LibreQoS import refreshShapers

if __name__ == '__main__':
    refreshShapers()
    schedule.every().day.at("04:00").do(refreshShapers)
    while True:
        schedule.run_pending()
        time.sleep(60) # wait one minute
@ -1,176 +0,0 @@
|
||||
import os
|
||||
import subprocess
|
||||
from subprocess import PIPE
|
||||
import io
|
||||
import decimal
|
||||
import json
|
||||
from operator import itemgetter
|
||||
from prettytable import PrettyTable
|
||||
from ispConfig import fqOrCAKE
|
||||
|
||||
def getStatistics():
|
||||
tcShowResults = []
|
||||
command = 'tc -s qdisc show'
|
||||
commands = command.split(' ')
|
||||
proc = subprocess.Popen(commands, stdout=subprocess.PIPE)
|
||||
for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"): # or another encoding
|
||||
tcShowResults.append(line)
|
||||
allQDiscStats = []
|
||||
thisFlow = {}
|
||||
thisFlowStats = {}
|
||||
withinCorrectChunk = False
|
||||
for line in tcShowResults:
|
||||
expecting = "qdisc " + fqOrCAKE
|
||||
if expecting in line:
|
||||
thisFlow['qDiscID'] = line.split(' ')[6]
|
||||
withinCorrectChunk = True
|
||||
elif ("Sent " in line) and withinCorrectChunk:
|
||||
items = line.split(' ')
|
||||
thisFlowStats['GigabytesSent'] = str(round((int(items[2]) * 0.000000001), 1))
|
||||
thisFlowStats['PacketsSent'] = int(items[4])
|
||||
thisFlowStats['droppedPackets'] = int(items[7].replace(',',''))
|
||||
thisFlowStats['overlimitsPackets'] = int(items[9])
|
||||
thisFlowStats['requeuedPackets'] = int(items[11].replace(')',''))
|
||||
if thisFlowStats['PacketsSent'] > 0:
|
||||
overlimitsFreq = (thisFlowStats['overlimitsPackets']/thisFlowStats['PacketsSent'])
|
||||
else:
|
||||
overlimitsFreq = -1
|
||||
elif ('backlog' in line) and withinCorrectChunk:
|
||||
items = line.split(' ')
|
||||
thisFlowStats['backlogBytes'] = int(items[2].replace('b',''))
|
||||
thisFlowStats['backlogPackets'] = int(items[3].replace('p',''))
|
||||
thisFlowStats['requeues'] = int(items[5])
|
||||
elif ('maxpacket' in line) and withinCorrectChunk:
|
||||
items = line.split(' ')
|
||||
thisFlowStats['maxPacket'] = int(items[3])
|
||||
thisFlowStats['dropOverlimit'] = int(items[5])
|
||||
thisFlowStats['newFlowCount'] = int(items[7])
|
||||
thisFlowStats['ecnMark'] = int(items[9])
|
||||
elif ("new_flows_len" in line) and withinCorrectChunk:
|
||||
items = line.split(' ')
|
||||
thisFlowStats['newFlowsLen'] = int(items[3])
|
||||
thisFlowStats['oldFlowsLen'] = int(items[5])
|
||||
if thisFlowStats['PacketsSent'] == 0:
|
||||
thisFlowStats['percentageDropped'] = 0
|
||||
else:
|
||||
				thisFlowStats['percentageDropped'] = thisFlowStats['droppedPackets'] / thisFlowStats['PacketsSent']
				withinCorrectChunk = False
				thisFlow['stats'] = thisFlowStats
				allQDiscStats.append(thisFlow)
				thisFlowStats = {}
				thisFlow = {}
	# Load shapableDevices
	updatedFlowStats = []
	with open('devices.json', 'r') as infile:
		devices = json.load(infile)
	for shapableDevice in devices:
		shapableDeviceqdiscSrc = shapableDevice['qdiscSrc']
		shapableDeviceqdiscDst = shapableDevice['qdiscDst']
		for device in allQDiscStats:
			deviceFlowID = device['qDiscID']
			if shapableDeviceqdiscSrc == deviceFlowID:
				name = shapableDevice['hostname']
				AP = shapableDevice['AP']
				ipv4 = shapableDevice['ipv4']
				ipv6 = shapableDevice['ipv6']
				srcOrDst = 'src'
				tempDict = {'name': name, 'AP': AP, 'ipv4': ipv4, 'ipv6': ipv6, 'srcOrDst': srcOrDst}
				device['identification'] = tempDict
				updatedFlowStats.append(device)
			if shapableDeviceqdiscDst == deviceFlowID:
				name = shapableDevice['hostname']
				AP = shapableDevice['AP']
				ipv4 = shapableDevice['ipv4']
				ipv6 = shapableDevice['ipv6']
				srcOrDst = 'dst'
				tempDict = {'name': name, 'AP': AP, 'ipv4': ipv4, 'ipv6': ipv6, 'srcOrDst': srcOrDst}
				device['identification'] = tempDict
				updatedFlowStats.append(device)
	mergedStats = []
	for item in updatedFlowStats:
		if item['identification']['srcOrDst'] == 'src':
			newStat = {
				'identification': {
					'name': item['identification']['name'],
					'AP': item['identification']['AP'],
					'ipv4': item['identification']['ipv4'],
					'ipv6': item['identification']['ipv6']
				},
				'src': {
					'GigabytesSent': item['stats']['GigabytesSent'],
					'PacketsSent': item['stats']['PacketsSent'],
					'droppedPackets': item['stats']['droppedPackets'],
					'overlimitsPackets': item['stats']['overlimitsPackets'],
					'requeuedPackets': item['stats']['requeuedPackets'],
					'backlogBytes': item['stats']['backlogBytes'],
					'backlogPackets': item['stats']['backlogPackets'],
					'requeues': item['stats']['requeues'],
					'maxPacket': item['stats']['maxPacket'],
					'dropOverlimit': item['stats']['dropOverlimit'],
					'newFlowCount': item['stats']['newFlowCount'],
					'ecnMark': item['stats']['ecnMark'],
					'newFlowsLen': item['stats']['newFlowsLen'],
					'oldFlowsLen': item['stats']['oldFlowsLen'],
					'percentageDropped': item['stats']['percentageDropped']
				}
			}
			mergedStats.append(newStat)
	for item in updatedFlowStats:
		if item['identification']['srcOrDst'] == 'dst':
			ipv4 = item['identification']['ipv4']
			ipv6 = item['identification']['ipv6']
			newStat = {
				'dst': {
					'GigabytesSent': item['stats']['GigabytesSent'],
					'PacketsSent': item['stats']['PacketsSent'],
					'droppedPackets': item['stats']['droppedPackets'],
					'overlimitsPackets': item['stats']['overlimitsPackets'],
					'requeuedPackets': item['stats']['requeuedPackets'],
					'backlogBytes': item['stats']['backlogBytes'],
					'backlogPackets': item['stats']['backlogPackets'],
					'requeues': item['stats']['requeues'],
					'maxPacket': item['stats']['maxPacket'],
					'dropOverlimit': item['stats']['dropOverlimit'],
					'newFlowCount': item['stats']['newFlowCount'],
					'ecnMark': item['stats']['ecnMark'],
					'newFlowsLen': item['stats']['newFlowsLen'],
					'oldFlowsLen': item['stats']['oldFlowsLen'],
					'percentageDropped': item['stats']['percentageDropped']
				}
			}
			# dict.update() mutates in place and returns None, so its result must not be reassigned
			for item2 in mergedStats:
				if ipv4 in item2['identification']['ipv4']:
					item2.update(newStat)
				elif ipv6 in item2['identification']['ipv6']:
					item2.update(newStat)
	return mergedStats

if __name__ == '__main__':
	mergedStats = getStatistics()

	# Display a table of the customer CPEs with the most packets dropped
	x = PrettyTable()
	x.field_names = ["Device", "AP", "IPv4", "IPv6", "UL Dropped", "DL Dropped", "GB Down/Up"]
	sortableList = []
	pickTop = 30
	for stat in mergedStats:
		name = stat['identification']['name']
		AP = stat['identification']['AP']
		ipv4 = stat['identification']['ipv4']
		ipv6 = stat['identification']['ipv6']
		srcDropped = stat['src']['percentageDropped']
		dstDropped = stat['dst']['percentageDropped']
		GBuploadedString = stat['src']['GigabytesSent']
		GBdownloadedString = stat['dst']['GigabytesSent']
		# Match the "GB Down/Up" column header: download first, then upload
		GBstring = GBdownloadedString + '/' + GBuploadedString
		avgDropped = (srcDropped + dstDropped) / 2
		sortableList.append((name, AP, ipv4, ipv6, srcDropped, dstDropped, avgDropped, GBstring))
	res = sorted(sortableList, key=itemgetter(4), reverse=True)[:pickTop]
	for stat in res:
		name, AP, ipv4, ipv6, srcDropped, dstDropped, avgDropped, GBstring = stat
		if not name:
			name = ipv4
		srcDroppedString = "{0:.4%}".format(srcDropped)
		dstDroppedString = "{0:.4%}".format(dstDropped)
		x.add_row([name, AP, ipv4, ipv6, srcDroppedString, dstDroppedString, GBstring])
	print(x)
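The statistics code above derives `percentageDropped` by dividing a qdisc's dropped-packet counter by its sent-packet counter. As a minimal, self-contained sketch of that calculation (the function name is mine, not LibreQoS's, and it adds a guard for an idle queue):

```python
def percentage_dropped(dropped_packets: int, packets_sent: int) -> float:
    """Fraction of packets dropped by a qdisc; guards against a queue that sent nothing."""
    if packets_sent == 0:
        return 0.0
    return dropped_packets / packets_sent

# Example: 42 drops out of 100,000 sent packets, formatted like the table above
print("{0:.4%}".format(percentage_dropped(42, 100000)))  # → 0.0420%
```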
@ -1,6 +0,0 @@
AP,Max Download,Max Upload
A,500,100
C,225,50
F,500,100
R,225,50
X,500,100
@ -1,83 +0,0 @@
import requests
import csv
from ispConfig import UISPbaseURL, uispAuthToken, shapeRouterOrStation


stationModels = ['LBE-5AC-Gen2', 'LBE-5AC-Gen2', 'LBE-5AC-LR', 'AF-LTU5', 'AFLTULR', 'AFLTUPro', 'LTU-LITE']
routerModels = ['ACB-AC', 'ACB-ISP']

def pullShapedDevices():
	devices = []
	uispSitesToImport = []
	url = UISPbaseURL + "/nms/api/v2.1/sites?type=client&ucrm=true&ucrmDetails=true"
	headers = {'accept': 'application/json', 'x-auth-token': uispAuthToken}
	r = requests.get(url, headers=headers)
	jsonData = r.json()
	uispDevicesToImport = []
	for uispClientSite in jsonData:
		if uispClientSite['identification']['status'] == 'active':
			if (uispClientSite['qos']['downloadSpeed']) and (uispClientSite['qos']['uploadSpeed']):
				downloadSpeedMbps = int(round(uispClientSite['qos']['downloadSpeed']/1000000))
				uploadSpeedMbps = int(round(uispClientSite['qos']['uploadSpeed']/1000000))
				address = uispClientSite['description']['address']
				uispClientSiteID = uispClientSite['id']
				devicesInUISPsite = getUISPdevicesAtClientSite(uispClientSiteID)
				UCRMclientID = uispClientSite['ucrm']['client']['id']
				AP = 'none'
				thisSiteDevices = []
				# Look for station devices; use those to find the AP name
				for device in devicesInUISPsite:
					deviceName = device['identification']['name']
					deviceRole = device['identification']['role']
					deviceModel = device['identification']['model']
					deviceModelName = device['identification']['modelName']
					if (deviceRole == 'station') or (deviceModel in stationModels):
						if device['attributes']['apDevice']:
							AP = device['attributes']['apDevice']['name']
				if shapeRouterOrStation == 'router':
					# Look for router devices; use those as the shaped CPE
					for device in devicesInUISPsite:
						deviceName = device['identification']['name']
						deviceRole = device['identification']['role']
						deviceMAC = device['identification']['mac']
						deviceIPstring = device['ipAddress']
						if '/' in deviceIPstring:
							deviceIPstring = deviceIPstring.split("/")[0]
						deviceModel = device['identification']['model']
						deviceModelName = device['identification']['modelName']
						if (deviceRole == 'router') or (deviceModel in routerModels):
							print("Added:\t" + deviceName)
							devices.append((UCRMclientID, AP, deviceMAC, deviceName, deviceIPstring, '', str(downloadSpeedMbps/4), str(uploadSpeedMbps/4), str(downloadSpeedMbps), str(uploadSpeedMbps)))
				elif shapeRouterOrStation == 'station':
					# Look for station devices; use those as the shaped CPE
					for device in devicesInUISPsite:
						deviceName = device['identification']['name']
						deviceRole = device['identification']['role']
						deviceMAC = device['identification']['mac']
						deviceIPstring = device['ipAddress']
						if '/' in deviceIPstring:
							deviceIPstring = deviceIPstring.split("/")[0]
						deviceModel = device['identification']['model']
						deviceModelName = device['identification']['modelName']
						if (deviceRole == 'station') or (deviceModel in stationModels):
							print("Added:\t" + deviceName)
							devices.append((UCRMclientID, AP, deviceMAC, deviceName, deviceIPstring, '', str(downloadSpeedMbps/4), str(uploadSpeedMbps/4), str(downloadSpeedMbps), str(uploadSpeedMbps)))
				uispSitesToImport.append(thisSiteDevices)
				print("Imported " + address)
			else:
				print("Failed to import devices from " + uispClientSite['description']['address'] + ". Missing QoS.")
	return devices

def getUISPdevicesAtClientSite(siteID):
	url = UISPbaseURL + "/nms/api/v2.1/devices?siteId=" + siteID
	headers = {'accept': 'application/json', 'x-auth-token': uispAuthToken}
	r = requests.get(url, headers=headers)
	return r.json()

if __name__ == '__main__':
	devicesList = pullShapedDevices()
	with open('Shaper.csv', 'w') as csvfile:
		wr = csv.writer(csvfile, quoting=csv.QUOTE_ALL)
		wr.writerow(['ID', 'AP', 'MAC', 'Hostname', 'IPv4', 'IPv6', 'Download Min', 'Upload Min', 'Download Max', 'Upload Max'])
		for device in devicesList:
			wr.writerow(device)
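The import loop above strips any CIDR suffix from the UISP-reported `ipAddress` (e.g. `100.64.0.7/24`) before writing the address to Shaper.csv. That normalization step can be sketched on its own (the function name and sample addresses are mine, for illustration):

```python
def strip_cidr(ip_string: str) -> str:
    """Return a bare host address from a UISP 'ipAddress' value like '100.64.0.7/24'."""
    if '/' in ip_string:
        return ip_string.split('/')[0]
    return ip_string

print(strip_cidr('100.64.0.7/24'))  # → 100.64.0.7
print(strip_cidr('100.64.0.8'))     # → 100.64.0.8
```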
@ -1,218 +0,0 @@
# Copyright (C) 2020 Robert Chacón
# This file is part of LibreQoS.
#
# LibreQoS is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
#
# LibreQoS is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with LibreQoS. If not, see <http://www.gnu.org/licenses/>.
#
# _ _ _ ___ ____
# | | (_) |__ _ __ ___ / _ \ ___/ ___|
# | | | | '_ \| '__/ _ \ | | |/ _ \___ \
# | |___| | |_) | | | __/ |_| | (_) |__) |
# |_____|_|_.__/|_| \___|\__\_\\___/____/
# v.0.91-stable
#
import random
import logging
import os
import io
import json
import csv
import subprocess
from subprocess import PIPE
import ipaddress
from ipaddress import IPv4Address, IPv6Address
import time
from datetime import date, datetime
from ispConfig import fqOrCAKE, upstreamBandwidthCapacityDownloadMbps, upstreamBandwidthCapacityUploadMbps, defaultClassCapacityDownloadMbps, defaultClassCapacityUploadMbps, interfaceA, interfaceB, enableActualShellCommands, runShellCommandsAsSudo
import collections

def shell(command):
	if enableActualShellCommands:
		if runShellCommandsAsSudo:
			command = 'sudo ' + command
		commands = command.split(' ')
		print(command)
		proc = subprocess.Popen(commands, stdout=subprocess.PIPE)
		for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"):  # or another encoding
			print(line)
	else:
		print(command)

def clearPriorSettings(interfaceA, interfaceB):
	shell('tc filter delete dev ' + interfaceA)
	shell('tc filter delete dev ' + interfaceA + ' root')
	shell('tc qdisc delete dev ' + interfaceA + ' root')
	shell('tc qdisc delete dev ' + interfaceA)
	shell('tc filter delete dev ' + interfaceB)
	shell('tc filter delete dev ' + interfaceB + ' root')
	shell('tc qdisc delete dev ' + interfaceB + ' root')
	shell('tc qdisc delete dev ' + interfaceB)
	if runShellCommandsAsSudo:
		clearMemoryCache()

def refreshShapers():
	tcpOverheadFactor = 1.09
	devices = []
	accessPointDownloadMbps = {}
	accessPointUploadMbps = {}

	# Load Access Points
	with open('AccessPoints.csv') as csv_file:
		csv_reader = csv.reader(csv_file, delimiter=',')
		next(csv_reader)
		for row in csv_reader:
			AP, download, upload = row
			accessPointDownloadMbps[AP] = int(download)*tcpOverheadFactor
			accessPointUploadMbps[AP] = int(upload)*tcpOverheadFactor

	# Load Devices
	with open('Shaper.csv') as csv_file:
		csv_reader = csv.reader(csv_file, delimiter=',')
		next(csv_reader)
		for row in csv_reader:
			deviceID, AP, mac, hostname, ipv4, ipv6, downloadMin, uploadMin, downloadMax, uploadMax = row
			ipv4 = ipv4.strip()
			ipv6 = ipv6.strip()
			if AP == "":
				AP = "none"
			AP = AP.strip()
			thisDevice = {
				"id": deviceID,
				"mac": mac,
				"AP": AP,
				"hostname": hostname,
				"ipv4": ipv4,
				"ipv6": ipv6,
				"downloadMin": int(downloadMin)*tcpOverheadFactor,
				"uploadMin": int(uploadMin)*tcpOverheadFactor,
				"downloadMax": int(downloadMax)*tcpOverheadFactor,
				"uploadMax": int(uploadMax)*tcpOverheadFactor,
				"qdisc": '',
			}
			# If an AP is specified for a device in Shaper.csv, but that AP is not listed in AccessPoints.csv, raise an exception
			if (AP != "none") and (AP not in accessPointDownloadMbps):
				raise ValueError('AP ' + AP + ' not listed in AccessPoints.csv')
			devices.append(thisDevice)

	# If no AP is specified for a device in Shaper.csv, it is placed under this 'default' AP shaper, set to the bandwidth maximum at the edge
	accessPointDownloadMbps['none'] = upstreamBandwidthCapacityDownloadMbps
	accessPointUploadMbps['none'] = upstreamBandwidthCapacityUploadMbps

	# Sort into bins by AP
	result = collections.defaultdict(list)
	for d in devices:
		result[d['AP']].append(d)
	devicesByAP = list(result.values())

	clearPriorSettings(interfaceA, interfaceB)

	# XDP-CPUMAP-TC
	shell('./xdp-cpumap-tc/bin/xps_setup.sh -d ' + interfaceA + ' --default --disable')
	shell('./xdp-cpumap-tc/bin/xps_setup.sh -d ' + interfaceB + ' --default --disable')
	shell('./xdp-cpumap-tc/src/xdp_iphash_to_cpu --dev ' + interfaceA + ' --lan')
	shell('./xdp-cpumap-tc/src/xdp_iphash_to_cpu --dev ' + interfaceB + ' --wan')
	shell('./xdp-cpumap-tc/src/xdp_iphash_to_cpu_cmdline --clear')
	shell('./xdp-cpumap-tc/src/tc_classify --dev-egress ' + interfaceA)
	shell('./xdp-cpumap-tc/src/tc_classify --dev-egress ' + interfaceB)

	# Find available queues
	queuesAvailable = 0
	path = '/sys/class/net/' + interfaceA + '/queues/'
	directory_contents = os.listdir(path)
	print(directory_contents)
	for item in directory_contents:
		if "tx-" in str(item):
			queuesAvailable += 1

	# For VMs, must reduce queues if more than 9, for some reason
	if queuesAvailable > 9:
		command = 'grep -q ^flags.*\ hypervisor\ /proc/cpuinfo && echo "This machine is a VM"'
		try:
			output = subprocess.check_output(command, stderr=subprocess.STDOUT, shell=True).decode()
			success = True
		except subprocess.CalledProcessError as e:
			output = e.output.decode()
			success = False
		if "This machine is a VM" in output:
			queuesAvailable = 9

	# Create MQ
	thisInterface = interfaceA
	shell('tc qdisc replace dev ' + thisInterface + ' root handle 7FFF: mq')
	for queue in range(queuesAvailable):
		shell('tc qdisc add dev ' + thisInterface + ' parent 7FFF:' + str(queue+1) + ' handle ' + str(queue+1) + ': htb default 2')
		shell('tc class add dev ' + thisInterface + ' parent ' + str(queue+1) + ': classid ' + str(queue+1) + ':1 htb rate ' + str(upstreamBandwidthCapacityDownloadMbps) + 'mbit ceil ' + str(upstreamBandwidthCapacityDownloadMbps) + 'mbit')
		shell('tc qdisc add dev ' + thisInterface + ' parent ' + str(queue+1) + ':1 ' + fqOrCAKE)
		# Default class - traffic gets passed through this limiter with lower priority if not otherwise classified by Shaper.csv.
		# Only 1/4 of defaultClassCapacity is guaranteed (to prevent hitting the ceiling of the upstream); for the most part it serves as an "up to" ceiling.
		# The default class can use up to defaultClassCapacityDownloadMbps when that bandwidth isn't used by known hosts.
		shell('tc class add dev ' + thisInterface + ' parent ' + str(queue+1) + ':1 classid ' + str(queue+1) + ':2 htb rate ' + str(defaultClassCapacityDownloadMbps/4) + 'mbit ceil ' + str(defaultClassCapacityDownloadMbps) + 'mbit prio 5')
		shell('tc qdisc add dev ' + thisInterface + ' parent ' + str(queue+1) + ':2 ' + fqOrCAKE)

	thisInterface = interfaceB
	shell('tc qdisc replace dev ' + thisInterface + ' root handle 7FFF: mq')
	for queue in range(queuesAvailable):
		shell('tc qdisc add dev ' + thisInterface + ' parent 7FFF:' + str(queue+1) + ' handle ' + str(queue+1) + ': htb default 2')
		shell('tc class add dev ' + thisInterface + ' parent ' + str(queue+1) + ': classid ' + str(queue+1) + ':1 htb rate ' + str(upstreamBandwidthCapacityUploadMbps) + 'mbit ceil ' + str(upstreamBandwidthCapacityUploadMbps) + 'mbit')
		shell('tc qdisc add dev ' + thisInterface + ' parent ' + str(queue+1) + ':1 ' + fqOrCAKE)
		# Default class - traffic gets passed through this limiter with lower priority if not otherwise classified by Shaper.csv.
		# Only 1/4 of defaultClassCapacity is guaranteed (to prevent hitting the ceiling of the upstream); for the most part it serves as an "up to" ceiling.
		# The default class can use up to defaultClassCapacityUploadMbps when that bandwidth isn't used by known hosts.
		shell('tc class add dev ' + thisInterface + ' parent ' + str(queue+1) + ':1 classid ' + str(queue+1) + ':2 htb rate ' + str(defaultClassCapacityUploadMbps/4) + 'mbit ceil ' + str(defaultClassCapacityUploadMbps) + 'mbit prio 5')
		shell('tc qdisc add dev ' + thisInterface + ' parent ' + str(queue+1) + ':2 ' + fqOrCAKE)

	currentQueueCounter = 1
	queueMinorCounterDict = {}
	# :1 and :2 are used for root and default classes, so start each counter at :3
	for queueNum in range(queuesAvailable):
		queueMinorCounterDict[queueNum+1] = 3

	for AP in devicesByAP:
		currentAPname = AP[0]['AP']
		thisAPdownload = accessPointDownloadMbps[currentAPname]
		thisAPupload = accessPointUploadMbps[currentAPname]
		major = currentQueueCounter
		minor = queueMinorCounterDict[currentQueueCounter]
		thisHTBclassID = str(currentQueueCounter) + ':' + str(minor)
		# HTB class + qdisc for each AP
		# Guarantee the AP at least 1/4 of its radio capacity; allow up to its max radio capacity when the network is not at peak load
		shell('tc class add dev ' + interfaceA + ' parent ' + str(currentQueueCounter) + ':1 classid ' + str(minor) + ' htb rate ' + str(round(thisAPdownload/4)) + 'mbit ceil ' + str(round(thisAPdownload)) + 'mbit prio 3')
		shell('tc qdisc add dev ' + interfaceA + ' parent ' + str(currentQueueCounter) + ':' + str(minor) + ' ' + fqOrCAKE)
		shell('tc class add dev ' + interfaceB + ' parent ' + str(major) + ':1 classid ' + str(minor) + ' htb rate ' + str(round(thisAPupload/4)) + 'mbit ceil ' + str(round(thisAPupload)) + 'mbit prio 3')
		shell('tc qdisc add dev ' + interfaceB + ' parent ' + str(major) + ':' + str(minor) + ' ' + fqOrCAKE)
		minor += 1
		for device in AP:
			# HTB class + qdisc for each device
			shell('tc class add dev ' + interfaceA + ' parent ' + thisHTBclassID + ' classid ' + str(minor) + ' htb rate ' + str(device['downloadMin']) + 'mbit ceil ' + str(device['downloadMax']) + 'mbit prio 3')
			shell('tc qdisc add dev ' + interfaceA + ' parent ' + str(major) + ':' + str(minor) + ' ' + fqOrCAKE)
			shell('tc class add dev ' + interfaceB + ' parent ' + thisHTBclassID + ' classid ' + str(minor) + ' htb rate ' + str(device['uploadMin']) + 'mbit ceil ' + str(device['uploadMax']) + 'mbit prio 3')
			shell('tc qdisc add dev ' + interfaceB + ' parent ' + str(major) + ':' + str(minor) + ' ' + fqOrCAKE)
			if device['ipv4']:
				parentString = str(major) + ':'
				flowIDstring = str(major) + ':' + str(minor)
				shell('./xdp-cpumap-tc/src/xdp_iphash_to_cpu_cmdline --add --ip ' + device['ipv4'] + ' --cpu ' + str(currentQueueCounter-1) + ' --classid ' + flowIDstring)
			# Once XDP-CPUMAP-TC handles IPv6, this can be added:
			#if device['ipv6']:
			#	parentString = str(major) + ':'
			#	flowIDstring = str(major) + ':' + str(minor)
			#	shell('./xdp-cpumap-tc/src/xdp_iphash_to_cpu_cmdline --add --ip ' + device['ipv6'] + ' --cpu ' + str(currentQueueCounter-1) + ' --classid ' + flowIDstring)
			device['qdisc'] = str(major) + ':' + str(minor)
			minor += 1
		queueMinorCounterDict[currentQueueCounter] = minor

		currentQueueCounter += 1
		if currentQueueCounter > queuesAvailable:
			currentQueueCounter = 1

	# Save devices to file to allow for statistics runs
	with open('devices.json', 'w') as outfile:
		json.dump(devices, outfile)

	# Done
	currentTimeString = datetime.now().strftime("%d/%m/%Y %H:%M:%S")
	print("Successful run completed on " + currentTimeString)

if __name__ == '__main__':
	refreshShapers()
	print("Program complete")
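The comments in refreshShapers() above describe the rate/ceil pattern used throughout: each AP and the default class are guaranteed 1/4 of their capacity, while bursts are allowed up to the full capacity ceiling. A minimal sketch of that rate math (the helper name is mine, for illustration):

```python
def htb_rate_and_ceil(capacity_mbps: float, guaranteed_fraction: float = 0.25):
    """Return (rate, ceil) for an HTB class: guarantee a fraction of capacity, allow bursts to the full ceiling."""
    return round(capacity_mbps * guaranteed_fraction), round(capacity_mbps)

# A 500 Mbps AP gets a 125 Mbps guarantee and a 500 Mbps ceiling
rate, ceil = htb_rate_and_ceil(500)
print('tc class add ... htb rate {}mbit ceil {}mbit prio 3'.format(rate, ceil))
```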
@ -1,38 +0,0 @@
# v0.9 (IPv4) (Stable)

- Released: 11 Jul 2021

## Installation Guide

Best performance, bare metal, IPv4 only:

- 📄 [LibreQoS v0.9 Installation & Usage Guide - Physical Server and Ubuntu 21.10](https://github.com/rchac/LibreQoS/wiki/LibreQoS-v0.9-Installation-&-Usage-Guide----Physical-Server-and-Ubuntu-21.10)

Good performance, VM, IPv4 only:

- 📄 [LibreQoS v0.9 Installation & Usage Guide - Proxmox and Ubuntu 21.10](https://github.com/rchac/LibreQoS/wiki/LibreQoS-v0.9-Installation-&-Usage-Guide----Proxmox-and-Ubuntu-21.10)

## Features

- XDP-CPUMAP-TC integration greatly improves throughput, allows many more IPv4 clients, and lowers CPU use. Latency is reduced by half on networks previously limited by the single-CPU / TC qdisc locking problem in v0.8.

- Tested up to 10Gbps asymmetrical throughput on a dedicated server (the lab only had a 10G router). v0.9 is estimated to be capable of 20Gbps-40Gbps asymmetrical throughput on a dedicated server with 12+ cores.

- MQ+HTB+fq_codel or MQ+HTB+cake

- Now defaults to 'cake diffserv4' for optimal client performance

- Client limit raised from 1,000 to 32,767

- Shape clients by Access Point / Node capacity

- APs equally distributed among CPUs / NIC queues to greatly increase throughput

- Simple client management via CSV file

## Considerations

- Each Node / Access Point is tied to a queue and CPU core. Access Points are evenly distributed across CPUs. Since each CPU can usually accommodate only up to 4Gbps, ensure that no single Node / Access Point will require more than 4Gbps of throughput.

## Limitations

- Not dual stack: clients can only be shaped by IPv4 address for now in v0.9. Once IPv6 support is added to XDP-CPUMAP-TC, IPv6 can be shaped as well.

- XDP's cpumap-redirect achieves higher throughput on a server with direct access to the NIC (XDP offloading possible) than as a VM with bridges (generic XDP).
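The MQ+HTB+fq_codel/cake design listed above attaches one HTB tree per hardware queue under an MQ root, each capped at link capacity. A simplified sketch of how those per-queue `tc` commands are generated (this mirrors the approach, not the project's exact command set; the interface name and rates are placeholders):

```python
def mq_htb_commands(interface: str, queues: int, capacity_mbit: int, qdisc: str = 'cake diffserv4'):
    """Build an MQ root with one HTB tree per hardware queue, each capped at link capacity."""
    cmds = ['tc qdisc replace dev {} root handle 7FFF: mq'.format(interface)]
    for q in range(1, queues + 1):
        # One HTB handle per queue; unclassified traffic falls into class :2
        cmds.append('tc qdisc add dev {} parent 7FFF:{} handle {}: htb default 2'.format(interface, q, q))
        # Root class for this queue, rate == ceil == link capacity
        cmds.append('tc class add dev {0} parent {1}: classid {1}:1 htb rate {2}mbit ceil {2}mbit'.format(interface, q, capacity_mbit))
        # Leaf qdisc (fq_codel or cake) for fairness within the class
        cmds.append('tc qdisc add dev {} parent {}:1 {}'.format(interface, q, qdisc))
    return cmds

for cmd in mq_htb_commands('eth1', 2, 1000):
    print(cmd)
```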
@ -1,6 +0,0 @@
ID,AP,MAC,Hostname,IPv4,IPv6,Download Min,Upload Min,Download Max,Upload Max
3001,A,32:3B:FE:B0:92:C1,CPE-Customer1,100.126.0.77,2001:495:1f0f:58a::4/64,25,8,115,18
3002,C,AE:EC:D3:70:DD:36,CPE-Customer2,100.126.0.78,2001:495:1f0f:58a::8/64,25,8,115,18
3003,F,1C:1E:60:69:88:9A,CPE-Customer3,100.126.0.79,2001:495:1f0f:58a::12/64,25,8,115,18
3004,R,11:B1:63:C4:DA:4C,CPE-Customer4,100.126.0.80,2001:495:1f0f:58a::16/64,25,8,115,18
3005,X,46:2F:B5:C2:0B:15,CPE-Customer5,100.126.0.81,2001:495:1f0f:58a::20/64,25,8,115,18
@ -1,33 +0,0 @@
# 'fq_codel' or 'cake diffserv4'
# 'cake diffserv4' is recommended

# fqOrCAKE = 'fq_codel'
fqOrCAKE = 'cake diffserv4'

# How many Mbps are available to the edge of this network
upstreamBandwidthCapacityDownloadMbps = 1000
upstreamBandwidthCapacityUploadMbps = 1000

# Traffic from devices not specified in Shaper.csv will be rate limited by an HTB of this many Mbps
defaultClassCapacityDownloadMbps = 500
defaultClassCapacityUploadMbps = 500

# Interface connected to core router
interfaceA = 'eth1'

# Interface connected to edge router
interfaceB = 'eth2'

# Allow shell commands. When False, commands are printed to the console without being executed. MUST BE ENABLED FOR THE PROGRAM TO FUNCTION
enableActualShellCommands = True

# Add 'sudo' before execution of any shell commands. May be required depending on distribution and environment.
runShellCommandsAsSudo = False

# Optional UISP integration
# Everything before /nms/ on your UISP instance
UISPbaseURL = 'https://examplesite.com'
# UISP Auth Token
uispAuthToken = ''
# Whether to shape the router at the customer premises, or instead the station radio. When the station radio is in router mode, use 'station'. Otherwise, use 'router'.
shapeRouterOrStation = 'router'
@ -1,11 +0,0 @@
import time
import schedule
from datetime import date
from LibreQoS import refreshShapers

if __name__ == '__main__':
	refreshShapers()
	schedule.every().day.at("04:00").do(refreshShapers)
	while True:
		schedule.run_pending()
		time.sleep(60)  # wait one minute
@ -1 +0,0 @@
Subproject commit f44cbf31b5f610907fd278851f0c0ba4792c86a4
@ -1,9 +0,0 @@
AP,Max Download,Max Upload,Parent Site
AP1,250,60,Site1
AP2,250,60,Site2
AP3,250,60,Site3
AP4,250,60,Site4
AP5,250,60,Site1
AP6,250,60,Site2
AP7,250,60,Site3
AP8,250,60,Site4
@ -1,83 +0,0 @@
import requests
import csv
from ispConfig import UISPbaseURL, uispAuthToken, shapeRouterOrStation


stationModels = ['LBE-5AC-Gen2', 'LBE-5AC-Gen2', 'LBE-5AC-LR', 'AF-LTU5', 'AFLTULR', 'AFLTUPro', 'LTU-LITE']
routerModels = ['ACB-AC', 'ACB-ISP']

def pullShapedDevices():
	devices = []
	uispSitesToImport = []
	url = UISPbaseURL + "/nms/api/v2.1/sites?type=client&ucrm=true&ucrmDetails=true"
	headers = {'accept': 'application/json', 'x-auth-token': uispAuthToken}
	r = requests.get(url, headers=headers)
	jsonData = r.json()
	uispDevicesToImport = []
	for uispClientSite in jsonData:
		if uispClientSite['identification']['status'] == 'active':
			if (uispClientSite['qos']['downloadSpeed']) and (uispClientSite['qos']['uploadSpeed']):
				downloadSpeedMbps = int(round(uispClientSite['qos']['downloadSpeed']/1000000))
				uploadSpeedMbps = int(round(uispClientSite['qos']['uploadSpeed']/1000000))
				address = uispClientSite['description']['address']
				uispClientSiteID = uispClientSite['id']
				devicesInUISPsite = getUISPdevicesAtClientSite(uispClientSiteID)
				UCRMclientID = uispClientSite['ucrm']['client']['id']
				AP = 'none'
				thisSiteDevices = []
				# Look for station devices; use those to find the AP name
				for device in devicesInUISPsite:
					deviceName = device['identification']['name']
					deviceRole = device['identification']['role']
					deviceModel = device['identification']['model']
					deviceModelName = device['identification']['modelName']
					if (deviceRole == 'station') or (deviceModel in stationModels):
						if device['attributes']['apDevice']:
							AP = device['attributes']['apDevice']['name']
				if shapeRouterOrStation == 'router':
					# Look for router devices; use those as the shaped CPE
					for device in devicesInUISPsite:
						deviceName = device['identification']['name']
						deviceRole = device['identification']['role']
						deviceMAC = device['identification']['mac']
						deviceIPstring = device['ipAddress']
						if '/' in deviceIPstring:
							deviceIPstring = deviceIPstring.split("/")[0]
						deviceModel = device['identification']['model']
						deviceModelName = device['identification']['modelName']
						if (deviceRole == 'router') or (deviceModel in routerModels):
							print("Added:\t" + deviceName)
							devices.append((UCRMclientID, AP, deviceMAC, deviceName, deviceIPstring, '', str(downloadSpeedMbps/4), str(uploadSpeedMbps/4), str(downloadSpeedMbps), str(uploadSpeedMbps)))
				elif shapeRouterOrStation == 'station':
					# Look for station devices; use those as the shaped CPE
					for device in devicesInUISPsite:
						deviceName = device['identification']['name']
						deviceRole = device['identification']['role']
						deviceMAC = device['identification']['mac']
						deviceIPstring = device['ipAddress']
						if '/' in deviceIPstring:
							deviceIPstring = deviceIPstring.split("/")[0]
						deviceModel = device['identification']['model']
						deviceModelName = device['identification']['modelName']
						if (deviceRole == 'station') or (deviceModel in stationModels):
							print("Added:\t" + deviceName)
							devices.append((UCRMclientID, AP, deviceMAC, deviceName, deviceIPstring, '', str(downloadSpeedMbps/4), str(uploadSpeedMbps/4), str(downloadSpeedMbps), str(uploadSpeedMbps)))
				uispSitesToImport.append(thisSiteDevices)
				print("Imported " + address)
			else:
				print("Failed to import devices from " + uispClientSite['description']['address'] + ". Missing QoS.")
	return devices

def getUISPdevicesAtClientSite(siteID):
	url = UISPbaseURL + "/nms/api/v2.1/devices?siteId=" + siteID
	headers = {'accept': 'application/json', 'x-auth-token': uispAuthToken}
	r = requests.get(url, headers=headers)
	return r.json()

if __name__ == '__main__':
	devicesList = pullShapedDevices()
	with open('Shaper.csv', 'w') as csvfile:
		wr = csv.writer(csvfile, quoting=csv.QUOTE_ALL)
		wr.writerow(['ID', 'AP', 'MAC', 'Hostname', 'IPv4', 'IPv6', 'Download Min', 'Upload Min', 'Download Max', 'Upload Max'])
		for device in devicesList:
			wr.writerow(device)
@ -1,327 +0,0 @@
|
||||
# Copyright (C) 2020-2021 Robert Chacón
|
||||
# This file is part of LibreQoS.
|
||||
#
|
||||
# LibreQoS is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 2 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# LibreQoS is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with LibreQoS. If not, see <http://www.gnu.org/licenses/>.
|
||||
#
|
||||
# _ _ _ ___ ____
|
||||
# | | (_) |__ _ __ ___ / _ \ ___/ ___|
|
||||
# | | | | '_ \| '__/ _ \ | | |/ _ \___ \
|
||||
# | |___| | |_) | | | __/ |_| | (_) |__) |
|
||||
# |_____|_|_.__/|_| \___|\__\_\\___/____/
|
||||
# v.1.0-stable
|
||||
#
|
||||
import random
|
||||
import logging
|
||||
import os
|
||||
import io
|
||||
import json
|
||||
import csv
|
||||
import subprocess
|
||||
from subprocess import PIPE
|
||||
import ipaddress
|
||||
from ipaddress import IPv4Address, IPv6Address
|
||||
import time
|
||||
from datetime import date, datetime
|
||||
from ispConfig import fqOrCAKE, upstreamBandwidthCapacityDownloadMbps, upstreamBandwidthCapacityUploadMbps, defaultClassCapacityDownloadMbps, defaultClassCapacityUploadMbps, interfaceA, interfaceB, shapeBySite, enableActualShellCommands, runShellCommandsAsSudo
|
||||
import collections
|
||||
|
||||
def shell(command):
|
||||
if enableActualShellCommands:
|
||||
if runShellCommandsAsSudo:
|
||||
command = 'sudo ' + command
|
||||
commands = command.split(' ')
|
||||
print(command)
|
||||
proc = subprocess.Popen(commands, stdout=subprocess.PIPE)
|
||||
for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"): # or another encoding
|
||||
print(line)
|
||||
else:
|
||||
print(command)
|
||||
|
||||
def clearPriorSettings(interfaceA, interfaceB):
|
||||
shell('tc filter delete dev ' + interfaceA)
|
||||
shell('tc filter delete dev ' + interfaceA + ' root')
|
||||
shell('tc qdisc delete dev ' + interfaceA + ' root')
|
||||
shell('tc qdisc delete dev ' + interfaceA)
|
||||
shell('tc filter delete dev ' + interfaceB)
|
||||
shell('tc filter delete dev ' + interfaceB + ' root')
|
||||
shell('tc qdisc delete dev ' + interfaceB + ' root')
|
||||
shell('tc qdisc delete dev ' + interfaceB)
|
||||
if runShellCommandsAsSudo:
|
||||
clearMemoryCache()
|
||||
|
||||
def refreshShapers():
    tcpOverheadFactor = 1.09
    accessPointDownloadMbps = {}
    accessPointUploadMbps = {}

    # Load Devices
    devices = []
    with open('Shaper.csv') as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        next(csv_reader)
        for row in csv_reader:
            deviceID, AP, mac, hostname, ipv4, ipv6, downloadMin, uploadMin, downloadMax, uploadMax = row
            ipv4 = ipv4.strip()
            ipv6 = ipv6.strip()
            if AP == "":
                AP = "none"
            AP = AP.strip()
            thisDevice = {
                "id": deviceID,
                "mac": mac,
                "AP": AP,
                "hostname": hostname,
                "ipv4": ipv4,
                "ipv6": ipv6,
                "downloadMin": round(int(downloadMin)*tcpOverheadFactor),
                "uploadMin": round(int(uploadMin)*tcpOverheadFactor),
                "downloadMax": round(int(downloadMax)*tcpOverheadFactor),
                "uploadMax": round(int(uploadMax)*tcpOverheadFactor),
                "qdisc": '',
            }
            devices.append(thisDevice)
    # Load Access Points
    accessPoints = []
    accessPointNamesOnly = []
    with open('AccessPoints.csv') as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        next(csv_reader)
        for row in csv_reader:
            APname, apDownload, apUpload, parentSite = row
            accessPointDownloadMbps[APname] = int(apDownload)*tcpOverheadFactor
            accessPointUploadMbps[APname] = int(apUpload)*tcpOverheadFactor
            accessPointNamesOnly.append(APname)
            apDownload = round(int(apDownload)*tcpOverheadFactor)
            apUpload = round(int(apUpload)*tcpOverheadFactor)
            devicesForThisAP = []
            for device in devices:
                if APname == device['AP']:
                    devicesForThisAP.append(device)
            accessPoints.append((APname, apDownload, apUpload, parentSite, devicesForThisAP))
    # Sort devices into bins by AP, for the scenario shapeBySite = False
    result = collections.defaultdict(list)
    for d in devices:
        result[d['AP']].append(d)
    devicesByAP = list(result.values())

    # If no AP is specified for a device in Shaper.csv, it is placed under this 'default' AP shaper, set to the bandwidth max at the edge
    accessPointDownloadMbps['none'] = upstreamBandwidthCapacityDownloadMbps
    accessPointUploadMbps['none'] = upstreamBandwidthCapacityUploadMbps

    # If an AP is specified for a device in Shaper.csv, but that AP is not listed in AccessPoints.csv, raise an exception
    for device in devices:
        if (device['AP'] not in accessPointNamesOnly) and (device['AP'] != 'none'):
            print(device['AP'])
            raise ValueError('AP for device ' + device['hostname'] + ' not listed in AccessPoints.csv')
    # Load Sites
    sites = []
    with open('Sites.csv') as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        next(csv_reader)
        for row in csv_reader:
            siteName, download, upload = row
            siteDownloadMbps = int(download)
            siteUploadMbps = int(upload)
            apsForThisSite = []
            for AP in accessPoints:
                APname, apDownload, apUpload, parentSite, devicesForThisAP = AP
                if parentSite == siteName:
                    apsForThisSite.append((APname, apDownload, apUpload, parentSite, devicesForThisAP))
            sites.append((siteName, siteDownloadMbps, siteUploadMbps, apsForThisSite))
    # Clear prior settings
    clearPriorSettings(interfaceA, interfaceB)

    # XDP-CPUMAP-TC
    shell('./xdp-cpumap-tc/bin/xps_setup.sh -d ' + interfaceA + ' --default --disable')
    shell('./xdp-cpumap-tc/bin/xps_setup.sh -d ' + interfaceB + ' --default --disable')
    shell('./xdp-cpumap-tc/src/xdp_iphash_to_cpu --dev ' + interfaceA + ' --lan')
    shell('./xdp-cpumap-tc/src/xdp_iphash_to_cpu --dev ' + interfaceB + ' --wan')
    shell('./xdp-cpumap-tc/src/xdp_iphash_to_cpu_cmdline --clear')
    shell('./xdp-cpumap-tc/src/tc_classify --dev-egress ' + interfaceA)
    shell('./xdp-cpumap-tc/src/tc_classify --dev-egress ' + interfaceB)
    # Find queues available
    queuesAvailable = 0
    path = '/sys/class/net/' + interfaceA + '/queues/'
    directory_contents = os.listdir(path)
    print(directory_contents)
    for item in directory_contents:
        if "tx-" in str(item):
            queuesAvailable += 1

    # For VMs, reduce the queue count to 9 or fewer (observed limitation under hypervisors)
    if queuesAvailable > 9:
        command = 'grep -q "^flags.* hypervisor" /proc/cpuinfo && echo "This machine is a VM"'
        try:
            output = subprocess.check_output(command, stderr=subprocess.STDOUT, shell=True).decode()
            success = True
        except subprocess.CalledProcessError as e:
            output = e.output.decode()
            success = False
        if "This machine is a VM" in output:
            queuesAvailable = 9
    # Create MQ
    thisInterface = interfaceA
    shell('tc qdisc replace dev ' + thisInterface + ' root handle 7FFF: mq')
    for queue in range(queuesAvailable):
        shell('tc qdisc add dev ' + thisInterface + ' parent 7FFF:' + str(queue+1) + ' handle ' + str(queue+1) + ': htb default 2')
        shell('tc class add dev ' + thisInterface + ' parent ' + str(queue+1) + ': classid ' + str(queue+1) + ':1 htb rate '+ str(upstreamBandwidthCapacityDownloadMbps) + 'mbit ceil ' + str(upstreamBandwidthCapacityDownloadMbps) + 'mbit')
        shell('tc qdisc add dev ' + thisInterface + ' parent ' + str(queue+1) + ':1 ' + fqOrCAKE)
        # Default class - traffic gets passed through this limiter with lower priority if not otherwise classified by Shaper.csv
        # Only 1/4 of defaultClassCapacity is guaranteed (to prevent hitting the ceiling of the upstream); for the most part it serves as an "up to" ceiling.
        # The default class can use up to defaultClassCapacityDownloadMbps when that bandwidth isn't used by known hosts.
        shell('tc class add dev ' + thisInterface + ' parent ' + str(queue+1) + ':1 classid ' + str(queue+1) + ':2 htb rate ' + str(defaultClassCapacityDownloadMbps/4) + 'mbit ceil ' + str(defaultClassCapacityDownloadMbps) + 'mbit prio 5')
        shell('tc qdisc add dev ' + thisInterface + ' parent ' + str(queue+1) + ':2 ' + fqOrCAKE)

    thisInterface = interfaceB
    shell('tc qdisc replace dev ' + thisInterface + ' root handle 7FFF: mq')
    for queue in range(queuesAvailable):
        shell('tc qdisc add dev ' + thisInterface + ' parent 7FFF:' + str(queue+1) + ' handle ' + str(queue+1) + ': htb default 2')
        shell('tc class add dev ' + thisInterface + ' parent ' + str(queue+1) + ': classid ' + str(queue+1) + ':1 htb rate '+ str(upstreamBandwidthCapacityUploadMbps) + 'mbit ceil ' + str(upstreamBandwidthCapacityUploadMbps) + 'mbit')
        shell('tc qdisc add dev ' + thisInterface + ' parent ' + str(queue+1) + ':1 ' + fqOrCAKE)
        # Default class - traffic gets passed through this limiter with lower priority if not otherwise classified by Shaper.csv.
        # Only 1/4 of defaultClassCapacity is guaranteed (to prevent hitting the ceiling of the upstream); for the most part it serves as an "up to" ceiling.
        # The default class can use up to defaultClassCapacityUploadMbps when that bandwidth isn't used by known hosts.
        shell('tc class add dev ' + thisInterface + ' parent ' + str(queue+1) + ':1 classid ' + str(queue+1) + ':2 htb rate ' + str(defaultClassCapacityUploadMbps/4) + 'mbit ceil ' + str(defaultClassCapacityUploadMbps) + 'mbit prio 5')
        shell('tc qdisc add dev ' + thisInterface + ' parent ' + str(queue+1) + ':2 ' + fqOrCAKE)
    print()
    # If shapeBySite == True, shape by Site, AP, and Client
    if shapeBySite:
        currentQueueCounter = 1
        queueMinorCounterDict = {}

        # :1 and :2 are used for root and default classes, so start each counter at :3
        for queueNum in range(queuesAvailable):
            queueMinorCounterDict[queueNum+1] = 3
        for site in sites:
            siteName, siteDownloadMbps, siteUploadMbps, apsForThisSite = site
            print("Adding site " + siteName)
            major = currentQueueCounter
            minor = queueMinorCounterDict[currentQueueCounter]
            thisSiteclassID = str(currentQueueCounter) + ':' + str(minor)
            # HTB + qdisc for each Site
            # Guarantee each Site at least 1/4 of its capacity; allow up to its max capacity when the network is not at peak load
            shell('tc class add dev ' + interfaceA + ' parent ' + str(major) + ':1 classid ' + str(major) + ':' + str(minor) + ' htb rate '+ str(round(siteDownloadMbps/4)) + 'mbit ceil '+ str(round(siteDownloadMbps)) + 'mbit prio 3')
            shell('tc qdisc add dev ' + interfaceA + ' parent ' + str(major) + ':' + str(minor) + ' ' + fqOrCAKE)
            shell('tc class add dev ' + interfaceB + ' parent ' + str(major) + ':1 classid ' + str(major) + ':' + str(minor) + ' htb rate '+ str(round(siteUploadMbps/4)) + 'mbit ceil '+ str(round(siteUploadMbps)) + 'mbit prio 3')
            shell('tc qdisc add dev ' + interfaceB + ' parent ' + str(major) + ':' + str(minor) + ' ' + fqOrCAKE)
            minor += 1
            print()
            for AP in apsForThisSite:
                APname, apDownload, apUpload, parentSite, devicesForThisAP = AP
                print("Adding AP " + APname)
                # HTB + qdisc for each AP
                # Guarantee each AP at least 1/4 of its capacity; allow up to its max capacity when the network is not at peak load
                shell('tc class add dev ' + interfaceA + ' parent ' + thisSiteclassID + ' classid ' + str(major) + ':' + str(minor) + ' htb rate '+ str(round(apDownload/4)) + 'mbit ceil '+ str(round(apDownload)) + 'mbit prio 3')
                shell('tc qdisc add dev ' + interfaceA + ' parent ' + str(major) + ':' + str(minor) + ' ' + fqOrCAKE)
                shell('tc class add dev ' + interfaceB + ' parent ' + thisSiteclassID + ' classid ' + str(major) + ':' + str(minor) + ' htb rate '+ str(round(apUpload/4)) + 'mbit ceil '+ str(round(apUpload)) + 'mbit prio 3')
                shell('tc qdisc add dev ' + interfaceB + ' parent ' + str(major) + ':' + str(minor) + ' ' + fqOrCAKE)
                thisAPclassID = str(currentQueueCounter) + ':' + str(minor)
                minor += 1
                print()
                for device in devicesForThisAP:
                    print("Adding device " + device['hostname'])
                    # HTB + qdisc for each device
                    shell('tc class add dev ' + interfaceA + ' parent ' + thisAPclassID + ' classid ' + str(major) + ':' + str(minor) + ' htb rate '+ str(device['downloadMin']) + 'mbit ceil '+ str(device['downloadMax']) + 'mbit prio 3')
                    shell('tc qdisc add dev ' + interfaceA + ' parent ' + str(major) + ':' + str(minor) + ' ' + fqOrCAKE)
                    shell('tc class add dev ' + interfaceB + ' parent ' + thisAPclassID + ' classid ' + str(major) + ':' + str(minor) + ' htb rate '+ str(device['uploadMin']) + 'mbit ceil '+ str(device['uploadMax']) + 'mbit prio 3')
                    shell('tc qdisc add dev ' + interfaceB + ' parent ' + str(major) + ':' + str(minor) + ' ' + fqOrCAKE)
                    if device['ipv4']:
                        parentString = str(major) + ':'
                        flowIDstring = str(major) + ':' + str(minor)
                        if '/' in device['ipv4']:
                            hosts = list(ipaddress.ip_network(device['ipv4']).hosts())
                            for host in hosts:
                                shell('./xdp-cpumap-tc/src/xdp_iphash_to_cpu_cmdline --add --ip ' + str(host) + ' --cpu ' + str(currentQueueCounter-1) + ' --classid ' + flowIDstring)
                        else:
                            shell('./xdp-cpumap-tc/src/xdp_iphash_to_cpu_cmdline --add --ip ' + device['ipv4'] + ' --cpu ' + str(currentQueueCounter-1) + ' --classid ' + flowIDstring)
                    # Once XDP-CPUMAP-TC handles IPv6, this can be added:
                    #if device['ipv6']:
                    #    parentString = str(major) + ':'
                    #    flowIDstring = str(major) + ':' + str(minor)
                    #    shell('./xdp-cpumap-tc/src/xdp_iphash_to_cpu_cmdline --add --ip ' + device['ipv6'] + ' --cpu ' + str(currentQueueCounter-1) + ' --classid ' + flowIDstring)
                    device['qdisc'] = str(major) + ':' + str(minor)
                    minor += 1
            queueMinorCounterDict[currentQueueCounter] = minor

            currentQueueCounter += 1
            if currentQueueCounter > queuesAvailable:
                currentQueueCounter = 1
    # If shapeBySite == False, shape by AP and Client only, not by Site
    else:
        currentQueueCounter = 1
        queueMinorCounterDict = {}
        # :1 and :2 are used for root and default classes, so start each counter at :3
        for queueNum in range(queuesAvailable):
            queueMinorCounterDict[queueNum+1] = 3

        for AP in devicesByAP:
            currentAPname = AP[0]['AP']
            thisAPdownload = accessPointDownloadMbps[currentAPname]
            thisAPupload = accessPointUploadMbps[currentAPname]
            major = currentQueueCounter
            minor = queueMinorCounterDict[currentQueueCounter]
            thisAPclassID = str(currentQueueCounter) + ':' + str(minor)
            # HTB + qdisc for each AP
            # Guarantee each AP at least 1/4 of its radio capacity; allow up to its max radio capacity when the network is not at peak load
            shell('tc class add dev ' + interfaceA + ' parent ' + str(major) + ':1 classid ' + str(major) + ':' + str(minor) + ' htb rate '+ str(round(thisAPdownload/4)) + 'mbit ceil '+ str(round(thisAPdownload)) + 'mbit prio 3')
            shell('tc qdisc add dev ' + interfaceA + ' parent ' + str(major) + ':' + str(minor) + ' ' + fqOrCAKE)
            shell('tc class add dev ' + interfaceB + ' parent ' + str(major) + ':1 classid ' + str(major) + ':' + str(minor) + ' htb rate '+ str(round(thisAPupload/4)) + 'mbit ceil '+ str(round(thisAPupload)) + 'mbit prio 3')
            shell('tc qdisc add dev ' + interfaceB + ' parent ' + str(major) + ':' + str(minor) + ' ' + fqOrCAKE)
            minor += 1
            for device in AP:
                # HTB + qdisc for each device
                shell('tc class add dev ' + interfaceA + ' parent ' + thisAPclassID + ' classid ' + str(major) + ':' + str(minor) + ' htb rate '+ str(device['downloadMin']) + 'mbit ceil '+ str(device['downloadMax']) + 'mbit prio 3')
                shell('tc qdisc add dev ' + interfaceA + ' parent ' + str(major) + ':' + str(minor) + ' ' + fqOrCAKE)
                shell('tc class add dev ' + interfaceB + ' parent ' + thisAPclassID + ' classid ' + str(major) + ':' + str(minor) + ' htb rate '+ str(device['uploadMin']) + 'mbit ceil '+ str(device['uploadMax']) + 'mbit prio 3')
                shell('tc qdisc add dev ' + interfaceB + ' parent ' + str(major) + ':' + str(minor) + ' ' + fqOrCAKE)
                if device['ipv4']:
                    parentString = str(major) + ':'
                    flowIDstring = str(major) + ':' + str(minor)
                    if '/' in device['ipv4']:
                        hosts = list(ipaddress.ip_network(device['ipv4']).hosts())
                        for host in hosts:
                            shell('./xdp-cpumap-tc/src/xdp_iphash_to_cpu_cmdline --add --ip ' + str(host) + ' --cpu ' + str(currentQueueCounter-1) + ' --classid ' + flowIDstring)
                    else:
                        shell('./xdp-cpumap-tc/src/xdp_iphash_to_cpu_cmdline --add --ip ' + device['ipv4'] + ' --cpu ' + str(currentQueueCounter-1) + ' --classid ' + flowIDstring)
                # Once XDP-CPUMAP-TC handles IPv6, this can be added:
                #if device['ipv6']:
                #    parentString = str(major) + ':'
                #    flowIDstring = str(major) + ':' + str(minor)
                #    shell('./xdp-cpumap-tc/src/xdp_iphash_to_cpu_cmdline --add --ip ' + device['ipv6'] + ' --cpu ' + str(currentQueueCounter-1) + ' --classid ' + flowIDstring)
                device['qdisc'] = str(major) + ':' + str(minor)
                minor += 1
            queueMinorCounterDict[currentQueueCounter] = minor

            currentQueueCounter += 1
            if currentQueueCounter > queuesAvailable:
                currentQueueCounter = 1
    # Save devices to file to allow for statistics runs
    with open('devices.json', 'w') as outfile:
        json.dump(devices, outfile)

    # Done
    currentTimeString = datetime.now().strftime("%d/%m/%Y %H:%M:%S")
    print("Successful run completed on " + currentTimeString)

if __name__ == '__main__':
    refreshShapers()
    print("Program complete")
@ -1,22 +0,0 @@
# v1.0 (IPv4) (Stable)

Released: 11 Dec 2021

## Installation Guide
- 📄 [LibreQoS v1.0 Installation & Usage Guide Physical Server and Ubuntu 21.10](https://github.com/rchac/LibreQoS/wiki/LibreQoS-v1.0-Installation-&-Usage-Guide---Physical-Server-and-Ubuntu-21.10)

## Features

Can now shape by Site, in addition to by AP and by Client.

## Considerations

If you shape by Site, each Site is tied to a queue and CPU core. Sites are evenly distributed across CPUs. Since each CPU can usually only accommodate up to 4Gbps, ensure no single Site will require more than 4Gbps of throughput.

If you shape by Access Point, each Access Point is tied to a queue and CPU core. Access Points are evenly distributed across CPUs. Since each CPU can usually only accommodate up to 4Gbps, ensure no single Access Point will require more than 4Gbps of throughput.

## Limitations

As with v0.9, not yet dual stack: clients can only be shaped by IPv4 address until IPv6 support is added to XDP-CPUMAP-TC. Once that happens, we can shape IPv6 as well.

XDP's cpumap-redirect achieves higher throughput on a server with direct access to the NIC (where XDP offloading is possible) than as a VM with bridges (generic XDP).
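The even distribution of Sites (or Access Points) across CPU cores described above amounts to a round-robin assignment, mirroring how LibreQoS.py cycles its queue counter from 1 to `queuesAvailable`. A minimal sketch (`assign_nodes_to_cores` is a hypothetical helper for illustration, not part of LibreQoS itself):

```python
def assign_nodes_to_cores(nodes, cores_available):
    """Round-robin assignment of shaper nodes (Sites or APs) to CPU cores."""
    assignment = {}
    core = 1
    for node in nodes:
        assignment[node] = core
        core += 1
        if core > cores_available:
            core = 1  # wrap back to the first core
    return assignment

# Five sites spread over four cores: the fifth site wraps back to core 1
print(assign_nodes_to_cores(["Site1", "Site2", "Site3", "Site4", "Site5"], 4))
```

Because the assignment ignores per-node load, a single very busy Site can still saturate its one core — hence the ~4Gbps-per-node guidance above.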
@ -1,6 +0,0 @@
ID,AP,MAC,Hostname,IPv4,IPv6,Download Min,Upload Min,Download Max,Upload Max
3001,AP1,32:3B:FE:B0:92:C1,CPE-Customer1,100.126.0.77,2001:495:1f0f:58a::4/64,25,8,115,18
3002,AP2,AE:EC:D3:70:DD:36,CPE-Customer2,100.126.0.78,2001:495:1f0f:58a::8/64,25,8,115,18
3003,AP3,1C:1E:60:69:88:9A,CPE-Customer3,100.126.0.79,2001:495:1f0f:58a::12/64,25,8,115,18
3004,AP4,11:B1:63:C4:DA:4C,CPE-Customer4,100.126.0.80,2001:495:1f0f:58a::16/64,25,8,115,18
3005,AP5,46:2F:B5:C2:0B:15,CPE-Customer5,100.126.0.81,2001:495:1f0f:58a::20/64,25,8,115,18
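A minimal standalone sketch of how `refreshShapers()` interprets the first sample row above: each plan rate is multiplied by the 1.09 TCP overhead factor and rounded (this is an illustration of the loader's arithmetic, not the loader itself):

```python
# Interpret one Shaper.csv row the way LibreQoS.py's refreshShapers() does:
# plan rates are scaled by the 1.09 TCP/IP overhead factor, then rounded.
tcpOverheadFactor = 1.09

row = "3001,AP1,32:3B:FE:B0:92:C1,CPE-Customer1,100.126.0.77,2001:495:1f0f:58a::4/64,25,8,115,18"
deviceID, AP, mac, hostname, ipv4, ipv6, downloadMin, uploadMin, downloadMax, uploadMax = row.split(',')

device = {
    "id": deviceID,
    "hostname": hostname,
    "downloadMin": round(int(downloadMin) * tcpOverheadFactor),  # 25 -> 27
    "uploadMin": round(int(uploadMin) * tcpOverheadFactor),      # 8 -> 9
    "downloadMax": round(int(downloadMax) * tcpOverheadFactor),  # 115 -> 125
    "uploadMax": round(int(uploadMax) * tcpOverheadFactor),      # 18 -> 20
}
print(device)
```

The scaled values become the HTB `rate` (min) and `ceil` (max) on that device's leaf class.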
@ -1,5 +0,0 @@
Site,Max Download,Max Upload
Site1,920,920
Site2,920,920
Site3,200,30
Site4,100,15
@ -1,36 +0,0 @@
# 'fq_codel' or 'cake diffserv4'
# 'cake diffserv4' is recommended

# fqOrCAKE = 'fq_codel'
fqOrCAKE = 'cake diffserv4'

# How many Mbps are available to the edge of this network
upstreamBandwidthCapacityDownloadMbps = 1000
upstreamBandwidthCapacityUploadMbps = 1000

# Traffic from devices not specified in Shaper.csv will be rate limited by an HTB of this many Mbps
defaultClassCapacityDownloadMbps = 500
defaultClassCapacityUploadMbps = 500

# Interface connected to core router
interfaceA = 'eth1'

# Interface connected to edge router
interfaceB = 'eth2'

# Shape by Site in addition to by AP and Client
shapeBySite = True

# Allow shell commands. When False, commands are only printed to the console without being executed. MUST BE True FOR THE PROGRAM TO FUNCTION
enableActualShellCommands = True

# Add 'sudo' before execution of any shell commands. May be required depending on distribution and environment.
runShellCommandsAsSudo = False

# Optional UISP integration
# Everything before /nms/ on your UISP instance
UISPbaseURL = 'https://examplesite.com'
# UISP auth token
uispAuthToken = ''
# UISP: whether to shape the router at the customer premises, or instead the station radio. When the station radio is in router mode, use 'station'. Otherwise, use 'router'.
shapeRouterOrStation = 'router'
@ -1,11 +0,0 @@
import time
import schedule
from datetime import date
from LibreQoS import refreshShapers

if __name__ == '__main__':
    refreshShapers()
    schedule.every().day.at("04:00").do(refreshShapers)
    while True:
        schedule.run_pending()
        time.sleep(60)  # wait one minute
@ -1 +0,0 @@
Subproject commit 888cc7712f2516d386a837aee67c5b05bd04edfa
@ -1,244 +0,0 @@
# v1.1 beta

import csv
import io
import ipaddress
import json
import os
import subprocess
from datetime import datetime
import multiprocessing

from ispConfig import fqOrCAKE, upstreamBandwidthCapacityDownloadMbps, upstreamBandwidthCapacityUploadMbps, \
    defaultClassCapacityDownloadMbps, defaultClassCapacityUploadMbps, interfaceA, interfaceB, enableActualShellCommands, \
    runShellCommandsAsSudo


def shell(command):
    if enableActualShellCommands:
        if runShellCommandsAsSudo:
            command = 'sudo ' + command
        commands = command.split(' ')
        print(command)
        proc = subprocess.Popen(commands, stdout=subprocess.PIPE)
        for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"):  # or another encoding
            print(line)
    else:
        print(command)


def clearPriorSettings(interfaceA, interfaceB):
    if enableActualShellCommands:
        shell('tc filter delete dev ' + interfaceA)
        shell('tc filter delete dev ' + interfaceA + ' root')
        shell('tc qdisc delete dev ' + interfaceA + ' root')
        shell('tc qdisc delete dev ' + interfaceA)
        shell('tc filter delete dev ' + interfaceB)
        shell('tc filter delete dev ' + interfaceB + ' root')
        shell('tc qdisc delete dev ' + interfaceB + ' root')
        shell('tc qdisc delete dev ' + interfaceB)
def refreshShapers():
    tcpOverheadFactor = 1.09

    # Load Devices
    devices = []
    with open('Shaper.csv') as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        next(csv_reader)
        for row in csv_reader:
            deviceID, ParentNode, mac, hostname, ipv4, ipv6, downloadMin, uploadMin, downloadMax, uploadMax = row
            ipv4 = ipv4.strip()
            ipv6 = ipv6.strip()
            if ParentNode == "":
                ParentNode = "none"
            ParentNode = ParentNode.strip()
            thisDevice = {
                "id": deviceID,
                "mac": mac,
                "ParentNode": ParentNode,
                "hostname": hostname,
                "ipv4": ipv4,
                "ipv6": ipv6,
                "downloadMin": round(int(downloadMin)*tcpOverheadFactor),
                "uploadMin": round(int(uploadMin)*tcpOverheadFactor),
                "downloadMax": round(int(downloadMax)*tcpOverheadFactor),
                "uploadMax": round(int(uploadMax)*tcpOverheadFactor),
                "qdisc": '',
            }
            devices.append(thisDevice)

    # Load network hierarchy
    with open('network.json', 'r') as j:
        network = json.loads(j.read())

    # Find the bandwidth minimums for each node by combining the minimums of devices lower in that node's hierarchy
    def findBandwidthMins(data, depth):
        totalDownload = 0
        totalUpload = 0
        for elem in data:
            # Reset per-node accumulators so one node's minimums don't include its siblings'
            minDownload = 0
            minUpload = 0
            for device in devices:
                if elem == device['ParentNode']:
                    minDownload += device['downloadMin']
                    minUpload += device['uploadMin']
            if 'children' in data[elem]:
                minDL, minUL = findBandwidthMins(data[elem]['children'], depth+1)
                minDownload += minDL
                minUpload += minUL
            data[elem]['downloadBandwidthMbpsMin'] = minDownload
            data[elem]['uploadBandwidthMbpsMin'] = minUpload
            totalDownload += minDownload
            totalUpload += minUpload
        return totalDownload, totalUpload

    minDownload, minUpload = findBandwidthMins(network, 0)
    # Clear prior settings
    clearPriorSettings(interfaceA, interfaceB)

    # Find queues and CPU cores available. Use the min of the two as queuesAvailable
    queuesAvailable = 0
    path = '/sys/class/net/' + interfaceA + '/queues/'
    directory_contents = os.listdir(path)
    for item in directory_contents:
        if "tx-" in str(item):
            queuesAvailable += 1

    print("NIC queues:\t" + str(queuesAvailable))
    cpuCount = multiprocessing.cpu_count()
    print("CPU cores:\t" + str(cpuCount))
    queuesAvailable = min(queuesAvailable, cpuCount)

    # XDP-CPUMAP-TC
    shell('./xdp-cpumap-tc/bin/xps_setup.sh -d ' + interfaceA + ' --default --disable')
    shell('./xdp-cpumap-tc/bin/xps_setup.sh -d ' + interfaceB + ' --default --disable')
    shell('./xdp-cpumap-tc/src/xdp_iphash_to_cpu --dev ' + interfaceA + ' --lan')
    shell('./xdp-cpumap-tc/src/xdp_iphash_to_cpu --dev ' + interfaceB + ' --wan')
    shell('./xdp-cpumap-tc/src/xdp_iphash_to_cpu_cmdline --clear')
    shell('./xdp-cpumap-tc/src/tc_classify --dev-egress ' + interfaceA)
    shell('./xdp-cpumap-tc/src/tc_classify --dev-egress ' + interfaceB)
||||
    # Create MQ qdisc for each interface
    thisInterface = interfaceA
    shell('tc qdisc replace dev ' + thisInterface + ' root handle 7FFF: mq')
    for queue in range(queuesAvailable):
        shell('tc qdisc add dev ' + thisInterface + ' parent 7FFF:' + hex(queue+1) + ' handle ' + hex(queue+1) + ': htb default 2')
        shell('tc class add dev ' + thisInterface + ' parent ' + hex(queue+1) + ': classid ' + hex(queue+1) + ':1 htb rate '+ str(upstreamBandwidthCapacityDownloadMbps) + 'mbit ceil ' + str(upstreamBandwidthCapacityDownloadMbps) + 'mbit')
        shell('tc qdisc add dev ' + thisInterface + ' parent ' + hex(queue+1) + ':1 ' + fqOrCAKE)
        # Default class - traffic gets passed through this limiter with lower priority if not otherwise classified by Shaper.csv
        # Only 1/4 of defaultClassCapacity is guaranteed (to prevent hitting the ceiling of the upstream); for the most part it serves as an "up to" ceiling.
        # The default class can use up to defaultClassCapacityDownloadMbps when that bandwidth isn't used by known hosts.
        shell('tc class add dev ' + thisInterface + ' parent ' + hex(queue+1) + ':1 classid ' + hex(queue+1) + ':2 htb rate ' + str(defaultClassCapacityDownloadMbps/4) + 'mbit ceil ' + str(defaultClassCapacityDownloadMbps) + 'mbit prio 5')
        shell('tc qdisc add dev ' + thisInterface + ' parent ' + hex(queue+1) + ':2 ' + fqOrCAKE)

    thisInterface = interfaceB
    shell('tc qdisc replace dev ' + thisInterface + ' root handle 7FFF: mq')
    for queue in range(queuesAvailable):
        shell('tc qdisc add dev ' + thisInterface + ' parent 7FFF:' + hex(queue+1) + ' handle ' + hex(queue+1) + ': htb default 2')
        shell('tc class add dev ' + thisInterface + ' parent ' + hex(queue+1) + ': classid ' + hex(queue+1) + ':1 htb rate '+ str(upstreamBandwidthCapacityUploadMbps) + 'mbit ceil ' + str(upstreamBandwidthCapacityUploadMbps) + 'mbit')
        shell('tc qdisc add dev ' + thisInterface + ' parent ' + hex(queue+1) + ':1 ' + fqOrCAKE)
        # Default class - traffic gets passed through this limiter with lower priority if not otherwise classified by Shaper.csv.
        # Only 1/4 of defaultClassCapacity is guaranteed (to prevent hitting the ceiling of the upstream); for the most part it serves as an "up to" ceiling.
        # The default class can use up to defaultClassCapacityUploadMbps when that bandwidth isn't used by known hosts.
        shell('tc class add dev ' + thisInterface + ' parent ' + hex(queue+1) + ':1 classid ' + hex(queue+1) + ':2 htb rate ' + str(defaultClassCapacityUploadMbps/4) + 'mbit ceil ' + str(defaultClassCapacityUploadMbps) + 'mbit prio 5')
        shell('tc qdisc add dev ' + thisInterface + ' parent ' + hex(queue+1) + ':2 ' + fqOrCAKE)
    print()
||||
    # Parse network.json. For each tier, create the corresponding HTB and leaf classes
    devicesShaped = []
    parentNodes = []
    def traverseNetwork(data, depth, major, minor, queue, parentClassID, parentMaxDL, parentMaxUL):
        tabs = '   ' * depth
        for elem in data:
            print(tabs + elem)
            elemClassID = hex(major) + ':' + hex(minor)
            # Cap based on this node's max bandwidth, or the parent node's max bandwidth, whichever is lower
            elemDownloadMax = min(data[elem]['downloadBandwidthMbps'], parentMaxDL)
            elemUploadMax = min(data[elem]['uploadBandwidthMbps'], parentMaxUL)
            # Set the guaranteed rate (HTB rate) to 95% of the ceiling (HTB ceil), so rate can never exceed ceil
            elemDownloadMin = round(elemDownloadMax*.95)
            elemUploadMin = round(elemUploadMax*.95)
            print(tabs + "Download: " + str(elemDownloadMin) + " to " + str(elemDownloadMax) + " Mbps")
            print(tabs + "Upload: " + str(elemUploadMin) + " to " + str(elemUploadMax) + " Mbps")
            print(tabs, end='')
            shell('tc class add dev ' + interfaceA + ' parent ' + parentClassID + ' classid ' + hex(major) + ':' + hex(minor) + ' htb rate '+ str(round(elemDownloadMin)) + 'mbit ceil '+ str(round(elemDownloadMax)) + 'mbit prio 3')
            print(tabs, end='')
            shell('tc class add dev ' + interfaceB + ' parent ' + parentClassID + ' classid ' + hex(major) + ':' + hex(minor) + ' htb rate '+ str(round(elemUploadMin)) + 'mbit ceil '+ str(round(elemUploadMax)) + 'mbit prio 3')
            print()
            thisParentNode = {
                "parentNodeName": elem,
                "classID": elemClassID,
                "downloadMax": elemDownloadMax,
                "uploadMax": elemUploadMax,
            }
            parentNodes.append(thisParentNode)
            minor += 1
            for device in devices:
                # If a device from Shaper.csv lists this elem as its Parent Node, attach it as a leaf to this elem's HTB
                if elem == device['ParentNode']:
                    maxDownload = min(device['downloadMax'], elemDownloadMax)
                    maxUpload = min(device['uploadMax'], elemUploadMax)
                    minDownload = min(device['downloadMin'], maxDownload)
                    minUpload = min(device['uploadMin'], maxUpload)
                    print(tabs + '   ' + device['hostname'])
                    print(tabs + '   ' + "Download: " + str(minDownload) + " to " + str(maxDownload) + " Mbps")
                    print(tabs + '   ' + "Upload: " + str(minUpload) + " to " + str(maxUpload) + " Mbps")
                    print(tabs + '   ', end='')
                    shell('tc class add dev ' + interfaceA + ' parent ' + elemClassID + ' classid ' + hex(major) + ':' + hex(minor) + ' htb rate '+ str(minDownload) + 'mbit ceil '+ str(maxDownload) + 'mbit prio 3')
                    print(tabs + '   ', end='')
                    shell('tc qdisc add dev ' + interfaceA + ' parent ' + hex(major) + ':' + hex(minor) + ' ' + fqOrCAKE)
                    print(tabs + '   ', end='')
                    shell('tc class add dev ' + interfaceB + ' parent ' + elemClassID + ' classid ' + hex(major) + ':' + hex(minor) + ' htb rate '+ str(minUpload) + 'mbit ceil '+ str(maxUpload) + 'mbit prio 3')
                    print(tabs + '   ', end='')
                    shell('tc qdisc add dev ' + interfaceB + ' parent ' + hex(major) + ':' + hex(minor) + ' ' + fqOrCAKE)
                    flowIDstring = hex(major) + ':' + hex(minor)
                    if device['ipv4']:
                        parentString = hex(major) + ':'
                        if '/' in device['ipv4']:
                            hosts = list(ipaddress.ip_network(device['ipv4']).hosts())
                            for host in hosts:
                                print(tabs + '   ', end='')
                                shell('./xdp-cpumap-tc/src/xdp_iphash_to_cpu_cmdline --add --ip ' + str(host) + ' --cpu ' + hex(queue-1) + ' --classid ' + flowIDstring)
                        else:
                            print(tabs + '   ', end='')
                            shell('./xdp-cpumap-tc/src/xdp_iphash_to_cpu_cmdline --add --ip ' + device['ipv4'] + ' --cpu ' + hex(queue-1) + ' --classid ' + flowIDstring)
                    device['qdisc'] = flowIDstring
                    if device['hostname'] not in devicesShaped:
                        devicesShaped.append(device['hostname'])
                    print()
                    minor += 1
            # Recursively call this function for children nodes attached to this node
            if 'children' in data[elem]:
                # Keep tabs on the minor counter, since class IDs must not repeat; bring the minor counter back from the recursive call
                minor = traverseNetwork(data[elem]['children'], depth+1, major, minor+1, queue, elemClassID, elemDownloadMax, elemUploadMax)
            # If this is a top-level node, increment to the next queue / CPU core
            if depth == 0:
                if queue >= queuesAvailable:
                    queue = 1
                    major = queue
                else:
                    queue += 1
                    major += 1
        return minor

    # The actual call to the recursive traverseNetwork() function. finalMinor is not used.
    finalMinor = traverseNetwork(network, 0, major=1, minor=3, queue=1, parentClassID="1:1", parentMaxDL=upstreamBandwidthCapacityDownloadMbps, parentMaxUL=upstreamBandwidthCapacityUploadMbps)
||||
#Recap
|
||||
for device in devices:
|
||||
if device['hostname'] not in devicesShaped:
|
||||
print('Device ' + device['hostname'] + ' was not shaped. Please check to ensure its parent Node is listed in network.json.')
|
||||
|
||||
#Save for stats
|
||||
with open('statsByDevice.json', 'w') as infile:
|
||||
json.dump(devices, infile)
|
||||
with open('statsByParentNode.json', 'w') as infile:
|
||||
json.dump(parentNodes, infile)
|
||||
|
||||
# Done
|
||||
currentTimeString = datetime.now().strftime("%d/%m/%Y %H:%M:%S")
|
||||
print("Successful run completed on " + currentTimeString)
|
||||
|
||||
if __name__ == '__main__':
|
||||
refreshShapers()
|
||||
print("Program complete")
|
@ -1,28 +0,0 @@
# v1.1 (IPv4) (Beta)

Released: 2022

<img alt="LibreQoS" src="https://raw.githubusercontent.com/rchac/LibreQoS/main/docs/v1.1-alpha-preview.jpg">

## Installation Guide
- 📄 [LibreQoS v1.1 Installation & Usage Guide Physical Server and Ubuntu 21.10](https://github.com/rchac/LibreQoS/wiki/LibreQoS-v1.1-Installation-&-Usage-Guide-Physical-Server-and-Ubuntu-21.10)

## Features

- Tested up to 11Gbps of asymmetrical throughput in a real-world deployment with 5000+ clients.

- The network hierarchy can be mapped in the network.json file. This allows for both simple network hierarchies (Site>AP>Client) as well as much more complex ones (Site>Site>Micro-PoP>AP>Site>AP>Client).

- Graphing of bandwidth to InfluxDB. Parses bandwidth data from the "tc -s qdisc show" command, minimizing CPU use.

- Graphing of TCP latency to InfluxDB - via PPing integration.

## Considerations

- Any top-level parent node is tied to a single CPU core, and top-level nodes are evenly distributed across CPUs. Since each CPU core can usually only accommodate up to 4Gbps, ensure that no single top-level parent node will require more than 4Gbps of throughput.

## Limitations

- As with v0.9 and v1.0, not yet dual stack: clients can only be shaped by IPv4 address until IPv6 support is added to XDP-CPUMAP-TC. Once that happens, we can shape IPv6 as well.

- XDP's cpumap-redirect achieves higher throughput on a server with direct access to the NIC (where XDP offloading is possible) than as a VM with bridges (generic XDP).
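The hierarchy bullet above maps to network.json's nested structure: each node may carry bandwidth limits and a `children` dict of sub-nodes. A hypothetical fragment to illustrate the nesting (node names are invented, and the exact bandwidth key names are an assumption; only `children` is confirmed by the scripts in this diff):

```python
import json

# Hypothetical network.json fragment: Site_1 contains one AP. The
# "downloadBandwidthMbps"/"uploadBandwidthMbps" key names are assumptions.
network = json.loads("""
{
  "Site_1": {
    "downloadBandwidthMbps": 1000,
    "uploadBandwidthMbps": 1000,
    "children": {
      "AP_A": {"downloadBandwidthMbps": 500, "uploadBandwidthMbps": 100}
    }
  }
}
""")

def depth(tree):
    # Number of parent-node levels in the hierarchy (1 for a flat tree).
    return 1 + max((depth(n['children']) for n in tree.values() if 'children' in n), default=0)
```

Clients themselves live in Shaper.csv and are attached to a node here via their "AP" (parent node) column.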
@ -1,12 +0,0 @@
ID,AP,MAC,Hostname,IPv4,IPv6,Download Min,Upload Min,Download Max,Upload Max
,AP_A,,Device 1,100.64.0.1,,25,5,155,20
,AP_A,,Device 2,100.64.0.2,,25,5,105,18
,AP_9,,Device 3,100.64.0.3,,25,5,105,18
,AP_9,,Device 4,100.64.0.4,,25,5,105,18
,AP_11,,Device 5,100.64.0.5,,25,5,105,18
,AP_11,,Device 6,100.64.0.6,,25,5,90,15
,AP_1,,Device 7,100.64.0.7,,25,5,155,20
,AP_1,,Device 8,100.64.0.8,,25,5,105,18
,AP_7,,Device 9,100.64.0.9,,25,5,105,18
,AP_7,,Device 10,100.64.0.10,,25,5,105,18
,Site_1,,Device 11,100.64.0.11,,25,5,105,18
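The rows above follow the header ID,AP,MAC,Hostname,IPv4,IPv6,Download Min,Upload Min,Download Max,Upload Max; the scripts below read them with the csv module, skipping the header and casting the rate columns to int. A self-contained sketch of that parse (the `read_shaper` helper name is invented; the column handling mirrors the scripts):

```python
import csv
import io

SAMPLE = """ID,AP,MAC,Hostname,IPv4,IPv6,Download Min,Upload Min,Download Max,Upload Max
,AP_A,,Device 1,100.64.0.1,,25,5,155,20
"""

def read_shaper(f):
    reader = csv.reader(f)
    next(reader)  # skip the header row
    out = []
    for row in reader:
        devID, ap, mac, hostname, ipv4, ipv6, dlMin, ulMin, dlMax, ulMax = row
        out.append({'ParentNode': ap.strip(), 'hostname': hostname,
                    'ipv4': ipv4.strip(), 'downloadMax': int(dlMax),
                    'uploadMax': int(ulMax)})
    return out

devices = read_shaper(io.StringIO(SAMPLE))
```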
@ -1,153 +0,0 @@
import os
import subprocess
from subprocess import PIPE
import io
import decimal
import json
from operator import itemgetter
from prettytable import PrettyTable
from ispConfig import fqOrCAKE, interfaceA, interfaceB, influxDBBucket, influxDBOrg, influxDBtoken, influxDBurl
from datetime import date, datetime, timedelta
from itertools import groupby
from influxdb_client import InfluxDBClient, Point, Dialect
from influxdb_client.client.write_api import SYNCHRONOUS
import dateutil.parser

def getDeviceStats(devices):
    interfaces = [interfaceA, interfaceB]
    for interface in interfaces:
        command = 'tc -j -s qdisc show dev ' + interface
        commands = command.split(' ')
        tcShowResults = subprocess.run(commands, stdout=subprocess.PIPE).stdout.decode('utf-8')
        if interface == interfaceA:
            interfaceAjson = json.loads(tcShowResults)
        else:
            interfaceBjson = json.loads(tcShowResults)
    for device in devices:
        if 'timeQueried' in device:
            device['priorQueryTime'] = device['timeQueried']
        for interface in interfaces:
            if interface == interfaceA:
                jsonVersion = interfaceAjson
            else:
                jsonVersion = interfaceBjson
            for element in jsonVersion:
                if "parent" in element:
                    if element['parent'] == device['qdisc']:
                        drops = int(element['drops'])
                        packets = int(element['packets'])
                        bytesSent = int(element['bytes'])
                        if interface == interfaceA:
                            if 'bytesSentDownload' in device:
                                device['priorQueryBytesDownload'] = device['bytesSentDownload']
                            device['bytesSentDownload'] = bytesSent
                        else:
                            if 'bytesSentUpload' in device:
                                device['priorQueryBytesUpload'] = device['bytesSentUpload']
                            device['bytesSentUpload'] = bytesSent
        device['timeQueried'] = datetime.now().isoformat()
    for device in devices:
        if 'priorQueryTime' in device:
            bytesDLSinceLastQuery = device['bytesSentDownload'] - device['priorQueryBytesDownload']
            bytesULSinceLastQuery = device['bytesSentUpload'] - device['priorQueryBytesUpload']
            currentQueryTime = datetime.fromisoformat(device['timeQueried'])
            priorQueryTime = datetime.fromisoformat(device['priorQueryTime'])
            delta = currentQueryTime - priorQueryTime
            deltaSeconds = delta.total_seconds()
            if deltaSeconds > 0:
                # 1 Mbit = 125,000 bytes
                mbpsDownload = (bytesDLSinceLastQuery / 125000) / deltaSeconds
                mbpsUpload = (bytesULSinceLastQuery / 125000) / deltaSeconds
            else:
                mbpsDownload = 0
                mbpsUpload = 0
            device['mbpsDownloadSinceLastQuery'] = mbpsDownload
            device['mbpsUploadSinceLastQuery'] = mbpsUpload
        else:
            device['mbpsDownloadSinceLastQuery'] = 0
            device['mbpsUploadSinceLastQuery'] = 0
    return devices

def getParentNodeStats(parentNodes, devices):
    for parentNode in parentNodes:
        thisNodeMbpsDownload = 0
        thisNodeMbpsUpload = 0
        for device in devices:
            if device['ParentNode'] == parentNode['parentNodeName']:
                thisNodeMbpsDownload += device['mbpsDownloadSinceLastQuery']
                thisNodeMbpsUpload += device['mbpsUploadSinceLastQuery']
        parentNode['mbpsDownloadSinceLastQuery'] = thisNodeMbpsDownload
        parentNode['mbpsUploadSinceLastQuery'] = thisNodeMbpsUpload
    return parentNodes

def refreshGraphs():
    startTime = datetime.now()
    with open('statsByParentNode.json', 'r') as j:
        parentNodes = json.loads(j.read())

    with open('statsByDevice.json', 'r') as j:
        devices = json.loads(j.read())

    print("Retrieving device statistics")
    devices = getDeviceStats(devices)
    print("Computing parent node statistics")
    parentNodes = getParentNodeStats(parentNodes, devices)
    print("Writing data to InfluxDB")
    bucket = influxDBBucket
    org = influxDBOrg
    token = influxDBtoken
    url = influxDBurl
    client = InfluxDBClient(
        url=url,
        token=token,
        org=org
    )
    write_api = client.write_api(write_options=SYNCHRONOUS)

    queriesToSend = []
    for device in devices:
        mbpsDownload = float(device['mbpsDownloadSinceLastQuery'])
        mbpsUpload = float(device['mbpsUploadSinceLastQuery'])
        if (mbpsDownload > 0) and (mbpsUpload > 0):
            percentUtilizationDownload = float(mbpsDownload / device['downloadMax'])
            percentUtilizationUpload = float(mbpsUpload / device['uploadMax'])

            p = Point('Bandwidth').tag("Device", device['hostname']).tag("ParentNode", device['ParentNode']).field("Download", mbpsDownload)
            queriesToSend.append(p)
            p = Point('Bandwidth').tag("Device", device['hostname']).tag("ParentNode", device['ParentNode']).field("Upload", mbpsUpload)
            queriesToSend.append(p)
            p = Point('Utilization').tag("Device", device['hostname']).tag("ParentNode", device['ParentNode']).field("Download", percentUtilizationDownload)
            queriesToSend.append(p)
            p = Point('Utilization').tag("Device", device['hostname']).tag("ParentNode", device['ParentNode']).field("Upload", percentUtilizationUpload)
            queriesToSend.append(p)

    for parentNode in parentNodes:
        mbpsDownload = float(parentNode['mbpsDownloadSinceLastQuery'])
        mbpsUpload = float(parentNode['mbpsUploadSinceLastQuery'])
        if (mbpsDownload > 0) and (mbpsUpload > 0):
            percentUtilizationDownload = float(mbpsDownload / parentNode['downloadMax'])
            percentUtilizationUpload = float(mbpsUpload / parentNode['uploadMax'])

            p = Point('Bandwidth').tag("Device", parentNode['parentNodeName']).tag("ParentNode", parentNode['parentNodeName']).field("Download", mbpsDownload)
            queriesToSend.append(p)
            p = Point('Bandwidth').tag("Device", parentNode['parentNodeName']).tag("ParentNode", parentNode['parentNodeName']).field("Upload", mbpsUpload)
            queriesToSend.append(p)
            p = Point('Utilization').tag("Device", parentNode['parentNodeName']).tag("ParentNode", parentNode['parentNodeName']).field("Download", percentUtilizationDownload)
            queriesToSend.append(p)
            p = Point('Utilization').tag("Device", parentNode['parentNodeName']).tag("ParentNode", parentNode['parentNodeName']).field("Upload", percentUtilizationUpload)
            queriesToSend.append(p)

    write_api.write(bucket=bucket, record=queriesToSend)
    print("Added " + str(len(queriesToSend)) + " points to InfluxDB.")
    client.close()

    with open('statsByParentNode.json', 'w') as outfile:
        json.dump(parentNodes, outfile)

    with open('statsByDevice.json', 'w') as outfile:
        json.dump(devices, outfile)
    endTime = datetime.now()
    durationSeconds = round((endTime - startTime).total_seconds())
    print("Graphs updated within " + str(durationSeconds) + " seconds.")

if __name__ == '__main__':
    refreshGraphs()
@ -1,186 +0,0 @@
import json
import subprocess
from datetime import datetime

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

from ispConfig import interfaceA, interfaceB, influxDBBucket, influxDBOrg, influxDBtoken, influxDBurl


def getInterfaceStats(interface):
    command = 'tc -j -s qdisc show dev ' + interface
    jsonAr = json.loads(subprocess.run(command.split(' '), stdout=subprocess.PIPE).stdout.decode('utf-8'))
    jsonDict = {}
    for element in filter(lambda e: 'parent' in e, jsonAr):
        flowID = ':'.join(map(lambda p: f'0x{p}', element['parent'].split(':')[0:2]))
        jsonDict[flowID] = element
    del jsonAr
    return jsonDict


def chunk_list(l, n):
    for i in range(0, len(l), n):
        yield l[i:i + n]


def getDeviceStats(devices):
    interfaces = [interfaceA, interfaceB]
    ifaceStats = list(map(getInterfaceStats, interfaces))

    for device in devices:
        if 'timeQueried' in device:
            device['priorQueryTime'] = device['timeQueried']
        for (interface, stats, dirSuffix) in zip(interfaces, ifaceStats, ['Download', 'Upload']):

            element = stats[device['qdisc']] if device['qdisc'] in stats else False

            if element:
                bytesSent = int(element['bytes'])
                drops = int(element['drops'])
                packets = int(element['packets'])

                if 'bytesSent' + dirSuffix in device:
                    device['priorQueryBytes' + dirSuffix] = device['bytesSent' + dirSuffix]
                device['bytesSent' + dirSuffix] = bytesSent

                if 'dropsSent' + dirSuffix in device:
                    device['priorDropsSent' + dirSuffix] = device['dropsSent' + dirSuffix]
                device['dropsSent' + dirSuffix] = drops

                if 'packetsSent' + dirSuffix in device:
                    device['priorPacketsSent' + dirSuffix] = device['packetsSent' + dirSuffix]
                device['packetsSent' + dirSuffix] = packets

        device['timeQueried'] = datetime.now().isoformat()
    for device in devices:
        device['bitsDownloadSinceLastQuery'] = device['bitsUploadSinceLastQuery'] = 0
        if 'priorQueryTime' in device:
            try:
                bytesDLSinceLastQuery = device['bytesSentDownload'] - device['priorQueryBytesDownload']
                bytesULSinceLastQuery = device['bytesSentUpload'] - device['priorQueryBytesUpload']
            except KeyError:
                bytesDLSinceLastQuery = bytesULSinceLastQuery = 0

            currentQueryTime = datetime.fromisoformat(device['timeQueried'])
            priorQueryTime = datetime.fromisoformat(device['priorQueryTime'])
            deltaSeconds = (currentQueryTime - priorQueryTime).total_seconds()

            device['bitsDownloadSinceLastQuery'] = round(
                (bytesDLSinceLastQuery * 8) / deltaSeconds) if deltaSeconds > 0 else 0
            device['bitsUploadSinceLastQuery'] = round(
                (bytesULSinceLastQuery * 8) / deltaSeconds) if deltaSeconds > 0 else 0

    return devices


def getParentNodeStats(parentNodes, devices):
    for parentNode in parentNodes:
        thisNodeBitsDownload = 0
        thisNodeBitsUpload = 0
        for device in devices:
            if device['ParentNode'] == parentNode['parentNodeName']:
                thisNodeBitsDownload += device['bitsDownloadSinceLastQuery']
                thisNodeBitsUpload += device['bitsUploadSinceLastQuery']

        parentNode['bitsDownloadSinceLastQuery'] = thisNodeBitsDownload
        parentNode['bitsUploadSinceLastQuery'] = thisNodeBitsUpload
    return parentNodes


def getParentNodeDict(data, depth, parentNodeNameDict):
    if parentNodeNameDict is None:
        parentNodeNameDict = {}

    for elem in data:
        if 'children' in data[elem]:
            for child in data[elem]['children']:
                parentNodeNameDict[child] = elem
            tempDict = getParentNodeDict(data[elem]['children'], depth + 1, parentNodeNameDict)
            parentNodeNameDict = dict(parentNodeNameDict, **tempDict)
    return parentNodeNameDict


def parentNodeNameDictPull():
    # Load the network hierarchy
    with open('network.json', 'r') as j:
        network = json.loads(j.read())
    parentNodeNameDict = getParentNodeDict(network, 0, None)
    return parentNodeNameDict

def refreshBandwidthGraphs():
    startTime = datetime.now()
    with open('statsByParentNode.json', 'r') as j:
        parentNodes = json.loads(j.read())

    with open('statsByDevice.json', 'r') as j:
        devices = json.loads(j.read())

    parentNodeNameDict = parentNodeNameDictPull()

    print("Retrieving device statistics")
    devices = getDeviceStats(devices)
    print("Computing parent node statistics")
    parentNodes = getParentNodeStats(parentNodes, devices)
    print("Writing data to InfluxDB")
    client = InfluxDBClient(
        url=influxDBurl,
        token=influxDBtoken,
        org=influxDBOrg
    )
    write_api = client.write_api(write_options=SYNCHRONOUS)

    chunkedDevices = list(chunk_list(devices, 200))

    queriesToSendCount = 0
    for chunk in chunkedDevices:
        queriesToSend = []
        for device in chunk:
            bitsDownload = int(device['bitsDownloadSinceLastQuery'])
            bitsUpload = int(device['bitsUploadSinceLastQuery'])
            if (bitsDownload > 0) and (bitsUpload > 0):
                percentUtilizationDownload = round((bitsDownload / round(device['downloadMax'] * 1000000)), 4)
                percentUtilizationUpload = round((bitsUpload / round(device['uploadMax'] * 1000000)), 4)
                p = Point('Bandwidth').tag("Device", device['hostname']).tag("ParentNode", device['ParentNode']).tag("Type", "Device").field("Download", bitsDownload).field("Upload", bitsUpload)
                queriesToSend.append(p)
                p = Point('Utilization').tag("Device", device['hostname']).tag("ParentNode", device['ParentNode']).tag("Type", "Device").field("Download", percentUtilizationDownload).field("Upload", percentUtilizationUpload)
                queriesToSend.append(p)

        write_api.write(bucket=influxDBBucket, record=queriesToSend)
        # print("Added " + str(len(queriesToSend)) + " points to InfluxDB.")
        queriesToSendCount += len(queriesToSend)

    queriesToSend = []
    for parentNode in parentNodes:
        bitsDownload = int(parentNode['bitsDownloadSinceLastQuery'])
        bitsUpload = int(parentNode['bitsUploadSinceLastQuery'])
        if (bitsDownload > 0) and (bitsUpload > 0):
            percentUtilizationDownload = round((bitsDownload / round(parentNode['downloadMax'] * 1000000)), 4)
            percentUtilizationUpload = round((bitsUpload / round(parentNode['uploadMax'] * 1000000)), 4)
            p = Point('Bandwidth').tag("Device", parentNode['parentNodeName']).tag("ParentNode", parentNode['parentNodeName']).tag("Type", "Parent Node").field("Download", bitsDownload).field("Upload", bitsUpload)
            queriesToSend.append(p)
            p = Point('Utilization').tag("Device", parentNode['parentNodeName']).tag("ParentNode", parentNode['parentNodeName']).tag("Type", "Parent Node").field("Download", percentUtilizationDownload).field("Upload", percentUtilizationUpload)
            queriesToSend.append(p)

    write_api.write(bucket=influxDBBucket, record=queriesToSend)
    # print("Added " + str(len(queriesToSend)) + " points to InfluxDB.")
    queriesToSendCount += len(queriesToSend)
    print("Added " + str(queriesToSendCount) + " points to InfluxDB.")

    client.close()

    with open('statsByParentNode.json', 'w') as outfile:
        json.dump(parentNodes, outfile)

    with open('statsByDevice.json', 'w') as outfile:
        json.dump(devices, outfile)

    endTime = datetime.now()
    durationSeconds = round((endTime - startTime).total_seconds(), 2)
    print("Graphs updated within " + str(durationSeconds) + " seconds.")

if __name__ == '__main__':
    refreshBandwidthGraphs()
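The chunk_list() generator above batches device points into groups of 200 so each InfluxDB write stays a manageable size. Its behavior in isolation:

```python
def chunk_list(l, n):
    # Yield successive n-sized slices of l; the final slice may be shorter.
    for i in range(0, len(l), n):
        yield l[i:i + n]

# Five items in chunks of two: two full chunks plus a one-item remainder.
chunks = list(chunk_list(list(range(5)), 2))
```

Batching the writes bounds the memory held per request and keeps a single slow write from stalling the whole run.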
@ -1,132 +0,0 @@
import os
import subprocess
from subprocess import PIPE
import io
import decimal
import json
from ispConfig import fqOrCAKE, interfaceA, interfaceB, influxDBBucket, influxDBOrg, influxDBtoken, influxDBurl, ppingLocation
from datetime import date, datetime, timedelta
from influxdb_client import InfluxDBClient, Point, Dialect
from influxdb_client.client.write_api import SYNCHRONOUS
import dateutil.parser

def getLatencies(devices, secondsToRun):
    interfaces = [interfaceA, interfaceB]
    tcpLatency = 0
    listOfAllDiffs = []
    maxLatencyRecordable = 200
    matchableIPs = []
    for device in devices:
        matchableIPs.append(device['ipv4'])

    rttDict = {}
    jitterDict = {}
    # for interface in interfaces:
    command = "./pping -i " + interfaceA + " -s " + str(secondsToRun) + " -m"
    commands = command.split(' ')
    wd = ppingLocation
    tcShowResults = subprocess.run(command, shell=True, cwd=wd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL).stdout.decode('utf-8').splitlines()
    for line in tcShowResults:
        if len(line) > 59:
            rtt1 = float(line[18:27]) * 1000
            rtt2 = float(line[27:36]) * 1000
            toAndFrom = line[38:].split(' ')[3]
            fromIP = toAndFrom.split('+')[0].split(':')[0]
            toIP = toAndFrom.split('+')[1].split(':')[0]
            matchedIP = ''
            if fromIP in matchableIPs:
                matchedIP = fromIP
            elif toIP in matchableIPs:
                matchedIP = toIP
            jitter = rtt1 - rtt2
            # Cap at the ceiling
            if rtt1 >= maxLatencyRecordable:
                rtt1 = 200
            # Keep the lowest observed RTT
            if matchedIP in rttDict:
                if rtt1 < rttDict[matchedIP]:
                    rttDict[matchedIP] = rtt1
                    jitterDict[matchedIP] = jitter
            else:
                rttDict[matchedIP] = rtt1
                jitterDict[matchedIP] = jitter
    for device in devices:
        diffsForThisDevice = []
        if device['ipv4'] in rttDict:
            device['tcpLatency'] = rttDict[device['ipv4']]
        else:
            device['tcpLatency'] = None
        if device['ipv4'] in jitterDict:
            device['tcpJitter'] = jitterDict[device['ipv4']]
        else:
            device['tcpJitter'] = None
    return devices

def getParentNodeStats(parentNodes, devices):
    for parentNode in parentNodes:
        acceptableLatencies = []
        for device in devices:
            if device['ParentNode'] == parentNode['parentNodeName']:
                if device['tcpLatency'] != None:
                    acceptableLatencies.append(device['tcpLatency'])

        if len(acceptableLatencies) > 0:
            parentNode['tcpLatency'] = sum(acceptableLatencies) / len(acceptableLatencies)
        else:
            parentNode['tcpLatency'] = None
    return parentNodes

def refreshLatencyGraphs(secondsToRun):
    startTime = datetime.now()
    with open('statsByParentNode.json', 'r') as j:
        parentNodes = json.loads(j.read())

    with open('statsByDevice.json', 'r') as j:
        devices = json.loads(j.read())

    print("Retrieving device statistics")
    devices = getLatencies(devices, secondsToRun)

    print("Computing parent node statistics")
    parentNodes = getParentNodeStats(parentNodes, devices)

    print("Writing data to InfluxDB")
    bucket = influxDBBucket
    org = influxDBOrg
    token = influxDBtoken
    url = influxDBurl
    client = InfluxDBClient(
        url=url,
        token=token,
        org=org
    )
    write_api = client.write_api(write_options=SYNCHRONOUS)

    queriesToSend = []
    for device in devices:
        if device['tcpLatency'] != None:
            p = Point('Latency').tag("Device", device['hostname']).tag("ParentNode", device['ParentNode']).tag("Type", "Device").field("TCP Latency", device['tcpLatency'])
            queriesToSend.append(p)

    for parentNode in parentNodes:
        if parentNode['tcpLatency'] != None:
            p = Point('Latency').tag("Device", parentNode['parentNodeName']).tag("ParentNode", parentNode['parentNodeName']).tag("Type", "Parent Node").field("TCP Latency", parentNode['tcpLatency'])
            queriesToSend.append(p)

    write_api.write(bucket=bucket, record=queriesToSend)
    print("Added " + str(len(queriesToSend)) + " points to InfluxDB.")
    client.close()

    # with open('statsByParentNode.json', 'w') as outfile:
    #     json.dump(parentNodes, outfile)

    # with open('statsByDevice.json', 'w') as outfile:
    #     json.dump(devices, outfile)

    endTime = datetime.now()
    durationSeconds = round((endTime - startTime).total_seconds())
    print("Graphs updated within " + str(durationSeconds) + " seconds.")

if __name__ == '__main__':
    refreshLatencyGraphs(10)
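getLatencies() keeps, per matched IP, the lowest RTT observed across all pping samples (capped at 200 ms) together with the jitter (rtt1 - rtt2) from that same sample. The core reduction, separated from the pping line parsing (the `fold_sample` helper name is invented):

```python
MAX_RTT_MS = 200  # same cap ceiling as maxLatencyRecordable above

def fold_sample(rtt_dict, jitter_dict, ip, rtt1, rtt2):
    # Record the lowest RTT seen so far for ip, plus the jitter at that sample.
    jitter = rtt1 - rtt2          # jitter is taken before the cap, as above
    rtt1 = min(rtt1, MAX_RTT_MS)  # cap at the ceiling
    if ip not in rtt_dict or rtt1 < rtt_dict[ip]:
        rtt_dict[ip] = rtt1
        jitter_dict[ip] = jitter

rtts, jitters = {}, {}
# Three (rtt1, rtt2) samples in ms: normal, above the cap, and the new minimum.
for sample in [(30.0, 25.0), (250.0, 240.0), (12.0, 11.0)]:
    fold_sample(rtts, jitters, '100.64.0.1', *sample)
```

Using the minimum rather than the mean filters out queueing delay, so the stored value approximates the path's base RTT.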
File diff suppressed because it is too large
@ -1,122 +0,0 @@
import requests
import csv
import ipaddress
from ispConfig import uispBaseURL, uispAuthToken, shapeRouterOrStation, ignoreSubnets
import shutil

stationModels = ['LBE-5AC-Gen2', 'LBE-5AC-Gen2', 'LBE-5AC-LR', 'AF-LTU5', 'AFLTULR', 'AFLTUPro', 'LTU-LITE']
routerModels = ['ACB-AC', 'ACB-ISP']

def pullShapedDevices():
    devices = []
    uispSitesToImport = []
    url = uispBaseURL + "/nms/api/v2.1/sites?type=client&ucrm=true&ucrmDetails=true"
    headers = {'accept': 'application/json', 'x-auth-token': uispAuthToken}
    r = requests.get(url, headers=headers)
    jsonData = r.json()
    uispDevicesToImport = []
    for uispClientSite in jsonData:
        if (uispClientSite['identification']['status'] == 'active'):
            if (uispClientSite['qos']['downloadSpeed']) and (uispClientSite['qos']['uploadSpeed']):
                downloadSpeedMbps = int(round(uispClientSite['qos']['downloadSpeed']/1000000))
                uploadSpeedMbps = int(round(uispClientSite['qos']['uploadSpeed']/1000000))
                address = uispClientSite['description']['address']
                uispClientSiteID = uispClientSite['id']
                devicesInUISPsite = getUISPdevicesAtClientSite(uispClientSiteID)
                UCRMclientID = uispClientSite['ucrm']['client']['id']
                AP = 'none'
                thisSiteDevices = []
                # Look for station devices, use those to find the AP name
                for device in devicesInUISPsite:
                    deviceName = device['identification']['name']
                    deviceRole = device['identification']['role']
                    deviceModel = device['identification']['model']
                    deviceModelName = device['identification']['modelName']
                    if (deviceRole == 'station') or (deviceModel in stationModels):
                        if device['attributes']['apDevice']:
                            AP = device['attributes']['apDevice']['name']
                if shapeRouterOrStation == 'router':
                    # Look for router devices, use those as the shaped CPE
                    for device in devicesInUISPsite:
                        deviceName = device['identification']['name']
                        deviceRole = device['identification']['role']
                        deviceMAC = device['identification']['mac']
                        deviceIPstring = device['ipAddress']
                        if '/' in deviceIPstring:
                            deviceIPstring = deviceIPstring.split("/")[0]
                        deviceModel = device['identification']['model']
                        deviceModelName = device['identification']['modelName']
                        if (deviceRole == 'router') or (deviceModel in routerModels):
                            print("Added:\t" + deviceName)
                            devices.append((UCRMclientID, AP, deviceMAC, deviceName, deviceIPstring, '', str(round(downloadSpeedMbps/4)), str(round(uploadSpeedMbps/4)), str(downloadSpeedMbps), str(uploadSpeedMbps)))
                elif shapeRouterOrStation == 'station':
                    # Look for station devices, use those as the shaped CPE
                    for device in devicesInUISPsite:
                        deviceName = device['identification']['name']
                        deviceRole = device['identification']['role']
                        deviceMAC = device['identification']['mac']
                        deviceIPstring = device['ipAddress']
                        if '/' in deviceIPstring:
                            deviceIPstring = deviceIPstring.split("/")[0]
                        deviceModel = device['identification']['model']
                        deviceModelName = device['identification']['modelName']
                        if (deviceRole == 'station') or (deviceModel in stationModels):
                            print("Added:\t" + deviceName)
                            devices.append((UCRMclientID, AP, deviceMAC, deviceName, deviceIPstring, '', str(round(downloadSpeedMbps/4)), str(round(uploadSpeedMbps/4)), str(downloadSpeedMbps), str(uploadSpeedMbps)))
                uispSitesToImport.append(thisSiteDevices)
                print("Imported " + address)
            else:
                print("Failed to import devices from " + uispClientSite['description']['address'] + ". Missing QoS.")
    return devices

def getUISPdevicesAtClientSite(siteID):
    url = uispBaseURL + "/nms/api/v2.1/devices?siteId=" + siteID
    headers = {'accept': 'application/json', 'x-auth-token': uispAuthToken}
    r = requests.get(url, headers=headers)
    return r.json()

def updateFromUISP():
    # Copy the shaper file to a backup, in case of power loss during the write of the new version
    shutil.copy('Shaper.csv', 'Shaper.csv.bak')

    devicesFromShaperCSV = []
    with open('Shaper.csv') as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        next(csv_reader)
        for row in csv_reader:
            deviceID, ParentNode, mac, hostname, ipv4, ipv6, downloadMin, uploadMin, downloadMax, uploadMax = row
            ipv4 = ipv4.strip()
            ipv6 = ipv6.strip()
            ParentNode = ParentNode.strip()
            devicesFromShaperCSV.append((deviceID, ParentNode, mac, hostname, ipv4, ipv6, downloadMin, uploadMin, downloadMax, uploadMax))

    # Make a list of IPs, so that we can check whether a device being imported is already entered in Shaper.csv
    devicesPulledFromUISP = pullShapedDevices()
    mergedDevicesList = devicesFromShaperCSV
    ipv4List = []
    ipv6List = []
    for device in devicesFromShaperCSV:
        deviceID, ParentNode, mac, hostname, ipv4, ipv6, downloadMin, uploadMin, downloadMax, uploadMax = device
        if (ipv4 != ''):
            ipv4List.append(ipv4)
        if (ipv6 != ''):
            ipv6List.append(ipv6)

    # For each device brought in from UISP, check whether it is in an excluded subnet. If not, add it to Shaper.csv
    for device in devicesPulledFromUISP:
        deviceID, ParentNode, mac, hostname, ipv4, ipv6, downloadMin, uploadMin, downloadMax, uploadMax = device
        isThisIPexcludable = False
        for subnet in ignoreSubnets:
            if ipaddress.ip_address(ipv4) in ipaddress.ip_network(subnet):
                isThisIPexcludable = True
        if (isThisIPexcludable == False) and (ipv4 not in ipv4List):
            mergedDevicesList.append(device)

    with open('Shaper.csv', 'w') as csvfile:
        wr = csv.writer(csvfile, quoting=csv.QUOTE_ALL)
        wr.writerow(['ID', 'AP', 'MAC', 'Hostname', 'IPv4', 'IPv6', 'Download Min', 'Upload Min', 'Download Max', 'Upload Max'])
        for device in mergedDevicesList:
            wr.writerow(device)

if __name__ == '__main__':
    updateFromUISP()
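updateFromUISP() drops any imported device whose IPv4 address falls inside one of the ignoreSubnets CIDR blocks, since such addresses are assumed to be behind NAT and un-shapable. The membership test in isolation (the `is_excluded` helper name is invented):

```python
import ipaddress

def is_excluded(ipv4, ignore_subnets):
    # True if ipv4 lies within any CIDR block listed in ignore_subnets.
    addr = ipaddress.ip_address(ipv4)
    return any(addr in ipaddress.ip_network(s) for s in ignore_subnets)

ignored = ['192.168.0.0/16']
behind_nat = is_excluded('192.168.1.50', ignored)  # inside the ignored block
shapable = is_excluded('100.64.0.1', ignored)      # CGNAT space, not ignored
```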
@ -1,58 +0,0 @@
# 'fq_codel' or 'cake diffserv4'
# 'cake diffserv4' is recommended

# fqOrCAKE = 'fq_codel'
fqOrCAKE = 'cake diffserv4'

# How many Mbps are available to the edge of this network
upstreamBandwidthCapacityDownloadMbps = 1000
upstreamBandwidthCapacityUploadMbps = 1000

# Traffic from devices not specified in Shaper.csv will be rate limited by an HTB of this many Mbps
defaultClassCapacityDownloadMbps = 500
defaultClassCapacityUploadMbps = 500

# Interface connected to core router
interfaceA = 'eth1'

# Interface connected to edge router
interfaceB = 'eth2'

# Shape by Site in addition to by AP and Client
# Now deprecated, was only used prior to v1.1
# shapeBySite = True

# Allow shell commands. False causes commands to print to the console only, without being executed.
# MUST BE ENABLED FOR THE PROGRAM TO FUNCTION.
enableActualShellCommands = True

# Add 'sudo' before execution of any shell commands. May be required depending on distribution and environment.
runShellCommandsAsSudo = False

# Graphing
graphingEnabled = True
ppingLocation = "pping"
influxDBurl = "http://localhost:8086"
influxDBBucket = "libreqos"
influxDBOrg = "Your ISP Name Here"
influxDBtoken = ""

# NMS/CRM Integration
# If a device shows a WAN IP within these subnets, assume it is behind NAT / un-shapable, and ignore it
ignoreSubnets = ['192.168.0.0/16']

# Optional UISP integration
automaticImportUISP = False
# Everything before /nms/ on your UISP instance
uispBaseURL = 'https://examplesite.com'
# UISP Auth Token
uispAuthToken = ''
# UISP | Whether to shape the router at the customer premises, or instead the station radio. When the station
# radio is in router mode, use 'station'. Otherwise, use 'router'.
shapeRouterOrStation = 'router'

# API Auth
apiUsername = "testUser"
apiPassword = "changeme8343486806"
apiHostIP = "127.0.0.1"
apiHostPost = 5000
@ -1,356 +0,0 @@
from flask import Flask
from flask_restful import Resource, Api, reqparse
from flask_httpauth import HTTPBasicAuth
import csv
from werkzeug.security import generate_password_hash, check_password_hash
from ispConfig import apiUsername, apiPassword, apiHostIP, apiHostPost
from LibreQoS import refreshShapers

app = Flask(__name__)
api = Api(app)
auth = HTTPBasicAuth()

users = {
    apiUsername: generate_password_hash(apiPassword)
}

@auth.verify_password
def verify_password(username, password):
    if username in users and check_password_hash(users.get(username), password):
        return username

class Devices(Resource):
    # Get
    @auth.login_required
    def get(self):
        devices = []
        with open('Shaper.csv') as csv_file:
            csv_reader = csv.reader(csv_file, delimiter=',')
            header_store = next(csv_reader)
            for row in csv_reader:
                deviceID, parentNode, mac, hostname, ipv4, ipv6, downloadMin, uploadMin, downloadMax, uploadMax = row
                ipv4 = ipv4.strip()
                ipv6 = ipv6.strip()
                if parentNode == "":
                    parentNode = "none"
                parentNode = parentNode.strip()
                thisDevice = {
                    "id": deviceID,
                    "mac": mac,
                    "parentNode": parentNode,
                    "hostname": hostname,
                    "ipv4": ipv4,
                    "ipv6": ipv6,
                    "downloadMin": int(downloadMin),
                    "uploadMin": int(uploadMin),
                    "downloadMax": int(downloadMax),
                    "uploadMax": int(uploadMax),
                    "qdisc": '',
                }
                devices.append(thisDevice)
        return {'data': devices}, 200  # return data and 200 OK code

    # Post
    @auth.login_required
    def post(self):
        devices = []
        idOnlyList = []
        ipv4onlyList = []
        ipv6onlyList = []
        hostnameOnlyList = []
        with open('Shaper.csv') as csv_file:
            csv_reader = csv.reader(csv_file, delimiter=',')
            header_store = next(csv_reader)
            for row in csv_reader:
                deviceID, parentNode, mac, hostname, ipv4, ipv6, downloadMin, uploadMin, downloadMax, uploadMax = row
                ipv4 = ipv4.strip()
                ipv6 = ipv6.strip()
                if parentNode == "":
                    parentNode = "none"
                parentNode = parentNode.strip()
                thisDevice = {
                    "id": deviceID,
                    "mac": mac,
                    "parentNode": parentNode,
                    "hostname": hostname,
                    "ipv4": ipv4,
                    "ipv6": ipv6,
                    "downloadMin": int(downloadMin),
                    "uploadMin": int(uploadMin),
                    "downloadMax": int(downloadMax),
                    "uploadMax": int(uploadMax),
                    "qdisc": '',
                }
                devices.append(thisDevice)
                ipv4onlyList.append(ipv4)
                ipv6onlyList.append(ipv6)
                idOnlyList.append(deviceID)
                hostnameOnlyList.append(hostname)

        parser = reqparse.RequestParser()  # initialize

        parser.add_argument('id', required=False)
        parser.add_argument('mac', required=False)
        parser.add_argument('parentNode', required=False)
        parser.add_argument('hostname', required=False)
        parser.add_argument('ipv4', required=False)
        parser.add_argument('ipv6', required=False)
        parser.add_argument('downloadMin', required=True)
        parser.add_argument('uploadMin', required=True)
        parser.add_argument('downloadMax', required=True)
        parser.add_argument('uploadMax', required=True)
        parser.add_argument('qdisc', required=False)

        args = parser.parse_args()  # parse arguments to dictionary

        args['downloadMin'] = int(args['downloadMin'])
        args['uploadMin'] = int(args['uploadMin'])
        args['downloadMax'] = int(args['downloadMax'])
        args['uploadMax'] = int(args['uploadMax'])

        if (args['id'] in idOnlyList):
            return {'message': f"'{args['id']}' already exists."}, 401
        elif (args['ipv4'] in ipv4onlyList):
            return {'message': f"'{args['ipv4']}' already exists."}, 401
        elif (args['ipv6'] in ipv6onlyList):
            return {'message': f"'{args['ipv6']}' already exists."}, 401
        elif (args['hostname'] in hostnameOnlyList):
            return {'message': f"'{args['hostname']}' already exists."}, 401
        else:
            if args['parentNode'] is None:
                args['parentNode'] = "none"

            newDevice = {
                "id": args['id'],
                "mac": args['mac'],
                "parentNode": args['parentNode'],
                "hostname": args['hostname'],
                "ipv4": args['ipv4'],
                "ipv6": args['ipv6'],
                "downloadMin": int(args['downloadMin']),
                "uploadMin": int(args['uploadMin']),
                "downloadMax": int(args['downloadMax']),
                "uploadMax": int(args['uploadMax']),
                "qdisc": '',
            }

            # Append the new device to the existing entries so prior rows are preserved
            revisedDevices = devices
            revisedDevices.append(newDevice)

            # create new Shaper.csv containing new values
            with open('Shaper.csv', 'w') as csvfile:
                wr = csv.writer(csvfile, quoting=csv.QUOTE_ALL)
                wr.writerow(header_store)
                for device in revisedDevices:
                    wr.writerow((device['id'], device['parentNode'], device['mac'], device['hostname'], device['ipv4'], device['ipv6'], device['downloadMin'], device['uploadMin'], device['downloadMax'], device['uploadMax']))

            return {'data': newDevice}, 200  # return data with 200 OK

    # Put
    @auth.login_required
    def put(self):
        devices = []
        idOnlyList = []
        ipv4onlyList = []
        ipv6onlyList = []
        hostnameOnlyList = []
        with open('Shaper.csv') as csv_file:
            csv_reader = csv.reader(csv_file, delimiter=',')
            header_store = next(csv_reader)
            for row in csv_reader:
                deviceID, parentNode, mac, hostname, ipv4, ipv6, downloadMin, uploadMin, downloadMax, uploadMax = row
                ipv4 = ipv4.strip()
                ipv6 = ipv6.strip()
                if parentNode == "":
                    parentNode = "none"
                parentNode = parentNode.strip()
                thisDevice = {
                    "id": deviceID,
                    "mac": mac,
                    "parentNode": parentNode,
                    "hostname": hostname,
                    "ipv4": ipv4,
                    "ipv6": ipv6,
                    "downloadMin": int(downloadMin),
                    "uploadMin": int(uploadMin),
                    "downloadMax": int(downloadMax),
                    "uploadMax": int(uploadMax),
                    "qdisc": '',
                }
                devices.append(thisDevice)
                ipv4onlyList.append(ipv4)
                ipv6onlyList.append(ipv6)
                idOnlyList.append(deviceID)
                hostnameOnlyList.append(hostname)

        parser = reqparse.RequestParser()  # initialize

        parser.add_argument('id', required=False)
        parser.add_argument('mac', required=False)
        parser.add_argument('parentNode', required=False)
        parser.add_argument('hostname', required=False)
        parser.add_argument('ipv4', required=False)
        parser.add_argument('ipv6', required=False)
        parser.add_argument('downloadMin', required=True)
        parser.add_argument('uploadMin', required=True)
        parser.add_argument('downloadMax', required=True)
        parser.add_argument('uploadMax', required=True)
        parser.add_argument('qdisc', required=False)

        args = parser.parse_args()  # parse arguments to dictionary

        args['downloadMin'] = int(args['downloadMin'])
        args['uploadMin'] = int(args['uploadMin'])
        args['downloadMax'] = int(args['downloadMax'])
        args['uploadMax'] = int(args['uploadMax'])

        if (args['id'] in idOnlyList) or (args['ipv4'] in ipv4onlyList) or (args['ipv6'] in ipv6onlyList) or (args['hostname'] in hostnameOnlyList):

            if args['parentNode'] is None:
                args['parentNode'] = "none"

            newDevice = {
                "id": args['id'],
                "mac": args['mac'],
                "parentNode": args['parentNode'],
                "hostname": args['hostname'],
                "ipv4": args['ipv4'],
                "ipv6": args['ipv6'],
                "downloadMin": int(args['downloadMin']),
                "uploadMin": int(args['uploadMin']),
                "downloadMax": int(args['downloadMax']),
                "uploadMax": int(args['uploadMax']),
                "qdisc": '',
            }

            successfullyFoundMatch = False
            revisedDevices = []
            for device in devices:
                if (device['id'] == args['id']) or (device['mac'] == args['mac']) or (device['hostname'] == args['hostname']) or (device['ipv4'] == args['ipv4']) or (device['ipv6'] == args['ipv6']):
                    revisedDevices.append(newDevice)
                    successfullyFoundMatch = True
                else:
                    revisedDevices.append(device)

            # create new Shaper.csv containing new values
            with open('Shaper.csv', 'w') as csvfile:
                wr = csv.writer(csvfile, quoting=csv.QUOTE_ALL)
                wr.writerow(header_store)
                for device in revisedDevices:
                    wr.writerow((device['id'], device['parentNode'], device['mac'], device['hostname'], device['ipv4'], device['ipv6'], device['downloadMin'], device['uploadMin'], device['downloadMax'], device['uploadMax']))

            return {'data': newDevice}, 200  # return data with 200 OK
        else:
            return {'message': "Matching device entry not found."}, 404

    # Delete
    @auth.login_required
    def delete(self):
        devices = []
        idOnlyList = []
        ipv4onlyList = []
        ipv6onlyList = []
        hostnameOnlyList = []
        with open('Shaper.csv') as csv_file:
            csv_reader = csv.reader(csv_file, delimiter=',')
            header_store = next(csv_reader)
            for row in csv_reader:
                deviceID, parentNode, mac, hostname, ipv4, ipv6, downloadMin, uploadMin, downloadMax, uploadMax = row
                ipv4 = ipv4.strip()
                ipv6 = ipv6.strip()
                if parentNode == "":
                    parentNode = "none"
                parentNode = parentNode.strip()
                thisDevice = {
                    "id": deviceID,
                    "mac": mac,
                    "parentNode": parentNode,
                    "hostname": hostname,
                    "ipv4": ipv4,
                    "ipv6": ipv6,
                    "downloadMin": int(downloadMin),
                    "uploadMin": int(uploadMin),
                    "downloadMax": int(downloadMax),
                    "uploadMax": int(uploadMax),
                    "qdisc": '',
                }
                devices.append(thisDevice)
                ipv4onlyList.append(ipv4)
                ipv6onlyList.append(ipv6)
                idOnlyList.append(deviceID)
                hostnameOnlyList.append(hostname)

        parser = reqparse.RequestParser()  # initialize

        parser.add_argument('id', required=False)
        parser.add_argument('mac', required=False)
        parser.add_argument('parentNode', required=False)
        parser.add_argument('hostname', required=False)
        parser.add_argument('ipv4', required=False)
        parser.add_argument('ipv6', required=False)
        parser.add_argument('downloadMin', required=False)
        parser.add_argument('uploadMin', required=False)
        parser.add_argument('downloadMax', required=False)
        parser.add_argument('uploadMax', required=False)
        parser.add_argument('qdisc', required=False)

        args = parser.parse_args()  # parse arguments to dictionary

        if (args['id'] in idOnlyList) or (args['ipv4'] in ipv4onlyList) or (args['ipv6'] in ipv6onlyList) or (args['hostname'] in hostnameOnlyList):

            successfullyFoundMatch = False
            revisedDevices = []
            for device in devices:
                if (device['id'] == args['id']) or (device['mac'] == args['mac']) or (device['hostname'] == args['hostname']) or (device['ipv4'] == args['ipv4']) or (device['ipv6'] == args['ipv6']):
                    # Simply do not add the device to revisedDevices
                    successfullyFoundMatch = True
                else:
                    revisedDevices.append(device)

            # create new Shaper.csv containing new values
            with open('Shaper.csv', 'w') as csvfile:
                wr = csv.writer(csvfile, quoting=csv.QUOTE_ALL)
                wr.writerow(header_store)
                for device in revisedDevices:
                    wr.writerow((device['id'], device['parentNode'], device['mac'], device['hostname'], device['ipv4'], device['ipv6'], device['downloadMin'], device['uploadMin'], device['downloadMax'], device['uploadMax']))

            return {'message': "Matching device entry successfully deleted."}, 200  # return data with 200 OK
        else:
            return {'message': "Matching device entry not found."}, 404

class Shaper(Resource):
    # Post
    @auth.login_required
    def post(self):
        parser = reqparse.RequestParser()  # initialize
        parser.add_argument('refresh', required=True)
        args = parser.parse_args()  # parse arguments to dictionary
        # reqparse delivers string values, so compare against the string form
        if str(args['refresh']).lower() == 'true':
            refreshShapers()
            return {'message': "Successfully refreshed LibreQoS device shaping."}, 200  # return data and 200 OK code

api.add_resource(Devices, '/devices')  # '/devices' is our 1st entry point
api.add_resource(Shaper, '/shaper')  # '/shaper' is our 2nd entry point

if __name__ == '__main__':
    from waitress import serve
    # app.run(debug=True)  # debug mode
    serve(app, host=apiHostIP, port=apiHostPost)
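Every mutating endpoint above follows the same read-modify-rewrite cycle on Shaper.csv. A condensed sketch of that cycle, done in memory with StringIO so it is self-contained (the sample rows and helper name `add_device` are illustrative, not from the source):

```python
import csv
import io

existing = ('id,parentNode,mac,hostname,ipv4,ipv6,downloadMin,uploadMin,downloadMax,uploadMax\r\n'
            '"dev1","none","aa:bb","cpe1","100.64.0.2","","25","5","100","20"\r\n')

def add_device(csv_text, new_row):
    # Read all rows, append the new one, and rewrite everything with
    # QUOTE_ALL, as the POST handler does after its duplicate checks.
    rows = list(csv.reader(io.StringIO(csv_text)))
    rows.append(new_row)
    out = io.StringIO()
    csv.writer(out, quoting=csv.QUOTE_ALL).writerows(rows)
    return out.getvalue()

updated = add_device(existing, ['dev2', 'none', 'cc:dd', 'cpe2', '100.64.0.3', '', '25', '5', '100', '20'])
parsed = list(csv.reader(io.StringIO(updated)))
print(len(parsed))  # header + both device rows
```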
@ -1,78 +0,0 @@
{
  "Site_1":
  {
    "downloadBandwidthMbps":1000,
    "uploadBandwidthMbps":1000,
    "children":
    {
      "AP_A":
      {
        "downloadBandwidthMbps":500,
        "uploadBandwidthMbps":500
      },
      "Site_3":
      {
        "downloadBandwidthMbps":500,
        "uploadBandwidthMbps":500,
        "children":
        {
          "PoP_5":
          {
            "downloadBandwidthMbps":200,
            "uploadBandwidthMbps":200,
            "children":
            {
              "AP_9":
              {
                "downloadBandwidthMbps":120,
                "uploadBandwidthMbps":120
              },
              "PoP_6":
              {
                "downloadBandwidthMbps":60,
                "uploadBandwidthMbps":60,
                "children":
                {
                  "AP_11":
                  {
                    "downloadBandwidthMbps":30,
                    "uploadBandwidthMbps":30
                  }
                }
              }
            }
          }
        }
      }
    }
  },
  "Site_2":
  {
    "downloadBandwidthMbps":500,
    "uploadBandwidthMbps":500,
    "children":
    {
      "PoP_1":
      {
        "downloadBandwidthMbps":200,
        "uploadBandwidthMbps":200,
        "children":
        {
          "AP_7":
          {
            "downloadBandwidthMbps":100,
            "uploadBandwidthMbps":100
          }
        }
      },
      "AP_1":
      {
        "downloadBandwidthMbps":150,
        "uploadBandwidthMbps":150
      }
    }
  }
}
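The hierarchy above is what LibreQoS walks recursively: every node carries its own Mbps caps plus an optional children dict. A small illustrative walker (not from the source) that flattens node names with their depth:

```python
import json

# A two-node excerpt of the network.json shape shown above
network_json = """
{ "Site_1": { "downloadBandwidthMbps": 1000, "uploadBandwidthMbps": 1000,
  "children": { "AP_A": { "downloadBandwidthMbps": 500, "uploadBandwidthMbps": 500 } } } }
"""

def walk(tree, depth=0):
    # Yield (name, depth, downloadMbps) for every node in the hierarchy
    for name, node in tree.items():
        yield name, depth, node["downloadBandwidthMbps"]
        if "children" in node:
            yield from walk(node["children"], depth + 1)

nodes = list(walk(json.loads(network_json)))
print(nodes)
```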
@ -1,26 +0,0 @@
import time
import schedule
from LibreQoS import refreshShapers
from graphBandwidth import refreshBandwidthGraphs
from graphLatency import refreshLatencyGraphs
from ispConfig import graphingEnabled, automaticImportUISP
from integrationUISP import updateFromUISP

def importandshape():
    if automaticImportUISP:
        updateFromUISP()
    refreshShapers()

if __name__ == '__main__':
    importandshape()
    schedule.every().day.at("04:00").do(importandshape)
    while True:
        schedule.run_pending()
        if graphingEnabled:
            try:
                refreshBandwidthGraphs()
                refreshLatencyGraphs(10)
            except Exception:
                print("Failed to update graphs")
        else:
            time.sleep(60)  # wait 60 seconds between passes
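The loop above leans on the third-party schedule package for the daily 04:00 import. The same trigger can be sketched with only the standard library; the helper name `next_run` is illustrative, not from the source:

```python
from datetime import datetime, timedelta

def next_run(now, hour=4, minute=0):
    # Next 04:00 occurrence strictly after `now`
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)
    return candidate

print(next_run(datetime(2022, 5, 1, 12, 0)))   # already past 04:00 -> tomorrow
print(next_run(datetime(2022, 5, 1, 3, 59)))   # before 04:00 -> later today
```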
@ -1 +0,0 @@
Subproject commit 888cc7712f2516d386a837aee67c5b05bd04edfa
@ -1,755 +0,0 @@
#!/usr/bin/python3
# v1.2.1

import csv
import io
import ipaddress
import json
import os
import os.path
import subprocess
from subprocess import PIPE, STDOUT
from datetime import datetime, timedelta
import multiprocessing
import warnings
import psutil
import argparse
import logging
import shutil
import binpacking

from ispConfig import fqOrCAKE, upstreamBandwidthCapacityDownloadMbps, upstreamBandwidthCapacityUploadMbps, \
    interfaceA, interfaceB, enableActualShellCommands, \
    runShellCommandsAsSudo, generatedPNDownloadMbps, generatedPNUploadMbps, queuesAvailableOverride
def shell(command):
    if enableActualShellCommands:
        if runShellCommandsAsSudo:
            command = 'sudo ' + command
        logging.info(command)
        commands = command.split(' ')
        proc = subprocess.Popen(commands, stdout=subprocess.PIPE)
        for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"):  # or another encoding
            logging.info(line)
            if ("RTNETLINK answers" in line) or ("We have an error talking to the kernel" in line):
                warnings.warn("Command: '" + command + "' resulted in " + line, stacklevel=2)
    else:
        logging.info(command)
def checkIfFirstRunSinceBoot():
    if os.path.isfile("lastRun.txt"):
        with open("lastRun.txt", 'r') as file:
            lastRun = datetime.strptime(file.read(), "%d-%b-%Y (%H:%M:%S.%f)")
        systemRunningSince = datetime.fromtimestamp(psutil.boot_time())
        if systemRunningSince > lastRun:
            print("First time run since system boot.")
            return True
        else:
            print("Not first time run since system boot.")
            return False
    else:
        print("First time run since system boot.")
        return True
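checkIfFirstRunSinceBoot() compares a timestamp persisted in lastRun.txt against psutil.boot_time() using the format string above. The format round-trip and the comparison can be exercised standalone (the helper below is a pure-function restatement, not the source's signature):

```python
from datetime import datetime, timedelta

FMT = "%d-%b-%Y (%H:%M:%S.%f)"  # same format LibreQoS writes to lastRun.txt

def isFirstRunSinceBoot(lastRunText, bootTime):
    # True when the system booted after the recorded last run
    lastRun = datetime.strptime(lastRunText, FMT)
    return bootTime > lastRun

lastRun = datetime(2022, 5, 1, 10, 30, 0, 500000)
stamp = lastRun.strftime(FMT)
print(stamp)
print(isFirstRunSinceBoot(stamp, lastRun + timedelta(hours=2)))
```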
def clearPriorSettings(interfaceA, interfaceB):
    if enableActualShellCommands:
        # Clear tc filter
        shell('tc qdisc delete dev ' + interfaceA + ' root')
        shell('tc qdisc delete dev ' + interfaceB + ' root')
        #shell('tc qdisc delete dev ' + interfaceA)
        #shell('tc qdisc delete dev ' + interfaceB)

def tearDown(interfaceA, interfaceB):
    # Full teardown of everything for exiting LibreQoS
    if enableActualShellCommands:
        # Clear IP filters and remove xdp program from interfaces
        result = os.system('./xdp-cpumap-tc/src/xdp_iphash_to_cpu_cmdline --clear')
        shell('ip link set dev ' + interfaceA + ' xdp off')
        shell('ip link set dev ' + interfaceB + ' xdp off')
        clearPriorSettings(interfaceA, interfaceB)
def findQueuesAvailable():
    # Find queues and CPU cores available. Use the minimum of the two as queuesAvailable
    if enableActualShellCommands:
        if queuesAvailableOverride == 0:
            queuesAvailable = 0
            path = '/sys/class/net/' + interfaceA + '/queues/'
            directory_contents = os.listdir(path)
            for item in directory_contents:
                if "tx-" in str(item):
                    queuesAvailable += 1
            print("NIC queues:\t\t\t" + str(queuesAvailable))
        else:
            queuesAvailable = queuesAvailableOverride
            print("NIC queues (Override):\t\t\t" + str(queuesAvailable))
        cpuCount = multiprocessing.cpu_count()
        print("CPU cores:\t\t\t" + str(cpuCount))
        queuesAvailable = min(queuesAvailable, cpuCount)
        print("queuesAvailable set to:\t" + str(queuesAvailable))
    else:
        print("As enableActualShellCommands is False, CPU core / queue count has been set to 16")
        logging.info("NIC queues:\t\t\t" + str(16))
        cpuCount = multiprocessing.cpu_count()
        logging.info("CPU cores:\t\t\t" + str(16))
        logging.info("queuesAvailable set to:\t" + str(16))
        queuesAvailable = 16
    return queuesAvailable
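findQueuesAvailable() counts the tx- entries under /sys/class/net/&lt;iface&gt;/queues/ and clamps the result to the CPU count. The counting step, reproduced against a throwaway directory so it runs anywhere (the directory layout is a stand-in for sysfs):

```python
import os
import tempfile

def countTxQueues(path):
    # Mirrors the tx- counting loop in findQueuesAvailable()
    return sum(1 for item in os.listdir(path) if "tx-" in str(item))

with tempfile.TemporaryDirectory() as path:
    # Fake a NIC with 4 tx and 4 rx queue directories
    for name in ("tx-0", "tx-1", "tx-2", "tx-3", "rx-0", "rx-1", "rx-2", "rx-3"):
        os.mkdir(os.path.join(path, name))
    queues = countTxQueues(path)
print(queues)
```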
def validateNetworkAndDevices():
    # Verify network.json is valid JSON
    networkValidatedOrNot = True
    with open('network.json') as file:
        try:
            temporaryVariable = json.load(file)  # load the JSON data to a variable
        except json.decoder.JSONDecodeError:
            warnings.warn("network.json is an invalid JSON file", stacklevel=2)  # in case the JSON is invalid
            networkValidatedOrNot = False
    if networkValidatedOrNot == True:
        print("network.json passed validation")
    # Verify ShapedDevices.csv is valid
    devicesValidatedOrNot = True  # True by default; switches to False if ANY entry in ShapedDevices.csv fails validation
    rowNum = 2
    with open('ShapedDevices.csv') as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        # Remove comments if any
        commentsRemoved = []
        for row in csv_reader:
            if not row[0].startswith('#'):
                commentsRemoved.append(row)
        # Remove header
        commentsRemoved.pop(0)
        seenTheseIPsAlready = []
        for row in commentsRemoved:
            circuitID, circuitName, deviceID, deviceName, ParentNode, mac, ipv4_input, ipv6_input, downloadMin, uploadMin, downloadMax, uploadMax, comment = row
            # Each entry in ShapedDevices.csv can have multiple IPv4s or IPv6s separated by commas. Split them up and parse each to ensure they are valid
            ipv4_subnets_and_hosts = []
            ipv6_subnets_and_hosts = []
            if ipv4_input != "":
                try:
                    ipv4_input = ipv4_input.replace(' ', '')
                    if "," in ipv4_input:
                        ipv4_list = ipv4_input.split(',')
                    else:
                        ipv4_list = [ipv4_input]
                    for ipEntry in ipv4_list:
                        if ipEntry in seenTheseIPsAlready:
                            warnings.warn("Provided IPv4 '" + ipEntry + "' in ShapedDevices.csv at row " + str(rowNum) + " is duplicate.", stacklevel=2)
                            devicesValidatedOrNot = False
                            seenTheseIPsAlready.append(ipEntry)
                        else:
                            if (type(ipaddress.ip_network(ipEntry)) is ipaddress.IPv4Network) or (type(ipaddress.ip_address(ipEntry)) is ipaddress.IPv4Address):
                                ipv4_subnets_and_hosts.append(ipEntry)  # append, not extend: extend would add each character of the string separately
                            else:
                                warnings.warn("Provided IPv4 '" + ipEntry + "' in ShapedDevices.csv at row " + str(rowNum) + " is not valid.", stacklevel=2)
                                devicesValidatedOrNot = False
                            seenTheseIPsAlready.append(ipEntry)
                except:
                    warnings.warn("Provided IPv4 '" + ipv4_input + "' in ShapedDevices.csv at row " + str(rowNum) + " is not valid.", stacklevel=2)
                    devicesValidatedOrNot = False
            if ipv6_input != "":
                try:
                    ipv6_input = ipv6_input.replace(' ', '')
                    if "," in ipv6_input:
                        ipv6_list = ipv6_input.split(',')
                    else:
                        ipv6_list = [ipv6_input]
                    for ipEntry in ipv6_list:
                        if ipEntry in seenTheseIPsAlready:
                            warnings.warn("Provided IPv6 '" + ipEntry + "' in ShapedDevices.csv at row " + str(rowNum) + " is duplicate.", stacklevel=2)
                            devicesValidatedOrNot = False
                            seenTheseIPsAlready.append(ipEntry)
                        else:
                            if (type(ipaddress.ip_network(ipEntry)) is ipaddress.IPv6Network) or (type(ipaddress.ip_address(ipEntry)) is ipaddress.IPv6Address):
                                ipv6_subnets_and_hosts.append(ipEntry)  # append, not extend
                            else:
                                warnings.warn("Provided IPv6 '" + ipEntry + "' in ShapedDevices.csv at row " + str(rowNum) + " is not valid.", stacklevel=2)
                                devicesValidatedOrNot = False
                            seenTheseIPsAlready.append(ipEntry)
                except:
                    warnings.warn("Provided IPv6 '" + ipv6_input + "' in ShapedDevices.csv at row " + str(rowNum) + " is not valid.", stacklevel=2)
                    devicesValidatedOrNot = False
            try:
                a = int(downloadMin)
                if a < 1:
                    warnings.warn("Provided downloadMin '" + downloadMin + "' in ShapedDevices.csv at row " + str(rowNum) + " is < 1 Mbps.", stacklevel=2)
                    devicesValidatedOrNot = False
            except:
                warnings.warn("Provided downloadMin '" + downloadMin + "' in ShapedDevices.csv at row " + str(rowNum) + " is not a valid integer.", stacklevel=2)
                devicesValidatedOrNot = False
            try:
                a = int(uploadMin)
                if a < 1:
                    warnings.warn("Provided uploadMin '" + uploadMin + "' in ShapedDevices.csv at row " + str(rowNum) + " is < 1 Mbps.", stacklevel=2)
                    devicesValidatedOrNot = False
            except:
                warnings.warn("Provided uploadMin '" + uploadMin + "' in ShapedDevices.csv at row " + str(rowNum) + " is not a valid integer.", stacklevel=2)
                devicesValidatedOrNot = False
            try:
                a = int(downloadMax)
                if a < 2:
                    warnings.warn("Provided downloadMax '" + downloadMax + "' in ShapedDevices.csv at row " + str(rowNum) + " is < 2 Mbps.", stacklevel=2)
                    devicesValidatedOrNot = False
            except:
                warnings.warn("Provided downloadMax '" + downloadMax + "' in ShapedDevices.csv at row " + str(rowNum) + " is not a valid integer.", stacklevel=2)
                devicesValidatedOrNot = False
            try:
                a = int(uploadMax)
                if a < 2:
                    warnings.warn("Provided uploadMax '" + uploadMax + "' in ShapedDevices.csv at row " + str(rowNum) + " is < 2 Mbps.", stacklevel=2)
                    devicesValidatedOrNot = False
            except:
                warnings.warn("Provided uploadMax '" + uploadMax + "' in ShapedDevices.csv at row " + str(rowNum) + " is not a valid integer.", stacklevel=2)
                devicesValidatedOrNot = False

            try:
                if int(downloadMin) > int(downloadMax):
                    warnings.warn("Provided downloadMin '" + downloadMin + "' in ShapedDevices.csv at row " + str(rowNum) + " is greater than downloadMax", stacklevel=2)
                    devicesValidatedOrNot = False
                if int(uploadMin) > int(uploadMax):
                    warnings.warn("Provided uploadMin '" + uploadMin + "' in ShapedDevices.csv at row " + str(rowNum) + " is greater than uploadMax", stacklevel=2)
                    devicesValidatedOrNot = False
            except:
                devicesValidatedOrNot = False

            rowNum += 1
    if devicesValidatedOrNot == True:
        print("ShapedDevices.csv passed validation")
    else:
        print("ShapedDevices.csv failed validation")

    if (networkValidatedOrNot == True) and (devicesValidatedOrNot == True):
        return True
    else:
        return False
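The validator above distinguishes address families with the ipaddress module, accepting either a subnet or a bare host per entry. That classification can be exercised in isolation; this sketch collapses the source's type checks into one helper (`classify` is illustrative; note that a bare host like '100.64.0.1' is also a valid /32 network):

```python
import ipaddress

def classify(entry):
    # Returns 'ipv4' or 'ipv6' using the same module the validator relies on
    net = ipaddress.ip_network(entry, strict=False)
    return 'ipv4' if isinstance(net, ipaddress.IPv4Network) else 'ipv6'

print(classify('100.64.0.0/24'))
print(classify('100.64.0.1'))
print(classify('2001:db8::/56'))
```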
def refreshShapers():
|
||||
|
||||
# Starting
|
||||
print("refreshShapers starting at " + datetime.now().strftime("%d/%m/%Y %H:%M:%S"))
|
||||
|
||||
|
||||
# Warn user if enableActualShellCommands is False, because that would mean no actual commands are executing
|
||||
if enableActualShellCommands == False:
|
||||
warnings.warn("enableActualShellCommands is set to False. None of the commands below will actually be executed. Simulated run.", stacklevel=2)
|
||||
|
||||
|
||||
# Check if first run since boot
|
||||
isThisFirstRunSinceBoot = checkIfFirstRunSinceBoot()
|
||||
|
||||
|
||||
# Automatically account for TCP overhead of plans. For example a 100Mbps plan needs to be set to 109Mbps for the user to ever see that result on a speed test
|
||||
# Does not apply to nodes of any sort, just endpoint devices
|
||||
tcpOverheadFactor = 1.09
|
||||
|
||||
|
||||
# Files
|
||||
shapedDevicesFile = 'ShapedDevices.csv'
|
||||
networkJSONfile = 'network.json'
|
||||
|
||||
|
||||
# Check validation
|
||||
safeToRunRefresh = False
|
||||
if (validateNetworkAndDevices() == True):
|
||||
shutil.copyfile('ShapedDevices.csv', 'lastGoodConfig.csv')
|
||||
shutil.copyfile('network.json', 'lastGoodConfig.json')
|
||||
print("Backed up good config as lastGoodConfig.csv and lastGoodConfig.json")
|
||||
safeToRunRefresh = True
|
||||
else:
|
||||
if (isThisFirstRunSinceBoot == False):
|
||||
warnings.warn("Validation failed. Because this is not the first run since boot (queues already set up) - will now exit.", stacklevel=2)
|
||||
safeToRunRefresh = False
|
||||
else:
|
||||
warnings.warn("Validation failed. However - because this is the first run since boot - will load queues from last good config", stacklevel=2)
|
||||
shapedDevicesFile = 'lastGoodConfig.csv'
|
||||
networkJSONfile = 'lastGoodConfig.json'
|
||||
safeToRunRefresh = True
|
||||
|
||||
if safeToRunRefresh == True:
|
||||
|
||||
# Load Subscriber Circuits & Devices
|
||||
subscriberCircuits = []
|
||||
knownCircuitIDs = []
|
||||
counterForCircuitsWithoutParentNodes = 0
|
||||
dictForCircuitsWithoutParentNodes = {}
|
||||
with open(shapedDevicesFile) as csv_file:
|
||||
csv_reader = csv.reader(csv_file, delimiter=',')
|
||||
# Remove comments if any
|
||||
commentsRemoved = []
|
||||
for row in csv_reader:
|
||||
if not row[0].startswith('#'):
|
||||
commentsRemoved.append(row)
|
||||
# Remove header
|
||||
commentsRemoved.pop(0)
|
||||
for row in commentsRemoved:
|
||||
circuitID, circuitName, deviceID, deviceName, ParentNode, mac, ipv4_input, ipv6_input, downloadMin, uploadMin, downloadMax, uploadMax, comment = row
|
||||
ipv4_subnets_and_hosts = []
|
||||
# Each entry in ShapedDevices.csv can have multiple IPv4s or IPv6s seperated by commas. Split them up and parse each
|
||||
if ipv4_input != "":
|
||||
ipv4_input = ipv4_input.replace(' ','')
|
||||
if "," in ipv4_input:
|
||||
ipv4_list = ipv4_input.split(',')
|
||||
else:
|
||||
ipv4_list = [ipv4_input]
|
||||
for ipEntry in ipv4_list:
|
||||
ipv4_subnets_and_hosts.append(ipEntry)
|
||||
ipv6_subnets_and_hosts = []
|
||||
if ipv6_input != "":
|
||||
ipv6_input = ipv6_input.replace(' ','')
|
||||
if "," in ipv6_input:
|
||||
ipv6_list = ipv6_input.split(',')
|
||||
else:
|
||||
ipv6_list = [ipv6_input]
|
||||
for ipEntry in ipv6_list:
|
||||
ipv6_subnets_and_hosts.append(ipEntry)
|
||||
# If there is something in the circuit ID field
|
||||
if circuitID != "":
|
||||
# Seen circuit before
|
||||
if circuitID in knownCircuitIDs:
|
||||
for circuit in subscriberCircuits:
|
||||
if circuit['circuitID'] == circuitID:
|
||||
if circuit['ParentNode'] != "none":
|
||||
if circuit['ParentNode'] != ParentNode:
|
||||
errorMessageString = "Device " + deviceName + " with deviceID " + deviceID + " had different Parent Node from other devices of circuit ID #" + circuitID
|
||||
raise ValueError(errorMessageString)
|
||||
if ((circuit['downloadMin'] != round(int(downloadMin)*tcpOverheadFactor))
|
||||
or (circuit['uploadMin'] != round(int(uploadMin)*tcpOverheadFactor))
|
||||
or (circuit['downloadMax'] != round(int(downloadMax)*tcpOverheadFactor))
|
||||
or (circuit['uploadMax'] != round(int(uploadMax)*tcpOverheadFactor))):
|
||||
warnings.warn("Device " + deviceName + " with ID " + deviceID + " had different bandwidth parameters than other devices on this circuit. Will instead use the bandwidth parameters defined by the first device added to its circuit.", stacklevel=2)
|
||||
devicesListForCircuit = circuit['devices']
|
||||
thisDevice = {
|
||||
"deviceID": deviceID,
|
||||
"deviceName": deviceName,
|
||||
"mac": mac,
|
||||
"ipv4s": ipv4_subnets_and_hosts,
|
||||
"ipv6s": ipv6_subnets_and_hosts,
|
||||
"comment": comment
|
||||
}
|
||||
devicesListForCircuit.append(thisDevice)
|
||||
circuit['devices'] = devicesListForCircuit
|
||||
# Have not seen circuit before
|
||||
else:
|
||||
knownCircuitIDs.append(circuitID)
|
||||
if ParentNode == "":
|
||||
ParentNode = "none"
|
||||
ParentNode = ParentNode.strip()
|
||||
deviceListForCircuit = []
|
||||
thisDevice = {
|
||||
"deviceID": deviceID,
|
||||
"deviceName": deviceName,
|
||||
"mac": mac,
|
||||
"ipv4s": ipv4_subnets_and_hosts,
|
||||
"ipv6s": ipv6_subnets_and_hosts,
|
||||
"comment": comment
|
||||
}
|
||||
deviceListForCircuit.append(thisDevice)
|
||||
thisCircuit = {
|
||||
"circuitID": circuitID,
|
||||
"circuitName": circuitName,
|
||||
"ParentNode": ParentNode,
|
||||
"devices": deviceListForCircuit,
|
||||
"downloadMin": round(int(downloadMin)*tcpOverheadFactor),
|
||||
"uploadMin": round(int(uploadMin)*tcpOverheadFactor),
|
||||
"downloadMax": round(int(downloadMax)*tcpOverheadFactor),
|
||||
"uploadMax": round(int(uploadMax)*tcpOverheadFactor),
|
||||
"qdisc": '',
|
||||
"comment": comment
|
||||
}
|
||||
if thisCircuit['ParentNode'] == 'none':
|
||||
thisCircuit['idForCircuitsWithoutParentNodes'] = counterForCircuitsWithoutParentNodes
|
||||
dictForCircuitsWithoutParentNodes[counterForCircuitsWithoutParentNodes] = ((round(int(downloadMax)*tcpOverheadFactor))+(round(int(uploadMax)*tcpOverheadFactor)))
|
||||
counterForCircuitsWithoutParentNodes += 1
|
||||
subscriberCircuits.append(thisCircuit)
|
||||
# If there is nothing in the circuit ID field
|
||||
else:
|
||||
# Copy deviceName to circuitName if none defined already
|
||||
if circuitName == "":
|
||||
circuitName = deviceName
|
||||
if ParentNode == "":
|
||||
ParentNode = "none"
|
||||
ParentNode = ParentNode.strip()
|
||||
deviceListForCircuit = []
|
||||
thisDevice = {
|
||||
"deviceID": deviceID,
|
||||
"deviceName": deviceName,
|
||||
"mac": mac,
|
||||
"ipv4s": ipv4_subnets_and_hosts,
|
||||
"ipv6s": ipv6_subnets_and_hosts,
|
||||
}
|
||||
deviceListForCircuit.append(thisDevice)
|
||||
thisCircuit = {
|
||||
"circuitID": circuitID,
|
||||
"circuitName": circuitName,
|
||||
"ParentNode": ParentNode,
|
||||
"devices": deviceListForCircuit,
|
||||
"downloadMin": round(int(downloadMin)*tcpOverheadFactor),
|
||||
"uploadMin": round(int(uploadMin)*tcpOverheadFactor),
|
||||
"downloadMax": round(int(downloadMax)*tcpOverheadFactor),
|
||||
"uploadMax": round(int(uploadMax)*tcpOverheadFactor),
|
||||
"qdisc": '',
|
||||
"comment": comment
|
||||
}
|
||||
if thisCircuit['ParentNode'] == 'none':
|
||||
thisCircuit['idForCircuitsWithoutParentNodes'] = counterForCircuitsWithoutParentNodes
|
||||
dictForCircuitsWithoutParentNodes[counterForCircuitsWithoutParentNodes] = ((round(int(downloadMax)*tcpOverheadFactor))+(round(int(uploadMax)*tcpOverheadFactor)))
|
||||
counterForCircuitsWithoutParentNodes += 1
|
||||
subscriberCircuits.append(thisCircuit)

	# Load network hierarchy
	with open(networkJSONfile, 'r') as j:
		network = json.loads(j.read())

	# Pull rx/tx queues / CPU cores available
	queuesAvailable = findQueuesAvailable()

	# Generate parent nodes. Spread circuits from ShapedDevices.csv which lack a defined ParentNode across these (to balance load across CPUs)
	generatedPNs = []
	for x in range(queuesAvailable):
		genPNname = "Generated_PN_" + str(x+1)
		network[genPNname] = {
			"downloadBandwidthMbps": generatedPNDownloadMbps,
			"uploadBandwidthMbps": generatedPNUploadMbps
		}
		generatedPNs.append(genPNname)
	bins = binpacking.to_constant_bin_number(dictForCircuitsWithoutParentNodes, queuesAvailable)
	genPNcounter = 0
	for binItem in bins:
		sumItem = 0
		logging.info(generatedPNs[genPNcounter] + " will contain " + str(len(binItem)) + " circuits")
		for key in binItem.keys():
			for circuit in subscriberCircuits:
				if circuit['ParentNode'] == 'none':
					if circuit['idForCircuitsWithoutParentNodes'] == key:
						circuit['ParentNode'] = generatedPNs[genPNcounter]
		genPNcounter += 1
		if genPNcounter >= queuesAvailable:
			genPNcounter = 0

	# Find the bandwidth minimums for each node by combining the minimums of devices lower in that node's hierarchy
	def findBandwidthMins(data, depth):
		tabs = ' ' * depth
		minDownload = 0
		minUpload = 0
		for elem in data:
			for circuit in subscriberCircuits:
				if elem == circuit['ParentNode']:
					minDownload += circuit['downloadMin']
					minUpload += circuit['uploadMin']
			if 'children' in data[elem]:
				minDL, minUL = findBandwidthMins(data[elem]['children'], depth+1)
				minDownload += minDL
				minUpload += minUL
			data[elem]['downloadBandwidthMbpsMin'] = minDownload
			data[elem]['uploadBandwidthMbpsMin'] = minUpload
		return minDownload, minUpload
	minDownload, minUpload = findBandwidthMins(network, 0)

	# Parse network structure and add devices from ShapedDevices.csv
	linuxTCcommands = []
	xdpCPUmapCommands = []
	parentNodes = []
	def traverseNetwork(data, depth, major, minor, queue, parentClassID, parentMaxDL, parentMaxUL):
		for node in data:
			circuitsForThisNetworkNode = []
			nodeClassID = hex(major) + ':' + hex(minor)
			data[node]['classid'] = nodeClassID
			data[node]['parentClassID'] = parentClassID
			# Cap this node's bandwidth at the lower of its own max and its parent node's max
			data[node]['downloadBandwidthMbps'] = min(data[node]['downloadBandwidthMbps'], parentMaxDL)
			data[node]['uploadBandwidthMbps'] = min(data[node]['uploadBandwidthMbps'], parentMaxUL)
			# findBandwidthMins() determines the optimal HTB rates (mins) and ceils (maxs).
			# For some reason that doesn't always yield the expected result, so it's better to play with ceil more than rate.
			# Here we override the rate as 95% of ceil.
			data[node]['downloadBandwidthMbpsMin'] = round(data[node]['downloadBandwidthMbps']*.95)
			data[node]['uploadBandwidthMbpsMin'] = round(data[node]['uploadBandwidthMbps']*.95)
			data[node]['classMajor'] = hex(major)
			data[node]['classMinor'] = hex(minor)
			data[node]['cpuNum'] = hex(queue-1)
			thisParentNode = {
				"parentNodeName": node,
				"classID": nodeClassID,
				"downloadMax": data[node]['downloadBandwidthMbps'],
				"uploadMax": data[node]['uploadBandwidthMbps'],
			}
			parentNodes.append(thisParentNode)
			minor += 1
			for circuit in subscriberCircuits:
				# If a device from ShapedDevices.csv lists this node as its Parent Node, attach it as a leaf of this node's HTB
				if node == circuit['ParentNode']:
					if circuit['downloadMax'] > data[node]['downloadBandwidthMbps']:
						warnings.warn("downloadMax of Circuit ID [" + circuit['circuitID'] + "] exceeds that of its parent node. Reducing it to the parent node's downloadMax.", stacklevel=2)
					if circuit['uploadMax'] > data[node]['uploadBandwidthMbps']:
						warnings.warn("uploadMax of Circuit ID [" + circuit['circuitID'] + "] exceeds that of its parent node. Reducing it to the parent node's uploadMax.", stacklevel=2)
					parentString = hex(major) + ':'
					flowIDstring = hex(major) + ':' + hex(minor)
					circuit['qdisc'] = flowIDstring
					# Create the circuit dictionary to be added to the network structure, eventually output as queuingStructure.json
					maxDownload = min(circuit['downloadMax'], data[node]['downloadBandwidthMbps'])
					maxUpload = min(circuit['uploadMax'], data[node]['uploadBandwidthMbps'])
					minDownload = min(circuit['downloadMin'], maxDownload)
					minUpload = min(circuit['uploadMin'], maxUpload)
					thisNewCircuitItemForNetwork = {
						'maxDownload': maxDownload,
						'maxUpload': maxUpload,
						'minDownload': minDownload,
						'minUpload': minUpload,
						"circuitID": circuit['circuitID'],
						"circuitName": circuit['circuitName'],
						"ParentNode": circuit['ParentNode'],
						"devices": circuit['devices'],
						"qdisc": flowIDstring,
						"classMajor": hex(major),
						"classMinor": hex(minor),
						"comment": circuit['comment']
					}
					thisNewCircuitItemForNetwork['devices'] = circuit['devices']
					circuitsForThisNetworkNode.append(thisNewCircuitItemForNetwork)
					minor += 1
			if len(circuitsForThisNetworkNode) > 0:
				data[node]['circuits'] = circuitsForThisNetworkNode
			# Recursively call this function for child nodes attached to this node
			if 'children' in data[node]:
				# We need to keep tabs on the minor counter, because class IDs must not repeat. Here, we bring the minor counter back from the recursive call
				minor = traverseNetwork(data[node]['children'], depth+1, major, minor+1, queue, nodeClassID, data[node]['downloadBandwidthMbps'], data[node]['uploadBandwidthMbps'])
			# If this is a top-level node, increment to the next queue / CPU core
			if depth == 0:
				if queue >= queuesAvailable:
					queue = 1
					major = queue
				else:
					queue += 1
					major += 1
		return minor
	# Here is the actual call to the recursive traverseNetwork() function. finalMinor is not used.
	finalMinor = traverseNetwork(network, 0, major=1, minor=3, queue=1, parentClassID="1:1", parentMaxDL=upstreamBandwidthCapacityDownloadMbps, parentMaxUL=upstreamBandwidthCapacityUploadMbps)

	linuxTCcommands = []
	xdpCPUmapCommands = []
	devicesShaped = []
	# Root HTB setup
	# Create an MQ qdisc for each CPU core / rx-tx queue (XDP method - requires IPv4)
	thisInterface = interfaceA
	logging.info("# MQ Setup for " + thisInterface)
	command = 'qdisc replace dev ' + thisInterface + ' root handle 7FFF: mq'
	linuxTCcommands.append(command)
	for queue in range(queuesAvailable):
		command = 'qdisc add dev ' + thisInterface + ' parent 7FFF:' + hex(queue+1) + ' handle ' + hex(queue+1) + ': htb default 2'
		linuxTCcommands.append(command)
		command = 'class add dev ' + thisInterface + ' parent ' + hex(queue+1) + ': classid ' + hex(queue+1) + ':1 htb rate '+ str(upstreamBandwidthCapacityDownloadMbps) + 'mbit ceil ' + str(upstreamBandwidthCapacityDownloadMbps) + 'mbit'
		linuxTCcommands.append(command)
		command = 'qdisc add dev ' + thisInterface + ' parent ' + hex(queue+1) + ':1 ' + fqOrCAKE
		linuxTCcommands.append(command)
		# Default class - traffic is passed through this limiter with lower priority if it enters the top HTB without a specific class.
		# Technically, that should not even happen. So don't expect much if any traffic in this default class.
		# Only 1/4 of defaultClassCapacity is guaranteed (to prevent hitting the ceiling of the upstream); for the most part it serves as an "up to" ceiling.
		command = 'class add dev ' + thisInterface + ' parent ' + hex(queue+1) + ':1 classid ' + hex(queue+1) + ':2 htb rate ' + str(round((upstreamBandwidthCapacityDownloadMbps-1)/4)) + 'mbit ceil ' + str(upstreamBandwidthCapacityDownloadMbps-1) + 'mbit prio 5'
		linuxTCcommands.append(command)
		command = 'qdisc add dev ' + thisInterface + ' parent ' + hex(queue+1) + ':2 ' + fqOrCAKE
		linuxTCcommands.append(command)

	thisInterface = interfaceB
	logging.info("# MQ Setup for " + thisInterface)
	command = 'qdisc replace dev ' + thisInterface + ' root handle 7FFF: mq'
	linuxTCcommands.append(command)
	for queue in range(queuesAvailable):
		command = 'qdisc add dev ' + thisInterface + ' parent 7FFF:' + hex(queue+1) + ' handle ' + hex(queue+1) + ': htb default 2'
		linuxTCcommands.append(command)
		command = 'class add dev ' + thisInterface + ' parent ' + hex(queue+1) + ': classid ' + hex(queue+1) + ':1 htb rate '+ str(upstreamBandwidthCapacityUploadMbps) + 'mbit ceil ' + str(upstreamBandwidthCapacityUploadMbps) + 'mbit'
		linuxTCcommands.append(command)
		command = 'qdisc add dev ' + thisInterface + ' parent ' + hex(queue+1) + ':1 ' + fqOrCAKE
		linuxTCcommands.append(command)
		# Default class - traffic is passed through this limiter with lower priority if it enters the top HTB without a specific class.
		# Technically, that should not even happen. So don't expect much if any traffic in this default class.
		# Only 1/4 of defaultClassCapacity is guaranteed (to prevent hitting the ceiling of the upstream); for the most part it serves as an "up to" ceiling.
		command = 'class add dev ' + thisInterface + ' parent ' + hex(queue+1) + ':1 classid ' + hex(queue+1) + ':2 htb rate ' + str(round((upstreamBandwidthCapacityUploadMbps-1)/4)) + 'mbit ceil ' + str(upstreamBandwidthCapacityUploadMbps-1) + 'mbit prio 5'
		linuxTCcommands.append(command)
		command = 'qdisc add dev ' + thisInterface + ' parent ' + hex(queue+1) + ':2 ' + fqOrCAKE
		linuxTCcommands.append(command)

	# Parse the network structure. For each tier, generate commands to create the corresponding HTB and leaf classes. Prepare the commands for execution later.
	# Define lists for hash filters
	def traverseNetwork(data):
		for node in data:
			command = 'class add dev ' + interfaceA + ' parent ' + data[node]['parentClassID'] + ' classid ' + data[node]['classMinor'] + ' htb rate '+ str(data[node]['downloadBandwidthMbpsMin']) + 'mbit ceil '+ str(data[node]['downloadBandwidthMbps']) + 'mbit prio 3' + " # Node: " + node
			linuxTCcommands.append(command)
			command = 'class add dev ' + interfaceB + ' parent ' + data[node]['parentClassID'] + ' classid ' + data[node]['classMinor'] + ' htb rate '+ str(data[node]['uploadBandwidthMbpsMin']) + 'mbit ceil '+ str(data[node]['uploadBandwidthMbps']) + 'mbit prio 3'
			linuxTCcommands.append(command)
			if 'circuits' in data[node]:
				for circuit in data[node]['circuits']:
					# Generate TC commands to be executed later
					comment = " # CircuitID: " + circuit['circuitID'] + " DeviceIDs: "
					for device in circuit['devices']:
						comment = comment + device['deviceID'] + ', '
					if 'devices' in circuit:
						if 'comment' in circuit['devices'][0]:
							comment = comment + '| Comment: ' + circuit['devices'][0]['comment']
					command = 'class add dev ' + interfaceA + ' parent ' + data[node]['classid'] + ' classid ' + circuit['classMinor'] + ' htb rate '+ str(circuit['minDownload']) + 'mbit ceil '+ str(circuit['maxDownload']) + 'mbit prio 3' + comment
					linuxTCcommands.append(command)
					command = 'qdisc add dev ' + interfaceA + ' parent ' + circuit['classMajor'] + ':' + circuit['classMinor'] + ' ' + fqOrCAKE
					linuxTCcommands.append(command)
					command = 'class add dev ' + interfaceB + ' parent ' + data[node]['classid'] + ' classid ' + circuit['classMinor'] + ' htb rate '+ str(circuit['minUpload']) + 'mbit ceil '+ str(circuit['maxUpload']) + 'mbit prio 3'
					linuxTCcommands.append(command)
					command = 'qdisc add dev ' + interfaceB + ' parent ' + circuit['classMajor'] + ':' + circuit['classMinor'] + ' ' + fqOrCAKE
					linuxTCcommands.append(command)
					for device in circuit['devices']:
						if device['ipv4s']:
							for ipv4 in device['ipv4s']:
								xdpCPUmapCommands.append('./xdp-cpumap-tc/src/xdp_iphash_to_cpu_cmdline --add --ip ' + str(ipv4) + ' --cpu ' + data[node]['cpuNum'] + ' --classid ' + circuit['qdisc'])
						if device['ipv6s']:
							for ipv6 in device['ipv6s']:
								xdpCPUmapCommands.append('./xdp-cpumap-tc/src/xdp_iphash_to_cpu_cmdline --add --ip ' + str(ipv6) + ' --cpu ' + data[node]['cpuNum'] + ' --classid ' + circuit['qdisc'])
						if device['deviceName'] not in devicesShaped:
							devicesShaped.append(device['deviceName'])
			# Recursively call this function for child nodes attached to this node
			if 'children' in data[node]:
				traverseNetwork(data[node]['children'])
	# Here is the actual call to the recursive traverseNetwork() function. Its return value is not used.
	traverseNetwork(network)

	# Save queuingStructure
	with open('queuingStructure.json', 'w') as infile:
		json.dump(network, infile, indent=4)

	# Record the start time of the actual filter reload
	reloadStartTime = datetime.now()

	# Clear prior settings
	clearPriorSettings(interfaceA, interfaceB)

	# Set up XDP and disable XPS regardless of whether this is the first run (necessary to handle cases where systemctl stop was used)
	xdpStartTime = datetime.now()
	if enableActualShellCommands:
		# We use os.system for this command because it sometimes glitches out with Popen in shell()
		result = os.system('./xdp-cpumap-tc/src/xdp_iphash_to_cpu_cmdline --clear')
	# Set up XDP-CPUMAP-TC
	logging.info("# XDP Setup")
	shell('./xdp-cpumap-tc/bin/xps_setup.sh -d ' + interfaceA + ' --default --disable')
	shell('./xdp-cpumap-tc/bin/xps_setup.sh -d ' + interfaceB + ' --default --disable')
	shell('./xdp-cpumap-tc/src/xdp_iphash_to_cpu --dev ' + interfaceA + ' --lan')
	shell('./xdp-cpumap-tc/src/xdp_iphash_to_cpu --dev ' + interfaceB + ' --wan')
	shell('./xdp-cpumap-tc/src/tc_classify --dev-egress ' + interfaceA)
	shell('./xdp-cpumap-tc/src/tc_classify --dev-egress ' + interfaceB)
	xdpEndTime = datetime.now()

	# Execute the actual Linux TC commands
	tcStartTime = datetime.now()
	print("Executing Linux TC class/qdisc commands")
	with open('linux_tc.txt', 'w') as f:
		for command in linuxTCcommands:
			logging.info(command)
			f.write(f"{command}\n")
	if logging.DEBUG <= logging.root.level:
		# Do not use --force in debug mode, so we can see any errors
		shell("/sbin/tc -b linux_tc.txt")
	else:
		shell("/sbin/tc -f -b linux_tc.txt")
	tcEndTime = datetime.now()
	print("Executed " + str(len(linuxTCcommands)) + " Linux TC class/qdisc commands")

	# Execute the actual XDP-CPUMAP-TC filter commands
	xdpFilterStartTime = datetime.now()
	print("Executing XDP-CPUMAP-TC IP filter commands")
	if enableActualShellCommands:
		for command in xdpCPUmapCommands:
			logging.info(command)
			commands = command.split(' ')
			proc = subprocess.Popen(commands, stdout=subprocess.DEVNULL)
	else:
		for command in xdpCPUmapCommands:
			logging.info(command)
	print("Executed " + str(len(xdpCPUmapCommands)) + " XDP-CPUMAP-TC IP filter commands")
	xdpFilterEndTime = datetime.now()

	# Record the end time of all reload commands
	reloadEndTime = datetime.now()

	# Recap - warn the operator if any devices were skipped
	devicesSkipped = []
	for circuit in subscriberCircuits:
		for device in circuit['devices']:
			if device['deviceName'] not in devicesShaped:
				devicesSkipped.append((device['deviceName'], device['deviceID']))
	if len(devicesSkipped) > 0:
		warnings.warn('Some devices were not shaped. Please check to ensure they have a valid ParentNode listed in ShapedDevices.csv:', stacklevel=2)
		print("Devices not shaped:")
		for entry in devicesSkipped:
			name, idNum = entry
			print('DeviceID: ' + idNum + '\t DeviceName: ' + name)

	# Save for stats
	with open('statsByCircuit.json', 'w') as f:
		f.write(json.dumps(subscriberCircuits, indent=4))
	with open('statsByParentNode.json', 'w') as f:
		f.write(json.dumps(parentNodes, indent=4))

	# Record the time this run completed
	# filename = os.path.join(_here, 'lastRun.txt')
	with open("lastRun.txt", 'w') as file:
		file.write(datetime.now().strftime("%d-%b-%Y (%H:%M:%S.%f)"))

	# Report reload time
	reloadTimeSeconds = (reloadEndTime - reloadStartTime).total_seconds()
	tcTimeSeconds = (tcEndTime - tcStartTime).total_seconds()
	xdpSetupTimeSeconds = (xdpEndTime - xdpStartTime).total_seconds()
	xdpFilterTimeSeconds = (xdpFilterEndTime - xdpFilterStartTime).total_seconds()
	print("Queue and IP filter reload completed in " + "{:g}".format(round(reloadTimeSeconds, 1)) + " seconds")
	print("\tTC commands: \t" + "{:g}".format(round(tcTimeSeconds, 1)) + " seconds")
	print("\tXDP setup: \t " + "{:g}".format(round(xdpSetupTimeSeconds, 1)) + " seconds")
	print("\tXDP filters: \t " + "{:g}".format(round(xdpFilterTimeSeconds, 1)) + " seconds")

	# Done
	print("refreshShapers completed on " + datetime.now().strftime("%d/%m/%Y %H:%M:%S"))


if __name__ == '__main__':
	parser = argparse.ArgumentParser()
	parser.add_argument(
		'-d', '--debug',
		help="Print lots of debugging statements",
		action="store_const", dest="loglevel", const=logging.DEBUG,
		default=logging.WARNING,
	)
	parser.add_argument(
		'-v', '--verbose',
		help="Be verbose",
		action="store_const", dest="loglevel", const=logging.INFO,
	)
	parser.add_argument(
		'--validate',
		help="Just validate network.json and ShapedDevices.csv",
		action=argparse.BooleanOptionalAction,
	)
	parser.add_argument(
		'--clearrules',
		help="Clear IP filters, qdiscs, and XDP setup, if any",
		action=argparse.BooleanOptionalAction,
	)
	args = parser.parse_args()
	logging.basicConfig(level=args.loglevel)

	if args.validate:
		status = validateNetworkAndDevices()
	elif args.clearrules:
		tearDown(interfaceA, interfaceB)
	else:
		# Refresh and/or set up queues
		refreshShapers()
@ -1,53 +0,0 @@
# v1.2 (IPv4 + IPv6) (Stable)

<img alt="LibreQoS" src="https://raw.githubusercontent.com/rchac/LibreQoS/main/docs/v1.1-alpha-preview.jpg">

## Installation Guide
- 📄 [LibreQoS v1.2 Installation & Usage Guide Physical Server and Ubuntu 22.04](https://github.com/rchac/LibreQoS/wiki/LibreQoS-v1.2-Installation-&-Usage-Guide-Physical-Server-and-Ubuntu-22.04)

## Features

- Support for multiple devices per subscriber circuit. This allows multiple IPv4 addresses to be filtered into the same queue, without necessarily being in the same subnet.

- Support for multiple IPv4s or IPv6s per device.

- Reload time reduced by 80%. The actual period of packet loss during a queue reload is under 25 ms.

- Command line arguments ```--debug```, ```--verbose```, ```--clearrules``` and ```--validate```.

- lqTools.py - ```change-circuit-bandwidth```, ```change-circuit-bandwidth-using-ip```, ```show-active-plan-from-ip```, ```tc-statistics-from-ip```

- Validation of ShapedDevices.csv and network.json during load. If either fails validation, LibreQoS falls back to the last known good configuration (lastGoodConfig.csv and lastGoodConfig.json).

## ShapedDevices.csv
Shaper.csv is now ShapedDevices.csv

New minimums apply to the upload and download parameters:

* Download minimum must be 1Mbps or more
* Upload minimum must be 1Mbps or more
* Download maximum must be 2Mbps or more
* Upload maximum must be 2Mbps or more
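
These floors can be sketched as a simple check (a minimal illustration only - the project's actual validation lives in `validateNetworkAndDevices()`, and the function below is hypothetical):

```python
# Hypothetical sketch of the bandwidth floors listed above.
# Rates are in Mbps, matching the ShapedDevices.csv columns.
def validateRates(downloadMin, uploadMin, downloadMax, uploadMax):
    errors = []
    if downloadMin < 1:
        errors.append("Download minimum must be 1Mbps or more")
    if uploadMin < 1:
        errors.append("Upload minimum must be 1Mbps or more")
    if downloadMax < 2:
        errors.append("Download maximum must be 2Mbps or more")
    if uploadMax < 2:
        errors.append("Upload maximum must be 2Mbps or more")
    return errors

print(validateRates(25, 5, 155, 20))  # a valid row -> []
print(validateRates(0.5, 5, 1, 20))   # violates both download floors
```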

ShapedDevices.csv now has a field for Circuit ID. If two or more devices share the same Circuit ID, they will all be placed into the same queue. If a Circuit ID is not provided for a device, it gets its own circuit. Circuit Name is optional, but recommended; the client's service location address is a good choice for the Circuit Name.
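
The grouping rule can be sketched like this (hypothetical row dictionaries for illustration; the real parsing happens inside LibreQoS.py):

```python
# Sketch: devices sharing a Circuit ID land in one queue, while a
# device with a blank Circuit ID gets a circuit of its own.
rows = [
    {"circuitID": "2794", "deviceID": "5"},
    {"circuitID": "2794", "deviceID": "6"},
    {"circuitID": "", "deviceID": "7"},
]

circuits = {}
anonCounter = 0
for row in rows:
    key = row["circuitID"]
    if key == "":
        # No Circuit ID: synthesize a unique key so the device stands alone
        key = "anon-" + str(anonCounter)
        anonCounter += 1
    circuits.setdefault(key, []).append(row["deviceID"])

print(circuits)  # {'2794': ['5', '6'], 'anon-0': ['7']}
```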

## IPv6 Support
Full, XDP-accelerated IPv6 support, made possible by [@thebracket](https://github.com/thebracket)

## UISP Integration
This integration fully maps out your entire UISP network.
Add your UISP info under "Optional UISP integration" in ispConfig.py

To use:
1. Delete network.json and, if you have it, integrationUISPbandwidths.csv
2. Run ```python3 integrationUISP.py```

It will create a network.json with approximated bandwidths for APs based on UISP's reported capacities, and a fixed bandwidth of 1000/1000 for sites.
You can modify integrationUISPbandwidths.csv to correct the bandwidth rates. The integration loads integrationUISPbandwidths.csv on each run and uses those listed bandwidths to create network.json. It always overwrites ShapedDevices.csv on each run by pulling devices from UISP.
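
The override step can be sketched as follows (column names and rates here are assumptions for illustration, not the integration's actual code):

```python
# Sketch: rates listed in an overrides CSV take precedence over the
# capacities reported by UISP. Column names are hypothetical.
import csv
import io

uispRates = {"AP_A": (180, 30)}  # (download, upload) as reported by UISP
overridesCsv = "ParentNode,Download,Upload\nAP_A,200,50\n"

for row in csv.DictReader(io.StringIO(overridesCsv)):
    # A listed override replaces the approximated rate for that node
    uispRates[row["ParentNode"]] = (int(row["Download"]), int(row["Upload"]))

print(uispRates["AP_A"])  # (200, 50)
```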

### UISP Integration - IPv6 Support
This will match the IPv4 MAC addresses in your MikroTik's DHCP server leases to DHCPv6 bindings, and include those IPv6 addresses with their respective devices.

To enable:
* Edit mikrotikDHCPRouterList.csv to list your MikroTik DHCPv6 servers
* Set findIPv6usingMikrotik in ispConfig.py to True
@ -1,14 +0,0 @@
#LibreQoS - autogenerated file - START
Circuit ID,Circuit Name,Device ID,Device Name,Parent Node,MAC,IPv4,IPv6,Download Min Mbps,Upload Min Mbps,Download Max Mbps,Upload Max Mbps,Comment
,"968 Circle St., Gurnee, IL 60031",1,Device 1,AP_A,,"100.64.0.1, 100.64.0.14",,25,5,155,20,
,"31 Marconi Street, Lake In The Hills, IL 60156",2,Device 2,AP_A,,100.64.0.2,,25,5,105,18,
,"255 NW. Newport Ave., Jamestown, NY 14701",3,Device 3,AP_9,,100.64.0.3,,25,5,105,18,
,"8493 Campfire Street, Peabody, MA 01960",4,Device 4,AP_9,,100.64.0.4,,25,5,105,18,
2794,"6 Littleton Drive, Ringgold, GA 30736",5,Device 5,AP_11,,100.64.0.5,,25,5,105,18,
2794,"6 Littleton Drive, Ringgold, GA 30736",6,Device 6,AP_11,,100.64.0.6,,25,5,105,18,
,"93 Oklahoma Ave., Parsippany, NJ 07054",7,Device 7,AP_1,,100.64.0.7,,25,5,155,20,
,"74 Bishop Ave., Bakersfield, CA 93306",8,Device 8,AP_1,,100.64.0.8,,25,5,105,18,
,"9598 Peg Shop Drive, Lutherville Timonium, MD 21093",9,Device 9,AP_7,,100.64.0.9,,25,5,105,18,
,"115 Gartner Rd., Gettysburg, PA 17325",10,Device 10,AP_7,,100.64.0.10,,25,5,105,18,
,"525 Birchpond St., Romulus, MI 48174",11,Device 11,Site_1,,100.64.0.11,,25,5,105,18,
#LibreQoS - autogenerated file - EOF
@ -1,418 +0,0 @@
import subprocess
|
||||
import json
|
||||
import subprocess
|
||||
from datetime import datetime
|
||||
from pathlib import Path
|
||||
|
||||
from influxdb_client import InfluxDBClient, Point
|
||||
from influxdb_client.client.write_api import SYNCHRONOUS
|
||||
|
||||
from ispConfig import interfaceA, interfaceB, influxDBBucket, influxDBOrg, influxDBtoken, influxDBurl, fqOrCAKE
|
||||
|
||||
|
||||
def getInterfaceStats(interface):
|
||||
command = 'tc -j -s qdisc show dev ' + interface
|
||||
jsonAr = json.loads(subprocess.run(command.split(' '), stdout=subprocess.PIPE).stdout.decode('utf-8'))
|
||||
jsonDict = {}
|
||||
for element in filter(lambda e: 'parent' in e, jsonAr):
|
||||
flowID = ':'.join(map(lambda p: f'0x{p}', element['parent'].split(':')[0:2]))
|
||||
jsonDict[flowID] = element
|
||||
del jsonAr
|
||||
return jsonDict
|
||||
|
||||
|
||||
def chunk_list(l, n):
|
||||
for i in range(0, len(l), n):
|
||||
yield l[i:i + n]
|
||||
|
||||
def getsubscriberCircuitstats(subscriberCircuits, tinsStats):
|
||||
interfaces = [interfaceA, interfaceB]
|
||||
ifaceStats = list(map(getInterfaceStats, interfaces))
|
||||
|
||||
for circuit in subscriberCircuits:
|
||||
if 'stats' not in circuit:
|
||||
circuit['stats'] = {}
|
||||
if 'currentQuery' in circuit['stats']:
|
||||
circuit['stats']['priorQuery'] = circuit['stats']['currentQuery']
|
||||
circuit['stats']['currentQuery'] = {}
|
||||
circuit['stats']['sinceLastQuery'] = {}
|
||||
else:
|
||||
#circuit['stats']['priorQuery'] = {}
|
||||
#circuit['stats']['priorQuery']['time'] = datetime.now().isoformat()
|
||||
circuit['stats']['currentQuery'] = {}
|
||||
circuit['stats']['sinceLastQuery'] = {}
|
||||
|
||||
#for entry in tinsStats:
|
||||
if 'currentQuery' in tinsStats:
|
||||
tinsStats['priorQuery'] = tinsStats['currentQuery']
|
||||
tinsStats['currentQuery'] = {}
|
||||
tinsStats['sinceLastQuery'] = {}
|
||||
else:
|
||||
tinsStats['currentQuery'] = {}
|
||||
tinsStats['sinceLastQuery'] = {}
|
||||
|
||||
tinsStats['currentQuery'] = { 'Bulk': {'Download': {'sent_packets': 0.0, 'drops': 0.0}, 'Upload': {'sent_packets': 0.0, 'drops': 0.0}},
|
||||
'BestEffort': {'Download': {'sent_packets': 0.0, 'drops': 0.0}, 'Upload': {'sent_packets': 0.0, 'drops': 0.0}},
|
||||
'Video': {'Download': {'sent_packets': 0.0, 'drops': 0.0}, 'Upload': {'sent_packets': 0.0, 'drops': 0.0}},
|
||||
'Voice': {'Download': {'sent_packets': 0.0, 'drops': 0.0}, 'Upload': {'sent_packets': 0.0, 'drops': 0.0}},
|
||||
}
|
||||
tinsStats['sinceLastQuery'] = { 'Bulk': {'Download': {'sent_packets': 0.0, 'drops': 0.0}, 'Upload': {'sent_packets': 0.0, 'drops': 0.0}},
|
||||
'BestEffort': {'Download': {'sent_packets': 0.0, 'drops': 0.0}, 'Upload': {'sent_packets': 0.0, 'drops': 0.0}},
|
||||
'Video': {'Download': {'sent_packets': 0.0, 'drops': 0.0}, 'Upload': {'sent_packets': 0.0, 'drops': 0.0}},
|
||||
'Voice': {'Download': {'sent_packets': 0.0, 'drops': 0.0}, 'Upload': {'sent_packets': 0.0, 'drops': 0.0}},
|
||||
}
|
||||
|
||||
for circuit in subscriberCircuits:
|
||||
for (interface, stats, dirSuffix) in zip(interfaces, ifaceStats, ['Download', 'Upload']):
|
||||
|
||||
element = stats[circuit['qdisc']] if circuit['qdisc'] in stats else False
|
||||
|
||||
if element:
|
||||
bytesSent = float(element['bytes'])
|
||||
drops = float(element['drops'])
|
||||
packets = float(element['packets'])
|
||||
if (element['drops'] > 0) and (element['packets'] > 0):
|
||||
overloadFactor = float(round(element['drops']/element['packets'],3))
|
||||
else:
|
||||
overloadFactor = 0.0
|
||||
|
||||
if 'cake diffserv4' in fqOrCAKE:
|
||||
tinCounter = 1
|
||||
for tin in element['tins']:
|
||||
sent_packets = float(tin['sent_packets'])
|
||||
ack_drops = float(tin['ack_drops'])
|
||||
ecn_mark = float(tin['ecn_mark'])
|
||||
tinDrops = float(tin['drops'])
|
||||
trueDrops = ecn_mark + tinDrops - ack_drops
|
||||
if tinCounter == 1:
|
||||
tinsStats['currentQuery']['Bulk'][dirSuffix]['sent_packets'] += sent_packets
|
||||
tinsStats['currentQuery']['Bulk'][dirSuffix]['drops'] += trueDrops
|
||||
elif tinCounter == 2:
|
||||
tinsStats['currentQuery']['BestEffort'][dirSuffix]['sent_packets'] += sent_packets
|
||||
tinsStats['currentQuery']['BestEffort'][dirSuffix]['drops'] += trueDrops
|
||||
elif tinCounter == 3:
|
||||
tinsStats['currentQuery']['Video'][dirSuffix]['sent_packets'] += sent_packets
|
||||
tinsStats['currentQuery']['Video'][dirSuffix]['drops'] += trueDrops
|
||||
elif tinCounter == 4:
|
||||
tinsStats['currentQuery']['Voice'][dirSuffix]['sent_packets'] += sent_packets
|
||||
tinsStats['currentQuery']['Voice'][dirSuffix]['drops'] += trueDrops
|
||||
tinCounter += 1
|
||||
|
||||
circuit['stats']['currentQuery']['bytesSent' + dirSuffix] = bytesSent
|
||||
circuit['stats']['currentQuery']['packetDrops' + dirSuffix] = drops
|
||||
circuit['stats']['currentQuery']['packetsSent' + dirSuffix] = packets
|
||||
circuit['stats']['currentQuery']['overloadFactor' + dirSuffix] = overloadFactor
|
||||
|
||||
		#if 'cake diffserv4' in fqOrCAKE:
		# circuit['stats']['currentQuery']['tins'] = theseTins
		circuit['stats']['currentQuery']['time'] = datetime.now().isoformat()

	allPacketsDownload = 0.0
	allPacketsUpload = 0.0
	for circuit in subscriberCircuits:
		circuit['stats']['sinceLastQuery']['bitsDownload'] = circuit['stats']['sinceLastQuery']['bitsUpload'] = 0.0
		circuit['stats']['sinceLastQuery']['bytesSentDownload'] = circuit['stats']['sinceLastQuery']['bytesSentUpload'] = 0.0
		circuit['stats']['sinceLastQuery']['packetDropsDownload'] = circuit['stats']['sinceLastQuery']['packetDropsUpload'] = 0.0
		circuit['stats']['sinceLastQuery']['packetsSentDownload'] = circuit['stats']['sinceLastQuery']['packetsSentUpload'] = 0.0

		try:
			circuit['stats']['sinceLastQuery']['bytesSentDownload'] = circuit['stats']['currentQuery']['bytesSentDownload'] - circuit['stats']['priorQuery']['bytesSentDownload']
			circuit['stats']['sinceLastQuery']['bytesSentUpload'] = circuit['stats']['currentQuery']['bytesSentUpload'] - circuit['stats']['priorQuery']['bytesSentUpload']
		except:
			circuit['stats']['sinceLastQuery']['bytesSentDownload'] = 0.0
			circuit['stats']['sinceLastQuery']['bytesSentUpload'] = 0.0
		try:
			circuit['stats']['sinceLastQuery']['packetDropsDownload'] = circuit['stats']['currentQuery']['packetDropsDownload'] - circuit['stats']['priorQuery']['packetDropsDownload']
			circuit['stats']['sinceLastQuery']['packetDropsUpload'] = circuit['stats']['currentQuery']['packetDropsUpload'] - circuit['stats']['priorQuery']['packetDropsUpload']
		except:
			circuit['stats']['sinceLastQuery']['packetDropsDownload'] = 0.0
			circuit['stats']['sinceLastQuery']['packetDropsUpload'] = 0.0
		try:
			circuit['stats']['sinceLastQuery']['packetsSentDownload'] = circuit['stats']['currentQuery']['packetsSentDownload'] - circuit['stats']['priorQuery']['packetsSentDownload']
			circuit['stats']['sinceLastQuery']['packetsSentUpload'] = circuit['stats']['currentQuery']['packetsSentUpload'] - circuit['stats']['priorQuery']['packetsSentUpload']
		except:
			circuit['stats']['sinceLastQuery']['packetsSentDownload'] = 0.0
			circuit['stats']['sinceLastQuery']['packetsSentUpload'] = 0.0

		allPacketsDownload += circuit['stats']['sinceLastQuery']['packetsSentDownload']
		allPacketsUpload += circuit['stats']['sinceLastQuery']['packetsSentUpload']

		if 'priorQuery' in circuit['stats']:
			if 'time' in circuit['stats']['priorQuery']:
				currentQueryTime = datetime.fromisoformat(circuit['stats']['currentQuery']['time'])
				priorQueryTime = datetime.fromisoformat(circuit['stats']['priorQuery']['time'])
				deltaSeconds = (currentQueryTime - priorQueryTime).total_seconds()
				circuit['stats']['sinceLastQuery']['bitsDownload'] = round(
					((circuit['stats']['sinceLastQuery']['bytesSentDownload'] * 8) / deltaSeconds)) if deltaSeconds > 0 else 0
				circuit['stats']['sinceLastQuery']['bitsUpload'] = round(
					((circuit['stats']['sinceLastQuery']['bytesSentUpload'] * 8) / deltaSeconds)) if deltaSeconds > 0 else 0
		else:
			circuit['stats']['sinceLastQuery']['bitsDownload'] = (circuit['stats']['sinceLastQuery']['bytesSentDownload'] * 8)
			circuit['stats']['sinceLastQuery']['bitsUpload'] = (circuit['stats']['sinceLastQuery']['bytesSentUpload'] * 8)
	tinsStats['sinceLastQuery']['Bulk']['Download']['dropPercentage'] = tinsStats['sinceLastQuery']['Bulk']['Upload']['dropPercentage'] = 0.0
	tinsStats['sinceLastQuery']['BestEffort']['Download']['dropPercentage'] = tinsStats['sinceLastQuery']['BestEffort']['Upload']['dropPercentage'] = 0.0
	tinsStats['sinceLastQuery']['Video']['Download']['dropPercentage'] = tinsStats['sinceLastQuery']['Video']['Upload']['dropPercentage'] = 0.0
	tinsStats['sinceLastQuery']['Voice']['Download']['dropPercentage'] = tinsStats['sinceLastQuery']['Voice']['Upload']['dropPercentage'] = 0.0

	tinsStats['sinceLastQuery']['Bulk']['Download']['percentage'] = tinsStats['sinceLastQuery']['Bulk']['Upload']['percentage'] = 0.0
	tinsStats['sinceLastQuery']['BestEffort']['Download']['percentage'] = tinsStats['sinceLastQuery']['BestEffort']['Upload']['percentage'] = 0.0
	tinsStats['sinceLastQuery']['Video']['Download']['percentage'] = tinsStats['sinceLastQuery']['Video']['Upload']['percentage'] = 0.0
	tinsStats['sinceLastQuery']['Voice']['Download']['percentage'] = tinsStats['sinceLastQuery']['Voice']['Upload']['percentage'] = 0.0

	try:
		tinsStats['sinceLastQuery']['Bulk']['Download']['sent_packets'] = tinsStats['currentQuery']['Bulk']['Download']['sent_packets'] - tinsStats['priorQuery']['Bulk']['Download']['sent_packets']
		tinsStats['sinceLastQuery']['BestEffort']['Download']['sent_packets'] = tinsStats['currentQuery']['BestEffort']['Download']['sent_packets'] - tinsStats['priorQuery']['BestEffort']['Download']['sent_packets']
		tinsStats['sinceLastQuery']['Video']['Download']['sent_packets'] = tinsStats['currentQuery']['Video']['Download']['sent_packets'] - tinsStats['priorQuery']['Video']['Download']['sent_packets']
		tinsStats['sinceLastQuery']['Voice']['Download']['sent_packets'] = tinsStats['currentQuery']['Voice']['Download']['sent_packets'] - tinsStats['priorQuery']['Voice']['Download']['sent_packets']
		tinsStats['sinceLastQuery']['Bulk']['Upload']['sent_packets'] = tinsStats['currentQuery']['Bulk']['Upload']['sent_packets'] - tinsStats['priorQuery']['Bulk']['Upload']['sent_packets']
		tinsStats['sinceLastQuery']['BestEffort']['Upload']['sent_packets'] = tinsStats['currentQuery']['BestEffort']['Upload']['sent_packets'] - tinsStats['priorQuery']['BestEffort']['Upload']['sent_packets']
		tinsStats['sinceLastQuery']['Video']['Upload']['sent_packets'] = tinsStats['currentQuery']['Video']['Upload']['sent_packets'] - tinsStats['priorQuery']['Video']['Upload']['sent_packets']
		tinsStats['sinceLastQuery']['Voice']['Upload']['sent_packets'] = tinsStats['currentQuery']['Voice']['Upload']['sent_packets'] - tinsStats['priorQuery']['Voice']['Upload']['sent_packets']
	except:
		tinsStats['sinceLastQuery']['Bulk']['Download']['sent_packets'] = tinsStats['sinceLastQuery']['BestEffort']['Download']['sent_packets'] = 0.0
		tinsStats['sinceLastQuery']['Video']['Download']['sent_packets'] = tinsStats['sinceLastQuery']['Voice']['Download']['sent_packets'] = 0.0
		tinsStats['sinceLastQuery']['Bulk']['Upload']['sent_packets'] = tinsStats['sinceLastQuery']['BestEffort']['Upload']['sent_packets'] = 0.0
		tinsStats['sinceLastQuery']['Video']['Upload']['sent_packets'] = tinsStats['sinceLastQuery']['Voice']['Upload']['sent_packets'] = 0.0

	try:
		tinsStats['sinceLastQuery']['Bulk']['Download']['drops'] = tinsStats['currentQuery']['Bulk']['Download']['drops'] - tinsStats['priorQuery']['Bulk']['Download']['drops']
		tinsStats['sinceLastQuery']['BestEffort']['Download']['drops'] = tinsStats['currentQuery']['BestEffort']['Download']['drops'] - tinsStats['priorQuery']['BestEffort']['Download']['drops']
		tinsStats['sinceLastQuery']['Video']['Download']['drops'] = tinsStats['currentQuery']['Video']['Download']['drops'] - tinsStats['priorQuery']['Video']['Download']['drops']
		tinsStats['sinceLastQuery']['Voice']['Download']['drops'] = tinsStats['currentQuery']['Voice']['Download']['drops'] - tinsStats['priorQuery']['Voice']['Download']['drops']
		tinsStats['sinceLastQuery']['Bulk']['Upload']['drops'] = tinsStats['currentQuery']['Bulk']['Upload']['drops'] - tinsStats['priorQuery']['Bulk']['Upload']['drops']
		tinsStats['sinceLastQuery']['BestEffort']['Upload']['drops'] = tinsStats['currentQuery']['BestEffort']['Upload']['drops'] - tinsStats['priorQuery']['BestEffort']['Upload']['drops']
		tinsStats['sinceLastQuery']['Video']['Upload']['drops'] = tinsStats['currentQuery']['Video']['Upload']['drops'] - tinsStats['priorQuery']['Video']['Upload']['drops']
		tinsStats['sinceLastQuery']['Voice']['Upload']['drops'] = tinsStats['currentQuery']['Voice']['Upload']['drops'] - tinsStats['priorQuery']['Voice']['Upload']['drops']
	except:
		tinsStats['sinceLastQuery']['Bulk']['Download']['drops'] = tinsStats['sinceLastQuery']['BestEffort']['Download']['drops'] = 0.0
		tinsStats['sinceLastQuery']['Video']['Download']['drops'] = tinsStats['sinceLastQuery']['Voice']['Download']['drops'] = 0.0
		tinsStats['sinceLastQuery']['Bulk']['Upload']['drops'] = tinsStats['sinceLastQuery']['BestEffort']['Upload']['drops'] = 0.0
		tinsStats['sinceLastQuery']['Video']['Upload']['drops'] = tinsStats['sinceLastQuery']['Voice']['Upload']['drops'] = 0.0
	try:
		dlPerc = tinsStats['sinceLastQuery']['Bulk']['Download']['drops'] / tinsStats['sinceLastQuery']['Bulk']['Download']['sent_packets']
		ulPerc = tinsStats['sinceLastQuery']['Bulk']['Upload']['drops'] / tinsStats['sinceLastQuery']['Bulk']['Upload']['sent_packets']
		tinsStats['sinceLastQuery']['Bulk']['Download']['dropPercentage'] = max(round(dlPerc * 100.0, 3), 0.0)
		tinsStats['sinceLastQuery']['Bulk']['Upload']['dropPercentage'] = max(round(ulPerc * 100.0, 3), 0.0)

		dlPerc = tinsStats['sinceLastQuery']['BestEffort']['Download']['drops'] / tinsStats['sinceLastQuery']['BestEffort']['Download']['sent_packets']
		ulPerc = tinsStats['sinceLastQuery']['BestEffort']['Upload']['drops'] / tinsStats['sinceLastQuery']['BestEffort']['Upload']['sent_packets']
		tinsStats['sinceLastQuery']['BestEffort']['Download']['dropPercentage'] = max(round(dlPerc * 100.0, 3), 0.0)
		tinsStats['sinceLastQuery']['BestEffort']['Upload']['dropPercentage'] = max(round(ulPerc * 100.0, 3), 0.0)

		dlPerc = tinsStats['sinceLastQuery']['Video']['Download']['drops'] / tinsStats['sinceLastQuery']['Video']['Download']['sent_packets']
		ulPerc = tinsStats['sinceLastQuery']['Video']['Upload']['drops'] / tinsStats['sinceLastQuery']['Video']['Upload']['sent_packets']
		tinsStats['sinceLastQuery']['Video']['Download']['dropPercentage'] = max(round(dlPerc * 100.0, 3), 0.0)
		tinsStats['sinceLastQuery']['Video']['Upload']['dropPercentage'] = max(round(ulPerc * 100.0, 3), 0.0)

		dlPerc = tinsStats['sinceLastQuery']['Voice']['Download']['drops'] / tinsStats['sinceLastQuery']['Voice']['Download']['sent_packets']
		ulPerc = tinsStats['sinceLastQuery']['Voice']['Upload']['drops'] / tinsStats['sinceLastQuery']['Voice']['Upload']['sent_packets']
		tinsStats['sinceLastQuery']['Voice']['Download']['dropPercentage'] = max(round(dlPerc * 100.0, 3), 0.0)
		tinsStats['sinceLastQuery']['Voice']['Upload']['dropPercentage'] = max(round(ulPerc * 100.0, 3), 0.0)
	except:
		tinsStats['sinceLastQuery']['Bulk']['Download']['dropPercentage'] = 0.0
		tinsStats['sinceLastQuery']['Bulk']['Upload']['dropPercentage'] = 0.0
		tinsStats['sinceLastQuery']['BestEffort']['Download']['dropPercentage'] = 0.0
		tinsStats['sinceLastQuery']['BestEffort']['Upload']['dropPercentage'] = 0.0
		tinsStats['sinceLastQuery']['Video']['Download']['dropPercentage'] = 0.0
		tinsStats['sinceLastQuery']['Video']['Upload']['dropPercentage'] = 0.0
		tinsStats['sinceLastQuery']['Voice']['Download']['dropPercentage'] = 0.0
		tinsStats['sinceLastQuery']['Voice']['Upload']['dropPercentage'] = 0.0
	try:
		# Download tins are measured against allPacketsDownload, upload tins against allPacketsUpload
		tinsStats['sinceLastQuery']['Bulk']['Download']['percentage'] = min(round((tinsStats['sinceLastQuery']['Bulk']['Download']['sent_packets']/allPacketsDownload)*100.0, 3), 100.0)
		tinsStats['sinceLastQuery']['Bulk']['Upload']['percentage'] = min(round((tinsStats['sinceLastQuery']['Bulk']['Upload']['sent_packets']/allPacketsUpload)*100.0, 3), 100.0)
		tinsStats['sinceLastQuery']['BestEffort']['Download']['percentage'] = min(round((tinsStats['sinceLastQuery']['BestEffort']['Download']['sent_packets']/allPacketsDownload)*100.0, 3), 100.0)
		tinsStats['sinceLastQuery']['BestEffort']['Upload']['percentage'] = min(round((tinsStats['sinceLastQuery']['BestEffort']['Upload']['sent_packets']/allPacketsUpload)*100.0, 3), 100.0)
		tinsStats['sinceLastQuery']['Video']['Download']['percentage'] = min(round((tinsStats['sinceLastQuery']['Video']['Download']['sent_packets']/allPacketsDownload)*100.0, 3), 100.0)
		tinsStats['sinceLastQuery']['Video']['Upload']['percentage'] = min(round((tinsStats['sinceLastQuery']['Video']['Upload']['sent_packets']/allPacketsUpload)*100.0, 3), 100.0)
		tinsStats['sinceLastQuery']['Voice']['Download']['percentage'] = min(round((tinsStats['sinceLastQuery']['Voice']['Download']['sent_packets']/allPacketsDownload)*100.0, 3), 100.0)
		tinsStats['sinceLastQuery']['Voice']['Upload']['percentage'] = min(round((tinsStats['sinceLastQuery']['Voice']['Upload']['sent_packets']/allPacketsUpload)*100.0, 3), 100.0)
	except:
		tinsStats['sinceLastQuery']['Bulk']['Download']['percentage'] = tinsStats['sinceLastQuery']['Bulk']['Upload']['percentage'] = 0.0
		tinsStats['sinceLastQuery']['BestEffort']['Download']['percentage'] = tinsStats['sinceLastQuery']['BestEffort']['Upload']['percentage'] = 0.0
		tinsStats['sinceLastQuery']['Video']['Download']['percentage'] = tinsStats['sinceLastQuery']['Video']['Upload']['percentage'] = 0.0
		tinsStats['sinceLastQuery']['Voice']['Download']['percentage'] = tinsStats['sinceLastQuery']['Voice']['Upload']['percentage'] = 0.0

	return subscriberCircuits, tinsStats
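The per-tin drop percentage above is plain arithmetic: drops divided by packets sent, expressed as a percentage and clamped to be non-negative. A standalone check with hypothetical counters (the counter values are illustrative, not from a real CAKE query):

```python
# Hypothetical CAKE tin counters accumulated since the last query
sent_packets = 4000.0
drops = 12.0

# Same formula applied to each tin above: percent of sent packets dropped,
# rounded to 3 places and clamped at zero
dropPercentage = max(round((drops / sent_packets) * 100.0, 3), 0.0)
print(dropPercentage)  # → 0.3
```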
def getParentNodeStats(parentNodes, subscriberCircuits):
	for parentNode in parentNodes:
		thisNodeDropsDownload = 0
		thisNodeDropsUpload = 0
		thisNodeDropsTotal = 0
		thisNodeBitsDownload = 0
		thisNodeBitsUpload = 0
		packetsSentDownloadAggregate = 0.0
		packetsSentUploadAggregate = 0.0
		packetsSentTotalAggregate = 0.0
		circuitsMatched = 0
		thisParentNodeStats = {'sinceLastQuery': {}}
		for circuit in subscriberCircuits:
			if circuit['ParentNode'] == parentNode['parentNodeName']:
				thisNodeBitsDownload += circuit['stats']['sinceLastQuery']['bitsDownload']
				thisNodeBitsUpload += circuit['stats']['sinceLastQuery']['bitsUpload']
				#thisNodeDropsDownload += circuit['packetDropsDownloadSinceLastQuery']
				#thisNodeDropsUpload += circuit['packetDropsUploadSinceLastQuery']
				thisNodeDropsTotal += (circuit['stats']['sinceLastQuery']['packetDropsDownload'] + circuit['stats']['sinceLastQuery']['packetDropsUpload'])
				packetsSentDownloadAggregate += circuit['stats']['sinceLastQuery']['packetsSentDownload']
				packetsSentUploadAggregate += circuit['stats']['sinceLastQuery']['packetsSentUpload']
				packetsSentTotalAggregate += (circuit['stats']['sinceLastQuery']['packetsSentDownload'] + circuit['stats']['sinceLastQuery']['packetsSentUpload'])
				circuitsMatched += 1
		if (packetsSentDownloadAggregate > 0) and (packetsSentUploadAggregate > 0):
			#overloadFactorDownloadSinceLastQuery = float(round((thisNodeDropsDownload/packetsSentDownloadAggregate)*100.0, 3))
			#overloadFactorUploadSinceLastQuery = float(round((thisNodeDropsUpload/packetsSentUploadAggregate)*100.0, 3))
			overloadFactorTotalSinceLastQuery = float(round((thisNodeDropsTotal/packetsSentTotalAggregate)*100.0, 1))
		else:
			#overloadFactorDownloadSinceLastQuery = 0.0
			#overloadFactorUploadSinceLastQuery = 0.0
			overloadFactorTotalSinceLastQuery = 0.0

		thisParentNodeStats['sinceLastQuery']['bitsDownload'] = thisNodeBitsDownload
		thisParentNodeStats['sinceLastQuery']['bitsUpload'] = thisNodeBitsUpload
		thisParentNodeStats['sinceLastQuery']['packetDropsTotal'] = thisNodeDropsTotal
		thisParentNodeStats['sinceLastQuery']['overloadFactorTotal'] = overloadFactorTotalSinceLastQuery
		parentNode['stats'] = thisParentNodeStats

	return parentNodes
def getParentNodeDict(data, depth, parentNodeNameDict):
	if parentNodeNameDict is None:
		parentNodeNameDict = {}

	for elem in data:
		if 'children' in data[elem]:
			for child in data[elem]['children']:
				parentNodeNameDict[child] = elem
			tempDict = getParentNodeDict(data[elem]['children'], depth + 1, parentNodeNameDict)
			parentNodeNameDict = dict(parentNodeNameDict, **tempDict)
	return parentNodeNameDict
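getParentNodeDict walks the nested network.json structure and maps every child node name to the name of its parent. A minimal self-contained check with a hypothetical two-level hierarchy (node names `Site_A` and `AP_1` are invented for illustration; the function body mirrors the one above):

```python
def getParentNodeDict(data, depth, parentNodeNameDict):
    # Map each child node name to its parent's name, recursing through 'children'
    if parentNodeNameDict is None:
        parentNodeNameDict = {}
    for elem in data:
        if 'children' in data[elem]:
            for child in data[elem]['children']:
                parentNodeNameDict[child] = elem
            tempDict = getParentNodeDict(data[elem]['children'], depth + 1, parentNodeNameDict)
            parentNodeNameDict = dict(parentNodeNameDict, **tempDict)
    return parentNodeNameDict

# Hypothetical two-level network, shaped like network.json
network = {
    'Site_A': {
        'downloadBandwidthMbps': 1000,
        'uploadBandwidthMbps': 1000,
        'children': {
            'AP_1': {'downloadBandwidthMbps': 500, 'uploadBandwidthMbps': 500},
        },
    },
}
print(getParentNodeDict(network, 0, None))  # → {'AP_1': 'Site_A'}
```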
def parentNodeNameDictPull():
	# Load network hierarchy
	with open('network.json', 'r') as j:
		network = json.loads(j.read())
	parentNodeNameDict = getParentNodeDict(network, 0, None)
	return parentNodeNameDict
def refreshBandwidthGraphs():
	startTime = datetime.now()
	with open('statsByParentNode.json', 'r') as j:
		parentNodes = json.loads(j.read())

	with open('statsByCircuit.json', 'r') as j:
		subscriberCircuits = json.loads(j.read())

	fileLoc = Path("tinsStats.json")
	if fileLoc.is_file():
		with open(fileLoc, 'r') as j:
			tinsStats = json.loads(j.read())
	else:
		tinsStats = {}

	fileLoc = Path("longTermStats.json")
	if fileLoc.is_file():
		with open(fileLoc, 'r') as j:
			longTermStats = json.loads(j.read())
		droppedPacketsAllTime = longTermStats['droppedPacketsTotal']
	else:
		longTermStats = {}
		longTermStats['droppedPacketsTotal'] = 0.0
		droppedPacketsAllTime = 0.0

	parentNodeNameDict = parentNodeNameDictPull()

	print("Retrieving circuit statistics")
	subscriberCircuits, tinsStats = getsubscriberCircuitstats(subscriberCircuits, tinsStats)
	print("Computing parent node statistics")
	parentNodes = getParentNodeStats(parentNodes, subscriberCircuits)
	print("Writing data to InfluxDB")
	client = InfluxDBClient(
		url=influxDBurl,
		token=influxDBtoken,
		org=influxDBOrg
	)
	write_api = client.write_api(write_options=SYNCHRONOUS)

	chunkedsubscriberCircuits = list(chunk_list(subscriberCircuits, 200))

	queriesToSendCount = 0
	for chunk in chunkedsubscriberCircuits:
		queriesToSend = []
		for circuit in chunk:
			bitsDownload = float(circuit['stats']['sinceLastQuery']['bitsDownload'])
			bitsUpload = float(circuit['stats']['sinceLastQuery']['bitsUpload'])
			if (bitsDownload > 0) and (bitsUpload > 0):
				percentUtilizationDownload = round((bitsDownload / round(circuit['downloadMax'] * 1000000))*100.0, 1)
				percentUtilizationUpload = round((bitsUpload / round(circuit['uploadMax'] * 1000000))*100.0, 1)
				p = Point('Bandwidth').tag("Circuit", circuit['circuitName']).tag("ParentNode", circuit['ParentNode']).tag("Type", "Circuit").field("Download", bitsDownload).field("Upload", bitsUpload)
				queriesToSend.append(p)
				p = Point('Utilization').tag("Circuit", circuit['circuitName']).tag("ParentNode", circuit['ParentNode']).tag("Type", "Circuit").field("Download", percentUtilizationDownload).field("Upload", percentUtilizationUpload)
				queriesToSend.append(p)

		write_api.write(bucket=influxDBBucket, record=queriesToSend)
		# print("Added " + str(len(queriesToSend)) + " points to InfluxDB.")
		queriesToSendCount += len(queriesToSend)

	queriesToSend = []
	for parentNode in parentNodes:
		bitsDownload = float(parentNode['stats']['sinceLastQuery']['bitsDownload'])
		bitsUpload = float(parentNode['stats']['sinceLastQuery']['bitsUpload'])
		dropsTotal = float(parentNode['stats']['sinceLastQuery']['packetDropsTotal'])
		overloadFactor = float(parentNode['stats']['sinceLastQuery']['overloadFactorTotal'])
		droppedPacketsAllTime += dropsTotal
		if (bitsDownload > 0) and (bitsUpload > 0):
			percentUtilizationDownload = round((bitsDownload / round(parentNode['downloadMax'] * 1000000))*100.0, 1)
			percentUtilizationUpload = round((bitsUpload / round(parentNode['uploadMax'] * 1000000))*100.0, 1)
			p = Point('Bandwidth').tag("Device", parentNode['parentNodeName']).tag("ParentNode", parentNode['parentNodeName']).tag("Type", "Parent Node").field("Download", bitsDownload).field("Upload", bitsUpload)
			queriesToSend.append(p)
			p = Point('Utilization').tag("Device", parentNode['parentNodeName']).tag("ParentNode", parentNode['parentNodeName']).tag("Type", "Parent Node").field("Download", percentUtilizationDownload).field("Upload", percentUtilizationUpload)
			queriesToSend.append(p)
			p = Point('Overload').tag("Device", parentNode['parentNodeName']).tag("ParentNode", parentNode['parentNodeName']).tag("Type", "Parent Node").field("Overload", overloadFactor)
			queriesToSend.append(p)

	write_api.write(bucket=influxDBBucket, record=queriesToSend)
	# print("Added " + str(len(queriesToSend)) + " points to InfluxDB.")
	queriesToSendCount += len(queriesToSend)

	if 'cake diffserv4' in fqOrCAKE:
		queriesToSend = []
		listOfTins = ['Bulk', 'BestEffort', 'Video', 'Voice']
		for tin in listOfTins:
			p = Point('Tin Drop Percentage').tag("Type", "Tin").tag("Tin", tin).field("Download", tinsStats['sinceLastQuery'][tin]['Download']['dropPercentage']).field("Upload", tinsStats['sinceLastQuery'][tin]['Upload']['dropPercentage'])
			queriesToSend.append(p)
			p = Point('Tins Assigned').tag("Type", "Tin").tag("Tin", tin).field("Download", tinsStats['sinceLastQuery'][tin]['Download']['percentage']).field("Upload", tinsStats['sinceLastQuery'][tin]['Upload']['percentage'])
			queriesToSend.append(p)

		write_api.write(bucket=influxDBBucket, record=queriesToSend)
		# print("Added " + str(len(queriesToSend)) + " points to InfluxDB.")
		queriesToSendCount += len(queriesToSend)

	print("Added " + str(queriesToSendCount) + " points to InfluxDB.")

	client.close()

	with open('statsByParentNode.json', 'w') as f:
		f.write(json.dumps(parentNodes, indent=4))

	with open('statsByCircuit.json', 'w') as f:
		f.write(json.dumps(subscriberCircuits, indent=4))

	longTermStats['droppedPacketsTotal'] = droppedPacketsAllTime
	with open('longTermStats.json', 'w') as f:
		f.write(json.dumps(longTermStats, indent=4))

	with open('tinsStats.json', 'w') as f:
		f.write(json.dumps(tinsStats, indent=4))

	endTime = datetime.now()
	durationSeconds = round((endTime - startTime).total_seconds(), 2)
	print("Graphs updated within " + str(durationSeconds) + " seconds.")

if __name__ == '__main__':
	refreshBandwidthGraphs()
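refreshBandwidthGraphs batches circuits through `chunk_list`, which is defined earlier in the original file and not shown in this diff. A minimal sketch consistent with how it is called (`list(chunk_list(subscriberCircuits, 200))` producing fixed-size batches):

```python
def chunk_list(l, n):
    # Yield successive n-sized slices of list l; the final chunk may be shorter
    for i in range(0, len(l), n):
        yield l[i:i + n]

print(list(chunk_list([1, 2, 3, 4, 5], 2)))  # → [[1, 2], [3, 4], [5]]
```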
@ -1,137 +0,0 @@
import os
import subprocess
from subprocess import PIPE
import io
import decimal
import json
from ispConfig import fqOrCAKE, interfaceA, interfaceB, influxDBBucket, influxDBOrg, influxDBtoken, influxDBurl, ppingLocation
from datetime import date, datetime, timedelta
from influxdb_client import InfluxDBClient, Point, Dialect
from influxdb_client.client.write_api import SYNCHRONOUS
import dateutil.parser
def getLatencies(subscriberCircuits, secondsToRun):
	interfaces = [interfaceA, interfaceB]
	tcpLatency = 0
	listOfAllDiffs = []
	maxLatencyRecordable = 200
	matchableIPs = []
	for circuit in subscriberCircuits:
		for device in circuit['devices']:
			matchableIPs.append(device['ipv4'])

	rttDict = {}
	jitterDict = {}
	#for interface in interfaces:
	command = "./pping -i " + interfaceA + " -s " + str(secondsToRun) + " -m"
	commands = command.split(' ')
	wd = ppingLocation
	tcShowResults = subprocess.run(command, shell=True, cwd=wd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL).stdout.decode('utf-8').splitlines()
	for line in tcShowResults:
		if len(line) > 59:
			rtt1 = float(line[18:27]) * 1000
			rtt2 = float(line[27:36]) * 1000
			toAndFrom = line[38:].split(' ')[3]
			fromIP = toAndFrom.split('+')[0].split(':')[0]
			toIP = toAndFrom.split('+')[1].split(':')[0]
			matchedIP = ''
			if fromIP in matchableIPs:
				matchedIP = fromIP
			elif toIP in matchableIPs:
				matchedIP = toIP
			jitter = rtt1 - rtt2
			# Cap ceiling
			if rtt1 >= maxLatencyRecordable:
				rtt1 = 200
			# Keep the lowest observed RTT
			if matchedIP in rttDict:
				if rtt1 < rttDict[matchedIP]:
					rttDict[matchedIP] = rtt1
					jitterDict[matchedIP] = jitter
			else:
				rttDict[matchedIP] = rtt1
				jitterDict[matchedIP] = jitter
	for circuit in subscriberCircuits:
		for device in circuit['devices']:
			diffsForThisDevice = []
			if device['ipv4'] in rttDict:
				device['tcpLatency'] = rttDict[device['ipv4']]
			else:
				device['tcpLatency'] = None
			if device['ipv4'] in jitterDict:
				device['tcpJitter'] = jitterDict[device['ipv4']]
			else:
				device['tcpJitter'] = None
	return subscriberCircuits
def getParentNodeStats(parentNodes, subscriberCircuits):
	for parentNode in parentNodes:
		acceptableLatencies = []
		for circuit in subscriberCircuits:
			for device in circuit['devices']:
				if device['ParentNode'] == parentNode['parentNodeName']:
					if device['tcpLatency'] != None:
						acceptableLatencies.append(device['tcpLatency'])

		if len(acceptableLatencies) > 0:
			parentNode['tcpLatency'] = sum(acceptableLatencies) / len(acceptableLatencies)
		else:
			parentNode['tcpLatency'] = None
	return parentNodes
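The parent-node latency above is simply the mean of the lowest observed TCP RTTs of the devices under that node. With hypothetical per-device values (the numbers are illustrative):

```python
# Hypothetical lowest observed TCP RTTs (ms) for devices under one parent node
acceptableLatencies = [12.5, 18.0, 9.5]

# Same aggregation as getParentNodeStats: arithmetic mean across devices
tcpLatency = sum(acceptableLatencies) / len(acceptableLatencies)
print(round(tcpLatency, 2))  # → 13.33
```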
def refreshLatencyGraphs(secondsToRun):
	startTime = datetime.now()
	with open('statsByParentNode.json', 'r') as j:
		parentNodes = json.loads(j.read())

	with open('statsByCircuit.json', 'r') as j:
		subscriberCircuits = json.loads(j.read())

	print("Retrieving circuit statistics")
	subscriberCircuits = getLatencies(subscriberCircuits, secondsToRun)

	print("Computing parent node statistics")
	parentNodes = getParentNodeStats(parentNodes, subscriberCircuits)

	print("Writing data to InfluxDB")
	bucket = influxDBBucket
	org = influxDBOrg
	token = influxDBtoken
	url = influxDBurl
	client = InfluxDBClient(
		url=url,
		token=token,
		org=org
	)
	write_api = client.write_api(write_options=SYNCHRONOUS)

	queriesToSend = []

	for circuit in subscriberCircuits:
		for device in circuit['devices']:
			if device['tcpLatency'] != None:
				p = Point('Latency').tag("Device", device['deviceName']).tag("ParentNode", device['ParentNode']).tag("Type", "Device").field("TCP Latency", device['tcpLatency'])
				queriesToSend.append(p)

	for parentNode in parentNodes:
		if parentNode['tcpLatency'] != None:
			p = Point('Latency').tag("Device", parentNode['parentNodeName']).tag("ParentNode", parentNode['parentNodeName']).tag("Type", "Parent Node").field("TCP Latency", parentNode['tcpLatency'])
			queriesToSend.append(p)

	write_api.write(bucket=bucket, record=queriesToSend)
	print("Added " + str(len(queriesToSend)) + " points to InfluxDB.")
	client.close()

	#with open('statsByParentNode.json', 'w') as infile:
	#	json.dump(parentNodes, infile)

	#with open('statsByDevice.json', 'w') as infile:
	#	json.dump(devices, infile)

	endTime = datetime.now()
	durationSeconds = round((endTime - startTime).total_seconds())
	print("Graphs updated within " + str(durationSeconds) + " seconds.")

if __name__ == '__main__':
	refreshLatencyGraphs(10)
File diff suppressed because it is too large
@ -1,265 +0,0 @@
import requests
import os
import csv
import ipaddress
from ispConfig import UISPbaseURL, uispAuthToken, shapeRouterOrStation, allowedSubnets, ignoreSubnets, excludeSites, findIPv6usingMikrotik, bandwidthOverheadFactor, exceptionCPEs
import shutil
import json
if findIPv6usingMikrotik == True:
	from mikrotikFindIPv6 import pullMikrotikIPv6

knownRouterModels = ['ACB-AC', 'ACB-ISP']
knownAPmodels = ['LTU-Rocket', 'RP-5AC', 'RP-5AC-Gen2', 'LAP-GPS', 'Wave-AP']
def isInAllowedSubnets(inputIP):
	isAllowed = False
	if '/' in inputIP:
		inputIP = inputIP.split('/')[0]
	for subnet in allowedSubnets:
		if (ipaddress.ip_address(inputIP) in ipaddress.ip_network(subnet)):
			isAllowed = True
	return isAllowed
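isInAllowedSubnets strips any CIDR suffix from the address and tests membership against each configured subnet. A self-contained check, with a hypothetical `allowedSubnets` list standing in for the value imported from ispConfig (the function body mirrors the one above):

```python
import ipaddress

# Hypothetical allowedSubnets; the real list comes from ispConfig
allowedSubnets = ['100.64.0.0/10']

def isInAllowedSubnets(inputIP):
    # Strip a CIDR suffix if present, then test against each allowed subnet
    isAllowed = False
    if '/' in inputIP:
        inputIP = inputIP.split('/')[0]
    for subnet in allowedSubnets:
        if ipaddress.ip_address(inputIP) in ipaddress.ip_network(subnet):
            isAllowed = True
    return isAllowed

print(isInAllowedSubnets('100.64.1.5/32'))  # → True
print(isInAllowedSubnets('8.8.8.8'))        # → False
```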
def createTree(sites, accessPoints, bandwidthDL, bandwidthUL, siteParentDict, siteIDtoName, sitesWithParents, currentNode):
	currentNodeName = list(currentNode.items())[0][0]
	childrenList = []
	for site in sites:
		try:
			thisOnesParent = siteIDtoName[site['identification']['parent']['id']]
			if thisOnesParent == currentNodeName:
				childrenList.append(site['id'])
		except:
			thisOnesParent = None
	aps = []
	for ap in accessPoints:
		if ap['device']['site'] is None:
			print("Unable to read site information for: " + ap['device']['name'])
		else:
			thisOnesParent = ap['device']['site']['name']
			if thisOnesParent == currentNodeName:
				if ap['device']['model'] in knownAPmodels:
					aps.append(ap['device']['name'])
	apDict = {}
	for ap in aps:
		maxDL = min(bandwidthDL[ap], bandwidthDL[currentNodeName])
		maxUL = min(bandwidthUL[ap], bandwidthUL[currentNodeName])
		apStruct = {
			ap:
			{
				"downloadBandwidthMbps": maxDL,
				"uploadBandwidthMbps": maxUL,
			}
		}
		apDictNew = apDict | apStruct
		apDict = apDictNew
	if bool(apDict):
		currentNode[currentNodeName]['children'] = apDict
	counter = 0
	tempChildren = {}
	for child in childrenList:
		name = siteIDtoName[child]
		maxDL = min(bandwidthDL[name], bandwidthDL[currentNodeName])
		maxUL = min(bandwidthUL[name], bandwidthUL[currentNodeName])
		childStruct = {
			name:
			{
				"downloadBandwidthMbps": maxDL,
				"uploadBandwidthMbps": maxUL,
			}
		}
		childStruct = createTree(sites, accessPoints, bandwidthDL, bandwidthUL, siteParentDict, siteIDtoName, sitesWithParents, childStruct)
		tempChildren = tempChildren | childStruct
		counter += 1
	if tempChildren != {}:
		if 'children' in currentNode[currentNodeName]:
			currentNode[currentNodeName]['children'] = currentNode[currentNodeName]['children'] | tempChildren
		else:
			currentNode[currentNodeName]['children'] = tempChildren
	return currentNode
def createNetworkJSON():
	if os.path.isfile("network.json"):
		print("network.json already exists. Leaving in place.")
	else:
		print("Generating network.json")
		bandwidthDL = {}
		bandwidthUL = {}
		url = UISPbaseURL + "/nms/api/v2.1/sites?type=site"
		headers = {'accept': 'application/json', 'x-auth-token': uispAuthToken}
		r = requests.get(url, headers=headers)
		sites = r.json()
		url = UISPbaseURL + "/nms/api/v2.1/devices/aps/profiles"
		headers = {'accept': 'application/json', 'x-auth-token': uispAuthToken}
		r = requests.get(url, headers=headers)
		apProfiles = r.json()
		listOfTopLevelParentNodes = []
		if os.path.isfile("integrationUISPbandwidths.csv"):
			with open('integrationUISPbandwidths.csv') as csv_file:
				csv_reader = csv.reader(csv_file, delimiter=',')
				next(csv_reader)
				for row in csv_reader:
					name, download, upload = row
					download = int(download)
					upload = int(upload)
					listOfTopLevelParentNodes.append(name)
					bandwidthDL[name] = download
					bandwidthUL[name] = upload
		for ap in apProfiles:
			name = ap['device']['name']
			model = ap['device']['model']
			apID = ap['device']['id']
			if model in knownAPmodels:
				url = UISPbaseURL + "/nms/api/v2.1/devices/airmaxes/" + apID + '?withStations=false'
				headers = {'accept': 'application/json', 'x-auth-token': uispAuthToken}
				r = requests.get(url, headers=headers)
				thisAPairmax = r.json()
				downloadCap = int(round(thisAPairmax['overview']['downlinkCapacity'] / 1000000))
				uploadCap = int(round(thisAPairmax['overview']['uplinkCapacity'] / 1000000))
				# If operator already included bandwidth definitions for this ParentNode, do not overwrite what they set
				if name not in listOfTopLevelParentNodes:
					print("Found " + name)
					listOfTopLevelParentNodes.append(name)
					bandwidthDL[name] = downloadCap
					bandwidthUL[name] = uploadCap
		for site in sites:
			name = site['identification']['name']
			if name not in excludeSites:
				# If operator already included bandwidth definitions for this ParentNode, do not overwrite what they set
				if name not in listOfTopLevelParentNodes:
					print("Found " + name)
					listOfTopLevelParentNodes.append(name)
					bandwidthDL[name] = 1000
					bandwidthUL[name] = 1000
		with open('integrationUISPbandwidths.csv', 'w') as csvfile:
			wr = csv.writer(csvfile, quoting=csv.QUOTE_ALL)
			wr.writerow(['ParentNode', 'Download Mbps', 'Upload Mbps'])
			for device in listOfTopLevelParentNodes:
				entry = (device, bandwidthDL[device], bandwidthUL[device])
				wr.writerow(entry)
		url = UISPbaseURL + "/nms/api/v2.1/devices?role=ap"
		headers = {'accept': 'application/json', 'x-auth-token': uispAuthToken}
		r = requests.get(url, headers=headers)
		accessPoints = r.json()
		siteIDtoName = {}
		siteParentDict = {}
		sitesWithParents = []
		topLevelSites = []
		for site in sites:
			siteIDtoName[site['id']] = site['identification']['name']
			try:
				siteParentDict[site['id']] = site['identification']['parent']['id']
				sitesWithParents.append(site['id'])
			except:
				siteParentDict[site['id']] = None
				if site['identification']['name'] not in excludeSites:
					topLevelSites.append(site['id'])
		tLname = siteIDtoName[topLevelSites.pop()]
		topLevelNode = {
			tLname:
			{
				"downloadBandwidthMbps": bandwidthDL[tLname],
				"uploadBandwidthMbps": bandwidthUL[tLname],
			}
		}
		tree = createTree(sites, apProfiles, bandwidthDL, bandwidthUL, siteParentDict, siteIDtoName, sitesWithParents, topLevelNode)
		with open('network.json', 'w') as f:
			json.dump(tree, f, indent=4)
def createShaper():
|
||||
print("Creating ShapedDevices.csv")
|
||||
devicesToImport = []
|
||||
url = UISPbaseURL + "/nms/api/v2.1/sites?type=site"
|
||||
headers = {'accept':'application/json', 'x-auth-token': uispAuthToken}
|
||||
r = requests.get(url, headers=headers)
|
||||
sites = r.json()
|
||||
siteIDtoName = {}
|
||||
for site in sites:
|
||||
siteIDtoName[site['id']] = site['identification']['name']
|
||||
url = UISPbaseURL + "/nms/api/v2.1/sites?type=client&ucrm=true&ucrmDetails=true"
|
||||
headers = {'accept':'application/json', 'x-auth-token': uispAuthToken}
|
||||
r = requests.get(url, headers=headers)
|
||||
clientSites = r.json()
|
||||
url = UISPbaseURL + "/nms/api/v2.1/devices"
|
||||
headers = {'accept':'application/json', 'x-auth-token': uispAuthToken}
|
||||
r = requests.get(url, headers=headers)
|
||||
allDevices = r.json()
|
||||
ipv4ToIPv6 = {}
|
||||
if findIPv6usingMikrotik:
|
||||
ipv4ToIPv6 = pullMikrotikIPv6()
|
||||
for uispClientSite in clientSites:
|
||||
#if (uispClientSite['identification']['status'] == 'active') and (uispClientSite['identification']['suspended'] == False):
|
||||
if (uispClientSite['identification']['suspended'] == False):
|
||||
foundCPEforThisClientSite = False
|
||||
if (uispClientSite['qos']['downloadSpeed']) and (uispClientSite['qos']['uploadSpeed']):
|
||||
downloadSpeedMbps = int(round(uispClientSite['qos']['downloadSpeed']/1000000))
|
||||
uploadSpeedMbps = int(round(uispClientSite['qos']['uploadSpeed']/1000000))
|
||||
address = uispClientSite['description']['address']
|
||||
uispClientSiteID = uispClientSite['id']
|
||||
|
||||
UCRMclientID = uispClientSite['ucrm']['client']['id']
|
||||
siteName = uispClientSite['identification']['name']
|
||||
AP = 'none'
|
||||
thisSiteDevices = []
|
||||
#Look for station devices, use those to find AP name
|
||||
for device in allDevices:
|
||||
if device['identification']['site'] != None:
|
||||
if device['identification']['site']['id'] == uispClientSite['id']:
|
||||
deviceName = device['identification']['name']
|
||||
deviceRole = device['identification']['role']
|
||||
deviceModel = device['identification']['model']
|
||||
deviceModelName = device['identification']['modelName']
|
||||
if (deviceRole == 'station'):
|
||||
if device['attributes']['apDevice']:
|
||||
AP = device['attributes']['apDevice']['name']
|
||||
#Look for router devices, use those as shaped CPE
|
||||
for device in allDevices:
|
||||
if device['identification']['site'] != None:
|
||||
if device['identification']['site']['id'] == uispClientSite['id']:
|
||||
deviceModel = device['identification']['model']
|
||||
deviceName = device['identification']['name']
|
||||
deviceRole = device['identification']['role']
|
||||
if device['identification']['mac']:
|
||||
deviceMAC = device['identification']['mac'].upper()
|
||||
else:
|
||||
deviceMAC = ''
|
||||
if (deviceRole == 'router') or (deviceModel in knownRouterModels):
|
||||
ipv4 = device['ipAddress']
|
||||
if '/' in ipv4:
|
||||
ipv4 = ipv4.split("/")[0]
|
||||
ipv6 = ''
|
||||
if ipv4 in ipv4ToIPv6.keys():
|
||||
ipv6 = ipv4ToIPv6[ipv4]
|
||||
if isInAllowedSubnets(ipv4):
|
||||
deviceModel = device['identification']['model']
|
||||
deviceModelName = device['identification']['modelName']
|
||||
maxSpeedDown = round(bandwidthOverheadFactor*downloadSpeedMbps)
|
||||
maxSpeedUp = round(bandwidthOverheadFactor*uploadSpeedMbps)
|
||||
minSpeedDown = min(round(maxSpeedDown*.98),maxSpeedDown)
|
||||
minSpeedUp = min(round(maxSpeedUp*.98),maxSpeedUp)
|
||||
#Customers directly connected to Sites
|
||||
if deviceName in exceptionCPEs.keys():
|
||||
AP = exceptionCPEs[deviceName]
|
||||
if AP == 'none':
|
||||
try:
|
||||
AP = siteIDtoName[uispClientSite['identification']['parent']['id']]
|
||||
except:
|
||||
AP = 'none'
|
||||
devicesToImport.append((uispClientSiteID, address, '', deviceName, AP, deviceMAC, ipv4, ipv6, str(minSpeedDown), str(minSpeedUp), str(maxSpeedDown),str(maxSpeedUp),''))
|
||||
foundCPEforThisClientSite = True
|
||||
else:
|
||||
print("Failed to import devices from " + uispClientSite['description']['address'] + ". Missing QoS.")
|
||||
if foundCPEforThisClientSite != True:
|
||||
print("Failed to import devices for " + uispClientSite['description']['address'])
|
||||
|
||||
with open('ShapedDevices.csv', 'w') as csvfile:
|
||||
wr = csv.writer(csvfile, quoting=csv.QUOTE_ALL)
|
||||
wr.writerow(['Circuit ID', 'Circuit Name', 'Device ID', 'Device Name', 'Parent Node', 'MAC', 'IPv4', 'IPv6', 'Download Min', 'Upload Min', 'Download Max', 'Upload Max', 'Comment'])
|
||||
for device in devicesToImport:
|
||||
wr.writerow(device)
|
||||
|
||||
def importFromUISP():
|
||||
createNetworkJSON()
|
||||
createShaper()
|
||||
|
||||
if __name__ == '__main__':
|
||||
importFromUISP()
|
@ -1,74 +0,0 @@
# 'fq_codel' or 'cake diffserv4'
# 'cake diffserv4' is recommended

# fqOrCAKE = 'fq_codel'
fqOrCAKE = 'cake diffserv4'

# How many Mbps are available to the edge of this network
upstreamBandwidthCapacityDownloadMbps = 1000
upstreamBandwidthCapacityUploadMbps = 1000

# Devices in ShapedDevices.csv without a defined ParentNode will be placed under a generated
# parent node, evenly spread out across CPU cores. Here, define the bandwidth limit for each
# of those generated parent nodes.
generatedPNDownloadMbps = 1000
generatedPNUploadMbps = 1000

# Interface connected to core router
interfaceA = 'eth1'

# Interface connected to edge router
interfaceB = 'eth2'

# Allow shell commands. When False, commands are printed to the console only, without being executed.
# MUST BE ENABLED FOR THE PROGRAM TO FUNCTION
enableActualShellCommands = True

# Add 'sudo' before execution of any shell commands. May be required depending on distribution and environment.
runShellCommandsAsSudo = False

# Allows overriding the number of queues / CPU cores used. When set to 0, the maximum possible queues / CPU cores are utilized. Please leave as 0.
queuesAvailableOverride = 0

# Bandwidth Graphing
bandwidthGraphingEnabled = True
influxDBurl = "http://localhost:8086"
influxDBBucket = "libreqos"
influxDBOrg = "Your ISP Name Here"
influxDBtoken = ""

# Latency Graphing
latencyGraphingEnabled = False
ppingLocation = "pping"

# NMS/CRM Integration
# If a device shows a WAN IP within these subnets, assume it is behind NAT / un-shapable, and ignore it
ignoreSubnets = ['192.168.0.0/16']
allowedSubnets = ['100.64.0.0/10']
# Optional UISP integration
automaticImportUISP = False
# Everything before /nms/ on your UISP instance
UISPbaseURL = 'https://examplesite.com'
# UISP Auth Token
uispAuthToken = ''
# UISP | Whether to shape the router at the customer premises, or instead shape the station radio. When the station
# radio is in router mode, use 'station'. Otherwise, use 'router'.
shapeRouterOrStation = 'router'
# List any sites that should not be included, with each site name surrounded by '' and separated by commas
excludeSites = []
# If you use IPv6, this can be used to find associated IPv6 prefixes for your clients' IPv4 addresses, and match them to those devices
findIPv6usingMikrotik = False
# If you want to provide a safe cushion for speed test results to prevent customer complaints, you can set this to 1.15 (15% above plan rate).
# If not, you can leave it as 1.0
bandwidthOverheadFactor = 1.0
# For edge cases, set the respective ParentNode for these CPEs
exceptionCPEs = {}
#exceptionCPEs = {
#	'CPE-SomeLocation1': 'AP-SomeLocation1',
#	'CPE-SomeLocation2': 'AP-SomeLocation2',
#}

# API Auth
apiUsername = "testUser"
apiPassword = "changeme8343486806"
apiHostIP = "127.0.0.1"
apiHostPost = 5000
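As an aside on the config above: `ignoreSubnets` and `allowedSubnets` are plain strings, and a typo in a CIDR block only surfaces later when LibreQoS.py parses them. A hypothetical standalone sanity check (the `validateSubnets` helper below is not part of LibreQoS) can catch that early:

```python
import ipaddress

# Values copied from the config above; adjust to match your ispConfig.py.
ignoreSubnets = ['192.168.0.0/16']
allowedSubnets = ['100.64.0.0/10']

def validateSubnets(subnets):
    # ip_network() raises ValueError on any malformed CIDR entry.
    return [ipaddress.ip_network(s) for s in subnets]

print(validateSubnets(ignoreSubnets + allowedSubnets))
```

Running this before a deploy simply fails fast with a ValueError naming the bad entry, rather than mid-import.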
@ -1,213 +0,0 @@
#!/usr/bin/python3

import csv
import io
import ipaddress
import json
import logging
import os
import os.path
import subprocess
import warnings
import argparse
from ispConfig import interfaceA, interfaceB, enableActualShellCommands

def shell(command):
    if enableActualShellCommands:
        logging.info(command)
        commands = command.split(' ')
        proc = subprocess.Popen(commands, stdout=subprocess.PIPE)
        for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"):  # or another encoding
            print(line)
    else:
        print(command)

def safeShell(command):
    safelyRan = True
    if enableActualShellCommands:
        commands = command.split(' ')
        proc = subprocess.Popen(commands, stdout=subprocess.PIPE)
        for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"):  # or another encoding
            #logging.info(line)
            print(line)
            if ("RTNETLINK answers" in line) or ("We have an error talking to the kernel" in line):
                safelyRan = False
    else:
        print(command)
        safelyRan = True
    return safelyRan

def getQdiscForIPaddress(ipAddress):
    qDiscID = ''
    foundQdisc = False
    with open('statsByCircuit.json', 'r') as j:
        subscriberCircuits = json.loads(j.read())
    for circuit in subscriberCircuits:
        for device in circuit['devices']:
            for ipv4 in device['ipv4s']:
                if ipv4 == ipAddress:
                    qDiscID = circuit['qdisc']
                    foundQdisc = True
            for ipv6 in device['ipv6s']:
                if ipv6 == ipAddress:
                    qDiscID = circuit['qdisc']
                    foundQdisc = True
    if foundQdisc:
        return qDiscID
    else:
        return None

def printStatsFromIP(ipAddress):
    qDiscID = getQdiscForIPaddress(ipAddress)
    if qDiscID != None:
        interfaces = [interfaceA, interfaceB]
        for interface in interfaces:
            command = 'tc -s qdisc show dev ' + interface + ' parent ' + qDiscID
            commands = command.split(' ')
            proc = subprocess.Popen(commands, stdout=subprocess.PIPE)
            for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"):  # or another encoding
                print(line.replace('\n', ''))
    else:
        print("Invalid IP address provided")

def printCircuitClassInfo(ipAddress):
    qDiscID = getQdiscForIPaddress(ipAddress)
    if qDiscID != None:
        print("IP: " + ipAddress + " | Class ID: " + qDiscID)
        print()
        theClassID = ''
        interfaces = [interfaceA, interfaceB]
        downloadMin = ''
        downloadMax = ''
        uploadMin = ''
        uploadMax = ''
        cburst = ''
        burst = ''
        for interface in interfaces:
            command = 'tc class show dev ' + interface + ' classid ' + qDiscID
            commands = command.split(' ')
            proc = subprocess.Popen(commands, stdout=subprocess.PIPE)
            for line in io.TextIOWrapper(proc.stdout, encoding="utf-8"):  # or another encoding
                if "htb" in line:
                    listOfThings = line.split(" ")
                    if interface == interfaceA:
                        downloadMin = line.split(' rate ')[1].split(' ')[0]
                        downloadMax = line.split(' ceil ')[1].split(' ')[0]
                        burst = line.split(' burst ')[1].split(' ')[0]
                        cburst = line.split(' cburst ')[1].replace('\n', '')
                    else:
                        uploadMin = line.split(' rate ')[1].split(' ')[0]
                        uploadMax = line.split(' ceil ')[1].split(' ')[0]
        print("Download rate/ceil: " + downloadMin + "/" + downloadMax)
        print("Upload rate/ceil: " + uploadMin + "/" + uploadMax)
        print("burst/cburst: " + burst + "/" + cburst)
    else:
        print("Invalid IP address provided")

def findClassIDForCircuitByIP(data, inputIP, classID):
    for node in data:
        if 'circuits' in data[node]:
            for circuit in data[node]['circuits']:
                for device in circuit['devices']:
                    if device['ipv4s']:
                        for ipv4 in device['ipv4s']:
                            if ipv4 == inputIP:
                                classID = circuit['qdisc']
                    if device['ipv6s']:
                        for ipv6 in device['ipv6s']:
                            if inputIP == ipv6:
                                classID = circuit['qdisc']
        # Recursively call this function for children nodes attached to this node
        if 'children' in data[node]:
            classID = findClassIDForCircuitByIP(data[node]['children'], inputIP, classID)
    return classID

def changeQueuingStructureCircuitBandwidth(data, classid, minDownload, minUpload, maxDownload, maxUpload):
    for node in data:
        if 'circuits' in data[node]:
            for circuit in data[node]['circuits']:
                if circuit['qdisc'] == classid:
                    circuit['minDownload'] = minDownload
                    circuit['minUpload'] = minUpload
                    circuit['maxDownload'] = maxDownload
                    circuit['maxUpload'] = maxUpload
        # Recursively call this function for children nodes attached to this node
        if 'children' in data[node]:
            data[node]['children'] = changeQueuingStructureCircuitBandwidth(data[node]['children'], classid, minDownload, minUpload, maxDownload, maxUpload)
    return data

def findClassIDForCircuitByID(data, inputID, classID):
    for node in data:
        if 'circuits' in data[node]:
            for circuit in data[node]['circuits']:
                if circuit['circuitID'] == inputID:
                    classID = circuit['qdisc']
        # Recursively call this function for children nodes attached to this node
        if 'children' in data[node]:
            classID = findClassIDForCircuitByID(data[node]['children'], inputID, classID)
    return classID

def changeCircuitBandwidthGivenID(circuitID, minDownload, minUpload, maxDownload, maxUpload):
    with open('queuingStructure.json') as file:
        queuingStructure = json.load(file)
    classID = findClassIDForCircuitByID(queuingStructure, circuitID, None)
    if classID:
        didThisCommandRunSafely_1 = safeShell("tc class change dev " + interfaceA + " classid " + classID + " htb rate " + str(minDownload) + "Mbit ceil " + str(maxDownload) + "Mbit")
        didThisCommandRunSafely_2 = safeShell("tc class change dev " + interfaceB + " classid " + classID + " htb rate " + str(minUpload) + "Mbit ceil " + str(maxUpload) + "Mbit")
        if (didThisCommandRunSafely_1 == False) or (didThisCommandRunSafely_2 == False):
            raise ValueError('Execution had errors. Halting now.')
        queuingStructure = changeQueuingStructureCircuitBandwidth(queuingStructure, classID, minDownload, minUpload, maxDownload, maxUpload)
        with open('queuingStructure.json', 'w') as infile:
            json.dump(queuingStructure, infile, indent=4)
    else:
        print("Unable to find associated Class ID")

def changeCircuitBandwidthGivenIP(ipAddress, minDownload, minUpload, maxDownload, maxUpload):
    with open('queuingStructure.json') as file:
        queuingStructure = json.load(file)
    classID = findClassIDForCircuitByIP(queuingStructure, ipAddress, None)
    if classID:
        didThisCommandRunSafely_1 = safeShell("tc class change dev " + interfaceA + " classid " + classID + " htb rate " + str(minDownload) + "Mbit ceil " + str(maxDownload) + "Mbit")
        didThisCommandRunSafely_2 = safeShell("tc class change dev " + interfaceB + " classid " + classID + " htb rate " + str(minUpload) + "Mbit ceil " + str(maxUpload) + "Mbit")
        if (didThisCommandRunSafely_1 == False) or (didThisCommandRunSafely_2 == False):
            raise ValueError('Execution had errors. Halting now.')
        queuingStructure = changeQueuingStructureCircuitBandwidth(queuingStructure, classID, minDownload, minUpload, maxDownload, maxUpload)
        with open('queuingStructure.json', 'w') as infile:
            json.dump(queuingStructure, infile, indent=4)
    else:
        print("Unable to find associated Class ID")

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    subparsers = parser.add_subparsers(dest='command')

    changeBW = subparsers.add_parser('change-circuit-bandwidth', help='Change bandwidth rates of a given circuit using circuit ID')
    changeBW.add_argument('min-download', type=int)
    changeBW.add_argument('min-upload', type=int)
    changeBW.add_argument('max-download', type=int)
    changeBW.add_argument('max-upload', type=int)
    changeBW.add_argument('circuit-id', type=str)

    changeBWip = subparsers.add_parser('change-circuit-bandwidth-using-ip', help='Change bandwidth rates of a given circuit using IP')
    changeBWip.add_argument('min-download', type=int)
    changeBWip.add_argument('min-upload', type=int)
    changeBWip.add_argument('max-download', type=int)
    changeBWip.add_argument('max-upload', type=int)
    changeBWip.add_argument('ip-address', type=str)

    planFromIP = subparsers.add_parser('show-active-plan-from-ip', help="Provide tc class info by IP")
    planFromIP.add_argument('ip', type=str)
    statsFromIP = subparsers.add_parser('tc-statistics-from-ip', help="Provide tc qdisc stats by IP")
    statsFromIP.add_argument('ip', type=str)

    args = parser.parse_args()

    if (args.command == 'change-circuit-bandwidth'):
        changeCircuitBandwidthGivenID(getattr(args, 'circuit-id'), getattr(args, 'min-download'), getattr(args, 'min-upload'), getattr(args, 'max-download'), getattr(args, 'max-upload'))
    elif (args.command == 'change-circuit-bandwidth-using-ip'):
        changeCircuitBandwidthGivenIP(getattr(args, 'ip-address'), getattr(args, 'min-download'), getattr(args, 'min-upload'), getattr(args, 'max-download'), getattr(args, 'max-upload'))
    elif (args.command == 'tc-statistics-from-ip'):
        printStatsFromIP(args.ip)
    elif (args.command == 'show-active-plan-from-ip'):
        printCircuitClassInfo(args.ip)
    else:
        print("Invalid parameters. Use --help to learn more.")
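printCircuitClassInfo() above extracts rates from `tc class show` output with plain string splits. A self-contained demonstration of that parsing, using an illustrative sample line (not captured from a live system):

```python
# Sample `tc class show` output line; the rate/ceil/burst values are made up.
sampleLine = 'class htb 1:5 root prio 0 rate 25Mbit ceil 155Mbit burst 1593b cburst 1598b\n'

# The same split logic printCircuitClassInfo() applies per interface.
rate = sampleLine.split(' rate ')[1].split(' ')[0]
ceil = sampleLine.split(' ceil ')[1].split(' ')[0]
burst = sampleLine.split(' burst ')[1].split(' ')[0]
cburst = sampleLine.split(' cburst ')[1].replace('\n', '')

print(rate, ceil, burst, cburst)  # 25Mbit 155Mbit 1593b 1598b
```

Note that `split(' burst ')` does not match "cburst" because of the required leading space, which is what keeps the two fields distinct.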
@ -1,2 +0,0 @@
Router Name / ID,IP,API Username,API Password,API Port
main,100.64.0.1,admin,password,8728
@ -1,52 +0,0 @@
#!/usr/bin/python3
import routeros_api
import csv

def pullMikrotikIPv6():
    ipv4ToIPv6 = {}
    routerList = []
    with open('mikrotikDHCPRouterList.csv') as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        next(csv_reader)
        for row in csv_reader:
            RouterName, IP, Username, Password, apiPort = row
            routerList.append((RouterName, IP, Username, Password, apiPort))
    for router in routerList:
        RouterName, IP, inputUsername, inputPassword, apiPort = router
        connection = routeros_api.RouterOsApiPool(IP, username=inputUsername, password=inputPassword, port=int(apiPort), use_ssl=False, ssl_verify=False, ssl_verify_hostname=False, plaintext_login=True)
        api = connection.get_api()
        macToIPv4 = {}
        macToIPv6 = {}
        clientAddressToIPv6 = {}
        list_dhcp = api.get_resource('/ip/dhcp-server/lease')
        entries = list_dhcp.get()
        for entry in entries:
            try:
                macToIPv4[entry['mac-address']] = entry['address']
            except:
                pass
        list_dhcp = api.get_resource('/ipv6/dhcp-server/binding')
        entries = list_dhcp.get()
        for entry in entries:
            try:
                clientAddressToIPv6[entry['client-address']] = entry['address']
            except:
                pass
        list_dhcp = api.get_resource('/ipv6/neighbor')
        entries = list_dhcp.get()
        for entry in entries:
            try:
                realIPv6 = clientAddressToIPv6[entry['address']]
                macToIPv6[entry['mac-address']] = realIPv6
            except:
                pass
        for mac, ipv6 in macToIPv6.items():
            try:
                ipv4 = macToIPv4[mac]
                ipv4ToIPv6[ipv4] = ipv6
            except:
                print('Failed to find associated IPv4 for ' + ipv6)
    return ipv4ToIPv6

if __name__ == '__main__':
    print(pullMikrotikIPv6())
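The final loop of pullMikrotikIPv6() joins the DHCP lease table (MAC to IPv4) against the IPv6 binding/neighbor tables (MAC to IPv6) on the MAC address. A minimal sketch of that join with made-up sample addresses, no router connection required:

```python
# Sample tables illustrating the MAC-based join; addresses are fabricated.
macToIPv4 = {'AA:BB:CC:00:00:01': '100.64.0.5', 'AA:BB:CC:00:00:02': '100.64.0.6'}
macToIPv6 = {'AA:BB:CC:00:00:01': 'fdd7:b724:0:500::/56'}

ipv4ToIPv6 = {}
for mac, ipv6 in macToIPv6.items():
    try:
        # A lease must exist for the same MAC, or the IPv6 prefix is unmatchable.
        ipv4ToIPv6[macToIPv4[mac]] = ipv6
    except KeyError:
        print('Failed to find associated IPv4 for ' + ipv6)

print(ipv4ToIPv6)  # {'100.64.0.5': 'fdd7:b724:0:500::/56'}
```

Only MACs present in both tables produce a mapping; createShaper() then uses that mapping to fill the IPv6 column of ShapedDevices.csv.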
@ -1,75 +0,0 @@
{
    "Site_1": {
        "downloadBandwidthMbps": 1000,
        "uploadBandwidthMbps": 1000,
        "children": {
            "AP_A": {
                "downloadBandwidthMbps": 500,
                "uploadBandwidthMbps": 500
            },
            "Site_3": {
                "downloadBandwidthMbps": 500,
                "uploadBandwidthMbps": 500,
                "children": {
                    "PoP_5": {
                        "downloadBandwidthMbps": 200,
                        "uploadBandwidthMbps": 200,
                        "children": {
                            "AP_9": {
                                "downloadBandwidthMbps": 120,
                                "uploadBandwidthMbps": 120
                            },
                            "PoP_6": {
                                "downloadBandwidthMbps": 60,
                                "uploadBandwidthMbps": 60,
                                "children": {
                                    "AP_11": {
                                        "downloadBandwidthMbps": 30,
                                        "uploadBandwidthMbps": 30
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    },
    "Site_2": {
        "downloadBandwidthMbps": 500,
        "uploadBandwidthMbps": 500,
        "children": {
            "PoP_1": {
                "downloadBandwidthMbps": 200,
                "uploadBandwidthMbps": 200,
                "children": {
                    "AP_7": {
                        "downloadBandwidthMbps": 100,
                        "uploadBandwidthMbps": 100
                    }
                }
            },
            "AP_1": {
                "downloadBandwidthMbps": 150,
                "uploadBandwidthMbps": 150
            }
        }
    }
}
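Consumers of network.json descend through each node's optional "children" key, exactly as findClassIDForCircuitByIP() does in lqTools.py. A small sketch of that recursion pattern, counting nodes in a trimmed-down version of the tree above:

```python
# Trimmed sample of a network.json-style tree (Site_1 with one child AP).
sampleTree = {
    "Site_1": {
        "downloadBandwidthMbps": 1000,
        "uploadBandwidthMbps": 1000,
        "children": {
            "AP_A": {"downloadBandwidthMbps": 500, "uploadBandwidthMbps": 500}
        }
    }
}

def countNodes(data):
    # Count every node, recursing into 'children' dicts where present.
    total = 0
    for node in data:
        total += 1
        if 'children' in data[node]:
            total += countNodes(data[node]['children'])
    return total

print(countNodes(sampleTree))  # 2
```

The same walk shape (iterate keys, recurse on 'children') underlies the classID lookups and bandwidth rewrites in lqTools.py.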
@ -1,34 +0,0 @@
import time
import schedule
from LibreQoS import refreshShapers
from graphBandwidth import refreshBandwidthGraphs
from graphLatency import refreshLatencyGraphs
from ispConfig import bandwidthGraphingEnabled, latencyGraphingEnabled, automaticImportUISP
if automaticImportUISP:
    from integrationUISP import importFromUISP

def importandshape():
    if automaticImportUISP:
        try:
            importFromUISP()
        except:
            print("Failed to import from UISP")
    refreshShapers()

if __name__ == '__main__':
    importandshape()
    schedule.every().day.at("04:00").do(importandshape)
    while True:
        schedule.run_pending()
        if bandwidthGraphingEnabled:
            try:
                refreshBandwidthGraphs()
            except:
                print("Failed to update bandwidth graphs")
            if latencyGraphingEnabled:
                try:
                    refreshLatencyGraphs(10)
                except:
                    print("Failed to update latency graphs")
        else:
            time.sleep(10)
@ -1 +0,0 @@
Subproject commit 12ea5973bac554eb5a8a808af4d836798dcb9d12
@ -16,7 +16,7 @@ def spylnxRequest(target, headers):
 	# Sends a REST GET request to Spylnx and returns the
 	# result in JSON
 	url = splynx_api_url + "/api/2.0/" + target
-	r = requests.get(url, headers=headers)
+	r = requests.get(url, headers=headers, timeout=10)
 	return r.json()
 
 def getTariffs(headers):
@ -11,7 +11,7 @@ def uispRequest(target):
 	from ispConfig import UISPbaseURL, uispAuthToken
 	url = UISPbaseURL + "/nms/api/v2.1/" + target
 	headers = {'accept': 'application/json', 'x-auth-token': uispAuthToken}
-	r = requests.get(url, headers=headers)
+	r = requests.get(url, headers=headers, timeout=10)
 	return r.json()
 
 def buildFlatGraph():
@ -1,14 +1,14 @@
 #LibreQoS - autogenerated file - START
 Circuit ID,Circuit Name,Device ID,Device Name,Parent Node,MAC,IPv4,IPv6,Download Min Mbps,Upload Min Mbps,Download Max Mbps,Upload Max Mbps,Comment
-1,"968 Circle St., Gurnee, IL 60031",1,Device 1,AP_A,,"100.64.0.1, 100.64.0.14",,25,5,155,20,
-2,"31 Marconi Street, Lake In The Hills, IL 60156",2,Device 2,AP_A,,100.64.0.2,,25,5,105,18,
-3,"255 NW. Newport Ave., Jamestown, NY 14701",3,Device 3,AP_9,,100.64.0.3,,25,5,105,18,
-4,"8493 Campfire Street, Peabody, MA 01960",4,Device 4,AP_9,,100.64.0.4,,25,5,105,18,
-2794,"6 Littleton Drive, Ringgold, GA 30736",5,Device 5,AP_11,,100.64.0.5,,25,5,105,18,
-2794,"6 Littleton Drive, Ringgold, GA 30736",6,Device 6,AP_11,,100.64.0.6,,25,5,105,18,
-5,"93 Oklahoma Ave., Parsippany, NJ 07054",7,Device 7,AP_1,,100.64.0.7,,25,5,155,20,
-6,"74 Bishop Ave., Bakersfield, CA 93306",8,Device 8,AP_1,,100.64.0.8,,25,5,105,18,
-7,"9598 Peg Shop Drive, Lutherville Timonium, MD 21093",9,Device 9,AP_7,,100.64.0.9,,25,5,105,18,
-8,"115 Gartner Rd., Gettysburg, PA 17325",10,Device 10,AP_7,,100.64.0.10,,25,5,105,18,
-9,"525 Birchpond St., Romulus, MI 48174",11,Device 11,Site_1,,100.64.0.11,,25,5,105,18,
+1,"968 Circle St., Gurnee, IL 60031",1,Device 1,AP_A,,"100.64.0.1, 100.64.0.14",fdd7:b724:0:100::/56,25,5,155,20,
+2,"31 Marconi Street, Lake In The Hills, IL 60156",2,Device 2,AP_A,,100.64.0.2,fdd7:b724:0:200::/56,25,5,105,18,
+3,"255 NW. Newport Ave., Jamestown, NY 14701",3,Device 3,AP_9,,100.64.0.3,fdd7:b724:0:300::/56,25,5,105,18,
+4,"8493 Campfire Street, Peabody, MA 01960",4,Device 4,AP_9,,100.64.0.4,fdd7:b724:0:400::/56,25,5,105,18,
+2794,"6 Littleton Drive, Ringgold, GA 30736",5,Device 5,AP_11,,100.64.0.5,fdd7:b724:0:500::/56,25,5,105,18,
+2794,"6 Littleton Drive, Ringgold, GA 30736",6,Device 6,AP_11,,100.64.0.6,fdd7:b724:0:600::/56,25,5,105,18,
+5,"93 Oklahoma Ave., Parsippany, NJ 07054",7,Device 7,AP_1,,100.64.0.7,fdd7:b724:0:700::/56,25,5,155,20,
+6,"74 Bishop Ave., Bakersfield, CA 93306",8,Device 8,AP_1,,100.64.0.8,fdd7:b724:0:800::/56,25,5,105,18,
+7,"9598 Peg Shop Drive, Lutherville Timonium, MD 21093",9,Device 9,AP_7,,100.64.0.9,fdd7:b724:0:900::/56,25,5,105,18,
+8,"115 Gartner Rd., Gettysburg, PA 17325",10,Device 10,AP_7,,100.64.0.10,fdd7:b724:0:a00::/56,25,5,105,18,
+9,"525 Birchpond St., Romulus, MI 48174",11,Device 11,Site_1,,100.64.0.11,fdd7:b724:0:b00::/56,25,5,105,18,
 #LibreQoS - autogenerated file - EOF
@ -25,7 +25,7 @@ def createShaper():
 
 	requestConfig = objects.defaults_deep({'params': {}}, restconf.get('requestsConfig'), requestsBaseConfig)
 
-	raw = get(devicesURL, **requestConfig)
+	raw = get(devicesURL, **requestConfig, timeout=10)
 
 	if raw.status_code != 200:
 		print('Failed to request ' + devicesURL + ', got ' + str(raw.status_code))
@ -51,7 +51,7 @@ def createShaper():
 
 	networkURL = restconf['baseURL'] + '/' + restconf['networkURI'].strip('/')
 
-	raw = get(networkURL, **requestConfig)
+	raw = get(networkURL, **requestConfig, timeout=10)
 
 	if raw.status_code != 200:
 		print('Failed to request ' + networkURL + ', got ' + str(raw.status_code))
@ -19,7 +19,7 @@ def spylnxRequest(target, headers):
 	# Sends a REST GET request to Spylnx and returns the
 	# result in JSON
 	url = splynx_api_url + "/api/2.0/" + target
-	r = requests.get(url, headers=headers)
+	r = requests.get(url, headers=headers, timeout=10)
 	return r.json()
 
 def getTariffs(headers):
@ -13,7 +13,7 @@ def uispRequest(target):
 	from ispConfig import UISPbaseURL, uispAuthToken
 	url = UISPbaseURL + "/nms/api/v2.1/" + target
 	headers = {'accept': 'application/json', 'x-auth-token': uispAuthToken}
-	r = requests.get(url, headers=headers)
+	r = requests.get(url, headers=headers, timeout=10)
 	return r.json()
 
 def buildFlatGraph():