* Bring the per-client buffer size back down to a reasonable 2k.
* Divide submission batches into groups and submit those. It's
still MASSIVELY faster, but it can't fall victim to guessing
the number of batches incorrectly.
A 2,048,000-byte read buffer will support 80k queues being
installed in a batch (the grouping is sketched below).
H/T Lake Linx and Herbert for this improvement. Perhaps it too
can be improved to have no limit, or the right limit, in the future.
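The grouping amounts to chunking the command list before it hits
the bus. A minimal Rust sketch, assuming an illustrative group size
and a caller-supplied submit function (the real bus API differs):

    /// Sketch: split a large submission into fixed-size groups so no
    /// single batch can outgrow the reader's buffer.
    fn submit_in_groups<T>(commands: &[T], group_size: usize, mut submit: impl FnMut(&[T])) {
        // Each group fits the buffer, so the receiving side never has
        // to guess how many batches will arrive.
        for group in commands.chunks(group_size) {
            submit(group);
        }
    }

Per the figures above, any group size of 80k entries or fewer is
safe with the 2,048,000-byte buffer.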
Step 1 of the network funnel
* network.json reader now tags throughput entries with their tree
location and parents to whom data should be applied.
* Data flows "up the tree", giving totals all the way up
(sketched below).
* Simple network map page for displaying the data while it's worked
on.
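The "up the tree" flow is simple accumulation. A sketch, assuming
each entry carries its parents' names (field and function names are
illustrative):

    use std::collections::HashMap;

    /// Sketch: one throughput sample updates the node itself and every
    /// ancestor tagged as its parent, yielding totals all the way up.
    fn apply_up_the_tree(
        totals: &mut HashMap<String, u64>,
        node: &str,
        parents: &[String],
        bytes: u64,
    ) {
        *totals.entry(node.to_string()).or_insert(0) += bytes;
        for parent in parents {
            *totals.entry(parent.clone()).or_insert(0) += bytes;
        }
    }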
ShapedDevices.csv is now monitored in lqosd. This brings some
advantages:
* The Tracked Devices list now knows the circuit id association
for every tracked IP.
* The associations auto-update after a ShapedDevices reload.
* The webserver is no longer doing Trie lookups to figure
out what name to display.
Moving forward, this will allow stats gathering to group
IPs by circuit, and allow calculation of the "funnel".
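A sketch of the idea (the real structures in lqosd differ): a
reload rebuilds the IP-to-circuit map in one pass, so lookups stay
current without the webserver doing trie walks.

    use std::collections::HashMap;
    use std::net::IpAddr;

    /// Sketch: rebuild the association map from the parsed CSV rows.
    fn rebuild_circuit_map(rows: &[(String, Vec<IpAddr>)]) -> HashMap<IpAddr, String> {
        let mut map = HashMap::new();
        for (circuit_id, ips) in rows {
            for ip in ips {
                map.insert(*ip, circuit_id.clone());
            }
        }
        map
    }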
* Add a new Python-exported class (`BatchedCommands`) to the
Python-Rust library (a rough shape is sketched after this list).
* Replace direct calls to xdp_iphash_cpu_cmdline with batched
commands.
* Execute the single batch and obtain counts from the batch.
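A rough shape for such a class via pyo3; the actual BatchedCommands
implementation, its command type, and its method names may differ:

    use pyo3::prelude::*;

    // Sketch of a Python-exported batching class. Real code would push
    // the accumulated commands across the bus in submit().
    #[pyclass]
    pub struct BatchedCommands {
        commands: Vec<String>, // stand-in for the real command type
    }

    #[pymethods]
    impl BatchedCommands {
        #[new]
        fn new() -> Self {
            Self { commands: Vec::new() }
        }

        fn add(&mut self, command: String) {
            self.commands.push(command);
        }

        /// Sends everything in one submission and reports the count.
        fn submit(&mut self) -> usize {
            let count = self.commands.len();
            self.commands.clear();
            count
        }
    }

From Python, this reduces the per-mapping shell-outs to a single
queue-and-submit pass.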
Replace:
min(data[node]['downloadBandwidthMbps'],parentMaxDL)
With:
min(int(data[node]['downloadBandwidthMbps']),int(parentMaxDL))
Python thought my current configuration contained a string for one
of the numbers - a string representing an int. Forcing the cast
enforces strong typing (a non-numeric value will fail, but it would
have failed anyway).
Rather than scattering calls like
u32::from_str_radix(&cpu.replace("0x", ""), 16) around the
codebase, move to a single, unit-tested function in lqos_utils
and use it repeatedly.
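A sketch of the consolidated helper (the actual name and signature
in lqos_utils may differ):

    use std::num::ParseIntError;

    /// Parses hex strings with or without a leading "0x". Sketch only.
    pub fn hex_to_u32(hex: &str) -> Result<u32, ParseIntError> {
        u32::from_str_radix(hex.trim_start_matches("0x"), 16)
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn parses_with_and_without_prefix() {
            assert_eq!(hex_to_u32("0x10").unwrap(), 16);
            assert_eq!(hex_to_u32("10").unwrap(), 16);
            assert!(hex_to_u32("zzz").is_err());
        }
    }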
This one was pretty funny. Any line that contained interfaceA in
ispConfig.example.py was transformed into an interfaceA statement.
I forgot to check for comments first, so the comment on how to use
an OnAStick configuration *also* generated an interface statement.
It now just copies comments verbatim.
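The corrected ordering, sketched (names and the generated value are
illustrative):

    /// Sketch: comments pass through before any substitution runs, so a
    /// comment mentioning interfaceA can no longer emit a statement.
    fn transform_line(line: &str) -> String {
        let trimmed = line.trim_start();
        if trimmed.starts_with('#') {
            line.to_string() // copied verbatim
        } else if trimmed.contains("interfaceA") {
            "interfaceA = 'eth1'".to_string() // stand-in for the generated statement
        } else {
            line.to_string()
        }
    }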
1) When calculating median latency, reject any entry that doesn't
have at least 5 data points. From local testing, 5 appears
to be the magic number (when combined with sampling time) that
ignores the "idle" traffic from CPEs, routers and long-poll
sessions on devices.
2) Filter out RTT 0 from best/worst reports.
3) Note that no data is discarded - it's just filtered for display.
This results in a much cleaner display of RTT times in the
reporting interface, making it much easier to zero in on problem
areas without being distracted by idle hosts that show poor RTT
but carry basically no traffic.
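The display-side filter, sketched with the thresholds above (names
illustrative; the stored data is never touched):

    /// Sketch: require at least 5 data points (item 1) and drop zero
    /// RTTs (item 2) before picking a median for display.
    fn median_rtt_for_display(samples: &[f64]) -> Option<f64> {
        if samples.len() < 5 {
            return None; // likely idle chatter; hide from the report
        }
        let mut rtts: Vec<f64> = samples.iter().copied().filter(|r| *r > 0.0).collect();
        if rtts.len() < 5 {
            return None;
        }
        rtts.sort_by(|a, b| a.partial_cmp(b).unwrap());
        Some(rtts[rtts.len() / 2])
    }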
* Adds a new Rust program, `lqos_setup`.
* If no /etc/lqos.conf is found, prompts for interfaces and
creates a dual-interface XDP bridge setup.
* If no /opt/libreqos/src/ispConfig.py is found, prompts
for bandwidth and creates one (using the interfaces also).
* Same for ShapedDevices.csv and network.json.
* If no webusers are found, prompts to make one.
* Adds build_dpkg.sh
* Creates a new directory named `dist`
* Builds the Rust components in a portable mode.
* Creates a list of dependencies and DEBIAN directory
with control and postinst files.
* Handles PIP dependencies in postinst
* Calls the new `lqos_setup` program for final
configuration.
* Sets up the daemons in systemd and enables them.
In very brief testing, I had a working XDP bridge with one fake
user and a total bandwidth limit configured after running:
dpkg -i 1.4-1.dpkg
apt -f install
Could still use some tweaking.
The partial reload mechanism *really* doesn't work with OnAStick
configurations at present. There's a lot of work required to make
it function. In the meantime, warn the poor user that this
isn't going to work.
Affects ISSUE #129