mirror of
https://github.com/LibreQoE/LibreQoS.git
synced 2025-02-25 18:55:32 -06:00
Lts2 client (#594)

# 1.5 Beta 4 Changes

* MASSIVE reduction in memory usage, and no more explosive RAM growth over time.
* Significant reduction in CPU usage for the UI and analysis threads.
* Documentation improvements.
* Netflow:
  * Timestamps were invalid, causing UISP to reject the input.
  * Move to a lock-free collection system, eliminating delays and lowering CPU usage.
* Fixed automatic provisioning of Long-Term Stats subscribers.
* User Interface:
  * "Top X" tables in the UI are now buffered for smoother output.
  * Improvements to graph consistency in the UI.
  * The network panel on customers now buffers and fades entries out, so it's possible to read rather than continually jumping.
  * TCP retransmits are now displayed as a percentage of TCP packets, giving a much better *relative* scale of where issues are occurring.
  * Up/Down is now labeled on packet counts.
  * Improved output scaling on numbers trims ".0".
  * IPv6 prefix detection is *much* better and won't cause UI errors.
  * The configuration editor now handles floats vs. ints correctly.
  * Fixed a login error if no authentication cookie is set.
  * Improved Sankey redaction.
  * Fixed an occasional error that would make the "sponsor us" toast appear incorrectly.
  * Sankey coloration fixes.
* The Geo.bin file now won't self-update very often (it tracks ASN data), and only if it has changed on the server side.
* LTS2: If you're invited to help us test it, we're starting early alpha testing. This version includes the client to make that possible. LTS1 will continue to function and gather data.
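The retransmit change above swaps a raw counter for a relative figure; as a sketch (a hypothetical helper, not the UI's actual code), the displayed percentage is simply:

```python
def retransmit_percent(retransmits: int, tcp_packets: int) -> float:
    """Express TCP retransmits as a percentage of all TCP packets,
    guarding against division by zero on idle links."""
    if tcp_packets == 0:
        return 0.0
    return 100.0 * retransmits / tcp_packets
```

Scaling by TCP packet count is what makes the figure comparable across circuits of very different sizes.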
This commit is contained in:
parent
11d8059fa1
commit
0e0ee5daf7
20
README.md
@@ -1,10 +1,18 @@

<a href="https://libreqos.io/"><img alt="LibreQoS" src="https://user-images.githubusercontent.com/22501920/202913614-4ff2e506-e645-4a94-9918-d512905ab290.png"></a>

LibreQoS is a Quality of Experience (QoE) Smart Queue Management (SQM) system designed for Internet Service Providers to optimize the flow of their network traffic and thus reduce bufferbloat, keep the network responsive, and improve the end-user experience.

LibreQoS is software used by Internet Service Providers (ISPs) and companies to improve the Quality of Experience (QoE) of end users on their networks.

Servers running LibreQoS can shape traffic for thousands of customers. On higher-end servers, LibreQoS is capable of shaping 50-80 Gbps of traffic.

LibreQoS works by monitoring network performance and reducing latency for interactive online applications. ISPs regularly report that trouble calls drop substantially after deploying LibreQoS on their networks.

Learn more at [LibreQoS.io](https://libreqos.io/)!

For an ISP or other company to deploy LibreQoS, no changes are required to existing routers, switches, or access points: LibreQoS runs on a server that acts as a managed bridge between the edge router and the core of the network. LibreQoS servers can shape traffic for tens of thousands of customers.

LibreQoS benefits network operation and planning by measuring metrics such as end-to-end TCP round-trip time for each subscriber, access point, and site on a network, saving the relevant data for later analysis on the LibreQoS Long-Term Stats (LTS) system.

LibreQoS code is open source. It improves Quality of Experience by implementing state-of-the-art Flow Queueing (FQ) and Active Queue Management (AQM) algorithms. LibreQoS offers paid support packages and subscriptions to additional analysis service features available through [LTS](https://libreqos.io/lts/).

To learn more, please visit [LibreQoS.io](https://libreqos.io/).

<img alt="LibreQoS" src="https://user-images.githubusercontent.com/22501920/223866474-603e1112-e2e6-4c67-93e4-44c17b1b7c43.png">
@@ -38,11 +46,9 @@ Our Zulip chat server is available at [https://chat.libreqos.io/join/fvu3cerayya

## Long-Term Stats (LTS)

Long-Term Stats (LTS) is an analytics service built for LibreQoS that revolutionizes the way you track and analyze your network.
With flexible time window views ranging from 5 minutes to 1 month, LTS gives you comprehensive insights into your network's performance.
Built from the ground up for performance and efficiency, LTS greatly outperforms our original InfluxDB plugin, and gives you rapidly rendered data to help you maximize your network performance.

LibreQoS Long-Term Stats (LTS) is an analytics service that overhauls the way you track and analyze your network. With flexible time window views ranging from 5 minutes to 1 month, LTS gives you comprehensive insights into your network's performance. Built from the ground up for performance and efficiency, LTS greatly outperforms other solutions, and gives you rapidly rendered data to help you maximize your network performance.

We provide a free 30-day trial of LTS, after which the rate is $0.30 USD per shaped subscriber.

We provide a free 30-day trial of LTS, after which the rate is $0.30 USD per shaped subscriber per month.

You can enroll in the 30-day free trial by [upgrading to the latest version of LibreQoS v1.4](https://libreqos.readthedocs.io/en/latest/docs/Updates/update.html) and selecting "Start Stats Free Trial" in the top-right corner of the local LibreQoS WebUI.

<img alt="LibreQoS Long Term Stats" src="https://i0.wp.com/libreqos.io/wp-content/uploads/2023/11/01-Dashboard.png">
@@ -7,49 +7,56 @@

* 2 or more CPU cores
* A CPU with solid [single-thread performance](https://www.cpubenchmark.net/singleThread.html#server-thread) within your budget. Queuing is very CPU-intensive, and requires high single-thread performance.

Single-thread CPU performance will determine the max throughput of a single HTB (CPU core), and in turn, what max speed plan you can offer customers.

Single-thread CPU performance will determine the maximum capacity of a single HTB (CPU core), and in turn, the maximum capacity of any top-level node in the network hierarchy (for example, top-level sites in your network). This also impacts the maximum speed plan you can offer customers within safe margins.

| Customer Max Plan | Passmark Single-Thread |
| Top Level Node Max | Single-Thread Score |
| --------------------| ------------------------ |
| 1 Gbps | 1000 |
| 2 Gbps | 1500 |
| 3 Gbps | 2000 |
| 5 Gbps | 4000 |

| Customer Max Plan | Single-Thread Score |
| --------------------| ------------------------ |
| 100 Mbps | 1000 |
| 250 Mbps | 1250 |
| 500 Mbps | 1500 |
| 1 Gbps | 2000 |
| 2.5 Gbps | 3000 |
| 1 Gbps | 1750 |
| 2.5 Gbps | 2000 |
| 5 Gbps | 4000 |

Below is a table of approximate aggregate throughput capacity, assuming a CPU with a [single thread](https://www.cpubenchmark.net/singleThread.html#server-thread) performance of 2700 / 4000:

Below is a table of approximate aggregate capacity, assuming a CPU with a [single thread](https://www.cpubenchmark.net/singleThread.html#server-thread) performance of 1000 / 2000 / 4000:

| Aggregate Throughput | CPU Cores Needed (>2700 single-thread) | CPU Cores Needed (>4000 single-thread) |
| ------------------------| -------------------------------------- | -------------------------------------- |
| 500 Mbps | 2 | 2 |
| 1 Gbps | 4 | 2 |
| 5 Gbps | 6 | 4 |
| 10 Gbps | 8 | 6 |
| 20 Gbps | 16 | 8 |
| 50 Gbps | 32 | 16 |
| 100 Gbps | 64 | 32 |

So for example, an ISP delivering 1 Gbps service plans with 10 Gbps aggregate throughput would choose a CPU with a 2500+ single-thread score and 8 cores, such as the Intel Xeon E-2388G @ 3.20GHz.

| CPU Cores | Single-Thread Score = 1000 | Single-Thread Score = 2000 | Single-Thread Score = 4000 |
|-----------|----------------------------|----------------------------|----------------------------|
| 2 | 1 Gbps | 3 Gbps | 7 Gbps |
| 4 | 3 Gbps | 5 Gbps | 13 Gbps |
| 6 | 4 Gbps | 8 Gbps | 20 Gbps |
| 8 | 5 Gbps | 10 Gbps | 27 Gbps |
| 16 | 10 Gbps | 21 Gbps | 54 Gbps |
| 32 | 21 Gbps | 42 Gbps | 108 Gbps |
| 64 | 42 Gbps | 83 Gbps | 216 Gbps |
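The sizing table above can be turned into a quick lookup helper. A minimal sketch (values transcribed from the published table — these are estimates at the listed grid points only, not a formula):

```python
# Approximate aggregate capacity (Gbps), keyed by (CPU cores, single-thread score).
CAPACITY_GBPS = {
    (2, 1000): 1,   (2, 2000): 3,   (2, 4000): 7,
    (4, 1000): 3,   (4, 2000): 5,   (4, 4000): 13,
    (6, 1000): 4,   (6, 2000): 8,   (6, 4000): 20,
    (8, 1000): 5,   (8, 2000): 10,  (8, 4000): 27,
    (16, 1000): 10, (16, 2000): 21, (16, 4000): 54,
    (32, 1000): 21, (32, 2000): 42, (32, 4000): 108,
    (64, 1000): 42, (64, 2000): 83, (64, 4000): 216,
}

def min_cores(target_gbps: float, score: int) -> int:
    """Smallest core count in the table that meets the target
    aggregate throughput at the given single-thread score."""
    for cores in sorted({c for (c, s) in CAPACITY_GBPS if s == score}):
        if CAPACITY_GBPS[(cores, score)] >= target_gbps:
            return cores
    raise ValueError("target exceeds the table's largest configuration")
```

For example, a 10 Gbps aggregate target at a single-thread score of 2000 lands on the 8-core row.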
### Memory

* Recommended RAM:

| Subscribers | RAM |
| ------------- | ------------- |
| 100 | 8 GB |
| 1,000 | 16 GB |
| 500 | 16 GB |
| 1,000 | 32 GB |
| 5,000 | 64 GB |
| 10,000 | 128 GB |
| 20,000 | 256 GB |
### Server Recommendations

Here are some convenient, off-the-shelf server options to consider:

| Throughput | Model | CPU Option | RAM Option | NIC Option | Extras | Temp Range |
| --- | --- | --- | --- | --- | --- | --- |
| 2.5 Gbps | [Supermicro SYS-E102-13R-E](https://store.supermicro.com/us_en/compact-embedded-iot-i5-1350pe-sys-e102-13r-e.html) | Default | 2x8GB | Built-in | [USB-C RJ45](https://www.amazon.com/Anker-Ethernet-PowerExpand-Aluminum-Portable/dp/B08CK9X9Z8/) | 0°C ~ 40°C (32°F ~ 104°F) |
| 10 Gbps | [Supermicro AS-1115S-FWTRT](https://store.supermicro.com/us_en/1u-amd-epyc-8004-compact-server-as-1115s-fwtrt.html) | 8124P | 2x16GB | Mellanox (2 x SFP28) | | 0°C ~ 40°C (32°F ~ 104°F) |
| 25 Gbps | [Supermicro AS-1115S-FWTRT](https://store.supermicro.com/us_en/1u-amd-epyc-8004-compact-server-as-1115s-fwtrt.html) | 8534P | 4x16GB | Mellanox (2 x SFP28) | | 0°C ~ 40°C (32°F ~ 104°F) |

| Throughput | Per Node / Per CPU Core | Model | CPU Option | RAM Option | NIC Option | Extras | Temp Range |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2.5 Gbps | 1 Gbps | [Supermicro SYS-E102-13R-E](https://store.supermicro.com/us_en/compact-embedded-iot-i5-1350pe-sys-e102-13r-e.html) | Default | 2x8GB | Built-in | [USB-C RJ45](https://www.amazon.com/Anker-Ethernet-PowerExpand-Aluminum-Portable/dp/B08CK9X9Z8/) | 0°C ~ 40°C (32°F ~ 104°F) |
| 10 Gbps | 3 Gbps | [Supermicro AS-1115S-FWTRT](https://store.supermicro.com/us_en/1u-amd-epyc-8004-compact-server-as-1115s-fwtrt.html) | 8124P | 2x16GB | Mellanox (2 x SFP28) | | 0°C ~ 40°C (32°F ~ 104°F) |
| 10 Gbps | 5 Gbps | [Supermicro SYS-511R-M](https://store.supermicro.com/us_en/mainstream-1u-sys-511r-m.html) | E-2488 | 2x32GB | 10-Gigabit X710-BM2 (2 x SFP+) | | 0°C ~ 40°C (32°F ~ 104°F) |
| 10 Gbps | 5 Gbps | [Dell PowerEdge R260](https://www.dell.com/en-us/shop/dell-poweredge-servers/new-poweredge-r260-rack-server/spd/poweredge-r260/pe_r260_tm_vi_vp_sb?configurationid=2cd33e43-57a3-4f82-aa72-9d5f45c9e24c) | E-2456 | 2x32GB | Intel X710-T2L (2 x 10G RJ45) | | 5–40°C (41–104°F) |
| 25 Gbps | 3 Gbps | [Supermicro AS-1115S-FWTRT](https://store.supermicro.com/us_en/1u-amd-epyc-8004-compact-server-as-1115s-fwtrt.html) | 8534P | 4x16GB | Mellanox (2 x SFP28) | | 0°C ~ 40°C (32°F ~ 104°F) |
### Network Interface Requirements

* One management network interface completely separate from the traffic shaping interfaces. Usually this would be the Ethernet interface built into the motherboard.
@@ -1,4 +1,4 @@

# Install LibreQoS 1.5

# Install LibreQoS 2.0

## Step 1 - Validate Network Design Assumptions and Hardware Selection

@@ -17,7 +17,7 @@ Download the latest .deb from [libreqos.io/#download](https://libreqos.io/#downl

Unzip the .zip file and transfer the .deb to your LibreQoS box, installing with:

```
sudo apt install [deb file name]
sudo apt install ./deb_file_name.deb
```

### Git Install (For Developers Only - Not Recommended)
@@ -4,8 +4,36 @@

First, set the relevant parameters for Splynx (splynx_api_key, splynx_api_secret, etc.) in `/etc/lqos.conf`.

### Splynx API Access

The Splynx integration uses Basic authentication. To use this type of authentication, make sure you enable [Unsecure access](https://splynx.docs.apiary.io/#introduction/authentication) in your Splynx API key settings. The Splynx API key must also be granted the following permissions:
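Since the integration authenticates with HTTP Basic, the credential header Splynx expects can be sketched with the standard library alone (a hypothetical helper for illustration, not LibreQoS's actual client code):

```python
import base64

def basic_auth_header(api_key: str, api_secret: str) -> dict:
    """Build the HTTP Basic Authorization header Splynx expects when
    'Unsecure access' is enabled for the API key: base64("key:secret")."""
    token = base64.b64encode(f"{api_key}:{api_secret}".encode()).decode()
    return {"Authorization": f"Basic {token}"}
```

The key and secret here correspond to the `splynx_api_key` and `splynx_api_secret` values set in `/etc/lqos.conf`.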
* Tariff Plans -> Internet -> view
* Tariff Plans -> Bundle -> view
* Tariff Plans -> One time -> view
* Tariff Plans -> Recurring -> view
* FUP -> Counter -> view
* FUP -> Compiler -> view
* FUP -> Policies -> view
* FUP -> Capped Data -> view
* FUP -> CAP Tariff -> view
* FUP -> FUP Limits -> view
* FUP -> Traffic Usage -> view
* Customers -> customer -> view
* Customers -> customer information -> view
* Customers -> Customers online -> view
* Customers -> customer bundle services -> view
* Customers -> customer internet services -> view
* Customers -> traffic counter -> view
* Customers -> customer recurring services -> view
* Customers -> bonus traffic counter -> view
* Customers -> CAP history -> view
* Networking -> routers -> view
* Networking -> network sites -> view
* Networking -> router contention -> view
* Networking -> IPv4 networks -> view
* Networking -> IPv4 networks IP -> view

To test the Splynx integration, use

```shell
@@ -119,6 +147,22 @@ sudo cp /opt/libreqos/src/integrationUISPbandwidths.template.csv /opt/libreqos/s

```
And edit the CSV using LibreOffice or your preferred CSV editor.

#### UISP Route Overrides

The default cost between nodes is 10.

Say you have Site 1, Site 2, and Site 3.
A backup path exists between Site 1 and Site 3, but it is not the preferred path.
Your preference is Site 1 > Site 2 > Site 3, but the integration by default connects Site 1 > Site 3 directly.

To fix this, add a cost above the default for the path between Site 1 and Site 3:
```
Site 1, Site 3, 100
```
With this, data will flow Site 1 > Site 2 > Site 3.
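The override works because the integration prefers the cheapest total path between sites. A minimal sketch of that selection (plain Dijkstra over hypothetical edge lists, not the integration's actual code) shows the 100-cost override pushing traffic through Site 2:

```python
import heapq

def shortest_path(edges, start, goal):
    """Dijkstra over undirected (a, b, cost) edges; returns the node path."""
    graph = {}
    for a, b, cost in edges:
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (dist + cost, nxt, path + [nxt]))
    return None

# Every link at the default cost of 10: the direct backup link wins.
default = [("Site 1", "Site 2", 10), ("Site 2", "Site 3", 10), ("Site 1", "Site 3", 10)]
# Override raises the backup link to 100, forcing the route via Site 2.
overridden = [("Site 1", "Site 2", 10), ("Site 2", "Site 3", 10), ("Site 1", "Site 3", 100)]
```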
To make the change, perform a reload of the integration with `sudo systemctl restart lqos_scheduler`.

## Powercode Integration

First, set the relevant parameters for Powercode (powercode_api_key, powercode_api_url, etc.) in `/etc/lqos.conf`.
@@ -1,4 +1,4 @@

# Updating 1.5 To Latest Version

# Updating To The Latest Version

```{warning}
If you use the XDP bridge, traffic will briefly stop passing through the bridge when lqosd restarts (the XDP bridge only operates while lqosd runs).

@@ -10,7 +10,7 @@ Download the latest .deb from [libreqos.io/#download](https://libreqos.io/#downl

Unzip the .zip file and transfer the .deb to your LibreQoS box, installing with:

```
sudo apt install [deb file name]
sudo apt install ./[deb file name]
```

Now run:
31
index.rst
@@ -45,25 +45,18 @@ Welcome to the LibreQoS documentation!

.. toctree::
   :maxdepth: 1
   :caption: v1.5:
   :caption: v2.0:

   docs/v1.5/Quickstart/quickstart-libreqos-1.5
   docs/v1.5/Quickstart/quickstart-prereq
   docs/v1.5/Quickstart/configuration
   docs/v1.5/Quickstart/services-and-run
   docs/v1.5/Quickstart/share
   docs/v2.0/Quickstart/quickstart-libreqos-2.0
   docs/v2.0/Quickstart/quickstart-prereq
   docs/v2.0/Quickstart/configuration
   docs/v2.0/Quickstart/services-and-run
   docs/v2.0/Quickstart/share

   docs/v1.5/Updates/update

   docs/v1.5/TechnicalDocs/complex-install
   docs/v1.5/TechnicalDocs/troubleshooting
   docs/v1.5/TechnicalDocs/integrations
   docs/v1.5/TechnicalDocs/extras
   docs/v1.5/TechnicalDocs/performance-tuning

.. toctree::
   :maxdepth: 1
   :caption: Legacy:

   docs/Legacy/v1.3.1
   docs/v2.0/Updates/update

   docs/v2.0/TechnicalDocs/complex-install
   docs/v2.0/TechnicalDocs/troubleshooting
   docs/v2.0/TechnicalDocs/integrations
   docs/v2.0/TechnicalDocs/extras
   docs/v2.0/TechnicalDocs/performance-tuning
@@ -1 +1 @@
1.5-BETA2
1.5-BETA4
@@ -44,14 +44,34 @@ pushd rust > /dev/null

#cargo clean
for prog in $PROGS
do
    pushd $prog > /dev/null
    cargo build $BUILD_FLAGS
    if [ $? -ne 0 ]; then
        echo "Cargo build failed. Exiting with code 1."
        exit 1
    fi
    popd > /dev/null
    # If prog is lqosd
    if [ $prog == "lqosd" ]; then
        # If the environment variable FLAMEGRAPHS is set, set the FEATURE variable to flamegraph, otherwise it's empty
        if [ -n "$FLAMEGRAPHS" ]; then
            echo "Building lqosd with flamegraph support"
            FEATURE="-F flamegraphs"
        else
            echo "Building lqosd without flamegraph support"
            FEATURE=""
        fi
        echo "Building lqosd"
        pushd lqosd > /dev/null
        cargo build $BUILD_FLAGS $FEATURE
        if [ $? -ne 0 ]; then
            echo "Cargo build failed. Exiting with code 1."
            exit 1
        fi
        popd > /dev/null
    else
        pushd $prog > /dev/null
        cargo build $BUILD_FLAGS
        if [ $? -ne 0 ]; then
            echo "Cargo build failed. Exiting with code 1."
            exit 1
        fi
    fi
done
popd > /dev/null

echo "Installing new binaries into bin folder."
for prog in $PROGS
@@ -303,4 +303,4 @@ def importFromSplynx():
    createShaper()

if __name__ == '__main__':
    importFromSplynx()
1021
src/rust/Cargo.lock
generated
File diff suppressed because it is too large
@@ -6,7 +6,7 @@ license = "GPL-2.0-only"

[profile.release]
strip = "debuginfo"
lto = "thin"
lto = "off"
incremental = true

[profile.release.build-override]

@@ -35,6 +35,7 @@ members = [
    "uisp", # REST support for the UISP API
    "uisp_integration", # UISP Integration in Rust
    "lqos_support_tool", # A helper tool to make it easier to request/receive support
    "lts2_sys", # Linkage to the proprietary LTS2 system
]

[dependencies]

@@ -57,7 +58,7 @@ ip_network_table = "0"
ip_network = "0"
sha2 = "0"
uuid = { version = "1", features = ["v4", "fast-rng"] }
dashmap = "5.1.0"
dashmap = "^5.5.3"
toml = "0.8.8"
zerocopy = { version = "0.8.5", features = [ "derive", "zerocopy-derive", "simd" ] }
sysinfo = { version = "0", default-features = false, features = [ "system" ] }
@@ -20,6 +20,10 @@ tracing = { workspace = true }
nix = { workspace = true }
serde_cbor = { workspace = true }

# For memory debugging
allocative = { version = "0.3.3", features = [ "dashmap" ] }
allocative_derive = "0.3.3"

[dev-dependencies]
criterion = { version = "0", features = [ "html_reports", "async_tokio" ] }
@@ -10,7 +10,7 @@ pub use client::bus_request;
use tracing::error;
pub use persistent_client::BusClient;
pub use reply::BusReply;
pub use request::{BusRequest, StatsRequest, TopFlowType};
pub use request::{BusRequest, StatsRequest, TopFlowType, BlackboardSystem};
pub use response::BusResponse;
pub use session::BusSession;
use thiserror::Error;
@@ -204,6 +204,39 @@ pub enum BusRequest {

    /// IP Protocol Summary
    IpProtocolSummary,

    /// Submit a piece of information to the blackboard
    BlackboardData {
        /// The subsystem to which the data applies
        subsystem: BlackboardSystem,
        /// The key for the data
        key: String,
        /// The value for the data
        value: String,
    },

    /// Submit binary data to the blackboard
    BlackboardBlob {
        tag: String,
        part: usize,
        blob: Vec<u8>,
    },

    /// Finish a blackboard session
    BlackboardFinish,
}

/// Defines the parts of the blackboard
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq, Copy)]
pub enum BlackboardSystem {
    /// The system as a whole
    System,
    /// A specific site
    Site,
    /// A specific circuit
    Circuit,
    /// A specific device
    Device,
}

/// Defines the type of "top" flow being requested
@@ -28,6 +28,15 @@ pub enum BusResponse {

        /// In pps
        packets_per_second: DownUpOrder<u64>,

        /// PPS TCP only
        tcp_packets_per_second: DownUpOrder<u64>,

        /// PPS UDP only
        udp_packets_per_second: DownUpOrder<u64>,

        /// PPS ICMP only
        icmp_packets_per_second: DownUpOrder<u64>,

        /// How much of the response has been subject to the shaper?
        shaped_bits_per_second: DownUpOrder<u64>,
    },
@@ -12,7 +12,7 @@ use tokio::{

use super::BUS_SOCKET_DIRECTORY;

const READ_BUFFER_SIZE: usize = 20_480;
const READ_BUFFER_SIZE: usize = 200_480_000;

/// Implements a Tokio-friendly server using Unix Sockets and the bus protocol.
/// Requests are handled and then forwarded to the handler.
@@ -27,7 +27,7 @@ pub struct IpStats {

    pub tc_handle: TcHandle,

    /// TCP Retransmits for this host at the current time.
    pub tcp_retransmits: DownUpOrder<u64>,
    pub tcp_retransmits: (f64, f64),
}

/// Represents an IP Mapping in the XDP IP to TC/CPU mapping system.
@@ -22,7 +22,7 @@ pub use bus::{
    bus_request, decode_request, decode_response, encode_request,
    encode_response, BusClient, BusReply, BusRequest, BusResponse, BusSession,
    CakeDiffTinTransit, CakeDiffTransit, CakeTransit, QueueStoreTransit,
    UnixSocketServer, BUS_SOCKET_PATH, StatsRequest, TopFlowType,
    UnixSocketServer, BUS_SOCKET_PATH, StatsRequest, TopFlowType, BlackboardSystem
};
pub use tc_handle::TcHandle;
@@ -1,3 +1,4 @@
use allocative_derive::Allocative;
use tracing::error;
use lqos_utils::hex_string::read_hex_string;
use serde::{Deserialize, Serialize};

@@ -5,7 +6,7 @@ use thiserror::Error;

/// Provides consistent handling of TC handle types.
#[derive(
    Copy, Clone, Serialize, Deserialize, Debug, Default, PartialEq, Eq, Hash
    Copy, Clone, Serialize, Deserialize, Debug, Default, PartialEq, Eq, Hash, Allocative
)]
pub struct TcHandle(u32);
@@ -15,9 +15,12 @@ ip_network = { workspace = true }
sha2 = { workspace = true }
uuid = { workspace = true }
tracing = { workspace = true }
dashmap = { workspace = true }
pyo3 = { workspace = true }
toml = { workspace = true }
lqos_utils = { path = "../lqos_utils" }
arc-swap = { workspace = true }
once_cell = { workspace = true }

# For memory debugging
allocative = { version = "0.3.3", features = [ "dashmap" ] }
allocative_derive = "0.3.3"
@@ -4,6 +4,7 @@ use serde::{Deserialize, Serialize};
use toml_edit::{DocumentMut, value};
use std::{fs, path::Path};
use thiserror::Error;
use crate::{load_config, update_config};

/// Represents the top-level of the `/etc/lqos.conf` file. Serialization
/// structure.

@@ -222,47 +223,18 @@ impl EtcLqos {
/// config file - ONLY if one doesn't already exist.
#[allow(dead_code)]
pub fn enable_long_term_stats(license_key: String) {
    if let Ok(raw) = std::fs::read_to_string("/etc/lqos.conf") {
        let document = raw.parse::<DocumentMut>();
        match document {
            Err(e) => {
                error!("Unable to parse TOML from /etc/lqos.conf");
                error!("Full error: {:?}", e);
                return;
            }
            Ok(mut config_doc) => {
                let cfg = toml_edit::de::from_document::<EtcLqos>(config_doc.clone());
                match cfg {
                    Ok(cfg) => {
                        // Now we enable LTS if it's not present
                        if let Ok(isp_config) = crate::load_config() {
                            if cfg.long_term_stats.is_none() {
                                let mut new_section = toml_edit::table();
                                new_section["gather_stats"] = value(true);
                                new_section["collation_period_seconds"] = value(60);
                                new_section["license_key"] = value(license_key);
                                if isp_config.uisp_integration.enable_uisp {
                                    new_section["uisp_reporting_interval_seconds"] = value(300);
                                }
                                config_doc["long_term_stats"] = new_section;

                                let new_cfg = config_doc.to_string();
                                if let Err(e) = fs::write(Path::new("/etc/lqos.conf"), new_cfg) {
                                    error!("Unable to write to /etc/lqos.conf");
                                    error!("{e:?}");
                                    return;
                                }
                            }
                        }
                    }
                    Err(e) => {
                        error!("Unable to parse TOML from /etc/lqos.conf");
                        error!("Full error: {:?}", e);
                        return;
                    }
                }
            }
        }
    }
    let Ok(config) = load_config() else { return };
    let mut new_config = (*config).clone();
    new_config.long_term_stats.gather_stats = true;
    new_config.long_term_stats.license_key = Some(license_key);
    new_config.long_term_stats.collation_period_seconds = 60;
    if config.uisp_integration.enable_uisp {
        new_config.long_term_stats.uisp_reporting_interval_seconds = Some(300);
    }
    match update_config(&new_config) {
        Ok(_) => info!("Long-term stats enabled"),
        Err(e) => {
            error!("Unable to update configuration: {e:?}");
        }
    }
}
@@ -20,6 +20,14 @@ pub struct LongTermStats {
    /// for some people. A good default may be 5 minutes. Not specifying this
    /// disables UISP integration.
    pub uisp_reporting_interval_seconds: Option<u64>,

    /// If set, then this URL will be used for connecting to a self-hosted or
    /// development LTS server. It will be decorated with https:// and :443
    pub lts_url: Option<String>,

    /// If enabled, Insight (LTS2) will be used in addition to the normal
    /// LTS system. This system is in alpha and is invite-only for now.
    pub use_insight: Option<bool>,
}

impl Default for LongTermStats {

@@ -29,6 +37,8 @@ impl Default for LongTermStats {
            collation_period_seconds: 60,
            license_key: None,
            uisp_reporting_interval_seconds: Some(300),
            lts_url: None,
            use_insight: None,
        }
    }
}
@@ -7,6 +7,7 @@ use std::{
    fs, path::{Path, PathBuf},
};
use std::collections::HashSet;
use allocative_derive::Allocative;
use thiserror::Error;
use lqos_utils::units::DownUpOrder;
pub use network_json_node::NetworkJsonNode;

@@ -15,7 +16,7 @@ pub use network_json_transport::NetworkJsonTransport;

/// Holder for the network.json representation.
/// This is condensed into a single level vector with index-based referencing
/// for easy use in funnel calculations.
#[derive(Debug)]
#[derive(Debug, Allocative)]
pub struct NetworkJson {
    /// Nodes that make up the tree, flattened and referenced by index number.
    /// TODO: We should add a primary key to nodes in network.json.

@@ -34,7 +35,7 @@ impl NetworkJson {
        Self { nodes: Vec::new() }
    }

    /// Retrieves the length and capacity for the nodes vector.
    /// Returns the length and capacity of the nodes vector.
    pub fn len_and_capacity(&self) -> (usize, usize) {
        (self.nodes.len(), self.nodes.capacity())
    }
@@ -64,6 +65,10 @@ impl NetworkJson {
            name: "Root".to_string(),
            max_throughput: (0, 0),
            current_throughput: DownUpOrder::zeroed(),
            current_packets: DownUpOrder::zeroed(),
            current_tcp_packets: DownUpOrder::zeroed(),
            current_udp_packets: DownUpOrder::zeroed(),
            current_icmp_packets: DownUpOrder::zeroed(),
            current_tcp_retransmits: DownUpOrder::zeroed(),
            current_drops: DownUpOrder::zeroed(),
            current_marks: DownUpOrder::zeroed(),

@@ -150,6 +155,10 @@ impl NetworkJson {
        //log::warn!("Locking network tree for throughput cycle");
        self.nodes.iter_mut().for_each(|n| {
            n.current_throughput.set_to_zero();
            n.current_packets.set_to_zero();
            n.current_tcp_packets.set_to_zero();
            n.current_udp_packets.set_to_zero();
            n.current_icmp_packets.set_to_zero();
            n.current_tcp_retransmits.set_to_zero();
            n.rtts.clear();
            n.current_drops.set_to_zero();

@@ -164,11 +173,19 @@ impl NetworkJson {
        &mut self,
        targets: &[usize],
        bytes: (u64, u64),
        packets: (u64, u64),
        tcp: (u64, u64),
        udp: (u64, u64),
        icmp: (u64, u64),
    ) {
        for idx in targets {
            // Safety first: use "get" to ensure that the node exists
            if let Some(node) = self.nodes.get_mut(*idx) {
                node.current_throughput.checked_add_tuple(bytes);
                node.current_packets.checked_add_tuple(packets);
                node.current_tcp_packets.checked_add_tuple(tcp);
                node.current_udp_packets.checked_add_tuple(udp);
                node.current_icmp_packets.checked_add_tuple(icmp);
            } else {
                warn!("No network tree entry for index {idx}");
            }

@@ -248,6 +265,10 @@ fn recurse_node(
            json_to_u32(json.get("uploadBandwidthMbps")),
        ),
        current_throughput: DownUpOrder::zeroed(),
        current_packets: DownUpOrder::zeroed(),
        current_tcp_packets: DownUpOrder::zeroed(),
        current_udp_packets: DownUpOrder::zeroed(),
        current_icmp_packets: DownUpOrder::zeroed(),
        current_tcp_retransmits: DownUpOrder::zeroed(),
        current_drops: DownUpOrder::zeroed(),
        current_marks: DownUpOrder::zeroed(),
@@ -1,9 +1,10 @@
use std::collections::HashSet;
use allocative_derive::Allocative;
use lqos_utils::units::DownUpOrder;
use crate::NetworkJsonTransport;

/// Describes a node in the network map tree.
#[derive(Debug, Clone)]
#[derive(Debug, Clone, Allocative)]
pub struct NetworkJsonNode {
    /// The node name, as it appears in `network.json`
    pub name: String,

@@ -14,6 +15,18 @@ pub struct NetworkJsonNode {
    /// Current throughput (in bytes/second) at this node
    pub current_throughput: DownUpOrder<u64>, // In bytes

    /// Current Packets
    pub current_packets: DownUpOrder<u64>,

    /// Current TCP Packets
    pub current_tcp_packets: DownUpOrder<u64>,

    /// Current UDP Packets
    pub current_udp_packets: DownUpOrder<u64>,

    /// Current ICMP Packets
    pub current_icmp_packets: DownUpOrder<u64>,

    /// Current TCP Retransmits
    pub current_tcp_retransmits: DownUpOrder<u64>, // In retries

@@ -50,6 +63,22 @@ impl NetworkJsonNode {
            self.current_throughput.get_down(),
            self.current_throughput.get_up(),
        ),
        current_packets: (
            self.current_packets.get_down(),
            self.current_packets.get_up(),
        ),
        current_tcp_packets: (
            self.current_tcp_packets.get_down(),
            self.current_tcp_packets.get_up(),
        ),
        current_udp_packets: (
            self.current_udp_packets.get_down(),
            self.current_udp_packets.get_up(),
        ),
        current_icmp_packets: (
            self.current_icmp_packets.get_down(),
            self.current_icmp_packets.get_up(),
        ),
        current_retransmits: (
            self.current_tcp_retransmits.get_down(),
            self.current_tcp_retransmits.get_up(),
|
||||
|
@@ -11,6 +11,14 @@ pub struct NetworkJsonTransport {
    pub max_throughput: (u32, u32),
    /// Current node throughput
    pub current_throughput: (u64, u64),
    /// Current node packets
    pub current_packets: (u64, u64),
    /// Current TCP packets
    pub current_tcp_packets: (u64, u64),
    /// Current UDP packets
    pub current_udp_packets: (u64, u64),
    /// Current ICMP packets
    pub current_icmp_packets: (u64, u64),
    /// Current count of TCP retransmits
    pub current_retransmits: (u64, u64),
    /// Cake marks

@@ -116,7 +116,9 @@ impl ConfigShapedDevices {
    pub fn replace_with_new_data(&mut self, devices: Vec<ShapedDevice>) {
        self.devices = devices;
        debug!("{:?}", self.devices);
        self.trie = ConfigShapedDevices::make_trie(&self.devices);
        let mut new_trie = ConfigShapedDevices::make_trie(&self.devices);
        std::mem::swap(&mut self.trie, &mut new_trie);
        std::mem::drop(new_trie); // Explicitly drop the old trie
    }

    fn make_trie(
@@ -183,6 +185,21 @@ impl ConfigShapedDevices {

        None
    }

    /// Helper function to search for an XdpIpAddress and return the circuit hash
    /// if one exists.
    pub fn get_circuit_hash_from_ip(&self, ip: &XdpIpAddress) -> Option<i64> {
        let lookup = match ip.as_ip() {
            IpAddr::V4(ip) => ip.to_ipv6_mapped(),
            IpAddr::V6(ip) => ip,
        };
        if let Some(c) = self.trie.longest_match(lookup) {
            let device = &self.devices[*c.1];
            return Some(device.circuit_hash);
        }

        None
    }
}

#[derive(Error, Debug)]

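The lookup above normalizes every incoming address to IPv6 before consulting the trie, mapping IPv4 into the `::ffff:a.b.c.d` range so one IPv6-keyed structure can serve both families. A minimal, runnable sketch of that normalization using only the standard library (the `lookup_key` helper is illustrative, not part of LibreQoS):

```rust
use std::net::{IpAddr, Ipv6Addr};

// Normalize any IpAddr to an Ipv6Addr, mapping IPv4 into ::ffff:a.b.c.d,
// so a single IPv6-keyed trie can index both address families.
fn lookup_key(ip: IpAddr) -> Ipv6Addr {
    match ip {
        IpAddr::V4(v4) => v4.to_ipv6_mapped(),
        IpAddr::V6(v6) => v6,
    }
}

fn main() {
    let v4: IpAddr = "192.0.2.1".parse().unwrap();
    let v6: IpAddr = "2001:db8::1".parse().unwrap();
    assert_eq!(lookup_key(v4), "::ffff:192.0.2.1".parse::<Ipv6Addr>().unwrap());
    assert_eq!(lookup_key(v6), "2001:db8::1".parse::<Ipv6Addr>().unwrap());
    println!("ok");
}
```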
@@ -2,7 +2,7 @@ use csv::StringRecord;
use tracing::error;
use serde::{Deserialize, Serialize};
use std::net::{Ipv4Addr, Ipv6Addr};

use lqos_utils::hash_to_i64;
use super::ShapedDevicesError;

/// Represents a row in the `ShapedDevices.csv` file.
@@ -55,6 +55,18 @@ pub struct ShapedDevice {

    /// Generic comments field, does nothing.
    pub comment: String,

    /// Hash of the circuit ID, used for internal lookups.
    #[serde(skip)]
    pub circuit_hash: i64,

    /// Hash of the device ID, used for internal lookups.
    #[serde(skip)]
    pub device_hash: i64,

    /// Hash of the parent node, used for internal lookups.
    #[serde(skip)]
    pub parent_hash: i64,
}

impl ShapedDevice {
@@ -83,6 +95,9 @@ impl ShapedDevice {
                ShapedDevicesError::CsvEntryParseError(record[11].to_string())
            })?,
            comment: record[12].to_string(),
            circuit_hash: hash_to_i64(&record[0]),
            device_hash: hash_to_i64(&record[2]),
            parent_hash: hash_to_i64(&record[4]),
        })
    }

@@ -1,4 +1,4 @@
use lqos_bus::{BusRequest, BusResponse, TcHandle};
use lqos_bus::{BlackboardSystem, BusRequest, BusResponse, TcHandle};
use lqos_utils::hex_string::read_hex_string;
use nix::libc::getpid;
use pyo3::{
@@ -92,6 +92,8 @@ fn liblqos_python(_py: Python, m: &PyModule) -> PyResult<()> {
    m.add_wrapped(wrap_pyfunction!(get_tree_weights))?;
    m.add_wrapped(wrap_pyfunction!(get_libreqos_directory))?;
    m.add_wrapped(wrap_pyfunction!(is_network_flat))?;
    m.add_wrapped(wrap_pyfunction!(blackboard_finish))?;
    m.add_wrapped(wrap_pyfunction!(blackboard_submit))?;

    Ok(())
}
@@ -702,3 +704,21 @@ pub fn get_libreqos_directory() -> PyResult<String> {
pub fn is_network_flat() -> PyResult<bool> {
    Ok(lqos_config::NetworkJson::load().unwrap().get_nodes_when_ready().len() == 1)
}

#[pyfunction]
pub fn blackboard_finish() -> PyResult<()> {
    let _ = run_query(vec![BusRequest::BlackboardFinish]);
    Ok(())
}

#[pyfunction]
pub fn blackboard_submit(subsystem: String, key: String, value: String) -> PyResult<()> {
    let subsystem = match subsystem.as_str() {
        "system" => BlackboardSystem::System,
        "site" => BlackboardSystem::Site,
        "circuit" => BlackboardSystem::Circuit,
        _ => return Err(PyOSError::new_err("Invalid subsystem")),
    };
    let _ = run_query(vec![BusRequest::BlackboardData { subsystem, key, value }]);
    Ok(())
}

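The `blackboard_submit` binding above maps a free-form subsystem string onto the bus enum and rejects anything else, rather than silently dropping bad input. A hedged stand-alone mirror of that parsing (the enum and `parse_subsystem` are illustrative copies, not the real `lqos_bus` types):

```rust
// Hypothetical mirror of the subsystem parsing in blackboard_submit:
// unknown strings are rejected with an error rather than ignored.
#[derive(Debug, PartialEq)]
enum BlackboardSystem {
    System,
    Site,
    Circuit,
}

fn parse_subsystem(s: &str) -> Result<BlackboardSystem, String> {
    match s {
        "system" => Ok(BlackboardSystem::System),
        "site" => Ok(BlackboardSystem::Site),
        "circuit" => Ok(BlackboardSystem::Circuit),
        other => Err(format!("Invalid subsystem: {other}")),
    }
}

fn main() {
    assert_eq!(parse_subsystem("site"), Ok(BlackboardSystem::Site));
    assert!(parse_subsystem("bogus").is_err());
    println!("ok");
}
```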
@@ -13,7 +13,6 @@ lqos_config = { path = "../lqos_config" }
lqos_sys = { path = "../lqos_sys" }
lqos_utils = { path = "../lqos_utils" }
tracing = { workspace = true }
tokio = { workspace = true }
once_cell = { workspace = true }
dashmap = { workspace = true }
anyhow = { workspace = true }

@@ -3,6 +3,7 @@ use tracing::{error, warn};
use lqos_bus::TcHandle;
use lqos_utils::hex_string::read_hex_string;
use serde_json::Value;
use lqos_utils::hash_to_i64;

#[derive(Default, Clone, Debug)]
pub struct QueueNode {
@@ -22,11 +23,14 @@ pub struct QueueNode {
    pub circuits: Vec<QueueNode>,
    pub circuit_id: Option<String>,
    pub circuit_name: Option<String>,
    pub circuit_hash: Option<i64>,
    pub parent_node: Option<String>,
    pub parent_hash: Option<i64>,
    pub devices: Vec<QueueNode>,
    pub comment: String,
    pub device_id: Option<String>,
    pub device_name: Option<String>,
    pub device_hash: Option<i64>,
    pub mac: Option<String>,
    pub children: Vec<QueueNode>,
}
@@ -157,18 +161,27 @@ impl QueueNode {
            }
            "circuitId" | "circuitID" => {
                grab_string_option!(result.circuit_id, key.as_str(), value);
                if result.circuit_id.is_some() {
                    result.circuit_hash = Some(hash_to_i64(result.circuit_id.as_ref().unwrap()));
                }
            }
            "circuitName" => {
                grab_string_option!(result.circuit_name, key.as_str(), value);
            }
            "parentNode" | "ParentNode" => {
                grab_string_option!(result.parent_node, key.as_str(), value);
                if result.parent_node.is_some() {
                    result.parent_hash = Some(hash_to_i64(result.parent_node.as_ref().unwrap()));
                }
            }
            "comment" => {
                grab_string!(result.comment, key.as_str(), value);
            }
            "deviceId" | "deviceID" => {
                grab_string_option!(result.device_id, key.as_str(), value);
                if result.device_id.is_some() {
                    result.device_hash = Some(hash_to_i64(result.device_id.as_ref().unwrap()));
                }
            }
            "deviceName" => {
                grab_string_option!(result.device_name, key.as_str(), value);

@@ -39,7 +39,8 @@ fn zero_total_queue_stats() {

#[derive(Debug)]
pub struct AllQueueData {
    data: Mutex<HashMap<String, QueueData>>,
    // Map is keyed on circuit hash (which is a hash of circuit_id)
    data: Mutex<HashMap<i64, QueueData>>,
}

impl AllQueueData {
@@ -67,8 +68,8 @@ impl AllQueueData {

        // Make download markings
        for dl in download.into_iter() {
            seen_queue_ids.push(dl.circuit_id.clone());
            if let Some(q) = lock.get_mut(&dl.circuit_id) {
            if let Some(q) = lock.get_mut(&dl.circuit_hash) {
                seen_queue_ids.push(dl.circuit_hash.clone());
                // We need to update it
                q.drops.down = dl.drops;
                q.marks.down = dl.marks;
@@ -82,14 +83,14 @@ impl AllQueueData {
                };
                new_record.drops.down = dl.drops;
                new_record.marks.down = dl.marks;
                lock.insert(dl.circuit_id.clone(), new_record);
                lock.insert(dl.circuit_hash, new_record);
            }
        }

        // Make upload markings
        for ul in upload.into_iter() {
            seen_queue_ids.push(ul.circuit_id.clone());
            if let Some(q) = lock.get_mut(&ul.circuit_id) {
            if let Some(q) = lock.get_mut(&ul.circuit_hash) {
                seen_queue_ids.push(ul.circuit_hash.clone());
                // We need to update it
                q.drops.up = ul.drops;
                q.marks.up = ul.marks;
@@ -103,7 +104,7 @@ impl AllQueueData {
                };
                new_record.drops.up = ul.drops;
                new_record.marks.up = ul.marks;
                lock.insert(ul.circuit_id.clone(), new_record);
                lock.insert(ul.circuit_hash, new_record);
            }
        }

@@ -111,7 +112,7 @@ impl AllQueueData {
        lock.retain(|k, _| seen_queue_ids.contains(k));
    }

    pub fn iterate_queues(&self, mut f: impl FnMut(&str, &DownUpOrder<u64>, &DownUpOrder<u64>)) {
    pub fn iterate_queues(&self, mut f: impl FnMut(i64, &DownUpOrder<u64>, &DownUpOrder<u64>)) {
        let lock = self.data.lock().unwrap();
        for (circuit_id, q) in lock.iter() {
            if let Some(prev_drops) = q.prev_drops {
@@ -119,7 +120,7 @@ impl AllQueueData {
                if q.drops > prev_drops || q.marks > prev_marks {
                    let drops = q.drops.checked_sub_or_zero(prev_drops);
                    let marks = q.marks.checked_sub_or_zero(prev_marks);
                    f(circuit_id, &drops, &marks);
                    f(*circuit_id, &drops, &marks);
                }
            }
        }

@@ -3,7 +3,7 @@ use crate::{
    circuit_to_queue::CIRCUIT_TO_QUEUE, interval::QUEUE_MONITOR_INTERVAL,
    queue_store::QueueStore, tracking::reader::read_named_queue_from_interface,
};
use tracing::{debug, info, warn};
use tracing::{debug, warn};
use lqos_utils::fdtimer::periodic;
mod reader;
mod watched_queues;
@@ -66,13 +66,13 @@ fn track_queues() {
                QueueStore::new(download[0].clone(), upload[0].clone()),
            );
        } else {
            info!(
            debug!(
                "No queue data returned for {}, {}/{} found.",
                circuit_id.to_string(),
                download.len(),
                upload.len()
            );
            info!("You probably want to run LibreQoS.py");
            debug!("You probably want to run LibreQoS.py");
        }
    }
}
@@ -84,7 +84,7 @@ fn track_queues() {

/// Holds the CAKE marks/drops for a given queue/circuit.
pub struct TrackedQueue {
    circuit_id: String,
    circuit_hash: i64,
    drops: u64,
    marks: u64,
}
@@ -96,11 +96,11 @@ fn connect_queues_to_circuit(structure: &[QueueNode], queues: &[QueueType]) -> V
        if let QueueType::Cake(cake) = q {
            let (major, minor) = cake.parent.get_major_minor();
            if let Some(s) = structure.iter().find(|s| s.class_major == major as u32 && s.class_minor == minor as u32) {
                if let Some(circuit_id) = &s.circuit_id {
                if let Some(circuit_hash) = &s.circuit_hash {
                    let marks: u32 = cake.tins.iter().map(|tin| tin.ecn_marks).sum();
                    if cake.drops > 0 || marks > 0 {
                        return Some(TrackedQueue {
                            circuit_id: circuit_id.clone(),
                            circuit_hash: *circuit_hash,
                            drops: cake.drops as u64,
                            marks: marks as u64,
                        })
@@ -120,11 +120,11 @@ fn connect_queues_to_circuit_up(structure: &[QueueNode], queues: &[QueueType]) -
        if let QueueType::Cake(cake) = q {
            let (major, minor) = cake.parent.get_major_minor();
            if let Some(s) = structure.iter().find(|s| s.up_class_major == major as u32 && s.class_minor == minor as u32) {
                if let Some(circuit_id) = &s.circuit_id {
                if let Some(circuit_hash) = &s.circuit_hash {
                    let marks: u32 = cake.tins.iter().map(|tin| tin.ecn_marks).sum();
                    if cake.drops > 0 || marks > 0 {
                        return Some(TrackedQueue {
                            circuit_id: circuit_id.clone(),
                            circuit_hash: *circuit_hash,
                            drops: cake.drops as u64,
                            marks: marks as u64,
                        })

@@ -21,7 +21,7 @@ enum Commands {
    Sanity,
    /// Gather Support Info and Save it to /tmp
    Gather,
    /// Gather Support Info and Send it to the LibreQoS Team. Note that LTS users and donors get priority, we don't guarantee that we'll help anyone else. Please make sure you've tried Zulip first ( https://chat.libreqos.io/ )
    /// Gather Support Info and Send it to the LibreQoS Team. Note that Insight users and donors get priority, we don't guarantee that we'll help anyone else. Please make sure you've tried Zulip first ( https://chat.libreqos.io/ )
    Submit,
    /// Summarize the contents of a support dump
    Summarize {

@@ -16,5 +16,9 @@ once_cell = { workspace = true }
thiserror = { workspace = true }
zerocopy = { workspace = true }

# For memory debugging
allocative = { version = "0.3.3", features = [ "dashmap" ] }
allocative_derive = "0.3.3"

[build-dependencies]
bindgen = "0"

@@ -14,6 +14,12 @@ struct host_counter {
    __u64 upload_bytes;
    __u64 download_packets;
    __u64 upload_packets;
    __u64 tcp_download_packets;
    __u64 tcp_upload_packets;
    __u64 udp_download_packets;
    __u64 udp_upload_packets;
    __u64 icmp_download_packets;
    __u64 icmp_upload_packets;
    __u32 tc_handle;
    __u64 last_seen;
};
@@ -22,7 +28,7 @@ struct host_counter {
// runs out of space, the least recently seen host will be removed.
struct
{
    __uint(type, BPF_MAP_TYPE_LRU_PERCPU_HASH);
    __uint(type, BPF_MAP_TYPE_PERCPU_HASH);
    __type(key, struct in6_addr);
    __type(value, struct host_counter);
    __uint(max_entries, MAX_TRACKED_IPS);
@@ -34,37 +40,77 @@ static __always_inline void track_traffic(
    struct in6_addr * key,
    __u32 size,
    __u32 tc_handle,
    __u64 now
    struct dissector_t * dissector
) {
    // Count the bits. It's per-CPU, so we can't be interrupted - no sync required
    struct host_counter * counter =
        (struct host_counter *)bpf_map_lookup_elem(&map_traffic, key);
    if (counter) {
        counter->last_seen = now;
        counter->last_seen = dissector->now;
        counter->tc_handle = tc_handle;
        if (direction == 1) {
            // Download
            counter->download_packets += 1;
            counter->download_bytes += size;
            switch (dissector->ip_protocol) {
                case IPPROTO_TCP:
                    counter->tcp_download_packets += 1;
                    break;
                case IPPROTO_UDP:
                    counter->udp_download_packets += 1;
                    break;
                case IPPROTO_ICMP:
                    counter->icmp_download_packets += 1;
                    break;
            }
        } else {
            // Upload
            counter->upload_packets += 1;
            counter->upload_bytes += size;
            switch (dissector->ip_protocol) {
                case IPPROTO_TCP:
                    counter->tcp_upload_packets += 1;
                    break;
                case IPPROTO_UDP:
                    counter->udp_upload_packets += 1;
                    break;
                case IPPROTO_ICMP:
                    counter->icmp_upload_packets += 1;
                    break;
            }
        }
    } else {
        struct host_counter new_host = {0};
        new_host.tc_handle = tc_handle;
        new_host.last_seen = now;
        new_host.last_seen = dissector->now;
        if (direction == 1) {
            new_host.download_packets = 1;
            new_host.download_bytes = size;
            new_host.upload_bytes = 0;
            new_host.upload_packets = 0;
            switch (dissector->ip_protocol) {
                case IPPROTO_TCP:
                    new_host.tcp_download_packets = 1;
                    break;
                case IPPROTO_UDP:
                    new_host.udp_download_packets = 1;
                    break;
                case IPPROTO_ICMP:
                    new_host.icmp_download_packets = 1;
                    break;
            }
        } else {
            new_host.upload_packets = 1;
            new_host.upload_bytes = size;
            new_host.download_bytes = 0;
            new_host.download_packets = 0;
            switch (dissector->ip_protocol) {
                case IPPROTO_TCP:
                    new_host.tcp_upload_packets = 1;
                    break;
                case IPPROTO_UDP:
                    new_host.udp_upload_packets = 1;
                    break;
                case IPPROTO_ICMP:
                    new_host.icmp_upload_packets = 1;
                    break;
            }
        }
        if (bpf_map_update_elem(&map_traffic, key, &new_host, BPF_NOEXIST) != 0) {
            bpf_debug("Failed to insert flow");

@@ -153,7 +153,7 @@ int xdp_prog(struct xdp_md *ctx)
        &lookup_key.address,
        ctx->data_end - ctx->data, // end - data = length
        tc_handle,
        dissector.now
        &dissector
    );

    // Send on its way

@@ -251,19 +251,18 @@ pub fn iterate_flows(
// Arguments: the list of flow keys to expire
pub fn end_flows(flows: &mut [FlowbeeKey]) -> anyhow::Result<()> {
    let mut map = BpfMap::<FlowbeeKey, FlowbeeData>::from_path("/sys/fs/bpf/flowbee")?;
    let mut keys = flows.iter().map(|k| k.clone()).collect();

    for flow in flows {
        map.delete(flow)?;
    }
    map.clear_bulk_keys(&mut keys)?;

    Ok(())
}

pub(crate) fn expire_throughput(keys: &mut [XdpIpAddress]) -> anyhow::Result<()> {
/// Expire all throughput data for the given keys.
/// This uses the bulk delete method, which is faster than
/// the per-row method due to only having one lock.
pub fn expire_throughput(mut keys: Vec<XdpIpAddress>) -> anyhow::Result<()> {
    let mut map = BpfMap::<XdpIpAddress, HostCounter>::from_path("/sys/fs/bpf/map_traffic")?;

    for key in keys {
        map.delete(key).unwrap();
    }
    map.clear_bulk_keys(&mut keys)?;
    Ok(())
}

@@ -238,6 +238,20 @@ where
        }
        Ok(())
    }

    /// Bulk clear selected keys from the map.
    pub fn clear_bulk_keys(&mut self, keys: &mut Vec<K>) -> Result<()> {
        let mut count = keys.len() as u32;
        loop {
            let ret = unsafe {
                bpf_map_delete_batch(self.fd, keys.as_mut_ptr() as *mut c_void, &mut count, null_mut())
            };
            if ret != 0 || count == 0 {
                break;
            }
        }
        Ok(())
    }
}

impl<K, V> Drop for BpfMap<K, V> {

@@ -1,11 +1,12 @@
//! Data structures for the Flowbee eBPF program.

use allocative_derive::Allocative;
use lqos_utils::XdpIpAddress;
use zerocopy::FromBytes;
use lqos_utils::units::DownUpOrder;

/// Representation of the eBPF `flow_key_t` type.
#[derive(Debug, Clone, Copy, Default, PartialEq, Eq, Hash, FromBytes)]
#[derive(Debug, Clone, Copy, Default, PartialEq, Eq, Hash, FromBytes, Allocative)]
#[repr(C)]
pub struct FlowbeeKey {
    /// Mapped `XdpIpAddress` source for the flow.

@@ -1,12 +1,12 @@
use std::time::Duration;
use tracing::{debug, error};
use tracing::{debug, error, info};
use lqos_utils::unix_time::time_since_boot;

/// Starts a periodic garbage collector that will run every hour.
/// This is used to clean up old eBPF map entries to limit memory usage.
pub fn bpf_garbage_collector() {
    const SLEEP_TIME: u64 = 60 * 60; // 1 Hour
    //const SLEEP_TIME: u64 = 5 * 60; // 5 Minutes
    //const SLEEP_TIME: u64 = 60 * 60; // 1 Hour
    const SLEEP_TIME: u64 = 5 * 60; // 5 Minutes

    debug!("Starting BPF garbage collector");
    let result = std::thread::Builder::new()
@@ -26,7 +26,8 @@ pub fn bpf_garbage_collector() {
/// Iterates through all throughput entries, building a list of any that
/// haven't been seen for an hour. These are then bulk deleted.
fn throughput_garbage_collect() {
    const EXPIRY_TIME: u64 = 60 * 60; // 1 Hour
    //const EXPIRY_TIME: u64 = 60 * 60; // 1 Hour
    const EXPIRY_TIME: u64 = 60 * 15; // 15 minutes
    //const EXPIRY_TIME: u64 = 5 * 60; // 5 Minutes
    let Ok(now) = time_since_boot() else { return };
    let now = Duration::from(now).as_nanos() as u64;
@@ -51,8 +52,8 @@ fn throughput_garbage_collect() {
    }

    if !expired.is_empty() {
        debug!("Garbage collecting {} throughput entries", expired.len());
        if let Err(e) = crate::bpf_iterator::expire_throughput(&mut expired) {
        info!("Garbage collecting {} throughput entries", expired.len());
        if let Err(e) = crate::bpf_iterator::expire_throughput(expired) {
            error!("Failed to garbage collect throughput: {:?}", e);
        }
    }

@@ -30,6 +30,6 @@ pub use kernel_wrapper::LibreQoSKernels;
pub use linux::num_possible_cpus;
pub use lqos_kernel::max_tracked_ips;
pub use throughput::{throughput_for_each, HostCounter};
pub use bpf_iterator::{iterate_flows, end_flows};
pub use bpf_iterator::{iterate_flows, end_flows, expire_throughput};
pub use lqos_kernel::interface_name_to_index;
pub use garbage_collector::bpf_garbage_collector;

@@ -83,7 +83,7 @@ pub fn unload_xdp_from_interface(interface_name: &str) -> Result<()> {

fn set_strict_mode() -> Result<()> {
    let err = unsafe { libbpf_set_strict_mode(LIBBPF_STRICT_ALL) };
    //#[cfg(not(debug_assertions))]
    #[cfg(not(debug_assertions))]
    unsafe {
        bpf::do_not_print();
    }

@@ -17,6 +17,24 @@ pub struct HostCounter {
    /// Upload packets counter (keeps incrementing)
    pub upload_packets: u64,

    /// TCP packets downloaded
    pub tcp_download_packets: u64,

    /// TCP packets uploaded
    pub tcp_upload_packets: u64,

    /// UDP packets downloaded
    pub udp_download_packets: u64,

    /// UDP packets uploaded
    pub udp_upload_packets: u64,

    /// ICMP packets downloaded
    pub icmp_download_packets: u64,

    /// ICMP packets uploaded
    pub icmp_upload_packets: u64,

    /// Mapped TC handle, 0 if there isn't one.
    pub tc_handle: u32,

@@ -8,8 +8,13 @@ license = "GPL-2.0-only"
serde = { workspace = true }
nix = { workspace = true }
tracing = { workspace = true }
notify = { version = "5.0.0", default-features = false } # Not using crossbeam because of Tokio
notify = { version = "6", features = ["crossbeam-channel"] }
thiserror = { workspace = true }
byteorder = { workspace = true }
zerocopy = { workspace = true }
num-traits = { workspace = true }
crossbeam-channel = { workspace = true }

# For memory debugging
allocative = { version = "0.3.3", features = [ "dashmap" ] }
allocative_derive = "0.3.3"

@@ -98,7 +98,7 @@ impl FileWatcher {
    }

    // Build the watcher
    let (tx, rx) = std::sync::mpsc::channel();
    let (tx, rx) = crossbeam_channel::bounded(32);
    let watcher = notify::RecommendedWatcher::new(tx, Config::default());
    if watcher.is_err() {
        error!("Unable to create watcher for ShapedDevices.csv");

@@ -24,3 +24,11 @@ pub mod units;

/// XDP compatible IP Address
pub use xdp_ip_address::XdpIpAddress;

/// Insight standard hasher for strings
pub fn hash_to_i64(text: &str) -> i64 {
    use std::hash::{DefaultHasher, Hasher};
    let mut hasher = DefaultHasher::new();
    hasher.write(text.as_bytes());
    hasher.finish() as i64
}

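This `hash_to_i64` helper is what turns circuit, device, and parent-node ID strings into the `i64` keys used throughout the new map lookups. A runnable sketch of the same shape (copied for illustration; the real function lives in `lqos_utils`):

```rust
use std::hash::{DefaultHasher, Hasher};

// Same shape as the new lqos_utils::hash_to_i64: hash a string's bytes
// with DefaultHasher and reinterpret the u64 digest as an i64 key.
fn hash_to_i64(text: &str) -> i64 {
    let mut hasher = DefaultHasher::new();
    hasher.write(text.as_bytes());
    hasher.finish() as i64
}

fn main() {
    // Deterministic within a build: the same ID always maps to the same key,
    // so download and upload records for one circuit collide as intended.
    assert_eq!(hash_to_i64("circuit-42"), hash_to_i64("circuit-42"));
    assert_ne!(hash_to_i64("circuit-42"), hash_to_i64("circuit-43"));
    println!("ok");
}
```

Note that `DefaultHasher::new()` is documented as deterministic for a given std release, but the digest is not guaranteed stable across Rust versions, which is fine for in-process lookups like these.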
@@ -4,6 +4,7 @@

use std::sync::atomic::AtomicU64;
use std::sync::atomic::Ordering::Relaxed;
use allocative_derive::Allocative;
use crate::units::DownUpOrder;

/// AtomicDownUp is a struct that contains two atomic u64 values, one for down and one for up.
@@ -12,7 +13,7 @@ use crate::units::DownUpOrder;
///
/// Note that unlike the DownUpOrder struct, it is not intended for direct serialization, and
/// is not generic.
#[derive(Debug)]
#[derive(Debug, Allocative)]
pub struct AtomicDownUp {
    down: AtomicU64,
    up: AtomicU64,
@@ -36,29 +37,15 @@ impl AtomicDownUp {
    /// Add a tuple of u64 values to the down and up values. The addition
    /// is checked, and will not occur if it would result in an overflow.
    pub fn checked_add_tuple(&self, n: (u64, u64)) {
        let n0 = self.down.load(std::sync::atomic::Ordering::Relaxed);
        if let Some(n) = n0.checked_add(n.0) {
            self.down.store(n, std::sync::atomic::Ordering::Relaxed);
        }

        let n1 = self.up.load(std::sync::atomic::Ordering::Relaxed);
        if let Some(n) = n1.checked_add(n.1) {
            self.up.store(n, std::sync::atomic::Ordering::Relaxed);
        }
        let _ = self.down.fetch_update(Relaxed, Relaxed, |x| x.checked_add(n.0));
        let _ = self.up.fetch_update(Relaxed, Relaxed, |x| x.checked_add(n.1));
    }

    /// Add a DownUpOrder to the down and up values. The addition
    /// is checked, and will not occur if it would result in an overflow.
    pub fn checked_add(&self, n: DownUpOrder<u64>) {
        let n0 = self.down.load(std::sync::atomic::Ordering::Relaxed);
        if let Some(n) = n0.checked_add(n.down) {
            self.down.store(n, std::sync::atomic::Ordering::Relaxed);
        }

        let n1 = self.up.load(std::sync::atomic::Ordering::Relaxed);
        if let Some(n) = n1.checked_add(n.up) {
            self.up.store(n, std::sync::atomic::Ordering::Relaxed);
        }
        let _ = self.down.fetch_update(Relaxed, Relaxed, |x| x.checked_add(n.down));
        let _ = self.up.fetch_update(Relaxed, Relaxed, |x| x.checked_add(n.up));
    }

    /// Get the down value.

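The refactor above replaces a separate load/store pair (which could lose a concurrent increment between the two operations) with `fetch_update`, a compare-and-swap loop that retries until it applies atomically, while keeping the overflow check: returning `None` from the closure leaves the value untouched. A stand-alone demonstration of the pattern:

```rust
use std::sync::atomic::{AtomicU64, Ordering::Relaxed};

// Sketch of the fetch_update pattern used by AtomicDownUp: the closure
// computes the new value; returning None (here, on overflow) aborts the
// update and leaves the counter unchanged.
fn checked_add(counter: &AtomicU64, n: u64) {
    let _ = counter.fetch_update(Relaxed, Relaxed, |x| x.checked_add(n));
}

fn main() {
    let c = AtomicU64::new(10);
    checked_add(&c, 5);
    assert_eq!(c.load(Relaxed), 15);

    // Near the top of the range, an overflowing add is simply skipped.
    let big = AtomicU64::new(u64::MAX - 1);
    checked_add(&big, 5);
    assert_eq!(big.load(Relaxed), u64::MAX - 1);
    println!("ok");
}
```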
@@ -3,6 +3,7 @@
//! helps reduce directional confusion/bugs.

use std::ops::AddAssign;
use allocative_derive::Allocative;
use serde::{Deserialize, Serialize};
use zerocopy::FromBytes;
use crate::units::UpDownOrder;
@@ -11,7 +12,7 @@ use crate::units::UpDownOrder;
/// stored statistics to eliminate confusion. This is a generic
/// type: you can control the type stored inside.
#[repr(C)]
#[derive(Copy, Clone, Debug, Eq, PartialEq, Serialize, Deserialize, FromBytes, Default, Ord, PartialOrd)]
#[derive(Copy, Clone, Debug, Eq, PartialEq, Serialize, Deserialize, FromBytes, Default, Ord, PartialOrd, Allocative)]
pub struct DownUpOrder<T> {
    /// The down value
    pub down: T,
@@ -22,7 +23,7 @@ pub struct DownUpOrder<T> {
impl<T> DownUpOrder<T>
where T: std::cmp::Ord + num_traits::Zero + Copy + num_traits::CheckedSub
    + num_traits::CheckedAdd + num_traits::SaturatingSub + num_traits::SaturatingMul
    + num_traits::FromPrimitive
    + num_traits::FromPrimitive + num_traits::SaturatingAdd
{
    /// Create a new DownUpOrder with the given down and up values.
    pub fn new(down: T, up: T) -> Self {
@@ -92,7 +93,7 @@ where T: std::cmp::Ord + num_traits::Zero + Copy + num_traits::CheckedSub

    /// Add the `down` and `up` values, giving a total.
    pub fn sum(&self) -> T {
        self.down + self.up
        self.down.saturating_add(&self.up)
    }

    /// Multiply the `down` and `up` values by 8, giving the total number of bits, assuming
@@ -119,6 +120,11 @@ where T: std::cmp::Ord + num_traits::Zero + Copy + num_traits::CheckedSub
        self.down = T::zero();
        self.up = T::zero();
    }

    /// Check whether either down or up is non-zero.
    pub fn not_zero(&self) -> bool {
        self.down != T::zero() || self.up != T::zero()
    }
}

impl<T> Into<UpDownOrder<T>> for DownUpOrder<T> {
@@ -139,6 +145,14 @@ where T: std::cmp::Ord + num_traits::Zero + Copy + num_traits::CheckedAdd
    }
}

/// Divides two DownUpOrder values, returning a tuple of the results.
pub fn down_up_divide(left: DownUpOrder<u64>, right: DownUpOrder<u64>) -> (f64, f64) {
    (
        left.down as f64 / right.down as f64,
        left.up as f64 / right.up as f64,
    )
}

#[cfg(test)]
mod tests {
    use super::*;

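The two arithmetic changes above (a `sum` that saturates instead of overflowing, and `down_up_divide` doing per-direction float division, as used by the new retransmits-as-percentage display) can be illustrated with plain `u64` values; this runnable sketch inlines the arithmetic rather than using the crate types:

```rust
// Illustrates the arithmetic semantics of the DownUpOrder changes:
// saturating addition, and per-direction float ratios.
fn main() {
    // saturating_add: u64::MAX + 1 clamps instead of wrapping or panicking,
    // which is the new behavior of DownUpOrder::sum.
    assert_eq!(u64::MAX.saturating_add(1), u64::MAX);
    assert_eq!(3_u64.saturating_add(4), 7);

    // Per-direction float division, as in down_up_divide:
    // e.g. 90 retransmits over 3000 TCP packets down, 10 over 1000 up.
    let down = 90_u64 as f64 / 3_000_u64 as f64;
    let up = 10_u64 as f64 / 1_000_u64 as f64;
    assert!((down - 0.03).abs() < 1e-12);
    assert!((up - 0.01).abs() < 1e-12);

    // Caller beware: a zero denominator yields infinity, not a panic.
    assert!((1_u64 as f64 / 0_u64 as f64).is_infinite());
    println!("ok");
}
```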
@@ -3,7 +3,7 @@ use nix::{
    sys::time::TimeSpec,
    time::{clock_gettime, ClockId},
};
use std::time::{SystemTime, UNIX_EPOCH};
use std::time::{Duration, SystemTime, UNIX_EPOCH};
use thiserror::Error;

/// Retrieves the current time, in seconds since the UNIX epoch.
@@ -32,6 +32,16 @@ pub fn time_since_boot() -> Result<TimeSpec, TimeError> {
    }
}

/// Convert a time in nanoseconds since boot to a UNIX timestamp.
pub fn boot_time_nanos_to_unix_now(
    start_time_nanos_since_boot: u64,
) -> Result<u64, TimeError> {
    let time_since_boot = time_since_boot()?;
    let since_boot = Duration::from(time_since_boot);
    let boot_time = unix_now().unwrap() - since_boot.as_secs();
    Ok(boot_time + Duration::from_nanos(start_time_nanos_since_boot).as_secs())
}

/// Error type for time functions.
#[derive(Error, Debug)]
pub enum TimeError {

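This conversion is what fixed the invalid Netflow timestamps called out in the release notes: kernel events are stamped in nanoseconds since boot, so a UNIX timestamp is recovered by subtracting seconds-since-boot from the current UNIX time to find the boot epoch, then adding the event offset. A runnable sketch of the same arithmetic with the two clock reads replaced by fixed example values (the function name and numbers are illustrative):

```rust
use std::time::Duration;

// Same arithmetic as boot_time_nanos_to_unix_now, minus the live clock reads:
// unix_boot_time = now_unix_secs - secs_since_boot, then add the event offset.
fn boot_nanos_to_unix(now_unix_secs: u64, secs_since_boot: u64, event_nanos_since_boot: u64) -> u64 {
    let boot_time = now_unix_secs - secs_since_boot;
    boot_time + Duration::from_nanos(event_nanos_since_boot).as_secs()
}

fn main() {
    // Host booted 1,000 s ago at unix time 1_700_000_000;
    // the event happened 250 s after boot.
    let ts = boot_nanos_to_unix(1_700_001_000, 1_000, 250_000_000_000);
    assert_eq!(ts, 1_700_000_250);
    println!("ok");
}
```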
@@ -2,12 +2,13 @@ use std::fmt::Display;
use byteorder::{BigEndian, ByteOrder};
use zerocopy::FromBytes;
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};
use allocative_derive::Allocative;

/// XdpIpAddress provides helpful conversion between the XDP program's
/// native storage of IP addresses in `[u8; 16]` blocks of bytes and
/// Rust `IpAddr` types.
#[repr(C)]
#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash, FromBytes)]
#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash, FromBytes, Allocative)]
pub struct XdpIpAddress(pub [u8; 16]);

impl Default for XdpIpAddress {

@@ -7,6 +7,7 @@ license = "GPL-2.0-only"
[features]
default = ["equinix_tests"]
equinix_tests = []
flamegraphs = []

[dependencies]
anyhow = { workspace = true }
@@ -16,8 +17,9 @@ lqos_queue_tracker = { path = "../lqos_queue_tracker" }
lqos_utils = { path = "../lqos_utils" }
lqos_heimdall = { path = "../lqos_heimdall" }
lts_client = { path = "../lts_client" }
lts2_sys = { path = "../lts2_sys" }
lqos_support_tool = { path = "../lqos_support_tool" }
tokio = { version = "1", features = [ "full", "parking_lot" ] }
tokio = { version = "1", features = [ "full" ] }
once_cell = { workspace = true }
lqos_bus = { path = "../lqos_bus" }
signal-hook = "0.3"
@@ -28,27 +30,34 @@ tracing-subscriber = { workspace = true }
nix = { workspace = true }
sysinfo = { workspace = true }
dashmap = { workspace = true }
itertools = "0.12.1"
csv = { workspace = true }
itertools = "0.13.0"
reqwest = { workspace = true }
flate2 = "1.0"
flate2 = "1"
bincode = { workspace = true }
ip_network_table = { workspace = true }
ip_network = { workspace = true }
zerocopy = { workspace = true }
fxhash = "0.2.1"
axum = { version = "0.7.5", features = ["ws", "http2"] }
axum-extra = { version = "0.9.3", features = ["cookie", "cookie-private"] }
tower-http = { version = "0.5.2", features = ["fs", "cors"] }
axum = { version = "0.7.7", features = ["ws"] }
axum-extra = { version = "0.9.4", features = ["cookie", "cookie-private"] }
tower-http = { version = "0.6.1", features = ["fs", "cors"] }
strum = { version = "0.26.3", features = ["derive"] }
default-net = { workspace = true }
surge-ping = "0.8.1"
rand = "0.8.5"
mime_guess = "2.0.4"
libloading = "0.8.5"
time = { version = "0.3.36", features = ["serde"] }
serde_cbor = { workspace = true }
timerfd = { workspace = true }
crossbeam-channel = { workspace = true }
arc-swap = { workspace = true }
crossbeam-queue = { workspace = true }
sha256 = "1.5.0"

# For memory debugging
allocative = { version = "0.3.3", features = [ "dashmap" ] }
allocative_derive = "0.3.3"

# Support JemAlloc on supported platforms
#[target.'cfg(any(target_arch = "x86", target_arch = "x86_64"))'.dependencies]

src/rust/lqosd/src/blackboard.rs (new file, 100 lines)
@@ -0,0 +1,100 @@
use std::collections::HashMap;
use std::sync::OnceLock;
use crossbeam_channel::Sender;
use serde::Serialize;
use tracing::{info, warn};
use lqos_bus::BlackboardSystem;

pub static BLACKBOARD_SENDER: OnceLock<Sender<BlackboardCommand>> = OnceLock::new();

pub enum BlackboardCommand {
    FinishSession,
    BlackboardData {
        subsystem: BlackboardSystem,
        key: String,
        value: String,
    },
    BlackboardBlob {
        tag: String,
        part: usize,
        blob: Vec<u8>,
    },
}

#[derive(Serialize)]
struct Blackboard {
    system: HashMap<String, String>,
    sites: HashMap<String, String>,
    circuits: HashMap<String, String>,
    devices: HashMap<String, String>,
    blobs: HashMap<String, Vec<u8>>,
}

pub fn start_blackboard() {
    let (tx, rx) = crossbeam_channel::bounded(65535);
    std::thread::spawn(move || {
        let mut board = Blackboard {
            system: HashMap::new(),
            sites: HashMap::new(),
            circuits: HashMap::new(),
            devices: HashMap::new(),
            blobs: HashMap::new(),
        };

        loop {
            match rx.recv() {
                Ok(BlackboardCommand::FinishSession) => {
                    // If empty, do nothing
                    if board.circuits.is_empty() && board.sites.is_empty() && board.system.is_empty() && board.blobs.is_empty() {
                        continue;
                    }

                    // Serialize CBOR to a vec of u8
                    info!("Sending blackboard data");
                    let cbor = match serde_cbor::to_vec(&board) {
                        Ok(j) => j,
                        Err(e) => {
                            warn!("Failed to serialize blackboard: {}", e);
                            continue;
                        }
                    };
                    lts2_sys::blackboard(&cbor);
                    board.circuits.clear();
                    board.sites.clear();
                    board.system.clear();
                    board.devices.clear();
                    board.blobs.clear();
                }
                Ok(BlackboardCommand::BlackboardData { subsystem, key, value }) => {
                    info!("Received data: {} = {}", key, value);
                    match subsystem {
                        BlackboardSystem::System => {
                            board.system.insert(key, value);
                        }
                        BlackboardSystem::Site => {
                            board.sites.insert(key, value);
                        }
                        BlackboardSystem::Circuit => {
                            board.circuits.insert(key, value);
                        }
                        BlackboardSystem::Device => {
                            board.devices.insert(key, value);
                        }
                    }
                }
                Ok(BlackboardCommand::BlackboardBlob { tag, part, blob }) => {
                    info!("Received blob: {tag}, part {part}");
                    // If it is the first one, insert it. Otherwise, append it
                    if part == 0 {
                        board.blobs.insert(tag, blob);
                    } else {
                        board.blobs.get_mut(&tag).unwrap().extend_from_slice(&blob);
                    }
                }
                Err(_) => break,
            }
        }
        warn!("Blackboard thread exiting");
    });
    let _ = BLACKBOARD_SENDER.set(tx);
}
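The `BlackboardBlob` arm above reassembles chunked uploads: part 0 starts a fresh blob under its tag, and every later part appends to it. A minimal JavaScript sketch of that reassembly rule (hypothetical helper name; the shipped code is the Rust `match` arm above):

```javascript
// Hedged sketch of the blob-reassembly rule: part 0 replaces, later parts append.
// Assumes parts arrive in order starting at 0, as the Rust code does (it unwraps
// the map lookup for parts > 0).
function applyBlobPart(blobs, tag, part, chunk) {
    if (part === 0) {
        blobs.set(tag, [...chunk]);
    } else {
        blobs.get(tag).push(...chunk);
    }
}
```

Sending `part: 0` again for the same tag restarts the blob, mirroring how `HashMap::insert` overwrites the previous value.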
@@ -13,8 +13,15 @@ mod stats;
mod preflight_checks;
mod node_manager;
mod system_stats;
mod blackboard;
mod remote_commands;

#[cfg(feature = "flamegraphs")]
use std::io::Write;
use std::net::IpAddr;

#[cfg(feature = "flamegraphs")]
use allocative::Allocative;
use crate::{
    file_lock::FileLock,
    ip_mapping::{clear_ip_flows, del_ip_flow, list_mapped_ips, map_ip_to_flow}, throughput_tracker::flow_data::{flowbee_handle_events, setup_netflow_tracker, FlowActor},
@@ -42,7 +49,14 @@ use crate::ip_mapping::clear_hot_cache;
use mimalloc::MiMalloc;

use tracing::level_filters::LevelFilter;

use crate::blackboard::{BlackboardCommand, BLACKBOARD_SENDER};
use crate::remote_commands::start_remote_commands;
#[cfg(feature = "flamegraphs")]
use crate::shaped_devices_tracker::NETWORK_JSON;
#[cfg(feature = "flamegraphs")]
use crate::throughput_tracker::flow_data::{ALL_FLOWS, RECENT_FLOWS};
#[cfg(feature = "flamegraphs")]
use crate::throughput_tracker::THROUGHPUT_TRACKER;
// Use JemAllocator only on supported platforms
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
#[global_allocator]
@@ -130,7 +144,14 @@ fn main() -> Result<()> {
    };

    // Spawn tracking sub-systems
    if let Err(e) = lts2_sys::start_lts2() {
        error!("Failed to start Insight: {:?}", e);
    } else {
        info!("Insight client started successfully");
    }
    let _blackboard_tx = blackboard::start_blackboard();
    let long_term_stats_tx = start_long_term_stats();
    start_remote_commands();
    let flow_tx = setup_netflow_tracker()?;
    let _ = throughput_tracker::flow_data::setup_flow_analysis();
    start_heimdall()?;
@@ -138,9 +159,9 @@ fn main() -> Result<()> {
    shaped_devices_tracker::shaped_devices_watcher()?;
    shaped_devices_tracker::network_json_watcher()?;
    anonymous_usage::start_anonymous_usage();
    throughput_tracker::spawn_throughput_monitor(long_term_stats_tx.clone(), flow_tx)?;
    spawn_queue_monitor()?;
    let system_usage_tx = system_stats::start_system_stats()?;
    throughput_tracker::spawn_throughput_monitor(long_term_stats_tx.clone(), flow_tx, system_usage_tx.clone())?;
    spawn_queue_monitor()?;
    lqos_sys::bpf_garbage_collector();

    // Handle signals
@@ -179,6 +200,9 @@ fn main() -> Result<()> {
    // Create the socket server
    let server = UnixSocketServer::new().expect("Unable to spawn server");

    // Memory Debugging
    memory_debug();

    let handle = std::thread::Builder::new().name("Async Bus/Web".to_string()).spawn(move || {
        tokio::runtime::Builder::new_current_thread()
            .enable_all()
@@ -207,6 +231,32 @@ fn main() -> Result<()> {
    Ok(())
}

#[cfg(feature = "flamegraphs")]
fn memory_debug() {
    std::thread::spawn(|| {
        loop {
            std::thread::sleep(std::time::Duration::from_secs(60));
            let mut fb = allocative::FlameGraphBuilder::default();
            fb.visit_global_roots();
            fb.visit_root(&*THROUGHPUT_TRACKER);
            fb.visit_root(&*ALL_FLOWS);
            fb.visit_root(&*RECENT_FLOWS);
            fb.visit_root(&*NETWORK_JSON);
            let flamegraph_src = fb.finish();
            let flamegraph_src = flamegraph_src.flamegraph();
            let Ok(mut file) = std::fs::File::create("/tmp/lqosd-mem.svg") else {
                error!("Unable to write flamegraph.");
                continue;
            };
            file.write_all(flamegraph_src.write().as_bytes()).unwrap();
            info!("Wrote flamegraph to /tmp/lqosd-mem.svg");
        }
    });
}

#[cfg(not(feature = "flamegraphs"))]
fn memory_debug() {}

fn handle_bus_requests(
    requests: &[BusRequest],
    responses: &mut Vec<BusResponse>,
@@ -331,6 +381,32 @@ fn handle_bus_requests(
            BusRequest::EtherProtocolSummary => throughput_tracker::ether_protocol_summary(),
            BusRequest::IpProtocolSummary => throughput_tracker::ip_protocol_summary(),
            BusRequest::FlowDuration => throughput_tracker::flow_duration(),
            BusRequest::BlackboardFinish => {
                if let Some(sender) = BLACKBOARD_SENDER.get() {
                    let _ = sender.send(BlackboardCommand::FinishSession);
                }
                BusResponse::Ack
            }
            BusRequest::BlackboardData { subsystem, key, value } => {
                if let Some(sender) = BLACKBOARD_SENDER.get() {
                    let _ = sender.send(BlackboardCommand::BlackboardData {
                        subsystem: subsystem.clone(),
                        key: key.to_string(),
                        value: value.to_string()
                    });
                }
                BusResponse::Ack
            }
            BusRequest::BlackboardBlob { tag, part, blob } => {
                if let Some(sender) = BLACKBOARD_SENDER.get() {
                    let _ = sender.send(BlackboardCommand::BlackboardBlob {
                        tag: tag.to_string(),
                        part: *part,
                        blob: blob.clone()
                    });
                }
                BusResponse::Ack
            }
        });
    }
}
@@ -38,9 +38,10 @@ pub enum LoginResult {
async fn check_login(jar: &CookieJar, users: &WebUsers) -> LoginResult {
    if let Some(token) = jar.get(COOKIE_PATH) {
        // Validate the token
        return match users.get_role_from_token(token.value()).unwrap() {
            UserRole::ReadOnly => LoginResult::ReadOnly,
            UserRole::Admin => LoginResult::Admin,
        return match users.get_role_from_token(token.value()) {
            Ok(UserRole::ReadOnly) => LoginResult::ReadOnly,
            Ok(UserRole::Admin) => LoginResult::Admin,
            Err(_e) => LoginResult::Denied,
        }
    }
    LoginResult::Denied
@@ -1,6 +1,6 @@
#!/bin/bash
set -e
scripts=( index.js template.js login.js first-run.js shaped-devices.js tree.js help.js unknown-ips.js configuration.js circuit.js flow_map.js all_tree_sankey.js asn_explorer.js )
scripts=( index.js template.js login.js first-run.js shaped-devices.js tree.js help.js unknown-ips.js configuration.js circuit.js flow_map.js all_tree_sankey.js asn_explorer.js lts_trial.js )
for script in "${scripts[@]}"
do
    echo "Building {$script}"
@@ -50,7 +50,9 @@ class AllTreeSankeyGraph extends GenericRingBuffer {
                return trimStringWithElipsis(params.name.replace("(Generated Site) ", ""), 14);
            }
        };
        if (redact) label.backgroundColor = label.color;
        if (redact) {
            label.fontFamily = "Illegible";
        }

        nodes.push({
            name: head[i][1].name,
@@ -66,7 +68,7 @@ class AllTreeSankeyGraph extends GenericRingBuffer {
            links.push({
                source: head[immediateParent][1].name,
                target: head[i][1].name,
                value: head[i][1].current_throughput[0],
                value: Math.min(1, head[i][1].current_throughput[0]),
                lineStyle: {
                    color: capacityColor,
                },
@@ -130,6 +132,9 @@ class AllTreeSankey extends DashboardGraph {
            {
                nodeAlign: 'left',
                type: 'sankey',
                labelLayout: {
                    moveOverlap: 'shiftx',
                },
                data: [],
                links: []
            }
@@ -1,5 +1,5 @@
import {clearDiv} from "./helpers/builders";
import {scaleNanos, scaleNumber} from "./helpers/scaling";
import {scaleNanos, scaleNumber} from "./lq_js_common/helpers/scaling";

//const API_URL = "local-api/";
const API_URL = "local-api/";
@@ -5,7 +5,7 @@ import {formatRetransmit, formatRtt, formatThroughput, lerpGreenToRedViaOrange}
import {BitsPerSecondGauge} from "./graphs/bits_gauge";
import {CircuitTotalGraph} from "./graphs/circuit_throughput_graph";
import {CircuitRetransmitGraph} from "./graphs/circuit_retransmit_graph";
import {scaleNanos, scaleNumber} from "./helpers/scaling";
import {scaleNanos, scaleNumber} from "./lq_js_common/helpers/scaling";
import {DevicePingHistogram} from "./graphs/device_ping_graph";
import {FlowsSankey} from "./graphs/flow_sankey";
import {subscribeWS} from "./pubsub/ws";
@@ -140,6 +140,8 @@ function connectFlowChannel() {
    });
}

let movingAverages = new Map();

function updateTrafficTab(msg) {
    let target = document.getElementById("allTraffic");

@@ -147,29 +149,48 @@ function updateTrafficTab(msg) {
    table.classList.add("table", "table-sm", "table-striped");
    let thead = document.createElement("thead");
    thead.appendChild(theading("Protocol"));
    thead.appendChild(theading("Current Rate", 2));
    thead.appendChild(theading("Total Bytes", 2));
    thead.appendChild(theading("Total Packets", 2));
    thead.appendChild(theading("TCP Retransmits", 2));
    thead.appendChild(theading("RTT", 2));
    thead.appendChild(theading("Current Rate (⬇️/⬆️)", 2));
    thead.appendChild(theading("Total Bytes (⬇️/⬆️)", 2));
    thead.appendChild(theading("Total Packets (⬇️/⬆️)", 2));
    thead.appendChild(theading("TCP Retransmits (⬇️/⬆️)", 2));
    thead.appendChild(theading("RTT (⬇️/⬆️)", 2));
    thead.appendChild(theading("ASN"));
    thead.appendChild(theading("Country"));
    thead.appendChild(theading("Remote IP"));
    table.appendChild(thead);
    let tbody = document.createElement("tbody");
    const one_second_in_nanos = 1000000000; // For display filtering
    const thirty_seconds_in_nanos = 30000000000; // For display filtering

    msg.flows.forEach((flow) => {
        let flowKey = flow[0].protocol_name + flow[0].row_id;
        let rate = flow[1].rate_estimate_bps.down + flow[1].rate_estimate_bps.up;
        if (movingAverages.has(flowKey)) {
            let avg = movingAverages.get(flowKey);
            avg.push(rate);
            if (avg.length > 10) {
                avg.shift();
            }
            movingAverages.set(flowKey, avg);
        } else {
            movingAverages.set(flowKey, [ rate ]);
        }
    });

    // Sort msg.flows by flows[0].rate_estimate_bps.down + flows[0].rate_estimate_bps.up descending
    msg.flows.sort((a, b) => {
        let aRate = a[1].rate_estimate_bps.down + a[1].rate_estimate_bps.up;
        let bRate = b[1].rate_estimate_bps.down + b[1].rate_estimate_bps.up;
        let flowKeyA = a[0].protocol_name + a[0].row_id;
        let flowKeyB = b[0].protocol_name + b[0].row_id;
        let aRate = movingAverages.get(flowKeyA).reduce((a, b) => a + b, 0) / movingAverages.get(flowKeyA).length;
        let bRate = movingAverages.get(flowKeyB).reduce((a, b) => a + b, 0) / movingAverages.get(flowKeyB).length;
        return bRate - aRate;
    });

    msg.flows.forEach((flow) => {
        if (flow[0].last_seen_nanos > one_second_in_nanos) return;
        if (flow[0].last_seen_nanos > thirty_seconds_in_nanos) return;
        let row = document.createElement("tr");
        row.classList.add("small");
        let opacity = Math.min(1, flow[0].last_seen_nanos / thirty_seconds_in_nanos);
        row.style.opacity = 1.0 - opacity;
        row.appendChild(simpleRow(flow[0].protocol_name));
        row.appendChild(simpleRowHtml(formatThroughput(flow[1].rate_estimate_bps.down * 8, plan.down)));
        row.appendChild(simpleRowHtml(formatThroughput(flow[1].rate_estimate_bps.up * 8, plan.up)));
@@ -177,8 +198,8 @@ function updateTrafficTab(msg) {
        row.appendChild(simpleRow(scaleNumber(flow[1].bytes_sent.up)));
        row.appendChild(simpleRow(scaleNumber(flow[1].packets_sent.down)));
        row.appendChild(simpleRow(scaleNumber(flow[1].packets_sent.up)));
        row.appendChild(simpleRowHtml(formatRetransmit(flow[1].tcp_retransmits.down)));
        row.appendChild(simpleRowHtml(formatRetransmit(flow[1].tcp_retransmits.up)));
        row.appendChild(simpleRowHtml(formatRetransmit(flow[1].tcp_retransmits.down / 100.0)));
        row.appendChild(simpleRowHtml(formatRetransmit(flow[1].tcp_retransmits.up / 100.0)));
        row.appendChild(simpleRow(scaleNanos(flow[1].rtt[0].nanoseconds)));
        row.appendChild(simpleRow(scaleNanos(flow[1].rtt[1].nanoseconds)));
        row.appendChild(simpleRow(flow[0].asn_name));
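The traffic table above now sorts flows by a 10-sample moving average of each flow's combined up/down rate, which keeps rows from jumping between refreshes. The windowed-average bookkeeping can be sketched as (hypothetical helper name `pushRate`):

```javascript
// Hedged sketch: keep the last 10 rate samples per flow key and return the mean.
function pushRate(movingAverages, flowKey, rate) {
    const samples = movingAverages.get(flowKey) ?? [];
    samples.push(rate);
    if (samples.length > 10) {
        samples.shift(); // drop the oldest sample
    }
    movingAverages.set(flowKey, samples);
    return samples.reduce((a, b) => a + b, 0) / samples.length;
}
```

Sorting by this mean rather than the instantaneous rate is what lets the network panel "buffer" its ordering, as the changelog describes.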
@@ -279,7 +279,7 @@ function validateConfig() {
                $("#" + target.field).addClass("invalid");
            }
        }
    } else if (target.data === "integer" || target.data === "float") {
    } else if (target.data === "integer") {
        let newValue = $("#" + target.field).val();
        newValue = parseInt(newValue);
        if (isNaN(newValue)) {
@@ -302,6 +302,29 @@ function validateConfig() {
                }
            }
        }
    } else if (target.data === "float") {
        let newValue = $("#" + target.field).val();
        newValue = parseFloat(newValue);
        if (isNaN(newValue)) {
            valid = false;
            errors.push(target.path + " must be a decimal number.");
            $("#" + target.field).addClass("invalid");
        } else {
            if (target.min != null) {
                if (newValue < target.min) {
                    valid = false;
                    errors.push(target.path + " must be between " + target.min + " and " + target.max + ".");
                    $("#" + target.field).addClass("invalid");
                }
            }
            if (target.max != null) {
                if (newValue > target.max) {
                    valid = false;
                    errors.push(target.path + " must be between " + target.min + " and " + target.max + ".");
                    $("#" + target.field).addClass("invalid");
                }
            }
        }
    }

@@ -713,15 +736,9 @@ function checkIpv4(ip) {
}

function checkIpv6(ip) {
    const ipv6Pattern =
        /^([0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}$/;

    if (ip.indexOf('/') === -1) {
        return ipv6Pattern.test(ip);
    } else {
        let parts = ip.split('/');
        return ipv6Pattern.test(parts[0]);
    }
    // Check if the input is a valid IPv6 address with prefix
    const regex = /^(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]))(\/([0-9]{1,3}))?$/;
    return regex.test(ip);
}
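The rewritten `checkIpv6` above packs address and optional `/prefix` handling into one large regex. As a rough, deliberately simplified illustration of the same acceptance rules (this is not the regex the commit ships), the prefix can be validated separately from the address:

```javascript
// Hedged sketch: validate an optional "/0".."/128" prefix separately, then
// accept either a full 8-group address or one "::" compression. This is looser
// than the shipped regex (e.g. it does not reject a double "::").
function checkIpv6Sketch(ip) {
    const [addr, prefix, extra] = ip.split('/');
    if (extra !== undefined) return false; // more than one '/'
    if (prefix !== undefined) {
        if (!/^\d{1,3}$/.test(prefix) || Number(prefix) > 128) return false;
    }
    const full = /^([0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}$/;
    if (full.test(addr)) return true;
    if (addr.includes('::')) {
        const groups = addr.split(':').filter((g) => g.length > 0);
        return groups.length <= 7 && groups.every((g) => /^[0-9a-fA-F]{1,4}$/.test(g));
    }
    return false;
}
```

The key behavioural change versus the old code is the same in both versions: compressed addresses ("::") and prefixed addresses ("/64") are now accepted instead of only the full 8-group form.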

function checkIpv4Duplicate(ip, index) {
@@ -6,6 +6,10 @@ export class BaseDashlet {
        this.id = "dash_" + slotNumber;
        this.size = 3;
        this.setupDone = false;
        // For multi-period LTS support
        this.buttons = [];
        this.graphs = [];
        this.graphDivs = [];
    }

    canBeSlowedDown() {
@@ -96,8 +100,49 @@ export class BaseDashlet {
            title.appendChild(tooltip);
        }

        if (this.supportsZoom()) {
            let zoom = document.createElement("span");
            zoom.style.marginLeft = "5px";
            let button = document.createElement("a");
            button.title = "Zoom";
            button.innerHTML = "<i class='fas fa-search-plus'></i>";
            zoom.appendChild(button);
            title.appendChild(zoom);
        }

        div.appendChild(title);

        return div;
    }

    supportsZoom() {
        return false;
    }

    makePeriodBtn(name) {
        let btn = document.createElement("button");
        btn.classList.add("btn", "btn-sm", "btn-outline-primary", "tiny", "me-1");
        btn.innerText = name;
        btn.id = this.graphDivId() + "_btn_" + name;
        btn.onclick = () => {
            this.buttons.forEach((b) => {
                b.classList.remove("active");
                let targetName = "#" + b.id.replace("_btn", "");
                if (targetName.lastIndexOf("Live") > 0) {
                    targetName = "#" + this.graphDivId();
                }
                if (b === btn) {
                    b.classList.add("active");
                    $(targetName).show();
                } else {
                    $(targetName).hide();
                }
            });
            this.graphs.forEach((g) => {
                g.chart.resize();
            });
        }
        this.buttons.push(btn);
        return btn;
    }
}
@@ -25,6 +25,9 @@ import {Top10DownloadersVisual} from "./top10_downloads_graphic";
import {Worst10DownloadersVisual} from "./worst10_downloaders_graphic";
import {Worst10RetransmitsVisual} from "./worst10_retransmits_graphic";
import {FlowDurationDash} from "./flow_durations_dash";
import {LtsShaperStatus} from "./ltsShaperStatus";
import {LtsLast24Hours} from "./ltsLast24Hours";
import {TcpRetransmitsDash} from "./total_retransmits";

export const DashletMenu = [
    { name: "Throughput Bits/Second", tag: "throughputBps", size: 3 },
@@ -50,10 +53,13 @@ export const DashletMenu = [
    { name: "Network Tree Summary", tag: "treeSummary", size: 6 },
    { name: "Combined Top 10 Box", tag: "combinedTop10", size: 6 },
    { name: "Total Cake Stats", tag: "totalCakeStats", size: 3 },
    { name: "Total TCP Retransmits", tag: "totalRetransmits", size: 3 },
    { name: "Circuits At Capacity", tag: "circuitCapacity", size: 6 },
    { name: "Tree Nodes At Capacity", tag: "treeCapacity", size: 6 },
    { name: "Network Tree Sankey", tag: "networkTreeSankey", size: 6 },
    { name: "Round-Trip Time Histogram 3D", tag: "rttHistogram3D", size: 12 },
    { name: "(Insight) Shaper Status", tag: "ltsShaperStatus", size: 3 },
    { name: "(Insight) Last 24 Hours", tag: "ltsLast24", size: 3}
];

export function widgetFactory(widgetName, count) {
@@ -83,9 +89,12 @@ export function widgetFactory(widgetName, count) {
        case "treeSummary" : widget = new TopTreeSummary(count); break;
        case "combinedTop10" : widget = new CombinedTopDashlet(count); break;
        case "totalCakeStats" : widget = new QueueStatsTotalDash(count); break;
        case "totalRetransmits" : widget = new TcpRetransmitsDash(count); break;
        case "circuitCapacity" : widget = new CircuitCapacityDash(count); break;
        case "treeCapacity" : widget = new TreeCapacityDash(count); break;
        case "networkTreeSankey": widget = new TopTreeSankey(count); break;
        case "ltsShaperStatus" : widget = new LtsShaperStatus(count); break;
        case "ltsLast24" : widget = new LtsLast24Hours(count); break;
        default: {
            console.log("I don't know how to construct a widget of type [" + widgetName + "]");
            return null;
@@ -1,6 +1,7 @@
import {BaseDashlet} from "./base_dashlet";
import {clearDashDiv, theading} from "../helpers/builders";
import {scaleNumber, rttNanosAsSpan} from "../helpers/scaling";
import {scaleNumber} from "../lq_js_common/helpers/scaling";
import {rttNanosAsSpan} from "../helpers/scaling";

export class Top10EndpointsByCountry extends BaseDashlet {
    constructor(slot) {
@@ -1,6 +1,6 @@
import {BaseDashlet} from "./base_dashlet";
import {clearDashDiv, simpleRow, theading} from "../helpers/builders";
import {scaleNumber, scaleNanos} from "../helpers/scaling";
import {scaleNumber, scaleNanos} from "../lq_js_common/helpers/scaling";

export class EtherProtocols extends BaseDashlet {
    constructor(slot) {
@@ -1,6 +1,6 @@
import {BaseDashlet} from "./base_dashlet";
import {clearDashDiv, simpleRow, theading} from "../helpers/builders";
import {scaleNumber} from "../helpers/scaling";
import {scaleNumber} from "../lq_js_common/helpers/scaling";

export class IpProtocols extends BaseDashlet {
    constructor(slot) {
@@ -0,0 +1,49 @@
import {BaseDashlet} from "./base_dashlet";
import {LtsLast24Hours_graph} from "../graphs/ltsLast24Hours_graph";

export class LtsLast24Hours extends BaseDashlet {
    constructor(slot) {
        super(slot);
    }

    title() {
        return "Last 24 Hours Throughput (Insight)";
    }

    canBeSlowedDown() {
        return true;
    }

    tooltip() {
        return "<h5>24 Hour Throughput</h5><p>Throughput for the last 24 hours. The error bars indicate minimum and maximum within a time range, while the points indicate median. Many other systems will inadvertently give incorrect data by averaging across large periods of time, and not sampling frequently enough (1 second for Mb/s).</p>";
    }

    subscribeTo() {
        return [ "Cadence" ];
    }

    buildContainer() {
        let base = super.buildContainer();
        base.appendChild(this.graphDiv());
        return base;
    }

    setup() {
        super.setup();
        this.count = 0;
        this.graph = new LtsLast24Hours_graph(this.graphDivId());
    }

    onMessage(msg) {
        if (msg.event === "Cadence") {
            if (this.count === 0) {
                $.get("/local-api/lts24", (data) => {
                    this.graph.update(data);
                });
            }
            this.count++;
            this.count %= 120;
        }
    }
}
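`LtsLast24Hours.onMessage` above only hits the local API on every 120th `Cadence` event, so the Insight query runs roughly once every two minutes instead of on every tick. The throttle pattern can be extracted as a sketch (hypothetical factory name):

```javascript
// Hedged sketch of the counter throttle: fire on tick 0, then every `period` ticks.
function makeCadenceThrottle(period, fetchFn) {
    let count = 0;
    return () => {
        if (count === 0) {
            fetchFn();
        }
        count = (count + 1) % period;
    };
}
```

The same counter-and-modulo shape appears again in `LtsShaperStatus` (period 10) and in the throughput and Cake dashlets (period 120).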
@@ -0,0 +1,82 @@
import {BaseDashlet} from "./base_dashlet";
import {clearDiv, simpleRow, simpleRowHtml, theading} from "../helpers/builders";

export class LtsShaperStatus extends BaseDashlet {
    constructor(slot) {
        super(slot);
    }

    title() {
        return "Shaper Status (Insight)";
    }

    canBeSlowedDown() {
        return true;
    }

    tooltip() {
        return "<h5>Shaper Status</h5><p>Status from each of the LibreQoS shapers you are running.</p>";
    }

    subscribeTo() {
        return [ "Cadence" ];
    }

    buildContainer() {
        let base = super.buildContainer();
        base.style.height = "250px";
        base.style.overflow = "auto";
        let content = document.createElement("div");
        content.style.width = "100%";
        content.id = "ltsShaperStatus_" + this.slot;
        this.contentId = content.id;
        base.appendChild(content);
        return base;
    }

    setup() {
        super.setup();
        this.count = 0;
    }

    onMessage(msg) {
        if (msg.event === "Cadence") {
            if (this.count === 0) {
                $.get("/local-api/ltsShaperStatus", (data) => {
                    let target = document.getElementById(this.contentId);

                    let table = document.createElement("table");
                    table.classList.add("table", "table-sm", "small");
                    let thead = document.createElement("thead");
                    thead.appendChild(theading(""));
                    thead.appendChild(theading("Shaper"));
                    thead.appendChild(theading("Last Seen (seconds)"));
                    table.appendChild(thead);
                    let tbody = document.createElement("tbody");

                    data.forEach((row) => {
                        let tr = document.createElement("tr");
                        tr.classList.add("small");
                        let color = "green";
                        if (row.last_seen_seconds_ago > 300) {
                            color = "red";
                        } else if (row.last_seen_seconds_ago > 120) {
                            color = "orange";
                        }
                        tr.appendChild(simpleRowHtml(`<span style="color: ${color}">■</span>`));
                        tr.appendChild(simpleRow(row.name));
                        tr.appendChild(simpleRow(row.last_seen_seconds_ago + "s"));
                        tbody.appendChild(tr);
                    })
                    table.appendChild(tbody);
                    clearDiv(target);
                    target.appendChild(table);
                });
            }
            this.count++;
            this.count %= 10;
        }
    }
}
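The status dot in each shaper row above is derived from `last_seen_seconds_ago` with fixed thresholds: red past 300 seconds, orange past 120, green otherwise. The mapping, extracted as a small pure function:

```javascript
// Thresholds copied from the dashlet above: > 300 s red, > 120 s orange, else green.
function shaperStatusColor(lastSeenSecondsAgo) {
    if (lastSeenSecondsAgo > 300) return "red";
    if (lastSeenSecondsAgo > 120) return "orange";
    return "green";
}
```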
@@ -1,11 +1,12 @@
import {BaseDashlet} from "./base_dashlet";
import {clearDashDiv, simpleRow, theading} from "../helpers/builders";
import {scaleNumber, scaleNanos} from "../helpers/scaling";
import {QueueStatsTotalGraph} from "../graphs/queue_stats_total_graph";
import {periodNameToSeconds} from "../helpers/time_periods";
import {LtsCakeGraph} from "../graphs/lts_cake_stats_graph";

export class QueueStatsTotalDash extends BaseDashlet {
    constructor(slot) {
        super(slot);
        this.counter = 0;
    }

    title() {
@@ -22,18 +23,74 @@ export class QueueStatsTotalDash extends BaseDashlet {

    buildContainer() {
        let base = super.buildContainer();
        base.appendChild(this.graphDiv());
        let graphs = this.graphDiv();

        // Add some time controls
        base.classList.add("dashlet-with-controls");
        let controls = document.createElement("div");
        controls.classList.add("dashgraph-controls", "small");

        if (window.hasInsight) {
            let btnLive = this.makePeriodBtn("Live");
            btnLive.classList.add("active");
            controls.appendChild(btnLive);

            let targets = ["1h", "6h", "12h", "24h", "7d"];
            targets.forEach((t) => {
                let graph = document.createElement("div");
                graph.id = this.graphDivId() + "_" + t;
                graph.classList.add("dashgraph");
                graph.style.display = "none";
                graph.innerHTML = window.hasLts ? "Loading..." : "<p class='text-secondary small'>You need an active LibreQoS Insight account to view this data.</p>";
                this.graphDivs.push(graph);
                controls.appendChild(this.makePeriodBtn(t));
            });
        }

        base.appendChild(controls);
        base.appendChild(graphs);
        this.graphDivs.forEach((g) => {
            base.appendChild(g);
        });
        return base;
    }

    setup() {
        super.setup();
        this.graph = new QueueStatsTotalGraph(this.graphDivId())
        this.graph = new QueueStatsTotalGraph(this.graphDivId());
        this.ltsLoaded = false;
        if (window.hasLts) {
            this.graphDivs.forEach((g) => {
                let period = periodNameToSeconds(g.id.replace(this.graphDivId() + "_", ""));
                let graph = new LtsCakeGraph(g.id, period);
                this.graphs.push(graph);
            });
        }
    }

    onMessage(msg) {
        if (msg.event === "QueueStatsTotal") {
            this.graph.update(msg.marks, msg.drops);

            if (!this.ltsLoaded && window.hasLts) {
                this.graphs.forEach((g) => {
                    let url = "/local-api/ltsCake/" + g.period;
                    $.get(url, (data) => {
                        g.update(data);
                    });
                });
                this.ltsLoaded = true;
            }
            this.counter++;
            if (this.counter > 120) {
                // Reload the LTS graphs every 2 minutes
                this.counter = 0;
                this.ltsLoaded = false;
            }
        }
    }
}
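`periodNameToSeconds` (imported from `../helpers/time_periods`, which this diff does not show) is presumably a small lookup that turns period names like "1h" and "7d" into the seconds value used in the `/local-api/ltsCake/<period>` URL. A hypothetical implementation consistent with how it is used here:

```javascript
// Hypothetical sketch: the real helper lives in ../helpers/time_periods and is
// not part of this diff. Handles the "1h", "6h", "12h", "24h", "7d" names used above.
function periodNameToSecondsSketch(name) {
    const units = { h: 3600, d: 86400 };
    const match = /^(\d+)([hd])$/.exec(name);
    if (!match) return null;
    return parseInt(match[1], 10) * units[match[2]];
}
```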
@@ -27,7 +27,7 @@ export class ThroughputPpsDash extends BaseDashlet{

    onMessage(msg) {
        if (msg.event === "Throughput") {
            this.graph.update(msg.data.pps.down, msg.data.pps.up);
            this.graph.update(msg.data.pps.down, msg.data.pps.up, msg.data.tcp_pps, msg.data.udp_pps, msg.data.icmp_pps);
        }
    }
}
@@ -1,9 +1,12 @@
import {BaseDashlet} from "./base_dashlet";
import {ThroughputRingBufferGraph} from "../graphs/throughput_ring_graph";
import {LtsThroughputPeriodGraph} from "../graphs/lts_throughput_period_graph";
import {periodNameToSeconds} from "../helpers/time_periods";

export class ThroughputRingDash extends BaseDashlet{
    constructor(slot) {
        super(slot);
        this.counter = 0;
    }

    title() {
@@ -20,18 +23,77 @@ export class ThroughputRingDash extends BaseDashlet{

    buildContainer() {
        let base = super.buildContainer();
        base.appendChild(this.graphDiv());
        let graphs = this.graphDiv();

        // Add some time controls
        base.classList.add("dashlet-with-controls");
        let controls = document.createElement("div");
        controls.classList.add("dashgraph-controls", "small");

        if (window.hasInsight) {
            let btnLive = this.makePeriodBtn("Live");
            btnLive.classList.add("active");
            controls.appendChild(btnLive);

            let targets = ["1h", "6h", "12h", "24h", "7d"];
            targets.forEach((t) => {
                let graph = document.createElement("div");
                graph.id = this.graphDivId() + "_" + t;
                graph.classList.add("dashgraph");
                graph.style.display = "none";
                graph.innerHTML = window.hasLts ? "Loading..." : "<p class='text-secondary small'>You need an active LibreQoS Insight account to view this data.</p>";
                this.graphDivs.push(graph);
                controls.appendChild(this.makePeriodBtn(t));
            });
        }

        base.appendChild(controls);
        base.appendChild(graphs);
        this.graphDivs.forEach((g) => {
            base.appendChild(g);
        });
        return base;
    }

    setup() {
        super.setup();
        this.graph = new ThroughputRingBufferGraph(this.graphDivId());
        this.ltsLoaded = false;
        if (window.hasLts) {
            this.graphDivs.forEach((g) => {
                let period = periodNameToSeconds(g.id.replace(this.graphDivId() + "_", ""));
                let graph = new LtsThroughputPeriodGraph(g.id, period);
                this.graphs.push(graph);
            });
        }
    }

    onMessage(msg) {
        if (msg.event === "Throughput") {
            this.graph.update(msg.data.shaped_bps, msg.data.bps);
            if (!this.ltsLoaded && window.hasLts) {
                this.graphs.forEach((g) => {
                    let url = "/local-api/ltsThroughput/" + g.period;
                    $.get(url, (data) => {
                        g.update(data);
                    });
                });
                this.ltsLoaded = true;
            }
            this.counter++;
            if (this.counter > 120) {
                // Reload the LTS graphs every 2 minutes
                this.counter = 0;
                this.ltsLoaded = false;
            }
        }
    }

    supportsZoom() {
        return true;
    }
}
@@ -1,12 +1,11 @@
import {BaseDashlet} from "./base_dashlet";
import {RttHistogram} from "../graphs/rtt_histo";
import {clearDashDiv, theading, TopNTableFromMsgData, topNTableHeader, topNTableRow} from "../helpers/builders";
import {scaleNumber, rttCircleSpan, formatThroughput, formatRtt, formatRetransmit} from "../helpers/scaling";
import {redactCell} from "../helpers/redact";
import {clearDashDiv, TopNTableFromMsgData} from "../helpers/builders";
import {TimedCache} from "../lq_js_common/helpers/timed_cache";

export class Top10Downloaders extends BaseDashlet {
    constructor(slot) {
        super(slot);
        this.timeCache = new TimedCache(10);
    }

    title() {
@@ -40,11 +39,23 @@ export class Top10Downloaders extends BaseDashlet {
        if (msg.event === "TopDownloads") {
            let target = document.getElementById(this.id);

            let t = TopNTableFromMsgData(msg);
            msg.data.forEach((r) => {
                let key = r.circuit_id;
                this.timeCache.addOrUpdate(key, r);
            });
            this.timeCache.tick();

            let items = this.timeCache.get();
            items.sort((a, b) => {
                return b.bits_per_second.down - a.bits_per_second.down;
            });
            // Limit to 10 entries
            items = items.slice(0, 10);
            let t = TopNTableFromMsgData(items);

            // Display it
            clearDashDiv(this.id, target);
            target.appendChild(t);
        }
    }
}
}
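The `TimedCache` used above is imported from `lq_js_common` and its implementation is not shown in this diff; a minimal sketch of the contract the dashlet relies on (`addOrUpdate` refreshes an entry, `tick` ages entries out, `get` returns the live values) might look like this. The class name and internals here are illustrative assumptions, not the shipped code:

```javascript
// Sketch of a TimedCache-like helper (assumed behavior, not the real
// lq_js_common implementation). Entries survive `ttl` ticks after their
// last addOrUpdate, which keeps "Top N" tables from flickering as
// circuits drop in and out of the live top-10 between samples.
class TimedCacheSketch {
    constructor(ttl) {
        this.ttl = ttl;
        this.entries = new Map(); // key -> { value, age }
    }

    addOrUpdate(key, value) {
        // Fresh data resets the entry's age.
        this.entries.set(key, { value: value, age: 0 });
    }

    tick() {
        // Age every entry; evict the ones not refreshed within ttl ticks.
        for (const [key, entry] of this.entries) {
            entry.age += 1;
            if (entry.age > this.ttl) this.entries.delete(key);
        }
    }

    get() {
        return Array.from(this.entries.values(), (e) => e.value);
    }
}
```

Because stale circuits linger for a few ticks before eviction, the table fades entries out rather than jumping every second, which matches the buffered "Top X" behavior described in the release notes.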
@@ -1,9 +1,4 @@
import {BaseDashlet} from "./base_dashlet";
import {
    scaleNumber,
    lerpGreenToRedViaOrange, lerpColor
} from "../helpers/scaling";
import {DashboardGraph} from "../graphs/dashboard_graph";
import {TopNSankey} from "../graphs/top_n_sankey";

export class Top10DownloadersVisual extends BaseDashlet {
@@ -1,7 +1,9 @@
import {BaseDashlet} from "./base_dashlet";
import {clearDashDiv, theading} from "../helpers/builders";
import {scaleNumber, formatRetransmit, rttNanosAsSpan} from "../helpers/scaling";
import {scaleNumber} from "../lq_js_common/helpers/scaling";
import {RttCache} from "../helpers/rtt_cache";
import {formatRetransmit, rttNanosAsSpan} from "../helpers/scaling";
import {TrimToFit} from "../lq_js_common/helpers/text_utils";

export class Top10FlowsBytes extends BaseDashlet {
    constructor(slot) {
@@ -65,7 +67,7 @@ export class Top10FlowsBytes extends BaseDashlet {
            let circuit = document.createElement("td");
            let link = document.createElement("a");
            link.href = "circuit.html?id=" + encodeURI(r.circuit_id);
            link.innerText = r.circuit_name;
            link.innerText = TrimToFit(r.circuit_name);
            link.classList.add("redactable");
            circuit.appendChild(link);
            row.appendChild(circuit);
@@ -108,11 +110,11 @@ export class Top10FlowsBytes extends BaseDashlet {
            row.appendChild(rttU);

            let tcp1 = document.createElement("td");
            tcp1.innerHTML = formatRetransmit(r.tcp_retransmits.down);
            tcp1.innerHTML = formatRetransmit(r.tcp_retransmits.down / r.packets_sent.down);
            row.appendChild(tcp1);

            let tcp2 = document.createElement("td");
            tcp2.innerHTML = formatRetransmit(r.tcp_retransmits.up);
            tcp2.innerHTML = formatRetransmit(r.tcp_retransmits.up / r.packets_sent.up);
            row.appendChild(tcp2);

            let asn = document.createElement("td");
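The change above divides the raw retransmit count by packets sent so the column reads as a relative rate, per the release notes ("TCP Retransmits are now displayed as a percentage of TCP packets"). A hedged sketch of that calculation, with the zero-packet guard that `TopTreeSummary` applies elsewhere in this commit (the real `formatRetransmit` helper also handles coloring, which is omitted here):

```javascript
// Illustrative only: turn a retransmit count and a packet count into the
// percentage the table column is meant to convey. Guards the zero-packet
// case so we never render NaN or Infinity for flows with no TCP traffic.
function retransmitPercent(retransmits, packets) {
    if (packets <= 0) return "";
    return ((retransmits / packets) * 100).toFixed(1) + "%";
}
```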
@@ -1,7 +1,9 @@
import {BaseDashlet} from "./base_dashlet";
import {clearDashDiv, theading} from "../helpers/builders";
import {scaleNumber, formatRetransmit, rttNanosAsSpan} from "../helpers/scaling";
import {formatRetransmit, rttNanosAsSpan} from "../helpers/scaling";
import {RttCache} from "../helpers/rtt_cache";
import {scaleNumber} from "../lq_js_common/helpers/scaling";
import {TrimToFit} from "../lq_js_common/helpers/text_utils";

export class Top10FlowsRate extends BaseDashlet {
    constructor(slot) {
@@ -65,7 +67,7 @@ export class Top10FlowsRate extends BaseDashlet {
            let circuit = document.createElement("td");
            let link = document.createElement("a");
            link.href = "circuit.html?id=" + encodeURI(r.circuit_id);
            link.innerText = r.circuit_name;
            link.innerText = TrimToFit(r.circuit_name);
            link.classList.add("redactable");
            circuit.appendChild(link);
            row.appendChild(circuit);
@@ -108,11 +110,11 @@ export class Top10FlowsRate extends BaseDashlet {
            row.appendChild(rttU);

            let tcp1 = document.createElement("td");
            tcp1.innerHTML = formatRetransmit(r.tcp_retransmits.down);
            tcp1.innerHTML = formatRetransmit(r.tcp_retransmits.down / r.packets_sent.down);
            row.appendChild(tcp1);

            let tcp2 = document.createElement("td");
            tcp2.innerHTML = formatRetransmit(r.tcp_retransmits.up);
            tcp2.innerHTML = formatRetransmit(r.tcp_retransmits.up / r.packets_sent.up);
            row.appendChild(tcp2);

            let asn = document.createElement("td");
@@ -91,7 +91,7 @@ export class TopTreeSankey extends BaseDashlet {
                fontSize: 9,
                color: "#999"
            };
            if (redact) label.backgroundColor = label.color;
            if (redact) label.fontFamily = "Illegible";

            let name = r[1].name;
            let bytes = r[1].current_throughput[0];
@@ -71,8 +71,16 @@ export class TopTreeSummary extends BaseDashlet {
            row.appendChild(nameCol);
            row.appendChild(simpleRowHtml(formatThroughput(r[1].current_throughput[0] * 8, r[1].max_throughput[0])));
            row.appendChild(simpleRowHtml(formatThroughput(r[1].current_throughput[1] * 8, r[1].max_throughput[1])));
            row.appendChild(simpleRowHtml(formatRetransmit(r[1].current_retransmits[0])))
            row.appendChild(simpleRowHtml(formatRetransmit(r[1].current_retransmits[1])))
            if (r[1].current_tcp_packets[0] > 0) {
                row.appendChild(simpleRowHtml(formatRetransmit(r[1].current_retransmits[0] / r[1].current_tcp_packets[0])))
            } else {
                row.appendChild(simpleRowHtml(""));
            }
            if (r[1].current_tcp_packets[1] > 0) {
                row.appendChild(simpleRowHtml(formatRetransmit(r[1].current_retransmits[1] / r[1].current_tcp_packets[1])))
            } else {
                row.appendChild(simpleRowHtml(""));
            }
            row.appendChild(simpleRowHtml(formatCakeStat(r[1].current_marks[0])))
            row.appendChild(simpleRowHtml(formatCakeStat(r[1].current_marks[1])))
            row.appendChild(simpleRowHtml(formatCakeStat(r[1].current_drops[0])))
@@ -0,0 +1,99 @@
import {BaseDashlet} from "./base_dashlet";
import {periodNameToSeconds} from "../helpers/time_periods";
import {RetransmitsGraph} from "../graphs/retransmits_graph";
import {LtsRetransmitsGraph} from "../graphs/lts_retransmits";

export class TcpRetransmitsDash extends BaseDashlet{
    constructor(slot) {
        super(slot);
        this.counter = 0;
    }

    title() {
        return "Last 5 Minutes TCP Retransmits";
    }

    tooltip() {
        return "<h5>Last 5 Minutes TCP Retransmits</h5><p>TCP retransmits over time. Retransmits can happen because CAKE is adjusting packet pacing, but large volumes are often indicative of a network problem - particularly customer Wi-Fi problems.</p>"
    }

    subscribeTo() {
        return [ "Retransmits" ];
    }

    buildContainer() {
        let base = super.buildContainer();
        let graphs = this.graphDiv();

        // Add some time controls
        base.classList.add("dashlet-with-controls");
        let controls = document.createElement("div");
        controls.classList.add("dashgraph-controls", "small");

        if (window.hasInsight) {
            let btnLive = this.makePeriodBtn("Live");
            btnLive.classList.add("active");
            controls.appendChild(btnLive);

            let targets = ["1h", "6h", "12h", "24h", "7d"];
            targets.forEach((t) => {
                let graph = document.createElement("div");
                graph.id = this.graphDivId() + "_" + t;
                graph.classList.add("dashgraph");
                graph.style.display = "none";
                graph.innerHTML = window.hasLts ? "Loading..." : "<p class='text-secondary small'>You need an active LibreQoS Insight account to view this data.</p>";
                this.graphDivs.push(graph);
                controls.appendChild(this.makePeriodBtn(t));
            });
        }

        base.appendChild(controls);
        base.appendChild(graphs);
        this.graphDivs.forEach((g) => {
            base.appendChild(g);
        });
        return base;
    }

    setup() {
        super.setup();
        this.graph = new RetransmitsGraph(this.graphDivId());
        this.ltsLoaded = false;
        if (window.hasLts) {
            this.graphDivs.forEach((g) => {
                let period = periodNameToSeconds(g.id.replace(this.graphDivId() + "_", ""));
                let graph = new LtsRetransmitsGraph(g.id, period);
                this.graphs.push(graph);
            });
        }
    }

    onMessage(msg) {
        if (msg.event === "Retransmits") {
            this.graph.update(msg.data.down, msg.data.up, msg.data.tcp_down, msg.data.tcp_up);
            if (!this.ltsLoaded && window.hasLts) {
                //console.log("Loading LTS data");
                this.graphs.forEach((g) => {
                    //console.log("Loading " + g.period);
                    let url = "/local-api/ltsRetransmits/" + g.period;
                    //console.log(url);
                    $.get(url, (data) => {
                        //console.log(data);
                        g.update(data);
                    });
                });
                this.ltsLoaded = true;
            }
            this.counter++;
            if (this.counter > 120) {
                // Reload the LTS graphs every 2 minutes
                this.counter = 0;
                this.ltsLoaded = false;
            }
        }
    }

    supportsZoom() {
        return true;
    }
}
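`periodNameToSeconds` is imported from `../helpers/time_periods` and its implementation is not part of this diff. Given the period names the dashlets pass in (`1h`, `6h`, `12h`, `24h`, `7d`), it presumably maps a count plus a unit suffix to seconds, roughly like this sketch (an assumption about the helper, not the shipped code):

```javascript
// Assumed behavior of periodNameToSeconds: parse "<count><unit>" period
// names such as "6h" or "7d" into a number of seconds. The real helper
// in ../helpers/time_periods may differ in naming and edge-case handling.
function periodNameToSecondsSketch(name) {
    const multipliers = { m: 60, h: 3600, d: 86400 };
    const unit = name.slice(-1);
    const count = parseInt(name.slice(0, -1), 10);
    if (!multipliers[unit] || isNaN(count)) return 0; // unknown format
    return count * multipliers[unit];
}
```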
@@ -1,12 +1,11 @@
import {BaseDashlet} from "./base_dashlet";
import {RttHistogram} from "../graphs/rtt_histo";
import {clearDashDiv, theading, TopNTableFromMsgData, topNTableHeader, topNTableRow} from "../helpers/builders";
import {scaleNumber, rttCircleSpan, formatThroughput, formatRtt} from "../helpers/scaling";
import {redactCell} from "../helpers/redact";
import {clearDashDiv, TopNTableFromMsgData} from "../helpers/builders";
import {TimedCache} from "../lq_js_common/helpers/timed_cache";

export class Worst10Downloaders extends BaseDashlet {
    constructor(slot) {
        super(slot);
        this.timeCache = new TimedCache(10);
    }

    canBeSlowedDown() {
@@ -40,7 +39,19 @@ export class Worst10Downloaders extends BaseDashlet {
        if (msg.event === "WorstRTT") {
            let target = document.getElementById(this.id);

            let t = TopNTableFromMsgData(msg);
            msg.data.forEach((r) => {
                let key = r.circuit_id;
                this.timeCache.addOrUpdate(key, r);
            });
            this.timeCache.tick();

            let items = this.timeCache.get();
            items.sort((a, b) => {
                return a.bits_per_second.down - b.bits_per_second.down;
            });
            // Limit to 10 entries
            items = items.slice(0, 10);
            let t = TopNTableFromMsgData(items);

            // Display it
            clearDashDiv(this.id, target);
@@ -1,12 +1,11 @@
import {BaseDashlet} from "./base_dashlet";
import {RttHistogram} from "../graphs/rtt_histo";
import {clearDashDiv, theading, TopNTableFromMsgData, topNTableHeader, topNTableRow} from "../helpers/builders";
import {scaleNumber, rttCircleSpan, formatRtt, formatThroughput} from "../helpers/scaling";
import {redactCell} from "../helpers/redact";
import {clearDashDiv, TopNTableFromMsgData} from "../helpers/builders";
import {TimedCache} from "../lq_js_common/helpers/timed_cache";

export class Worst10Retransmits extends BaseDashlet {
    constructor(slot) {
        super(slot);
        this.timeCache = new TimedCache(10);
    }

    canBeSlowedDown() {
@@ -40,7 +39,19 @@ export class Worst10Retransmits extends BaseDashlet {
        if (msg.event === "WorstRetransmits") {
            let target = document.getElementById(this.id);

            let t = TopNTableFromMsgData(msg);
            msg.data.forEach((r) => {
                let key = r.circuit_id;
                this.timeCache.addOrUpdate(key, r);
            });
            this.timeCache.tick();

            let items = this.timeCache.get();
            items.sort((a, b) => {
                return a.tcp_retransmits.down - b.tcp_retransmits.down;
            });
            // Limit to 10 entries
            items = items.slice(0, 10);
            let t = TopNTableFromMsgData(items);

            // Display it
            clearDashDiv(this.id, target);
@@ -1,5 +1,5 @@
import {DashboardGraph} from "./dashboard_graph";
import {scaleNumber} from "../helpers/scaling";
import {scaleNumber} from "../lq_js_common/helpers/scaling";

export class BitsPerSecondGauge extends DashboardGraph {
    constructor(id) {
@@ -1,5 +1,5 @@
import {DashboardGraph} from "./dashboard_graph";
import {scaleNumber} from "../helpers/scaling";
import {scaleNumber} from "../lq_js_common/helpers/scaling";

export class CakeBacklog extends DashboardGraph {
    constructor(id) {
@@ -1,5 +1,4 @@
import {DashboardGraph} from "./dashboard_graph";
import {scaleNumber} from "../helpers/scaling";

export class CakeDelays extends DashboardGraph {
    constructor(id) {
@@ -1,5 +1,5 @@
import {DashboardGraph} from "./dashboard_graph";
import {scaleNumber} from "../helpers/scaling";
import {scaleNumber} from "../lq_js_common/helpers/scaling";

export class CakeDrops extends DashboardGraph {
    constructor(id) {
@@ -1,5 +1,5 @@
import {DashboardGraph} from "./dashboard_graph";
import {scaleNumber} from "../helpers/scaling";
import {scaleNumber} from "../lq_js_common/helpers/scaling";

export class CakeMarks extends DashboardGraph {
    constructor(id) {
@@ -1,5 +1,4 @@
import {DashboardGraph} from "./dashboard_graph";
import {scaleNumber} from "../helpers/scaling";

export class CakeQueueLength extends DashboardGraph {
    constructor(id) {
@@ -1,5 +1,5 @@
import {DashboardGraph} from "./dashboard_graph";
import {scaleNumber} from "../helpers/scaling";
import {scaleNumber} from "../lq_js_common/helpers/scaling";

export class CakeTraffic extends DashboardGraph {
    constructor(id) {
@@ -1,5 +1,5 @@
import {DashboardGraph} from "./dashboard_graph";
import {scaleNumber} from "../helpers/scaling";
import {scaleNumber} from "../lq_js_common/helpers/scaling";

const RING_SIZE = 60 * 5; // 5 Minutes

@@ -1,5 +1,5 @@
import {DashboardGraph} from "./dashboard_graph";
import {scaleNumber} from "../helpers/scaling";
import {scaleNumber} from "../lq_js_common/helpers/scaling";

const RING_SIZE = 60 * 5; // 5 Minutes

@@ -1,5 +1,4 @@
import {DashboardGraph} from "./dashboard_graph";
import {scaleNumber} from "../helpers/scaling";

export class FlowsSankey extends DashboardGraph {
    constructor(id) {
@@ -25,12 +24,12 @@ export class FlowsSankey extends DashboardGraph {
        let asns = {};
        let remoteDevices = {};

        const one_second_in_nanos = 1000000000;
        const ten_second_in_nanos = 10000000000;

        // Iterate over each flow and accumulate traffic.
        let flowCount = 0;
        flows.flows.forEach((flow) => {
            if (flow[0].last_seen_nanos > one_second_in_nanos) return;
            if (flow[0].last_seen_nanos > ten_second_in_nanos) return;
            flowCount++;
            let localDevice = flow[0].device_name;
            let proto = flow[0].protocol_name;
@@ -1,5 +1,6 @@
import {DashboardGraph} from "./dashboard_graph";
import {scaleNumber} from "../helpers/scaling";
import {GraphOptionsBuilder} from "../lq_js_common/e_charts/chart_builder";
import {RingBuffer} from "../lq_js_common/helpers/ringbuffer";

const RING_SIZE = 60 * 5; // 5 Minutes

@@ -8,55 +9,20 @@ export class FlowCountGraph extends DashboardGraph {
        super(id);
        this.ringbuffer = new RingBuffer(RING_SIZE);

        let xaxis = [];
        for (let i=0; i<RING_SIZE; i++) {
            xaxis.push(i);
        }
        this.option = new GraphOptionsBuilder()
            .withSequenceAxis(0, RING_SIZE)
            .withScaledAbsYAxis("Tracked Flows", 30)
            .withLeftGridSize("15%")
            .build();
        this.option.series = [
            {
                name: 'Active/Tracked',
                data: [],
                type: 'line',
                symbol: 'none',
            }
        ];

        this.option = {
            grid: {
                x: '15%',
            },
            legend: {
                orient: "horizontal",
                right: 10,
                top: "bottom",
                selectMode: false,
                data: [
                    {
                        name: "Active/Tracked",
                        icon: 'circle',
                    },
                ],
                textStyle: {
                    color: '#aaa'
                },
            },
            xAxis: {
                type: 'category',
                data: xaxis,
            },
            yAxis: {
                type: 'value',
                axisLabel: {
                    formatter: (val) => {
                        return scaleNumber(Math.abs(val), 0);
                    },
                }
            },
            series: [
                {
                    name: 'Active/Tracked',
                    data: [],
                    type: 'line',
                    symbol: 'none',
                }
            ],
            tooltip: {
                trigger: 'item',
            },
            animation: false,
        }
        this.option && this.chart.setOption(this.option);
    }

@@ -69,35 +35,4 @@ export class FlowCountGraph extends DashboardGraph {

        this.chart.setOption(this.option);
    }
}

class RingBuffer {
    constructor(size) {
        this.size = size;
        let data = [];
        for (let i=0; i<size; i++) {
            data.push([0, 0]);
        }
        this.head = 0;
        this.data = data;
    }

    push(recent, completed) {
        this.data[this.head] = [recent, completed];
        this.head += 1;
        this.head %= this.size;
    }

    series() {
        let result = [[], []];
        for (let i=this.head; i<this.size; i++) {
            result[0].push(this.data[i][0]);
            result[1].push(this.data[i][1]);
        }
        for (let i=0; i<this.head; i++) {
            result[0].push(this.data[i][0]);
            result[1].push(this.data[i][1]);
        }
        return result;
    }
}
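The local `RingBuffer` class removed above (superseded by the shared `lq_js_common` version) keeps a fixed-size window of samples: `push` overwrites the oldest slot, and `series` replays the slots oldest-first so the chart always shows a sliding window. Its behavior is easy to see with a tiny window:

```javascript
// The removed local RingBuffer, reproduced verbatim from the diff above,
// plus a small demonstration of the oldest-first replay order.
class RingBuffer {
    constructor(size) {
        this.size = size;
        let data = [];
        for (let i=0; i<size; i++) {
            data.push([0, 0]);
        }
        this.head = 0;
        this.data = data;
    }

    push(recent, completed) {
        // Overwrite the oldest slot, then advance (and wrap) the head.
        this.data[this.head] = [recent, completed];
        this.head += 1;
        this.head %= this.size;
    }

    series() {
        // Replay from the head (oldest slot) to the end, then wrap around.
        let result = [[], []];
        for (let i=this.head; i<this.size; i++) {
            result[0].push(this.data[i][0]);
            result[1].push(this.data[i][1]);
        }
        for (let i=0; i<this.head; i++) {
            result[0].push(this.data[i][0]);
            result[1].push(this.data[i][1]);
        }
        return result;
    }
}

// With a window of 3 and two pushes, the untouched (zero) slot comes
// first, followed by the two samples in arrival order.
const rb = new RingBuffer(3);
rb.push(1, 10);
rb.push(2, 20);
const [recent, completed] = rb.series();
```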
@@ -0,0 +1,139 @@
import {DashboardGraph} from "./dashboard_graph";
import {scaleNumber} from "../lq_js_common/helpers/scaling";

export class LtsLast24Hours_graph extends DashboardGraph {
    constructor(id) {
        super(id);
        this.option = {
            xAxis: {
                type: 'category',
                data: [],
                axisLabel: {
                    formatter: function (val) {
                        return new Date(parseInt(val) * 1000).toLocaleString();
                    },
                    hideOverlap: true
                }
            },
            yAxis: {
                type: 'value',
                axisLabel: {
                    formatter: (val) => {
                        return scaleNumber(Math.abs(val), 0);
                    },
                }
            },
            legend: {
                orient: "horizontal",
                right: 10,
                top: "bottom",
                selectMode: false,
                data: [
                    {
                        name: "Download",
                        icon: 'circle',
                        itemStyle: {
                            color: window.graphPalette[0]
                        }
                    }, {
                        name: "Upload",
                        icon: 'circle',
                        itemStyle: {
                            color: window.graphPalette[1]
                        }
                    }
                ],
                textStyle: {
                    color: '#aaa'
                },
            },
            series: [
                {
                    name: 'Download Error',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                    stack: 'dl',
                    lineStyle: {
                        opacity: 0
                    },
                },
                {
                    name: 'Download Error2',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                    stack: 'dl',
                    lineStyle: {
                        opacity: 0
                    },
                    areaStyle: {
                        color: '#ccc'
                    },
                },
                {
                    name: 'Upload Error',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                    stack: 'ul',
                    lineStyle: {
                        opacity: 0
                    },
                },
                {
                    name: 'Upload Error2',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                    stack: 'ul',
                    lineStyle: {
                        opacity: 0
                    },
                    areaStyle: {
                        color: '#ccc'
                    },
                },
                {
                    name: 'Download',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                },
                {
                    name: 'Upload',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                },
            ],
        };
        this.option && this.chart.setOption(this.option);
    }

    update(data) {
        this.chart.hideLoading();
        //console.log(data);
        this.option.xAxis.data = [];

        this.option.series[0].data = [];
        this.option.series[1].data = [];
        this.option.series[2].data = [];
        this.option.series[3].data = [];
        this.option.series[4].data = [];
        this.option.series[5].data = [];
        for (let x=0; x<data.length; x++) {
            this.option.xAxis.data.push(data[x].time);
            this.option.series[0].data.push(data[x].min_down * 8);
            this.option.series[1].data.push((data[x].max_down - data[x].min_down) * 8);
            this.option.series[2].data.push((0.0 - data[x].max_up) * 8);
            this.option.series[3].data.push((0.0 - (data[x].max_up - data[x].min_up)) * 8);
            //console.log(0.0 - data[x].min_up, 0.0 - data[x].max_up);

            this.option.series[4].data.push(data[x].median_down * 8);
            this.option.series[5].data.push((0.0 - data[x].median_up) * 8);
        }
        this.chart.setOption(this.option);
    }
}
@@ -0,0 +1,223 @@
import {DashboardGraph} from "./dashboard_graph";
import {scaleNumber} from "../lq_js_common/helpers/scaling";

export class LtsCakeGraph extends DashboardGraph {
    constructor(id, period) {
        super(id);
        this.period = period;
        this.option = {
            xAxis: {
                type: 'category',
                data: [],
                axisLabel: {
                    formatter: function (val) {
                        return new Date(parseInt(val) * 1000).toLocaleString();
                    },
                    hideOverlap: true
                }
            },
            yAxis: {
                type: 'value',
                axisLabel: {
                    formatter: (val) => {
                        return scaleNumber(Math.abs(val), 0);
                    },
                }
            },
            legend: {
                orient: "horizontal",
                right: 10,
                top: "bottom",
                selectMode: false,
                data: [
                    {
                        name: "Marks",
                        icon: 'circle',
                        itemStyle: {
                            color: window.graphPalette[0]
                        }
                    }, {
                        name: "Drops",
                        icon: 'circle',
                        itemStyle: {
                            color: window.graphPalette[1]
                        }
                    }
                ],
                textStyle: {
                    color: '#aaa'
                },
            },
            series: [
                {
                    name: 'DownloadM Error',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                    stack: 'dl',
                    lineStyle: {
                        opacity: 0
                    },
                },
                {
                    name: 'DownloadM Error2',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                    stack: 'dl',
                    lineStyle: {
                        opacity: 0
                    },
                    areaStyle: {
                        color: window.graphPalette[2]
                    },
                },
                {
                    name: 'UploadM Error',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                    stack: 'ul',
                    lineStyle: {
                        opacity: 0
                    },
                },
                {
                    name: 'UploadM Error2',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                    stack: 'ul',
                    lineStyle: {
                        opacity: 0
                    },
                    areaStyle: {
                        color: window.graphPalette[2]
                    },
                },
                {
                    name: 'Marks',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                    lineStyle: {
                        color: window.graphPalette[0],
                    }
                },
                {
                    name: 'MarksU',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                    lineStyle: {
                        color: window.graphPalette[0],
                    }
                },

                // Drops
                {
                    name: 'DownloadD Error',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                    stack: 'dl',
                    lineStyle: {
                        opacity: 0
                    },
                },
                {
                    name: 'DownloadD Error2',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                    stack: 'dl',
                    lineStyle: {
                        opacity: 0
                    },
                    areaStyle: {
                        color: window.graphPalette[3]
                    },
                },
                {
                    name: 'UploadD Error',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                    stack: 'ul',
                    lineStyle: {
                        opacity: 0
                    },
                },
                {
                    name: 'UploadD Error2',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                    stack: 'ul',
                    lineStyle: {
                        opacity: 0
                    },
                    areaStyle: {
                        color: window.graphPalette[3]
                    },
                },
                {
                    name: 'Drops',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                    lineStyle: {
                        color: window.graphPalette[1],
                    }
                },
                {
                    name: 'DropsU',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                    lineStyle: {
                        color: window.graphPalette[1],
                    }
                },
            ],
        };
        this.option && this.chart.setOption(this.option);
    }

    update(data) {
        this.chart.hideLoading();
        //console.log(data);
        this.option.xAxis.data = [];

        for (let i=0; i<12; i++) {
            this.option.series[i].data = [];
        }
        // this.option.series[0].data = [];
        // this.option.series[1].data = [];
        // this.option.series[2].data = [];
        // this.option.series[3].data = [];
        // this.option.series[4].data = [];
        // this.option.series[5].data = [];
        for (let x=0; x<data.length; x++) {
            this.option.xAxis.data.push(data[x].time);

            // Marks
            this.option.series[0].data.push(data[x].min_marks_down);
            this.option.series[1].data.push(data[x].max_marks_down - data[x].min_marks_down);
            this.option.series[2].data.push(0.0 - data[x].max_marks_up);
            this.option.series[3].data.push(0.0 - (data[x].max_marks_up - data[x].min_marks_up));
            this.option.series[4].data.push(data[x].median_marks_down);
            this.option.series[5].data.push(0.0 - data[x].median_marks_up);

            // Drops
            this.option.series[6].data.push(data[x].min_drops_down);
            this.option.series[7].data.push(data[x].max_drops_down - data[x].min_drops_down);
            this.option.series[8].data.push(0.0 - data[x].max_drops_up);
            this.option.series[9].data.push(0.0 - (data[x].max_drops_up - data[x].min_drops_up));
            this.option.series[10].data.push(data[x].median_drops_down);
            this.option.series[11].data.push(0.0 - data[x].median_drops_up);
        }
        this.chart.setOption(this.option);
    }
}
@@ -0,0 +1,140 @@
import {DashboardGraph} from "./dashboard_graph";
import {scaleNumber} from "../lq_js_common/helpers/scaling";

export class LtsRetransmitsGraph extends DashboardGraph {
    constructor(id, period) {
        super(id);
        this.period = period;
        this.option = {
            xAxis: {
                type: 'category',
                data: [],
                axisLabel: {
                    formatter: function (val) {
                        return new Date(parseInt(val) * 1000).toLocaleString();
                    },
                    hideOverlap: true
                }
            },
            yAxis: {
                type: 'value',
                axisLabel: {
                    formatter: (val) => {
                        return scaleNumber(Math.abs(val), 0);
                    },
                }
            },
            legend: {
                orient: "horizontal",
                right: 10,
                top: "bottom",
                selectMode: false,
                data: [
                    {
                        name: "Download",
                        icon: 'circle',
                        itemStyle: {
                            color: window.graphPalette[0]
                        }
                    }, {
                        name: "Upload",
                        icon: 'circle',
                        itemStyle: {
                            color: window.graphPalette[1]
                        }
                    }
                ],
                textStyle: {
                    color: '#aaa'
                },
            },
            series: [
                {
                    name: 'Download Error',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                    stack: 'dl',
                    lineStyle: {
                        opacity: 0
                    },
                },
                {
                    name: 'Download Error2',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                    stack: 'dl',
                    lineStyle: {
                        opacity: 0
                    },
                    areaStyle: {
                        color: '#ccc'
                    },
                },
                {
                    name: 'Upload Error',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                    stack: 'ul',
                    lineStyle: {
                        opacity: 0
                    },
                },
                {
                    name: 'Upload Error2',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                    stack: 'ul',
                    lineStyle: {
                        opacity: 0
                    },
                    areaStyle: {
                        color: '#ccc'
                    },
                },
                {
                    name: 'Download',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                },
                {
                    name: 'Upload',
                    type: 'line',
                    data: [],
                    symbol: 'none',
                },
            ],
        };
        this.option && this.chart.setOption(this.option);
    }

    update(data) {
        this.chart.hideLoading();
        //console.log(data);
        this.option.xAxis.data = [];

        this.option.series[0].data = [];
        this.option.series[1].data = [];
        this.option.series[2].data = [];
        this.option.series[3].data = [];
        this.option.series[4].data = [];
        this.option.series[5].data = [];
        for (let x=0; x<data.length; x++) {
            this.option.xAxis.data.push(data[x].time);
            this.option.series[0].data.push(data[x].min_down * 8);
            this.option.series[1].data.push((data[x].max_down - data[x].min_down) * 8);
            this.option.series[2].data.push((0.0 - data[x].max_up) * 8);
            this.option.series[3].data.push((0.0 - (data[x].max_up - data[x].min_up)) * 8);
            //console.log(0.0 - data[x].min_up, 0.0 - data[x].max_up);

            this.option.series[4].data.push(data[x].median_down * 8);
            this.option.series[5].data.push((0.0 - data[x].median_up) * 8);
        }
        this.chart.setOption(this.option);
    }
}
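The six series in the LTS graphs above implement a min/median/max band: an invisible line lifts the stack to the per-sample minimum, a second (filled) series stacks `max − min` on top of it, the upload side is mirrored below zero, and the median is drawn as a visible line. The per-sample transform, pulled out of the `update()` loop for illustration (field names taken from the code above; values arrive in bytes and are scaled to bits):

```javascript
// Sketch of the band transform used by the LTS throughput/retransmit
// graphs: each sample's min/max/median becomes stacked series values
// (bits), with upload mirrored below the axis. Mirrors the arithmetic
// in the update() method above, not a separate shipped helper.
function toBandSeries(samples) {
    const s = { base: [], band: [], upBase: [], upBand: [], median: [], upMedian: [] };
    for (const d of samples) {
        s.base.push(d.min_down * 8);                          // invisible stack base
        s.band.push((d.max_down - d.min_down) * 8);           // filled band height
        s.upBase.push((0.0 - d.max_up) * 8);                  // mirrored below zero
        s.upBand.push((0.0 - (d.max_up - d.min_up)) * 8);
        s.median.push(d.median_down * 8);                     // visible center line
        s.upMedian.push((0.0 - d.median_up) * 8);
    }
    return s;
}
```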