minor doc updates from review

This commit is contained in:
Jonathan Shook 2023-10-23 17:11:53 -05:00
parent c2f4bacc78
commit 29c35cefb8


@@ -35,7 +35,7 @@ provides a character of scaling over cores and configurations which is surprisin
Contrary to the metrics system which is moving from a hierarchic model to a dimensional model,
the core runtime structure of NoSQLBench is moving from a flat model to a hierarchic model. This
may seem counter-intuitive at first, but these two structural systems work together to provide a
-more direct and fool-proof way of identifying test data, metrics, lifecycles, configuration, etc.
+more direct and robust way of identifying test data, metrics, lifecycles, configuration, etc.
This approach is called the "Component Tree" in NoSQLBench. It simply reflects that each phase,
each parameter, each measurement that is in a NoSQLBench test design has a specific beginning
@@ -52,14 +52,17 @@ Here are some of the basic features of the component tree:
component's lifecycle, whether that is upon attachment, detachment, or in-between. No
component is considered valid outside of these boundaries.
* Each component may provide a set of component-specific labels and label values at time of
-construction which _uniquely_ describe its context within then parent component. Overriding a
+construction which _uniquely_ describe its context within the parent component. Overriding a
label which is already set is not allowed, nor is providing a label set which is already known
within a parent component. Each component has a labels property which is the logical sum of all
the labels on it and all parents. This provides unique labels at every level which are compatible
with dimensional metrics, annotation, and logging systems.
+* As a specific exception to the unique labels rule, some intermediate components may provide
+an empty label set. A parent node may contain any number of these.
* Basic services, like metrics registration, are provided within the component API
orthogonally and attached directly to components. Thus, the view of all metrics within the
-runtime is simply the sum of all metrics registered on all components.
+runtime is simply the sum of all metrics registered on all components with respect to a
+particular node in the tree.
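The label-merging rules above can be illustrated with a small sketch. This is not the actual NoSQLBench API; the class and method names are hypothetical, and only the ancestor-override rule is modeled (sibling label-set uniqueness is omitted for brevity):

```java
// Hypothetical sketch of the component-tree labeling rules; not the real NoSQLBench API.
import java.util.LinkedHashMap;
import java.util.Map;

public class ComponentSketch {
    static class Component {
        private final Component parent;
        private final Map<String, String> ownLabels;

        Component(Component parent, Map<String, String> ownLabels) {
            // Overriding a label already set by any ancestor is not allowed.
            for (String key : ownLabels.keySet()) {
                if (parent != null && parent.getLabels().containsKey(key)) {
                    throw new IllegalArgumentException("label override not allowed: " + key);
                }
            }
            this.parent = parent;
            this.ownLabels = ownLabels;
        }

        /** The logical sum of this component's own labels and all parents' labels. */
        Map<String, String> getLabels() {
            Map<String, String> merged = new LinkedHashMap<>();
            if (parent != null) merged.putAll(parent.getLabels());
            merged.putAll(ownLabels);
            return merged;
        }
    }

    public static void main(String[] args) {
        Component session = new Component(null, Map.of("session", "s1"));
        Component activity = new Component(session, Map.of("activity", "write-heavy"));
        // Child labels are unique within the parent and accumulate down the tree.
        System.out.println(activity.getLabels());
    }
}
```

Because every component's effective label set is the union of its own labels and its ancestors', any metric registered at a node is already uniquely addressable in a dimensional metrics system.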
Here's a sketch of a typical NoSQLBench 5.21 session:
@@ -104,7 +107,7 @@ Beginning in NoSQLBench 5.21, the primary metrics transport will be client-push
[openmetrics](https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md)
exposition format. As well, the Victoria Metrics [community edition](https://victoriametrics.com/products/open-source/)
is open source and provides all the necessary telemetry features; it is the preferred
-collector, database and query engine which the NoSQLBench project will integrate with by default.
+collector, database, and query engine which the NoSQLBench project will integrate with by default.
That doesn't mean that other systems are unsupported or will not be supported, but it does mean
that they will not be prioritized for implementation unless there is a specific user need which
doesn't compromise the basic integrity of the dimensional metrics system.
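As a rough illustration of what client-pushed dimensional metrics look like in the OpenMetrics text exposition format (the metric and label names here are hypothetical, not NoSQLBench's actual naming):

```
# TYPE ops_completed counter
ops_completed_total{session="s1",activity="write-heavy",instance="nb-client-0"} 10273
# EOF
```

Each sample carries its full label set inline, so the component-tree labels described earlier map directly onto the exposition format without any name-pattern parsing on the collector side.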
@@ -122,7 +125,7 @@ Tools like findmax, stepup, and optimo will become more prevalent as the primary
leverage NoSQLBench. These advanced analysis methods were mostly functional in previous versions,
but they were nigh un-maintainable in their un-debuggable script form. This meant that they
couldn't be reliably leveraged across testing efforts to remove subjective and interpretive
-human logic from advanced testing scenario. The new capability emulates the scenario fixtures of
+human logic from advanced testing scenarios. The new capability emulates the scenario fixtures of
before, but with a native context for all the APIs, wherein all component services can be
accessed directly.
@@ -140,17 +143,18 @@ the analysis methods.
### Footnotes
-[^1]: The original metrics library used with NoSQLBench was the --Coda Hale-- --Yammer--
-DropWizard metrics library which adopted the hierarchic naming scheme popular with systems like
+[^1]: The original metrics library used with NoSQLBench was the
+DropWizard metrics library which adopted the hierarchic naming structure popular with systems like
Graphite. While useful at the time, telemetry systems moved on to dimensional metrics with the
adoption of Prometheus. Combining the Graphite naming structure and data flow with
Prometheus was tenuous in practice. For a time, NoSQLBench wedged data from the hierarchic naming
-schemed into dimensional form for Prometheus by using the graphite exporter and pattern matching
-based name and label extraction. This was incredibly fragile and prevented workload modeling and
+scheme into dimensional form for Prometheus by using the graphite exporter, with pattern matching
+for name and label extraction. This was incredibly fragile and prevented workload modeling and
metrics capture around test parameters and other important details. Further, the _prometheus way_ of
gathering metrics imposed an onerous requirement on users that the metrics system was actively in
control of all data flows. (Yes, you could use the external gateway, but that was yet another moving
-part.) This further degraded the quality of metrics data by taking the time and cadence putting the
-timing and cadence of metrics flows out of control of the client. It also put metrics flow behind
-two polling mechanisms which degraded the immediacy of the metrics.
+part.) This further degraded the quality of metrics data by taking the timing and cadence of
+metrics flows out of control of the client. It also put metrics flow behind two uncoordinated
+polling mechanisms which degraded the immediacy of the metrics.