# NoSQLBench 5.21
__release notes preview / work-in-progress__
The 5.21 series of NoSQLBench marks a significant departure from the earlier versions. The platform
is rebased on top of Java 21 LTS. The core architecture has been adapted to suit more advanced
workflows, particularly around dimensional metrics, test parameterization and labeling, and
automated analysis. Support for the hierarchic naming methods of graphite has been removed, and the
core metrics logic has been rebuilt around dimensional metric labeling.
## Java 21 LTS
The release of [Java 21](https://openjdk.org/projects/jdk/21/) is significant to the NoSQLBench
project for several reasons.
For systems like NoSQLBench, the runtime threading model in Java 21 is much improved. Virtual
threads offer a distinctly better solution for the kinds of workloads where you need to emulate
request-per-thread behavior efficiently. While virtual threads are not advised as a general
replacement in every case, they are particularly suited to the agnostic APIs within NoSQLBench
which wrap a myriad of different system driver types. In NB 5.21, virtual threads will be
enabled further as corner cases which cause pinning and other side-effects are removed.
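As a minimal illustration of the pattern (plain Java 21, not NoSQLBench code), a thread-per-request
workload emulation with virtual threads looks like this:
```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadSketch {
    public static void main(String[] args) {
        // One task per simulated request; virtual threads make a thread-per-request
        // model cheap enough to run at very high concurrency.
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int cycle = 0; cycle < 10_000; cycle++) {
                int c = cycle;
                pool.submit(() -> {
                    Thread.sleep(5); // stand-in for a blocking driver/database call
                    return c;
                });
            }
        } // close() waits for all submitted tasks to finish
    }
}
```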
The performance improvements go deeper than the threading model itself. The built-in concurrent
libraries which have evolved to work alongside virtual threads offer some of the best
opportunities for streamlining and simplifying concurrent code. A key example of this is the rate
limiter implementation in 5.21, which does not have the limitations of the 5.17 implementation.
It is based directly on java.util.concurrent.Semaphore, which scales surprisingly well across
cores and configurations.
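The real limiter lives in the NoSQLBench codebase; purely as a minimal sketch of the idea (the
class name, one-second replenishment interval, and lack of burst capping here are assumptions, not
the actual implementation), a Semaphore-backed limiter might look like this:
```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

/** Illustrative sketch of a semaphore-backed rate limiter; not the NB 5.21 implementation. */
public class SemaphoreRateLimiterSketch {
    private final Semaphore permits = new Semaphore(0);

    public SemaphoreRateLimiterSketch(int opsPerSecond) {
        ScheduledExecutorService refiller = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);
            return t;
        });
        // Replenish the permit budget once per second; a production limiter would also
        // cap accumulated permits to bound burst size.
        refiller.scheduleAtFixedRate(() -> permits.release(opsPerSecond), 0, 1, TimeUnit.SECONDS);
    }

    /** Blocks the calling thread (platform or virtual) until an operation may proceed. */
    public void acquire() throws InterruptedException {
        permits.acquire();
    }
}
```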
## Component Tree
In contrast to the metrics system, which is moving from a hierarchic model to a dimensional model,
the core runtime structure of NoSQLBench is moving from a flat model to a hierarchic model. This
may seem counter-intuitive at first, but these two structural systems work together to provide a
more direct and robust way of identifying test data, metrics, lifecycles, configuration, etc.
This approach is called the "Component Tree" in NoSQLBench. It simply reflects that each phase,
each parameter, each measurement that is in a NoSQLBench test design has a specific beginning
and end point which is well-defined, _within the scope of its parent_, and that all these aspects
live together on the component they pertain to.
Here are some of the basic features of the component tree:
* Each component has a parent except for the root component, which has no parent.
* Each component registers with its parent upon creation and is scoped to its parent's
lifecycle; i.e., when a parent component goes out of scope, it takes its attached
sub-components with it.
* All functions and side effects that a component may provide happen naturally within that
component's lifecycle, whether that is upon attachment, detachment, or in-between. No
component is considered valid outside of these boundaries.
* Each component may provide a set of component-specific labels and label values at time of
construction which _uniquely_ describe its context within the parent component. Overriding a
label which is already set is not allowed, nor is providing a label set which is already known
within a parent component. Each component has a labels property which is the logical sum of all
the labels on it and all parents. This provides unique labels at every level which are compatible
with dimensional metrics, annotation, and logging systems.
* As a specific exception to the unique labels rule, some intermediate components may provide
an empty label set. A parent node may contain any number of these. They are generally
structural shims or similar elements which will be factored out.
* Basic services, like metrics registration, are provided within the component API
orthogonally and attached directly to components. Thus, the view of all metrics within the
runtime is simply the sum of all metrics registered on all components with respect to a
particular node in the tree.
Here's a sketch of a typical NoSQLBench 5.21 session:
```
[CLI]
  \
   Session {session="s20231123_123456.123"}
   ┗━ Context {context="default"}
      ┣━ Activity {activity="schema"}
      ┃  ┗━ metric timer {name="cycles"}
      ┣━ Activity {activity="rampup"}
      ┃  ┗━ metric timer {name="cycles"}
      ┗━ Activity {activity="testann",k="100",dimensions="1000"}
         ┗━ metric timer {name="cycles"}
```
This shows the tree structure of the runtime and the implied lifecycle bounds of each type:
* The Command Line Interface is not a component, but it is used to configure global session
settings and launch a session.
* The Session is the root component. It has a single label under the name `session`. Within its
`default` context, it has three attached activities with distinct labels, each with an attached
metric:
  * `activity=schema`
  * `activity=rampup`
  * `activity=testann`
This contrived example demonstrates very simply the mechanisms of the component tree at work.
Each metric has a set of labels which uniquely identify it:
* timer with labels `{session="s20231123_123456.123",context="default",activity="schema",
name="cycles"}`
* timer with labels `{session="s20231123_123456.123",context="default",activity="rampup",
name="cycles"}`
* timer with
labels `{session="s20231123_123456.123",context="default",activity="testann",k="100",
dimensions="1000",name="cycles"}`
## Dimensional Metrics
The backstory and motivation for this change are captured in [^1].
Beginning in NoSQLBench 5.21, the primary metrics transport will be client-push using the
[openmetrics](https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md)
exposition format. As well, the Victoria Metrics
[community edition](https://victoriametrics.com/products/open-source/) is open source and provides
all the telemetry features needed. It is the preferred collector, database, and query engine which
the NoSQLBench project will integrate with by default. That doesn't mean that other systems are or
will be unsupported, but it does mean that they will not be prioritized for implementation unless
there is a specific user need which doesn't compromise the basic integrity of the dimensional
metrics system.
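As a rough illustration, a dimensionally labeled timer like the ones above would be rendered in
that exposition format along these lines (the metric family name and sample values here are
illustrative, not the exact names NoSQLBench emits):
```
# TYPE cycles summary
cycles_count{session="s20231123_123456.123",context="default",activity="schema"} 1000
cycles_sum{session="s20231123_123456.123",context="default",activity="schema"} 42.7
```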
Further, the reliance on the original metrics library has become more problematic over time. The
next version, which promised support for dimensional labels in metrics, is officially
["on pause"](https://github.com/dropwizard/metrics#metrics). As such, the NB project will seek
to pivot off this library to something more current and supported going forward, as options permit.
## Native Analysis Methods
The scripting layer in NoSQLBench hasn't gone away, but it will be considered secondary
to the Java-native way of writing scenario logic, especially for more sophisticated scenarios.
Tools like findmax, stepup, and optimo will become more prevalent as the primary way that users
leverage NoSQLBench. These advanced analysis methods were mostly functional in previous versions,
but they were nearly unmaintainable in their undebuggable script form. This meant that they
couldn't be reliably leveraged across testing efforts to remove subjective and interpretive
human logic from advanced testing scenarios. The new capability emulates the earlier scripting
environment, but with a native context for all the APIs, in which all component services can be
accessed directly.
## Scenarios vs Commands & Contexts
In NB 5.21, the runtime element previously called a Scenario is now more loosely defined. It
actually doesn't exist as an element proper except when used in _Named Scenarios_. Everywhere
else, the term "scenario" should be considered descriptive only. Instead of scenarios, commands
run within a command context. Each command context allows you to chain session state between
different stages of testing and different types of logic. An example will illustrate this best:
```yaml
scenarios:
  default:
    flywheel: start driver=...
    analyze: optimo activity=flywheel
    shutdown: stop flywheel
```
In this example everything works within the named scenario as you might expect. This shows
mixing script commands and native code seamlessly. What ties the named steps of the `default`
named scenario together is that they are all run within the same named context, and thus
share access to the same running activity state, metrics, etc. By default, the named scenario
sets the default name of the command context for all commands contained within it. You can have
any number of command contexts you need in a test flow, and each holds its own metrics,
activities, io buffers, etc. To show how command contexts are used outside of the named
scenario pattern, here is a basic example using just NB commands:
```
nb5 start driver ... context=default \
optimo activity=flywheel context=default \
stop flywheel context=default
```
However, you don't need to specify anything related to command contexts if you only want to use
one, as it is created automatically under the `default` context name and used transparently.
So, the equivalent simpler form of the above is simply:
`nb5 start driver ... optimo activity=flywheel stop flywheel`
The ways that this can be used for advanced testing are not fully explored yet, but the context
facility has already allowed for drastic simplification of analysis methods. Essentially, you
can blend them into existing testing flows quite seamlessly.
2023-10-23 17:19:30 -05:00
## Parameterization
The changes described above hint at a capability that is nascent in the NB project: testing
within parameter spaces. In order to support the kinds of automated and advanced testing needed
for today's systems, this is a must-have. Specifically, we need the ability to describe a set of
parameters (what some may describe as _hyper-parameters_), and to have the testing system apply
an optimization or search algorithm to determine a local or global maximum. These parameters and
their results must be visible in a tangible form for technologists and diagnosticians to make
sense of them. This is why they are surfaced in NB 5.21 as labeled measurements, episodic and
real-time, over (labeled) parameter spaces. There will be more to come on this as we prove out
the analysis methods.
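To make the idea concrete, here is a toy sketch (plain Java, not NoSQLBench code) of treating a
parameterized test as a function over a labeled parameter space and searching it for a maximum;
the exhaustive loop stands in for smarter methods such as findmax or optimo:
```java
import java.util.List;
import java.util.Map;

/** Toy parameter-space search; the measurement function and parameters are made up. */
public class ParameterSearchSketch {

    // Stand-in for running one parameterized test and measuring a result,
    // e.g. achieved throughput at a given thread count and target rate.
    static double measure(Map<String, Integer> params) {
        return Math.min(params.get("threads") * 950.0, params.get("rate"));
    }

    public static void main(String[] args) {
        Map<String, Integer> best = null;
        double bestResult = Double.NEGATIVE_INFINITY;
        for (int threads : List.of(8, 16, 32, 64)) {
            for (int rate : List.of(10_000, 50_000, 100_000)) {
                Map<String, Integer> point = Map.of("threads", threads, "rate", rate);
                double result = measure(point);
                System.out.println(point + " -> " + result); // a labeled measurement
                if (result > bestResult) {
                    bestResult = result;
                    best = point;
                }
            }
        }
        System.out.println("maximum found at " + best + " = " + bestResult);
    }
}
```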
## Footnotes
[^1]: The original metrics library used with NoSQLBench was the
DropWizard metrics library which adopted the hierarchic naming structure popular with systems like
graphite. While useful at the time, telemetry systems moved on to dimensional metrics with the
adoption of Prometheus. The combination of the graphite naming structure and data flow with
Prometheus was tenuous in practice. For a time, NoSQLBench wedged data from the hierarchic naming
scheme into dimensional form for Prometheus by using the graphite exporter, with pattern matching
for name and label extraction. This was incredibly fragile and prevented workload modeling and
metrics capture around test parameters and other important details. Further, the _prometheus way_ of
gathering metrics imposed an onerous requirement on users that the metrics system was actively in
control of all data flows. (Yes, you could use the external gateway, but that was yet another moving
part.) This further degraded the quality of metrics data by taking the timing and cadence of
metrics flows out of the control of the client. It also put metrics flow behind two uncoordinated
polling mechanisms which degraded the immediacy of the metrics.