add sibling labels restriction and some docs

This commit is contained in:
Jonathan Shook 2023-10-23 16:45:10 -05:00
parent 35fa137633
commit 957f178c4c
14 changed files with 296 additions and 99 deletions

View File

@ -26,8 +26,6 @@ know!
[I am a user and I want to contribute built-in scenarios.](devdocs/devguide/adding_scenarios.md)
[I am a UI developer and I want to improve the NoSQLBench UI (NBUI)](devdocs/devguide/nbui/README.md)
## Contribution Ideas
There are lots of ways to contribute to the project. Some ideas on how to

README.md
View File

@ -3,10 +3,17 @@
[![Star on Github](https://img.shields.io/github/stars/nosqlbench/nosqlbench.svg?style=social)](https://github.com/nosqlbench/nosqlbench/stargazers)
[![Chat on Discord](https://img.shields.io/discord/819995781406130176?logo=discord)](https://discord.gg/dBHRakusMN)
# NoSQLBench v5
# NoSQLBench 5
**The Open Source, Pluggable, NoSQL Benchmarking Suite**
👉 The current version of NoSQLBench in development is _5.21_, which is based on Java 21 LTS.
All new language features in Java 21, and some preview features, may be used. There are
significant improvements in this branch which will be documented before a main release is
published. If you are presently using NoSQLBench for testing, and are not actively developing
against the code base, it is recommended that you stay on the latest 5.17 release until an
official 5.21 release is available. [What's in store for 5.21.](nb_521.md)
[Get it Here](DOWNLOADS.md)
[Contribute to NoSQLBench](CONTRIBUTING.md)
@ -15,71 +22,80 @@
## What is NoSQLBench?
NoSQLBench is a serious performance testing tool for the NoSQL ecosystem. It brings together features and capabilities
NoSQLBench is a serious performance testing tool for the NoSQL ecosystem. It brings together
features and capabilities
that are not found in any other tool.
- You can run common testing workloads directly from the command line. You can start doing this within 5 minutes of
reading this.
- You can generate virtual data sets of arbitrary size, with deterministic data and statistically shaped values.
- You can design custom workloads that emulate your application, contained in a single file, based on statement
templates - no IDE or coding required.
- When needed, you can open the access panels and rewire the runtime behavior of NoSQLBench to do advanced testing,
including a full scripting environment with Javascript.
- You can run common testing workloads directly from the command line. You can start doing this
within 5 minutes of
reading this.
- You can generate virtual data sets of arbitrary size, with deterministic data and statistically
shaped values.
- You can design custom workloads that emulate your application, contained in a single file, based
on statement
templates - no IDE or coding required.
- When needed, you can open the access panels and rewire the runtime behavior of NoSQLBench to do
advanced testing,
including a full scripting environment with Javascript.
The core machinery of NoSQLBench has been built with attention to detail. It has been battle tested within DataStax as a
way to help users validate their data models, baseline system performance, and qualify system designs for scale.
The core machinery of NoSQLBench has been built with attention to detail. It has been battle
tested within DataStax and in the NoSQL ecosystem as a way to help users validate their data
models, baseline system performance, and qualify system designs for scale.
In short, NoSQLBench wishes to be a programmable power tool for performance testing. However, it is somewhat generic. It
doesn't know directly about a particular type of system, or protocol. It simply provides a suitable machine harness in
which to put your drivers and testing logic. If you know how to build a client for a particular kind of system, it will
let you load it like a plugin and control it dynamically.
Initially, NoSQLBench comes with support for CQL, but we would like to see this expanded with contributions from others.
In short, NoSQLBench wishes to be a programmable power tool for performance testing. However, it
is somewhat generic. The core runtime of NoSQLBench doesn't know directly about a particular
type of system, or protocol. It simply provides a suitable machine harness in which to put your
drivers and testing logic. If you know how to build a client for a particular kind of system, it
will let you load it like a plugin and control it dynamically. However, several protocols are
supported out of the box as bundled drivers.
## Origins
The code in this project comes from multiple sources. The procedural data generation capability was known before as
'Virtual Data Set'. The core runtime and scripting harness was from the 'EngineBlock' project. The CQL support was
previously used within DataStax. In March of 2020, DataStax and the project maintainers for these projects decided to
put everything into one OSS project in order to make contributions and sharing easier for everyone. Thus, the new
project name and structure was launched as nosqlbench.io. NoSQLBench is an independent project that is sponsored by
DataStax.
The code in this project comes from multiple sources. The procedural data generation capability was
known before as 'Virtual Data Set' OSS project. The core runtime and scripting harness was from
the 'EngineBlock' OSS project. The CQL driver module was previously used within DataStax. In
March of 2020, DataStax and the project maintainers for these projects decided to put
everything into one OSS project in order to make contributions and sharing easier for everyone.
Thus, the new project name and structure was launched as nosqlbench.io. NoSQLBench is an
independent project that is sponsored by DataStax.
We offer NoSQLBench as a new way of thinking about testing systems. It is not limited to testing only one type of
system. It is our wish to build a community of users and practice around this project so that everyone in the NoSQL
ecosystem can benefit from common concepts and understanding and reliable patterns of use.
We offer NoSQLBench as a new way of thinking about testing systems. It is not limited to
testing only one type of system. It is our wish to build a community of users and practice
around this project so that everyone in the NoSQL ecosystem can benefit from common concepts
and understanding and reliable patterns of use.
## Getting Support
In general, our goals with NoSQLBench are to make the help systems and examples wrap around the users like a suit of
armor, so that they feel capable of doing most things autonomously. Please keep this in mind when looking for personal
support form our community, and help us find those places where the docs are lacking. Maybe you can help us by adding
some missing docs!
In general, our goals with NoSQLBench are to make the help systems and examples wrap around the
users like a suit of armor, so that they feel capable of doing most things autonomously. Please keep
this in mind when looking for personal support from our community, and help us find those places
where the docs are lacking. Maybe you can help us by adding some missing docs!
### NoSQLBench Discord Server
We have a discord server. This is where users and developers can discuss
anything about NoSQLBench and support each other.
Please [join us](https://discord.gg/dBHRakusMN) there if you are a new
user of NoSQLBench!
We have a Discord server. This is where users and developers can discuss anything about NoSQLBench
and support each other. Please [join us](https://discord.gg/dBHRakusMN) there if you are a new user
of NoSQLBench!
## Contributing
We are actively looking for contributors to help make NoSQLBench better. This is an ambitious project that is just
finding its stride. If you want to be part of the next chapter in NoSQLBench development please look at
[CONTRIBUTING](CONTRIBUTING.md) for ideas, and jump in where you feel comfortable.
We are actively looking for contributors to help make NoSQLBench better. This is an ambitious
project that is just finding its stride. If you want to be part of the next chapter in NoSQLBench
development please look at [CONTRIBUTING](CONTRIBUTING.md) for ideas, and jump in where you feel
comfortable.
All contributors are expected to abide by the [CODE_OF_CONDUCT](CODE_OF_CONDUCT.md).
## License
All of the code in this repository is licensed under the APL version 2. If you contribute to this project, then you must
agree to license all of your constributions under this license.
All of the code in this repository is licensed under the APL version 2. If you contribute to this
project, then you must agree to license all of your contributions under this license.
## System Compatibility
This is a Linux targeted tool, as most cloud/nosql testing is done on Linux instances. Some support for other systems is
available, but more work is needed to support them fully. Here is what is supported for each:
This is a Linux-targeted tool, as most cloud/NoSQL testing is done on Linux instances. Some support
for other systems is available, but more work is needed to support them fully. Here is what is
supported for each:
1. on Linux, all features are supported, for both `nb5.jar` as well as the appimage binary `nb`
2. on Mac, all features are supported, with `nb5.jar`.
@ -106,8 +122,8 @@ available, but more work is needed to support them fully. Here is what is suppor
</tr>
</table>
## Contributors
Checkout all our wonderful contributors [here](./CONTRIBUTING.md#contributors).
---

View File

@ -46,12 +46,10 @@ public class HttpOpMapperTest {
static HttpDriverAdapter adapter;
static HttpOpMapper mapper;
static NBComponent parent = new TestComponent("parent","parent");
@BeforeAll
public static void initializeTestMapper() {
HttpOpMapperTest.cfg = HttpSpace.getConfigModel().apply(Map.of());
HttpOpMapperTest.adapter = new HttpDriverAdapter(parent, NBLabels.forKV());
HttpOpMapperTest.adapter = new HttpDriverAdapter(new TestComponent("parent","parent"), NBLabels.forKV());
HttpOpMapperTest.adapter.applyConfig(HttpOpMapperTest.cfg);
final DriverSpaceCache<? extends HttpSpace> cache = HttpOpMapperTest.adapter.getSpaceCache();
HttpOpMapperTest.mapper = new HttpOpMapper(HttpOpMapperTest.adapter, HttpOpMapperTest.cfg, cache);
@ -60,7 +58,7 @@ public class HttpOpMapperTest {
private static ParsedOp parsedOpFor(final String yaml) {
final OpsDocList docs = OpsLoader.loadString(yaml, OpTemplateFormat.yaml, Map.of(), null);
final OpTemplate opTemplate = docs.getOps().get(0);
final ParsedOp parsedOp = new ParsedOp(opTemplate, HttpOpMapperTest.cfg, List.of(HttpOpMapperTest.adapter.getPreprocessor()), parent);
final ParsedOp parsedOp = new ParsedOp(opTemplate, HttpOpMapperTest.cfg, List.of(HttpOpMapperTest.adapter.getPreprocessor()), new TestComponent("parent","parent"));
return parsedOp;
}

View File

@ -941,7 +941,7 @@ public class ParsedOp extends NBBaseComponent implements LongFunction<Map<String
@Override
public String toString() {
return "ParsedOp: map: " + this.tmap.toString();
return "ParsedOp: map: " + ((this.tmap!=null) ? this.tmap.toString() : "NULL");
}
public List<CapturePoint> getCaptures() {

View File

@ -37,28 +37,34 @@ import static org.assertj.core.api.Assertions.assertThat;
public class ParsedOpTest {
final NBComponent parent = new TestComponent("opparent","opparent");
ParsedOp pc = new ParsedOp(
new OpData().applyFields(
Map.of(
"op", Map.of(
"stmt", "test",
"dyna1", "{dyna1}",
"dyna2", "{{NumberNameToString()}}",
"identity", "{{Identity()}}"
),
"bindings", Map.of(
"dyna1", "NumberNameToString()"
private NBComponent getParent() {
return new TestComponent("opparent","opparent");
}
private ParsedOp getOp() {
ParsedOp pc = new ParsedOp(
new OpData().applyFields(
Map.of(
"op", Map.of(
"stmt", "test",
"dyna1", "{dyna1}",
"dyna2", "{{NumberNameToString()}}",
"identity", "{{Identity()}}"
),
"bindings", Map.of(
"dyna1", "NumberNameToString()"
)
)
)
),
ConfigModel.of(ParsedOpTest.class)
.add(Param.defaultTo("testcfg", "testval"))
.asReadOnly()
.apply(Map.of()),
List.of(),
parent
);
),
ConfigModel.of(ParsedOpTest.class)
.add(Param.defaultTo("testcfg", "testval"))
.asReadOnly()
.apply(Map.of()),
List.of(),
getParent()
);
return pc;
}
@Test
public void testFieldDelegationFromDynamicToStaticToConfig() {
@ -78,7 +84,7 @@ public class ParsedOpTest {
final OpsDocList stmtsDocs = OpsLoader.loadString(opt, OpTemplateFormat.yaml, cfg.getMap(), null);
assertThat(stmtsDocs.getOps().size()).isEqualTo(1);
final OpTemplate opTemplate = stmtsDocs.getOps().get(0);
final ParsedOp parsedOp = new ParsedOp(opTemplate, cfg, List.of(), parent);
final ParsedOp parsedOp = new ParsedOp(opTemplate, cfg, List.of(), getParent());
assertThat(parsedOp.getAsFunctionOr("d1", "invalid").apply(1L)).isEqualTo("one");
assertThat(parsedOp.getAsFunctionOr("s1", "invalid").apply(1L)).isEqualTo("static-one");
@ -117,7 +123,7 @@ public class ParsedOpTest {
.asReadOnly()
.apply(Map.of()),
List.of(),
parent
getParent()
);
final LongFunction<? extends String> f1 = parsedOp.getAsRequiredFunction("field1-literal");
final LongFunction<? extends String> f2 = parsedOp.getAsRequiredFunction("field2-object");
@ -134,7 +140,7 @@ public class ParsedOpTest {
@Test
public void testParsedOp() {
final Map<String, Object> m1 = this.pc.apply(0);
final Map<String, Object> m1 = getOp().apply(0);
assertThat(m1).containsEntry("stmt", "test");
assertThat(m1).containsEntry("dyna1", "zero");
assertThat(m1).containsEntry("dyna2", "zero");
@ -143,21 +149,21 @@ public class ParsedOpTest {
@Test
public void testNewListBinder() {
final LongFunction<List<Object>> lb = this.pc.newListBinder("dyna1", "identity", "dyna2", "identity");
final LongFunction<List<Object>> lb = getOp().newListBinder("dyna1", "identity", "dyna2", "identity");
final List<Object> objects = lb.apply(1);
assertThat(objects).isEqualTo(List.of("one", 1L, "one", 1L));
}
@Test
public void testNewMapBinder() {
final LongFunction<Map<String, Object>> mb = this.pc.newOrderedMapBinder("dyna1", "identity", "dyna2");
final LongFunction<Map<String, Object>> mb = getOp().newOrderedMapBinder("dyna1", "identity", "dyna2");
final Map<String, Object> objects = mb.apply(2);
assertThat(objects).isEqualTo(Map.<String, Object>of("dyna1", "two", "identity", 2L, "dyna2", "two"));
}
@Test
public void testNewAryBinder() {
final LongFunction<Object[]> ab = this.pc.newArrayBinder("dyna1", "dyna1", "identity", "identity");
final LongFunction<Object[]> ab = getOp().newArrayBinder("dyna1", "dyna1", "identity", "identity");
final Object[] objects = ab.apply(3);
assertThat(objects).isEqualTo(new Object[]{"three", "three", 3L, 3L});
}
@ -189,7 +195,7 @@ public class ParsedOpTest {
.asReadOnly()
.apply(Map.of()),
List.of(),
parent
getParent()
);
Map<String, Object> result = pc.getTemplateMap().apply(1);

View File

@ -6,15 +6,19 @@ need to know where to find them.
## NoSQLBench docs
The core docs are found under the [engine-docs](../../engine-docs)
module
under [/src/main/resources/docs-for-nb/](../../engine-docs/src/main/resources/docs-for-nb)
.
The core docs are found under the nosqlbench/nosqlbench-build-docs repo.
By browsing this directory structure and looking at the frontmatter on
each markdown file, you'll get a sense for how they are orgainzed.
By browsing this directory structure and looking at the front matter on
each markdown file, you'll get a sense for how they are organized.
## Driver Docs
For the main user and developer guides, you can submit PRs to the nosqlbench-build-docs repo.
## Doc Overlays
Some sources of documentation, like the driver adapter markdown files, are required to be provided by
the NoSQLBench build pipeline in the core nosqlbench project. These are auto-exported as an overlay
into the project repo above.
### Driver Docs
Some of the other docs are found within each driver module. For example,
the cql docs are found in the resources root directory of the cql driver
@ -22,9 +26,9 @@ module. This is the case for any basic docs provided for a driver. The
docs are bundled with modules to allow for them to be maintained by the
driver maintainers directly.
## Binding Function Docs
### Binding Function Docs
All of the binding function docs are generated automatically from source.
All the binding function docs are generated automatically from source.
Javadoc source as well as annotation details are used to decorate the
binding functions so that they can be cataloged and shared to the doc site.
To improve the binding function docs, you must improve the markdown

View File

@ -99,9 +99,9 @@ public class StandardActivity<R extends Op, S> extends SimpleActivity implements
Optional<String> defaultDriverOption = activityDef.getParams().getOptionalString("driver");
for (OpTemplate ot : opTemplates) {
ParsedOp incompleteOpDef = new ParsedOp(ot, NBConfiguration.empty(), List.of(), this);
String driverName = incompleteOpDef.takeOptionalStaticValue("driver", String.class)
.or(() -> incompleteOpDef.takeOptionalStaticValue("type", String.class))
// ParsedOp incompleteOpDef = new ParsedOp(ot, NBConfiguration.empty(), List.of(), this);
String driverName = ot.getOptionalStringParam("driver", String.class)
.or(() -> ot.getOptionalStringParam("type", String.class))
.or(() -> defaultDriverOption)
.orElseThrow(() -> new OpConfigError("Unable to identify driver name for op template:\n" + ot));

View File

@ -26,17 +26,16 @@ import static org.assertj.core.api.Assertions.assertThat;
public class AtomicInputTest {
private final static NBComponent root = new TestComponent("testing","atomicinput");
@Test
public void testThatNoCyclesAndNoRecyclesMeansZero() {
AtomicInput input = new AtomicInput(root, ActivityDef.parseActivityDef("alias=foo;cycles=0;recycles=0"));
AtomicInput input = new AtomicInput(new TestComponent("testing","atomicinput"), ActivityDef.parseActivityDef("alias=foo;cycles=0;recycles=0"));
CycleSegment inputSegment = input.getInputSegment(1);
assertThat(inputSegment).isNull();
}
@Test
public void testThatNoCyclesAndDefaultRecyclesMeans1xCycles() {
AtomicInput input = new AtomicInput(root, ActivityDef.parseActivityDef("alias=foo;cycles=10"));
AtomicInput input = new AtomicInput(new TestComponent("testing","atomicinput"), ActivityDef.parseActivityDef("alias=foo;cycles=10"));
CycleSegment inputSegment =null;
inputSegment= input.getInputSegment(10);
@ -53,7 +52,7 @@ public class AtomicInputTest {
int intendedRecycles=4;
int stride=10;
AtomicInput input = new AtomicInput(root, ActivityDef.parseActivityDef("alias=foo;cycles="+intendedCycles+";recycles="+intendedRecycles));
AtomicInput input = new AtomicInput(new TestComponent("testing","atomicinput"), ActivityDef.parseActivityDef("alias=foo;cycles="+intendedCycles+";recycles="+intendedRecycles));
CycleSegment segment =null;
for (int nextRecycle = 0; nextRecycle < intendedRecycles; nextRecycle++) {
for (int nextCycle = 0; nextCycle < intendedCycles; nextCycle+=stride) {
@ -68,14 +67,14 @@ public class AtomicInputTest {
@Test
public void testThatOneCycleAndOneRecycleYieldsOneTotal() {
AtomicInput input = new AtomicInput(root, ActivityDef.parseActivityDef("alias=foo;cycles=1;recycles=1"));
AtomicInput input = new AtomicInput(new TestComponent("testing","atomicinput"), ActivityDef.parseActivityDef("alias=foo;cycles=1;recycles=1"));
CycleSegment segment = input.getInputSegment(1);
assertThat(segment).isNotNull();
assertThat(segment.nextCycle()).isEqualTo(0L);
}
@Test
public void testThatCycleAndRecycleOffsetsWork() {
AtomicInput input = new AtomicInput(root, ActivityDef.parseActivityDef("alias=foo;cycles=310..330;recycles=37..39"));
AtomicInput input = new AtomicInput(new TestComponent("testing","atomicinput"), ActivityDef.parseActivityDef("alias=foo;cycles=310..330;recycles=37..39"));
CycleSegment segment = null;
int stride=10;
segment = input.getInputSegment(stride);
@ -97,7 +96,7 @@ public class AtomicInputTest {
@Test
public void testEmptyIntervalShouldNotProvideValues() {
AtomicInput i = new AtomicInput(root,ActivityDef.parseActivityDef("alias=foo;cycles=23..23"));
AtomicInput i = new AtomicInput(new TestComponent("testing","atomicinput"),ActivityDef.parseActivityDef("alias=foo;cycles=23..23"));
CycleSegment inputSegment = i.getInputSegment(1);
assertThat(inputSegment).isNull();
}

View File

@ -25,7 +25,7 @@ import java.util.regex.Pattern;
public class MapLabels implements NBLabels {
// private final static Logger logger = LogManager.getLogger(MapLabels.class);
private final Map<String,String> labels;
protected final Map<String,String> labels;
public MapLabels(final Map<String, String> labels) {
verifyValidNamesAndValues(labels);
@ -337,4 +337,8 @@ public class MapLabels implements NBLabels {
return difference;
}
@Override
public boolean isEmpty() {
return labels.isEmpty();
}
}

View File

@ -50,6 +50,4 @@ public interface NBLabeledElement {
default String description() {
return this.getClass().getSimpleName() + ((this.getLabels()!=null) ? " " + this.getLabels().linearizeAsMetrics() : " {NOLABELS}");
}
}

View File

@ -186,4 +186,5 @@ public interface NBLabels {
NBLabels difference(NBLabels otherLabels);
boolean isEmpty();
}

View File

@ -54,8 +54,17 @@ public class NBBaseComponent extends NBBaseComponentMetrics implements NBCompone
@Override
public NBComponent attachChild(NBComponent... children) {
for (NBComponent child : children) {
logger.debug(() -> "attaching " + child.description() + " to parent " + this.description());
for (NBComponent extant : this.children) {
if (!child.getComponentOnlyLabels().isEmpty() && child.getComponentOnlyLabels().equals(extant.getComponentOnlyLabels())) {
throw new RuntimeException("Adding second child under already-defined labels is not allowed:\n" +
" extant: " + extant + "\n" +
" adding: " + child);
}
}
this.children.add(child);
}
return this;
@ -83,6 +92,11 @@ public class NBBaseComponent extends NBBaseComponentMetrics implements NBCompone
return effectiveLabels;
}
@Override
public NBLabels getComponentOnlyLabels() {
return this.labels;
}
@Override
public void beforeDetach() {

View File

@ -17,6 +17,7 @@
package io.nosqlbench.components;
import io.nosqlbench.api.labels.NBLabeledElement;
import io.nosqlbench.api.labels.NBLabels;
import io.nosqlbench.components.decorators.NBProviderSearch;
import java.util.List;
@ -52,6 +53,8 @@ public interface NBComponent extends
List<NBComponent> getChildren();
NBLabels getComponentOnlyLabels();
default void beforeDetach() {}
@Override

nb_521.md Normal file
View File

@ -0,0 +1,156 @@
# NoSQLBench 5.21
__release notes preview / work-in-progress__
The 5.21 series of NoSQLBench marks a significant departure from the earlier versions. The platform
is rebased on top of [Java 21 LTS](https://openjdk.org/projects/jdk/21/). The core architecture
has been adapted to suit more advanced workflows, particularly around dimensional metrics, test
parameterization and labeling, and automated analysis. Support for the hierarchic naming methods of
graphite has been removed, and the core metrics logic has been rebuilt around dimensional metric
labeling.
## Java 21 LTS
The release of [Java 21](https://openjdk.org/projects/jdk/21/) is significant to the NoSQLBench
project for several reasons.
### Java 21 Performance & Threading Model
For systems like NoSQLBench, the runtime threading model in Java 21 is much improved. Virtual
threads offer a distinctly better solution for the kinds of workloads where you need to emulate
request-per-thread behavior efficiently. While virtual threads are not advised as a general
replacement in every case, they are particularly suited to the agnostic APIs within NoSQLBench
which wrap a myriad of different system driver types. In NB 5.21, virtual threads will be
enabled further as corner cases which cause pinning and other side-effects are removed.
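As a small, self-contained illustration of the request-per-thread pattern described above (the class name and workload here are hypothetical, not NB code), Java 21's per-task virtual thread executor can fan out blocking work cheaply:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only: emulate many concurrent blocking "requests" with one
// virtual thread per task, as described in the release notes above.
public class VirtualThreadDemo {
    public static long runOps(int count) throws InterruptedException {
        AtomicLong completed = new AtomicLong();
        // One virtual thread per simulated request; cheap enough to do in bulk.
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < count; i++) {
                pool.submit(() -> {
                    // Simulated blocking I/O; a virtual thread unmounts from its
                    // carrier instead of pinning an OS thread while it sleeps.
                    try { Thread.sleep(1); } catch (InterruptedException ignored) {}
                    completed.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOps(10_000)); // 10000
    }
}
```

With platform threads, ten thousand concurrent blocked sleepers would be prohibitively expensive; with virtual threads, request-per-thread becomes the natural model.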
The performance improvements are deeper than just the threading model by itself. The
built-in concurrent libraries which have evolved to work alongside virtual threads offer some of
the best opportunities for streamlining and simplifying concurrent code. A key example of this
is the rate limiter implementation in 5.21 which simply does not have the previous limitations
of the 5.17 implementation. It is based directly on java.util.concurrent.Semaphore, which
scales surprisingly well across cores and configurations.
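A minimal sketch of a Semaphore-based limiter in that spirit (illustrative only; the class name, refill cadence, and structure are assumptions, not the actual NB 5.21 implementation):

```java
import java.util.concurrent.Semaphore;

// Sketch of a semaphore-backed rate limiter: callers block on acquire() and a
// background thread restores permits on a fixed cadence. Hypothetical, not NB code.
public class SemaphoreRateLimiter {
    private final Semaphore permits;
    private final int opsPerSecond;

    public SemaphoreRateLimiter(int opsPerSecond) {
        this.opsPerSecond = opsPerSecond;
        this.permits = new Semaphore(opsPerSecond);
        Thread refiller = new Thread(() -> {
            while (true) {
                try { Thread.sleep(1000); } catch (InterruptedException e) { return; }
                // Top the pool back up to the per-second budget.
                permits.release(opsPerSecond - permits.availablePermits());
            }
        });
        refiller.setDaemon(true);
        refiller.start();
    }

    // Each op acquires one permit; blocked callers are released as permits refill.
    public void acquire() throws InterruptedException {
        permits.acquire();
    }
}
```

The appeal of this shape is that all the hard concurrency work (queueing, fairness, wakeups) is delegated to a heavily optimized JDK primitive rather than hand-rolled spin logic.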
## Component Tree
Contrary to the metrics system which is moving from a hierarchic model to a dimensional model,
the core runtime structure of NoSQLBench is moving from a flat model to a hierarchic model. This
may seem counter-intuitive at first, but these two structural systems work together to provide a
more direct and fool-proof way of identifying test data, metrics, lifecycles, configuration, etc.
This approach is called the "Component Tree" in NoSQLBench. It simply reflects that each phase,
each parameter, each measurement that is in a NoSQLBench test design has a specific beginning
and end point which is well-defined, _within the scope of its parent_, and that all these aspects
live together on the component they pertain to.
Here are some of the basic features of the component tree:
* Each component has a parent except for the root component, which has no parent.
* Each component registers with its parent upon creation, and is scoped to its parent's
lifecycle. i.e., when a parent component goes out of scope, it takes its attached
sub-components with it.
* All functions and side effects that a component may provide happen naturally within that
component's lifecycle, whether that is upon attachment, detachment, or in-between. No
component is considered valid outside of these boundaries.
* Each component may provide a set of component-specific labels and label values at time of
construction which _uniquely_ describe its context within the parent component. Overriding a
label which is already set is not allowed, nor is providing a label set which is already known
within a parent component. Each component has a labels property which is the logical sum of all
the labels on it and all parents. This provides unique labels at every level which are compatible
with dimensional metrics, annotation, and logging systems.
* Basic services, like metrics registration, are provided within the component API
orthogonally and attached directly to components. Thus, the view of all metrics within the
runtime is simply the sum of all metrics registered on all components.
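The labeling rules above can be sketched in a few lines of plain Java (all names here are illustrative; this is not the NB component API):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the component-tree labeling rules: children register with their
// parent, duplicate sibling labels are rejected, and effective labels are the
// logical sum of a component's own labels and all of its parents' labels.
public class ComponentSketch {
    public static class Component {
        public final Component parent;
        public final Map<String, String> ownLabels;
        public final List<Component> children = new ArrayList<>();

        public Component(Component parent, Map<String, String> ownLabels) {
            this.parent = parent;
            this.ownLabels = ownLabels;
            if (parent != null) parent.attach(this);
        }

        // Reject a sibling whose component-only labels duplicate an existing child's.
        void attach(Component child) {
            for (Component extant : children) {
                if (!child.ownLabels.isEmpty() && child.ownLabels.equals(extant.ownLabels)) {
                    throw new IllegalStateException("duplicate sibling labels: " + child.ownLabels);
                }
            }
            children.add(child);
        }

        // Effective labels: this component's labels plus all parents', root first.
        public Map<String, String> labels() {
            Map<String, String> out = (parent == null) ? new LinkedHashMap<>() : parent.labels();
            out.putAll(ownLabels);
            return out;
        }
    }

    public static void main(String[] args) {
        Component session = new Component(null, Map.of("session", "s1"));
        Component scenario = new Component(session, Map.of("scenario", "default"));
        Component schema = new Component(scenario, Map.of("activity", "schema"));
        System.out.println(schema.labels()); // sum of labels up the tree
        try {
            new Component(scenario, Map.of("activity", "schema")); // duplicate sibling
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```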
Here's a sketch of a typical NoSQLBench 5.21 session:
```
[CLI]
\
Session {session="s20231123_123456.123"}
┗━ Scenario {scenario="default"}
┣━ Activity {activity="schema"}
┃ ┗━ metric timer {name="cycles"}
┣━ Activity {activity="rampup"}
┃ ┗━ metric timer {name="cycles"}
┗━ Activity {activity="testann",k="100",dimensions="1000"}
┗━ metric timer {name="cycles"}
```
This shows the tree structure of the runtime and the implied lifecycle bounds of each type:
* The Command Line Interface is not a component, but it is used to configure global session
settings and launch a session.
* The Session is the root component. It has a single label under the name `session`. It has
three attached activities with distinct labels, each with an attached metric.
* `activity=schema`
* `activity=rampup`
* `activity=testann`
This contrived example demonstrates very simply the mechanisms of the component tree at work.
Each metric has a set of labels which uniquely identify it:
* timer with labels `{session="s20231123_123456.123",scenario="default",activity="schema",
name="cycles"}`
* timer with labels `{session="s20231123_123456.123",scenario="default",activity="rampup",
name="cycles"}`
* timer with
labels `{session="s20231123_123456.123",scenario="default",activity="testann",k="100",dimensions="1000",name="cycles"}`
## Dimensional Metrics
The backstory and motivation for this change are captured in [^1].
Beginning in NoSQLBench 5.21, the primary metrics transport will be client-push using the
[openmetrics](https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md)
exposition format. As well, since the Victoria Metrics [community edition](https://victoriametrics.com/products/open-source/)
is open source and provides all the necessary telemetry features, it is the preferred
collector, database, and query engine which the NoSQLBench project will integrate with by default.
That doesn't mean that others will be or are not supported, but it does mean that they will not get
prioritized for implementation unless there is a specific user need which doesn't compromise the
basic integrity of the dimensional metrics system.
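For illustration only, a client push in the OpenMetrics exposition format might carry samples like these (the metric name and values are made up, but the label set mirrors the component-tree example above):

```
# TYPE cycles_total counter
cycles_total{session="s20231123_123456.123",scenario="default",activity="schema"} 1000
cycles_total{session="s20231123_123456.123",scenario="default",activity="rampup"} 100000
# EOF
```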
Further, the reliance on the original metrics library has become more problematic over time. The
next version, which promised support for dimensional labels in metrics is officially
["on pause"](https://github.com/dropwizard/metrics#metrics). As such, the NB project will seek
to pivot off this library to something more current and supported going forward, as options permit.
### Native Analysis Methods
The scenario scripting layer in NoSQLBench hasn't gone away, but it will be considered secondary
to the Java-native way of writing scenario logic, especially for more sophisticated scenarios.
Tools like findmax, stepup, and optimo will become more prevalent as the primary way that users
leverage NoSQLBench. These advanced analysis methods were mostly functional in previous versions,
but they were nigh un-maintainable in their un-debuggable script form. This meant that they
couldn't be reliably leveraged across testing efforts to remove subjective and interpretive
human logic from advanced testing scenarios. The new capability emulates the scenario fixtures of
before, but with a native context for all the APIs, wherein all component services can be
accessed directly.
### Parameterization
The changes described above hint at a capability that is nascent in the NB project: testing
within parameter spaces. In order to support the kinds of automated and advanced testing needed
for today's systems, this is a must-have. Specifically, we need the ability to describe a set of
parameters (what some may describe as _hyper-parameters_), and to have the testing system apply
an optimization or search algorithm to determine a local or global maximum. These parameters and
their results must be visible in a tangible form for technologists and diagnosticians to make
sense of them. This is why they are surfaced in NB 5.21 as labeled measurements, episodic and
real-time, over (labeled) parameter spaces. There will be more to come on this as we prove out
the analysis methods.
### Footnotes
[^1]: The original metrics library used with NoSQLBench was the ~~Coda Hale~~ ~~Yammer~~
DropWizard metrics library, which adopted the hierarchic naming scheme popular with systems like
graphite. While useful at the time, telemetry systems moved on to dimensional metrics with the
adoption of Prometheus. The combination of graphite naming structure and data flow and
Prometheus was tenuous in practice. For a time, NoSQLBench wedged data from the hierarchic naming
scheme into dimensional form for Prometheus by using the graphite exporter and pattern-matching
based name and label extraction. This was incredibly fragile and prevented workload modeling and
metrics capture around test parameters and other important details. Further, the _prometheus way_ of
gathering metrics imposed an onerous requirement on users that the metrics system was actively in
control of all data flows. (Yes you could use the external gateway, but that was yet another moving
part.) This further degraded the quality of metrics data by putting the timing and cadence of
metrics flows out of the control of the client. It also put metrics flow behind
two polling mechanisms which degraded the immediacy of the metrics.