post-merge fixups

This commit is contained in:
Jonathan Shook 2021-11-10 14:37:55 -06:00
commit e6f1707d36
67 changed files with 1022 additions and 704 deletions

View File

@ -1,3 +1,28 @@
- d9ce62435 (HEAD -> main, origin/main) Merge pull request #374 from nosqlbench/pulsar-payload-rtt
- d47843b04 release: logging improvements, summary fixes, pulsar driver updates
- 8353a9d84 (origin/pulsar-payload-rtt) Extract payload rtt field from Avro record if it exists
- c15bd97c9 Avoid NullPointerException if pulsarCache hasn't been initialized.
- 2fbf0b4ad add instrumentation scaffold for payloadRtt
- 1c98935d1 post-merge fixups
- c4ee38d22 squelch invalid parts of summary
- d223e4c6c enable separate configuration of logging patterns
- c3544094b add global option to enable or disable ansi
- 574e297e1 add exception message to some errors
- e5715e17a init pulsar cache early to avoid NPE on initialization error
- 6f29b9005 update missing field
- cbfbe9f77 logging library updates
- 0798b9a8b minor doc updates
- 9bb13604c Merge pull request #377 from yabinmeng/main
- 591f21b35 README and code cleanup
- d94594a4e qualify and update antlr4-maven-plugin version
- fea9cce43 add clock-based temporal binding functions
- 6a5e9616b minor formatting improvements
- 3688dc9f3 pulsar.md readability formatting for console views
- b0c30a7cc split description on periods or semicolons
- 16f4c183d minor formatting
- 6a1c9337f fix incorrect pom versions
- 9e7555727 Merge branch 'main' of github.com:nosqlbench/nosqlbench
- e139f1269 cleanup extraneous files
- fa78ca54 (HEAD -> main, origin/main) Merge pull request #372 from lhotari/lh-detect-duplicates-after-gap
- 71c3b190 Detect delayed out-of-order delivery
- e694eaec Merge pull request #373 from lhotari/lh-upgrade-pulsar-2.8.1

View File

@ -22,6 +22,12 @@
<!-- core dependencies -->
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>drivers-api</artifactId>

View File

@ -0,0 +1,52 @@
NoSQLBench has lots of configurable parameters. The more it is used, the more ways we add for specifying, templating, and indirecting values in various places. This is an attempt to clarify which patterns are currently supported, what a more consistent view would look like, and a plan for getting there without disrupting existing usage scenarios.

## Configuration Points

These represent the known places where a user might need to provide configuration or template values:

* CLI options
* global parameters - options which affect the whole process, including any scenarios or activities within
* component configurations - composable behaviors such as Annotations, which may support nested config details
* Script parameters - named parameters provided directly to a scenario script
* Scenario command parameters (including Activity parameters) - named parameters which accessorize activity commands or other scenario commands like `waitfor`
* Op templates in YAML - the field values used to construct op templates
* Driver-specific configuration files - separate configuration files provided for a driver, such as the pulsar `config.properties` file

Each of these provides a place for a user to specify the behavior of NoSQLBench, and each generally requires a different kind of flexibility. However, there is some overlap in these usage scenarios.

## Indirection Methods

* On the NB CLI, filepath arguments are handed to NBIO, which supports some indirection
* In NBConfig logic, the `IMPORT{<nbio path>}` format is allowed, which defers directly to NBIO
* In CQL driver options, `passfile:...` is supported
* In NBEnvironment, the `${name}` and `$name` patterns are supported
* In YAML op templates, any field containing `{name}` is presumed to be a dynamic variable
* In YAML op glue, any field containing `{{name}}` is presumed to be a capture point
* In YAML op glue, any field containing `[[...]]` or `[...]` has special meaning.
* In YAML template variables, `<<name:value>>` or `TEMPLATE(name,value)` are supported
* In HTTP driver URL patterns, `E[[...]]` is taken as a URL-encoded section.

All of these are different types of indirection, and in some cases they should remain distinct. Yet, there is room for simplification.

## Conventions

Where possible, idiomatic conventions should be followed, such as `${...}` for environment variables.

## Usage Matrix

The configuration points and the indirection methods should be analyzed for compatibility and fit on a matrix.

## Sanitization and Safety

The rules for parsing and marshaling different types should be robust and clearly documented for users.

## Controlling Indirection

It should be possible to turn individual methods on or off, such as environment variable expansion.
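For instance, the `${name}` pattern from the indirection methods above amounts to a simple substitution pass over a string. The helper below is an illustrative sketch only, not the actual NBEnvironment implementation:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch of ${name} substitution against a map of values.
// Unresolved names are left intact rather than throwing, so the caller
// can decide how to handle them.
public class EnvSubstSketch {
    private static final Pattern VAR = Pattern.compile("\\$\\{(\\w+)\\}");

    public static String resolve(String input, Map<String, String> vars) {
        Matcher m = VAR.matcher(input);
        StringBuilder sb = new StringBuilder();
        while (m.find()) {
            // fall back to the original ${name} text when no value is known
            String replacement = vars.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(sb, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> vars = Map.of("HOME", "/home/nb");
        System.out.println(resolve("path=${HOME}/workloads", vars));
    }
}
```

A real implementation would additionally handle the bare `$name` form and decide whether to consult `System.getenv` or a provided override map, which is part of what the "Controlling Indirection" section above is about.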

View File

@ -12,7 +12,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -21,7 +21,7 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>nb-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
@ -117,7 +117,7 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>virtdata-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
</dependencies>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>io.nosqlbench</groupId>
<artifactId>mvn-defaults</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -21,7 +21,7 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-jdbc</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>org.postgresql</groupId>

View File

@ -4,7 +4,7 @@
<parent>
<groupId>io.nosqlbench</groupId>
<artifactId>mvn-defaults</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -23,13 +23,13 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>drivers-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
@ -145,6 +145,7 @@
<plugin>
<groupId>org.antlr</groupId>
<artifactId>antlr4-maven-plugin</artifactId>
<version>4.9.2</version>
<configuration>
<sourceDirectory>src/main/grammars/cql3
</sourceDirectory>
@ -169,8 +170,9 @@
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.2.3</version>
<version>3.2.4</version>
<executions>
<execution>
<phase>package</phase>

View File

@ -1,5 +1,6 @@
description:
description: |
This is a workload which creates an incrementally growing dataset over cycles.
Rows will be added incrementally in both rampup and main phases. However, during
the main phase, reads will also occur at the same rate, with the read patterns
selecting from the size of data written up to that point.

View File

@ -4,7 +4,7 @@
<parent>
<groupId>io.nosqlbench</groupId>
<artifactId>mvn-defaults</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -24,13 +24,13 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>drivers-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
@ -136,6 +136,7 @@
<plugin>
<groupId>org.antlr</groupId>
<artifactId>antlr4-maven-plugin</artifactId>
<version>4.9.2</version>
<configuration>
<sourceDirectory>src/main/grammars/cql3
</sourceDirectory>
@ -160,8 +161,9 @@
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.2.3</version>
<version>3.2.4</version>
<executions>
<execution>
<phase>package</phase>

View File

@ -4,7 +4,7 @@
<parent>
<groupId>io.nosqlbench</groupId>
<artifactId>mvn-defaults</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -24,13 +24,13 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-cql-shaded</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>drivers-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
</dependencies>

View File

@ -5,7 +5,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -21,14 +21,14 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>drivers-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<scope>compile</scope>
</dependency>

View File

@ -4,7 +4,7 @@
<parent>
<groupId>io.nosqlbench</groupId>
<artifactId>mvn-defaults</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -23,13 +23,13 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>drivers-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
@ -179,6 +179,7 @@
<plugin>
<groupId>org.antlr</groupId>
<artifactId>antlr4-maven-plugin</artifactId>
<version>4.9.2</version>
<configuration>
<sourceDirectory>src/main/grammars/cql3
</sourceDirectory>
@ -203,8 +204,9 @@
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.2.3</version>
<version>3.2.4</version>
<executions>
<execution>
<phase>package</phase>

View File

@ -4,7 +4,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -22,14 +22,14 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>drivers-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<scope>compile</scope>
</dependency>

View File

@ -3,7 +3,7 @@
<parent>
<artifactId>nosqlbench</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
@ -19,7 +19,7 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<scope>compile</scope>
</dependency>

View File

@ -4,7 +4,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -40,13 +40,13 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-stdout</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.commons/commons-lang3 -->

View File

@ -79,7 +79,7 @@ public class PulsarConfig {
logger.error("Can't read the specified config properties file: " + fileName);
ioe.printStackTrace();
} catch (ConfigurationException cex) {
logger.error("Error loading configuration items from the specified config properties file: " + fileName);
logger.error("Error loading configuration items from the specified config properties file: " + fileName + ":" + cex.getMessage());
cex.printStackTrace();
}
}

View File

@ -5,7 +5,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -22,15 +22,15 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>drivers-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-api</artifactId>
<version>4.15.65-SNAPSHOT</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<scope>compile</scope>
</dependency>
</dependencies>

View File

@ -4,7 +4,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -44,21 +44,15 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-stdout</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<!-- <dependency>-->
<!-- <groupId>org.slf4j</groupId>-->
<!-- <artifactId>slf4j-api</artifactId>-->
<!-- <version>1.7.25</version>-->
<!-- </dependency>-->
</dependencies>
<repositories>
<repository>

View File

@ -7,7 +7,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -21,13 +21,13 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>drivers-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>

View File

@ -4,7 +4,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -40,13 +40,13 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-stdout</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<!-- https://mvnrepository.com/artifact/commons-beanutils/commons-beanutils -->

View File

@ -40,6 +40,15 @@ public class PulsarActivity extends SimpleActivity implements ActivityDefObserve
// Metrics for NB Pulsar driver milestone: https://github.com/nosqlbench/nosqlbench/milestone/11
// - end-to-end latency
private Histogram e2eMsgProcLatencyHistogram;
/**
 * A histogram that tracks payload round-trip-time, based on a user-defined field in some sender
 * system which can be interpreted as millisecond epoch time in the system's local time zone.
 * This is paired with a field name of the same type to be extracted and reported in a metric
 * named 'payload-rtt'.
 */
private Histogram payloadRttHistogram;
// - message out of sequence error counter
private Counter msgErrOutOfSeqCounter;
// - message loss counter
@ -69,6 +78,10 @@ public class PulsarActivity extends SimpleActivity implements ActivityDefObserve
public void shutdownActivity() {
super.shutdownActivity();
if (pulsarCache == null) {
return;
}
for (PulsarSpace pulsarSpace : pulsarCache.getAssociatedPulsarSpace()) {
pulsarSpace.shutdownPulsarSpace();
}
@ -77,6 +90,8 @@ public class PulsarActivity extends SimpleActivity implements ActivityDefObserve
@Override
public void initActivity() {
super.initActivity();
pulsarCache = new PulsarSpaceCache(this);
bytesCounter = ActivityMetrics.counter(activityDef, "bytes");
messageSizeHistogram = ActivityMetrics.histogram(activityDef, "message_size");
bindTimer = ActivityMetrics.timer(activityDef, "bind");
@ -85,6 +100,8 @@ public class PulsarActivity extends SimpleActivity implements ActivityDefObserve
commitTransactionTimer = ActivityMetrics.timer(activityDef, "commit_transaction");
e2eMsgProcLatencyHistogram = ActivityMetrics.histogram(activityDef, "e2e_msg_latency");
payloadRttHistogram = ActivityMetrics.histogram(activityDef, "payload_rtt");
msgErrOutOfSeqCounter = ActivityMetrics.counter(activityDef, "err_msg_oos");
msgErrLossCounter = ActivityMetrics.counter(activityDef, "err_msg_loss");
msgErrDuplicateCounter = ActivityMetrics.counter(activityDef, "err_msg_dup");
@ -101,7 +118,6 @@ public class PulsarActivity extends SimpleActivity implements ActivityDefObserve
initPulsarAdminAndClientObj();
createPulsarSchemaFromConf();
pulsarCache = new PulsarSpaceCache(this);
this.sequencer = createOpSequence((ot) -> new ReadyPulsarOp(ot, pulsarCache, this));
setDefaultsFromOpSequence(sequencer);
@ -257,6 +273,7 @@ public class PulsarActivity extends SimpleActivity implements ActivityDefObserve
public Timer getCreateTransactionTimer() { return createTransactionTimer; }
public Timer getCommitTransactionTimer() { return commitTransactionTimer; }
public Histogram getPayloadRttHistogram() { return payloadRttHistogram; }
public Histogram getE2eMsgProcLatencyHistogram() { return e2eMsgProcLatencyHistogram; }
public Counter getMsgErrOutOfSeqCounter() { return msgErrOutOfSeqCounter; }
public Counter getMsgErrLossCounter() { return msgErrLossCounter; }

View File

@ -44,7 +44,7 @@ public class PulsarAdminNamespaceOp extends PulsarAdminOp {
future.whenComplete((unused, throwable) ->
logger.trace("Successfully created namespace \"" + fullNsName + "\" asynchronously!"))
.exceptionally(ex -> {
logger.error("Failed to create namespace \"" + fullNsName + "\" asynchronously!");
logger.error("Failed to create namespace \"" + fullNsName + "\" asynchronously!:" + ex.getMessage());
return null;
});
}

View File

@ -3,13 +3,13 @@ package io.nosqlbench.driver.pulsar.ops;
import io.nosqlbench.driver.pulsar.PulsarActivity;
import io.nosqlbench.driver.pulsar.PulsarSpace;
import io.nosqlbench.engine.api.templating.CommandTemplate;
import java.util.HashMap;
import java.util.Map;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.transaction.Transaction;
import java.util.HashMap;
import java.util.Map;
import java.util.function.LongFunction;
import java.util.function.Supplier;
@ -28,9 +28,8 @@ public class PulsarConsumerMapper extends PulsarTransactOpMapper {
private final static Logger logger = LogManager.getLogger(PulsarProducerMapper.class);
private final LongFunction<Consumer<?>> consumerFunc;
private final LongFunction<Boolean> topicMsgDedupFunc;
private final LongFunction<String> subscriptionTypeFunc;
private final boolean e2eMsProc;
private final LongFunction<String> payloadRttFieldFunc;
public PulsarConsumerMapper(CommandTemplate cmdTpl,
PulsarSpace clientSpace,
@ -39,15 +38,13 @@ public class PulsarConsumerMapper extends PulsarTransactOpMapper {
LongFunction<Boolean> useTransactionFunc,
LongFunction<Boolean> seqTrackingFunc,
LongFunction<Supplier<Transaction>> transactionSupplierFunc,
LongFunction<Boolean> topicMsgDedupFunc,
LongFunction<Consumer<?>> consumerFunc,
LongFunction<String> subscriptionTypeFunc,
boolean e2eMsgProc) {
boolean e2eMsgProc,
LongFunction<String> payloadRttFieldFunc) {
super(cmdTpl, clientSpace, pulsarActivity, asyncApiFunc, useTransactionFunc, seqTrackingFunc, transactionSupplierFunc);
this.consumerFunc = consumerFunc;
this.topicMsgDedupFunc = topicMsgDedupFunc;
this.subscriptionTypeFunc = subscriptionTypeFunc;
this.e2eMsProc = e2eMsgProc;
this.payloadRttFieldFunc = payloadRttFieldFunc;
}
@Override
@ -57,24 +54,20 @@ public class PulsarConsumerMapper extends PulsarTransactOpMapper {
boolean asyncApi = asyncApiFunc.apply(value);
boolean useTransaction = useTransactionFunc.apply(value);
Supplier<Transaction> transactionSupplier = transactionSupplierFunc.apply(value);
boolean topicMsgDedup = topicMsgDedupFunc.apply(value);
String subscriptionType = subscriptionTypeFunc.apply(value);
String payloadRttFieldFunc = this.payloadRttFieldFunc.apply(value);
return new PulsarConsumerOp(
this,
pulsarActivity,
asyncApi,
useTransaction,
seqTracking,
transactionSupplier,
topicMsgDedup,
consumer,
subscriptionType,
clientSpace.getPulsarSchema(),
clientSpace.getPulsarClientConf().getConsumerTimeoutSeconds(),
value,
e2eMsProc,
this::getReceivedMessageSequenceTracker);
this::getReceivedMessageSequenceTracker,
payloadRttFieldFunc);
}

View File

@ -1,29 +1,31 @@
package io.nosqlbench.driver.pulsar.ops;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import java.util.function.Supplier;
import org.apache.commons.lang3.StringUtils;
import com.codahale.metrics.Counter;
import com.codahale.metrics.Histogram;
import com.codahale.metrics.Timer;
import io.nosqlbench.driver.pulsar.PulsarActivity;
import io.nosqlbench.driver.pulsar.exception.*;
import io.nosqlbench.driver.pulsar.exception.PulsarDriverUnexpectedException;
import io.nosqlbench.driver.pulsar.util.AvroUtil;
import io.nosqlbench.driver.pulsar.util.PulsarActivityUtil;
import java.util.function.Function;
import org.apache.commons.lang3.StringUtils;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.pulsar.client.api.*;
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.client.api.transaction.Transaction;
import org.apache.pulsar.common.schema.SchemaType;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;
public class PulsarConsumerOp implements PulsarOp {
private final static Logger logger = LogManager.getLogger(PulsarConsumerOp.class);
private final PulsarConsumerMapper consumerMapper;
private final PulsarActivity pulsarActivity;
private final boolean asyncPulsarOp;
@ -31,13 +33,10 @@ public class PulsarConsumerOp implements PulsarOp {
private final boolean seqTracking;
private final Supplier<Transaction> transactionSupplier;
private final boolean topicMsgDedup;
private final Consumer<?> consumer;
private final String subscriptionType;
private final Schema<?> pulsarSchema;
private final int timeoutSeconds;
private final boolean e2eMsgProc;
private final long curCycleNum;
private final Counter bytesCounter;
private final Histogram messageSizeHistogram;
@ -45,25 +44,24 @@ public class PulsarConsumerOp implements PulsarOp {
// keep track of end-to-end message latency
private final Histogram e2eMsgProcLatencyHistogram;
private final Function<String, ReceivedMessageSequenceTracker> receivedMessageSequenceTrackerForTopic;
private final Histogram payloadRttHistogram;
private final String payloadRttTrackingField;
public PulsarConsumerOp(
PulsarConsumerMapper consumerMapper,
PulsarActivity pulsarActivity,
boolean asyncPulsarOp,
boolean useTransaction,
boolean seqTracking,
Supplier<Transaction> transactionSupplier,
boolean topicMsgDedup,
Consumer<?> consumer,
String subscriptionType,
Schema<?> schema,
int timeoutSeconds,
long curCycleNum,
boolean e2eMsgProc,
Function<String, ReceivedMessageSequenceTracker> receivedMessageSequenceTrackerForTopic)
Function<String, ReceivedMessageSequenceTracker> receivedMessageSequenceTrackerForTopic,
String payloadRttTrackingField)
{
this.consumerMapper = consumerMapper;
this.pulsarActivity = pulsarActivity;
this.asyncPulsarOp = asyncPulsarOp;
@ -71,12 +69,9 @@ public class PulsarConsumerOp implements PulsarOp {
this.seqTracking = seqTracking;
this.transactionSupplier = transactionSupplier;
this.topicMsgDedup = topicMsgDedup;
this.consumer = consumer;
this.subscriptionType = subscriptionType;
this.pulsarSchema = schema;
this.timeoutSeconds = timeoutSeconds;
this.curCycleNum = curCycleNum;
this.e2eMsgProc = e2eMsgProc;
this.bytesCounter = pulsarActivity.getBytesCounter();
@ -84,7 +79,9 @@ public class PulsarConsumerOp implements PulsarOp {
this.transactionCommitTimer = pulsarActivity.getCommitTransactionTimer();
this.e2eMsgProcLatencyHistogram = pulsarActivity.getE2eMsgProcLatencyHistogram();
this.payloadRttHistogram = pulsarActivity.getPayloadRttHistogram();
this.receivedMessageSequenceTrackerForTopic = receivedMessageSequenceTrackerForTopic;
this.payloadRttTrackingField = payloadRttTrackingField;
}
private void checkAndUpdateMessageErrorCounter(Message message) {
@ -150,9 +147,22 @@ public class PulsarConsumerOp implements PulsarOp {
}
}
if (!payloadRttTrackingField.isEmpty()) {
String avroDefStr = pulsarSchema.getSchemaInfo().getSchemaDefinition();
org.apache.avro.Schema avroSchema =
AvroUtil.GetSchema_ApacheAvro(avroDefStr);
org.apache.avro.generic.GenericRecord avroGenericRecord =
AvroUtil.GetGenericRecord_ApacheAvro(avroSchema, message.getData());
if (avroGenericRecord.hasField(payloadRttTrackingField)) {
long extractedSendTime = (Long)avroGenericRecord.get(payloadRttTrackingField);
long delta = System.currentTimeMillis() - extractedSendTime;
payloadRttHistogram.update(delta);
}
}
// keep track end-to-end message processing latency
long e2eMsgLatency = System.currentTimeMillis() - message.getPublishTime();
if (e2eMsgProc) {
long e2eMsgLatency = System.currentTimeMillis() - message.getPublishTime();
e2eMsgProcLatencyHistogram.update(e2eMsgLatency);
}
@ -232,8 +242,8 @@ public class PulsarConsumerOp implements PulsarOp {
}
}
long e2eMsgLatency = System.currentTimeMillis() - message.getPublishTime();
if (e2eMsgProc) {
long e2eMsgLatency = System.currentTimeMillis() - message.getPublishTime();
e2eMsgProcLatencyHistogram.update(e2eMsgLatency);
}
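The RTT extraction added in this file reduces to: if the decoded payload carries the configured tracking field, interpret its value as epoch milliseconds and subtract it from the consumer's clock. A minimal stand-in sketch, using a `Map` in place of the decoded Avro `GenericRecord` (names here are illustrative):

```java
import java.util.Map;

// Stand-in for the Avro-based RTT extraction in PulsarConsumerOp:
// the Map plays the role of the decoded GenericRecord.
public class PayloadRttSketch {
    // Returns the RTT in milliseconds, or -1 if the tracking field is absent.
    public static long rttMillis(Map<String, Object> record, String field, long nowMillis) {
        if (!record.containsKey(field)) {
            return -1L;
        }
        long extractedSendTime = (Long) record.get(field);
        return nowMillis - extractedSendTime;
    }
}
```

Note that, as the javadoc in PulsarActivity says, this assumes the sender's and consumer's clocks are in the same time base; any skew between them shows up directly in the reported histogram values.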

View File

@ -25,6 +25,8 @@ import java.util.stream.Collectors;
public class ReadyPulsarOp implements OpDispenser<PulsarOp> {
// TODO: Add this to the pulsar driver docs
public static final String RTT_TRACKING_FIELD = "payload-tracking-field";
private final static Logger logger = LogManager.getLogger(ReadyPulsarOp.class);
private final OpTemplate opTpl;
@ -131,16 +133,16 @@ public class ReadyPulsarOp implements OpDispenser<PulsarOp> {
}
logger.info("seq_tracking: {}", seqTrackingFunc.apply(0));
// Doc-level parameter: msg_dedup_broker
LongFunction<Boolean> brokerMsgDedupFunc = (l) -> false;
if (cmdTpl.containsKey(PulsarActivityUtil.DOC_LEVEL_PARAMS.MSG_DEDUP_BROKER.label)) {
if (cmdTpl.isStatic(PulsarActivityUtil.DOC_LEVEL_PARAMS.MSG_DEDUP_BROKER.label))
brokerMsgDedupFunc = (l) -> BooleanUtils.toBoolean(cmdTpl.getStatic(PulsarActivityUtil.DOC_LEVEL_PARAMS.MSG_DEDUP_BROKER.label));
else
throw new PulsarDriverParamException("[resolve()] \"" + PulsarActivityUtil.DOC_LEVEL_PARAMS.MSG_DEDUP_BROKER.label + "\" parameter cannot be dynamic!");
// TODO: Collapse this pattern into a simple version and flatten out all call sites
LongFunction<String> payloadRttFieldFunc = (l) -> "";
if (cmdTpl.isStatic(RTT_TRACKING_FIELD)) {
payloadRttFieldFunc = l -> cmdTpl.getStatic(RTT_TRACKING_FIELD);
logger.info("payload_rtt_field: {}", cmdTpl.getStatic(RTT_TRACKING_FIELD));
} else if (cmdTpl.isDynamic(RTT_TRACKING_FIELD)) {
payloadRttFieldFunc = l -> cmdTpl.getDynamic(RTT_TRACKING_FIELD,l);
logger.info("payload_rtt_field: {}", cmdTpl.getFieldDescription(RTT_TRACKING_FIELD));
}
logger.info("msg_dedup_broker: {}", brokerMsgDedupFunc.apply(0));
logger.info("payload_rtt_field_func: {}", payloadRttFieldFunc.toString());
// TODO: Complete implementation for websocket-producer and managed-ledger
// Admin operation: create/delete tenant
@ -167,8 +169,8 @@ public class ReadyPulsarOp implements OpDispenser<PulsarOp> {
asyncApiFunc,
useTransactionFunc,
seqTrackingFunc,
brokerMsgDedupFunc,
false);
false,
payloadRttFieldFunc);
}
// Regular/non-admin operation: single message consuming from multiple-topics (consumer)
else if (StringUtils.equalsIgnoreCase(stmtOpType, PulsarActivityUtil.OP_TYPES.MSG_MULTI_CONSUME.label)) {
@ -178,7 +180,7 @@ public class ReadyPulsarOp implements OpDispenser<PulsarOp> {
asyncApiFunc,
useTransactionFunc,
seqTrackingFunc,
brokerMsgDedupFunc);
payloadRttFieldFunc);
}
// Regular/non-admin operation: single message consuming a single topic (reader)
else if (StringUtils.equalsIgnoreCase(stmtOpType, PulsarActivityUtil.OP_TYPES.MSG_READ.label)) {
@ -208,8 +210,8 @@ public class ReadyPulsarOp implements OpDispenser<PulsarOp> {
asyncApiFunc,
useTransactionFunc,
seqTrackingFunc,
brokerMsgDedupFunc,
true);
true,
payloadRttFieldFunc);
}
// Invalid operation type
else {
@ -427,8 +429,8 @@ public class ReadyPulsarOp implements OpDispenser<PulsarOp> {
LongFunction<Boolean> async_api_func,
LongFunction<Boolean> useTransactionFunc,
LongFunction<Boolean> seqTrackingFunc,
LongFunction<Boolean> brokerMsgDupFunc,
boolean e2eMsgProc
boolean e2eMsgProc,
LongFunction<String> rttTrackingFieldFunc
) {
LongFunction<String> subscription_name_func;
if (cmdTpl.isStatic("subscription_name")) {
@ -460,12 +462,6 @@ public class ReadyPulsarOp implements OpDispenser<PulsarOp> {
LongFunction<Supplier<Transaction>> transactionSupplierFunc =
(l) -> clientSpace.getTransactionSupplier(); //TODO make it dependant on current cycle?
// TODO: Ignore namespace and topic level dedup check on the fly
// this will impact the consumer performance significantly
// Consider using caching or Memoizer in the future?
// (https://www.baeldung.com/guava-memoizer)
LongFunction<Boolean> topicMsgDedupFunc = brokerMsgDupFunc;
LongFunction<Consumer<?>> consumerFunc = (l) ->
clientSpace.getConsumer(
topic_uri_func.apply(l),
@ -482,10 +478,9 @@ public class ReadyPulsarOp implements OpDispenser<PulsarOp> {
useTransactionFunc,
seqTrackingFunc,
transactionSupplierFunc,
topicMsgDedupFunc,
consumerFunc,
subscription_type_func,
e2eMsgProc);
e2eMsgProc,
rttTrackingFieldFunc);
}
private LongFunction<PulsarOp> resolveMultiTopicMsgConsume(
@ -494,7 +489,7 @@ public class ReadyPulsarOp implements OpDispenser<PulsarOp> {
LongFunction<Boolean> async_api_func,
LongFunction<Boolean> useTransactionFunc,
LongFunction<Boolean> seqTrackingFunc,
LongFunction<Boolean> brokerMsgDupFunc
LongFunction<String> payloadRttFieldFunc
) {
// Topic list (multi-topic)
LongFunction<String> topic_names_func;
@ -564,18 +559,9 @@ public class ReadyPulsarOp implements OpDispenser<PulsarOp> {
useTransactionFunc,
seqTrackingFunc,
transactionSupplierFunc,
// For multi-topic subscription message consumption,
// - Only consider broker-level message deduplication setting
// - Ignore namespace- and topic-level message deduplication setting
//
// This is because Pulsar is able to specify a list of topics from
// different namespaces. In theory, we can get topic deduplication
// status from each message, but this will be too much overhead.
// e.g. pulsarAdmin.getPulsarAdmin().topics().getDeduplicationStatus(message.getTopicName())
brokerMsgDupFunc,
mtConsumerFunc,
subscription_type_func,
false);
false,
payloadRttFieldFunc);
}
private LongFunction<PulsarOp> resolveMsgRead(


@ -1,7 +1,6 @@
package io.nosqlbench.driver.pulsar.util;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.*;
import org.apache.commons.lang3.StringUtils;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
@ -13,6 +12,7 @@ import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.*;
import java.util.stream.Collectors;
import java.util.stream.Stream;
@ -462,7 +462,7 @@ public class PulsarActivityUtil {
Path filePath = Paths.get(URI.create(schemaDefinitionStr));
schemaDefinitionStr = Files.readString(filePath, StandardCharsets.US_ASCII);
} catch (IOException ioe) {
throw new RuntimeException("Error reading the specified \"Avro\" schema definition file: " + definitionStr);
throw new RuntimeException("Error reading the specified \"Avro\" schema definition file: " + definitionStr + ": " + ioe.getMessage());
}
}


@ -6,7 +6,7 @@ bindings:
params:
# "true" - asynchronous Pulsar Admin API
# "false" - synchronous Pulsar Admin API
async_api: "true"
async_api: "false"
# "true" - delete namespace
# "false" - create namespace
admin_delop: "false"


@ -8,7 +8,7 @@ params:
topic_uri: "persistent://{tenant}/{namespace}/{core_topic_name}"
# "true" - asynchronous Pulsar Admin API
# "false" - synchronous Pulsar Admin API
async_api: "true"
async_api: "false"
# "true" - delete topic
# "false" - create topic
admin_delop: "false"


@ -1,6 +1,6 @@
bindings:
# message key, property and value
mykey:
mykey: NumberNameToString()
int_prop_val: ToString(); Prefix("IntProp_")
text_prop_val: AlphaNumericString(10); Prefix("TextProp_")
myvalue: NumberNameToString() #AlphaNumericString(20)
@ -85,7 +85,7 @@ blocks:
# - websocket-producer:
# tags:
# type: websocket-produer
# type: websocket-producer
# statements:
# - websocket-producer-stuff:
#


@ -0,0 +1,89 @@
# TODO : Design Revisit -- Advanced Driver Features
**NOTE**: The following text is based on the original multi-layer API
caching design which is not fully implemented at the moment. We need to
revisit the original design at some point in order to achieve maximum
testing flexibility.
To summarize, the original caching design has the following key
requirements:
* **Requirement 1**: Each NB Pulsar activity is able to launch and cache
multiple **client spaces**
* **Requirement 2**: Each client space can launch and cache multiple
Pulsar operators of the same type (producer, consumer, etc.)
* **Requirement 3**: The size of each Pulsar operator specific cached
space can be configurable.
At the moment, only requirement 2 is implemented.
* For requirement 1, the current implementation only supports one client
space per NB Pulsar activity
* For requirement 3, the cache space size is not configurable (no limit at
the moment)
## Other Activity Parameters
- **maxcached** - A default value to be applied to `max_clients`,
`max_producers`, `max_consumers`.
- default: `max_cached=100`
- **max_clients** - Client cache size: the number of client
instances which are allowed to be cached in the NoSQLBench client
runtime. The client cache automatically maintains a set of unique
client instances internally. default: _maxcached_
- **max_operators** - Producers/Consumers/Readers cache size (per client
instance). Limits the number of instances which are allowed to be cached
per client instance. default: _maxcached_
## API Caching
This driver is tailored around the multi-tenancy and topic naming scheme
that is part of Apache Pulsar. Specifically, you can create an arbitrary
number of client instances, producers (per client), and consumers (per
client) depending on your testing requirements.
Further, the topic URI is composed from the provided qualifiers of
`persistence`, `tenant`, `namespace`, and `topic`, or you can provide a
fully-composed value in the `persistence://tenant/namespace/topic`
form.
### Instancing Controls
Normative usage of the Apache Pulsar API follows a strictly enforced
binding of topics to producers and consumers. As well, clients may be
customized with different behavior for advanced testing scenarios. There
is a significant variety of messaging and concurrency schemes seen in
modern architectures. Thus, it is important that testing tools rise to the
occasion by letting users configure their testing runtimes to emulate
applications as they are found in practice. To this end, the NoSQLBench
driver for Apache Pulsar provides a set of controls within its op template
format which allow for flexible yet intuitive instancing in the client
runtime. This is enabled directly by using nominative variables for
instance names where needed. When the instance names are not provided for
an operation, defaults are used to emulate a simple configuration.
Since this is a new capability in a NoSQLBench driver, how it works is
explained below:
When a Pulsar cycle is executed, the operation is synthesized from the op
template fields as explained below under _Op Fields_. This happens in a
specific order:
1. The client instance name is resolved. If a `client` field is provided,
this is taken as the client instance name. If not, it is set
to `default`.
2. The named client instance is fetched from the cache, or created and
cached if it does not yet exist.
3. The topic_uri is resolved. This is the value to be used with
`.topic(...)` calls in the API. The op fields below explain how to
control this value.
4. For _send_ operations, a producer is named and created if needed. By
default, the producer is named after the topic_uri above. You can
override this by providing a value for `producer`.
5. For _recv_ operations, a consumer is named and created if needed. By
default, the consumer is named after the topic_uri above. You can
override this by providing a value for `consumer`.
The most important detail for understanding the instancing controls is
that clients, producers, and consumers are all named and cached in the
specific order above.
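The name-then-fetch-or-create flow above can be sketched with plain JDK maps. This is a hypothetical illustration of the caching order, not the driver's actual cache classes; the names `InstanceCacheSketch`, `resolveClient`, and `resolveProducer` are invented for this sketch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the name-based instance caching described above.
public class InstanceCacheSketch {
    static final Map<String, String> clients = new ConcurrentHashMap<>();
    static final Map<String, String> producers = new ConcurrentHashMap<>();

    // Steps 1-2: resolve the client instance name ("default" if no `client`
    // field is provided), then fetch it from the cache or create and cache it.
    static String resolveClient(String clientField) {
        String name = (clientField == null) ? "default" : clientField;
        return clients.computeIfAbsent(name, n -> "client:" + n);
    }

    // Step 4: a producer is named after the topic_uri unless a `producer`
    // field overrides the name; it is then fetched-or-created the same way.
    static String resolveProducer(String producerField, String topicUri) {
        String name = (producerField == null) ? topicUri : producerField;
        return producers.computeIfAbsent(name, n -> "producer:" + n);
    }
}
```

Consumers (step 5) follow the same pattern as producers, keyed by consumer name.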


@ -1,38 +1,34 @@
- [1. NoSQLBench (NB) Pulsar Driver Overview](#1-nosqlbench-nb-pulsar-driver-overview)
- [1.1. Issues Tracker](#11-issues-tracker)
- [1.2. Global Level Pulsar Configuration Settings](#12-global-level-pulsar-configuration-settings)
- [1.3. NB Pulsar Driver Yaml File - High Level Structure](#13-nb-pulsar-driver-yaml-file---high-level-structure)
- [1.3.1. Configuration Parameter Levels](#131-configuration-parameter-levels)
- [1.4. Pulsar Driver Yaml File - Command Blocks](#14-pulsar-driver-yaml-file---command-blocks)
- [1.4.1. Pulsar Admin API Command Block - Create Tenants](#141-pulsar-admin-api-command-block---create-tenants)
- [1.4.2. Pulsar Admin API Command Block - Create Namespaces](#142-pulsar-admin-api-command-block---create-namespaces)
- [1.4.3. Pulsar Admin API Command Block - Create Topics (Partitioned or Regular)](#143-pulsar-admin-api-command-block---create-topics-partitioned-or-regular)
- [1.4.4. Batch Producer Command Block](#144-batch-producer-command-block)
- [1.4.5. Producer Command Block](#145-producer-command-block)
- [1.4.6. (Single-Topic) Consumer Command Block](#146-single-topic-consumer-command-block)
- [1.4.7. Reader Command Block](#147-reader-command-block)
- [1.4.8. Multi-topic Consumer Command Block](#148-multi-topic-consumer-command-block)
- [1.4.9. End-to-end Message Processing Command Block](#149-end-to-end-message-processing-command-block)
- [1.5. Message Properties](#15-message-properties)
- [1.6. Schema Support](#16-schema-support)
- [1.7. Measure End-to-end Message Processing Latency](#17-measure-end-to-end-message-processing-latency)
- [1.8. Detect Message Out-of-order, Message Loss, and Message Duplication](#18-detect-message-out-of-order-message-loss-and-message-duplication)
- [1.9. NB Activity Execution Parameters](#19-nb-activity-execution-parameters)
- [1.10. NB Pulsar Driver Execution Example](#110-nb-pulsar-driver-execution-example)
- [1.11. Appendix A. Template Global Setting File (config.properties)](#111-appendix-a-template-global-setting-file-configproperties)
- [2. TODO : Design Revisit -- Advanced Driver Features](#2-todo--design-revisit----advanced-driver-features)
- [2.1. Other Activity Parameters](#21-other-activity-parameters)
- [2.2. API Caching](#22-api-caching)
- [2.2.1. Instancing Controls](#221-instancing-controls)
- [1. Overview](#1-overview)
- [1.1. Issues Tracker](#11-issues-tracker)
- [2. NB Pulsar Driver Yaml File - High Level Structure](#2-nb-pulsar-driver-yaml-file---high-level-structure)
- [3. Global Level Pulsar Configuration Settings](#3-global-level-pulsar-configuration-settings)
- [4. Global, Document, and Statement Level Configuration Items](#4-global-document-and-statement-level-configuration-items)
- [5. NB Pulsar Driver Yaml File - Command Blocks](#5-nb-pulsar-driver-yaml-file---command-blocks)
- [5.1. Pulsar Admin API Command Block - Create/Delete Tenants](#51-pulsar-admin-api-command-block---createdelete-tenants)
- [5.2. Pulsar Admin API Command Block - Create/Delete Namespaces](#52-pulsar-admin-api-command-block---createdelete-namespaces)
- [5.3. Pulsar Admin API Command Block - Create/Delete Topics (Partitioned or Regular)](#53-pulsar-admin-api-command-block---createdelete-topics-partitioned-or-regular)
- [5.4. Batch Producer Command Block](#54-batch-producer-command-block)
- [5.5. Producer Command Block](#55-producer-command-block)
- [5.6. (Single-Topic) Consumer Command Block](#56-single-topic-consumer-command-block)
- [5.7. Reader Command Block](#57-reader-command-block)
- [5.8. Multi-topic Consumer Command Block](#58-multi-topic-consumer-command-block)
- [5.9. End-to-end Message Processing Command Block](#59-end-to-end-message-processing-command-block)
- [6. Message Properties](#6-message-properties)
- [7. Schema Support](#7-schema-support)
- [8. Measure End-to-end Message Processing Latency](#8-measure-end-to-end-message-processing-latency)
- [9. Detect Message Out-of-order, Message Loss, and Message Duplication](#9-detect-message-out-of-order-message-loss-and-message-duplication)
- [10. NB Activity Execution Parameters](#10-nb-activity-execution-parameters)
- [11. NB Pulsar Driver Execution Example](#11-nb-pulsar-driver-execution-example)
- [12. Appendix A. Template Global Setting File (config.properties)](#12-appendix-a-template-global-setting-file-configproperties)
# 1. NoSQLBench (NB) Pulsar Driver Overview
# 1. Overview
This driver allows you to simulate and run different types of workloads (as below) against a Pulsar cluster through NoSQLBench (NB).
* Admin API - create tenants
* Admin API - create namespaces
* Admin API - create topics
* Producer
* Consumer
* Admin API - create/delete tenants
* Admin API - create/delete namespaces
* Admin API - create/delete topics, partitioned or not
* Producer - publish messages with Avro schema support
* Consumer - consume messages with all subscription types
* Reader
* (Future) WebSocket Producer
* (Future) Managed Ledger
@ -41,68 +37,7 @@ This driver allows you to simulate and run different types of workloads (as belo
If you have issues or new requirements for this driver, please add them at the [pulsar issues tracker](https://github.com/nosqlbench/nosqlbench/issues/new?labels=pulsar).
## 1.2. Global Level Pulsar Configuration Settings
The NB Pulsar driver relies on Pulsar's [Java Client API](https://pulsar.apache.org/docs/en/client-libraries-java/) to publish messages to and consume messages from a Pulsar cluster. In order to do so, a [PulsarClient](https://pulsar.incubator.apache.org/api/client/2.7.0-SNAPSHOT/org/apache/pulsar/client/api/PulsarClient) object needs to be created first in order to establish the connection to the Pulsar cluster; then a workload-specific object (e.g. [Producer](https://pulsar.incubator.apache.org/api/client/2.7.0-SNAPSHOT/org/apache/pulsar/client/api/Producer) or [Consumer](https://pulsar.incubator.apache.org/api/client/2.7.0-SNAPSHOT/org/apache/pulsar/client/api/Consumer)) is required in order to execute workload-specific actions (e.g. publishing or consuming messages).
When creating these objects (e.g. PulsarClient, Producer), there are different configuration options that can be used. For example, [this document](https://pulsar.apache.org/docs/en/client-libraries-java/#configure-producer) lists all possible configuration options when creating a Pulsar Producer object.
The NB pulsar driver supports these options via a global properties file (e.g. **config.properties**). An example of this file is as below:
```properties
### Schema related configurations - schema.xxx
schema.type = avro
schema.definition = file:///<path/to/avro/schema/definition/file>
### Pulsar client related configurations - client.xxx
client.connectionTimeoutMs = 5000
### Producer related configurations (global) - producer.xxx
producer.topicName = persistent://public/default/mynbtest
producer.producerName =
producer.sendTimeoutMs =
```
There are multiple sections in this file that correspond to different groups of configuration settings:
* **Schema related settings**:
* All settings under this section start with the **schema.** prefix.
* The NB Pulsar driver supports schema-based message publishing and
consuming. This section defines configuration settings that are
schema related.
* There are 2 valid options under this section.
* *schema.type*: Pulsar message schema type. When unset or set to
an empty string, Pulsar messages will be handled in raw *byte[]*
format. The other valid option is **avro**, in which case the Pulsar
message will follow a specific Avro format.
* *schema.definition*: This only applies when an Avro schema type
is specified. The value of this configuration is the (full) file
path that contains the Avro schema definition.
* **Pulsar Client related settings**:
* All settings under this section start with the **client.** prefix.
* This section defines all configuration settings related to
defining a PulsarClient object.
* See [Pulsar Doc Reference](https://pulsar.apache.org/docs/en/client-libraries-java/#default-broker-urls-for-standalone-clusters)
* **Pulsar Producer related settings**:
* All settings under this section start with the **producer.** prefix.
* This section defines all configuration settings related to
defining a Pulsar Producer object.
* See [Pulsar Doc Reference](https://pulsar.apache.org/docs/en/client-libraries-java/#configure-producer)
* **Pulsar Consumer related settings**:
* All settings under this section start with the **consumer.** prefix.
* This section defines all configuration settings related to
defining a Pulsar Consumer object.
* See [Pulsar Doc Reference](http://pulsar.apache.org/docs/en/client-libraries-java/#configure-consumer)
* **Pulsar Reader related settings**:
* All settings under this section start with the **reader.** prefix.
* This section defines all configuration settings related to
defining a Pulsar Reader object.
* See [Pulsar Doc Reference](https://pulsar.apache.org/docs/en/client-libraries-java/#reader)
In the future, when support for other types of Pulsar workloads is
added to the NB Pulsar driver, corresponding configuration
sections will be added to this file as well.
## 1.3. NB Pulsar Driver Yaml File - High Level Structure
# 2. NB Pulsar Driver Yaml File - High Level Structure
Just like other NB driver types, the actual Pulsar workload generation is
determined by the statement blocks in an NB driver Yaml file. Depending
@ -127,9 +62,7 @@ At high level, Pulsar driver yaml file has the following structure:
* **seq_tracking**: Whether to do message sequence tracking. This is
used for message out-of-order and message loss detection (more on
this later).
* **msg_dedup_broker**: Whether or not broker level message deduplication
is enabled.
* **blocks**: includes a series of command blocks. Each command block
* **statement blocks**: includes a series of command blocks. Each command block
defines one major Pulsar operation such as *producer*, *consumer*, etc.
Right now, the following command blocks are already supported or will be
added in the near future. We'll go through each of these command blocks
@ -203,7 +136,68 @@ multiple Pulsar operations in one run! But if we want to focus the testing
on one particular operation, we can use the tag to filter the command
block as listed above!
### 1.3.1. Configuration Parameter Levels
# 3. Global Level Pulsar Configuration Settings
The NB Pulsar driver relies on Pulsar's [Java Client API](https://pulsar.apache.org/docs/en/client-libraries-java/) to publish messages to and consume messages from a Pulsar cluster. In order to do so, a [PulsarClient](https://pulsar.incubator.apache.org/api/client/2.7.0-SNAPSHOT/org/apache/pulsar/client/api/PulsarClient) object needs to be created first in order to establish the connection to the Pulsar cluster; then a workload-specific object (e.g. [Producer](https://pulsar.incubator.apache.org/api/client/2.7.0-SNAPSHOT/org/apache/pulsar/client/api/Producer) or [Consumer](https://pulsar.incubator.apache.org/api/client/2.7.0-SNAPSHOT/org/apache/pulsar/client/api/Consumer)) is required in order to execute workload-related actions (e.g. publishing or consuming messages).
When creating these objects (e.g. PulsarClient, Producer), there are different configuration options that can be used. For example, [this document](https://pulsar.apache.org/docs/en/client-libraries-java/#configure-producer) lists all possible configuration options when creating a Pulsar Producer object.
The NB pulsar driver supports these options via a global properties file (e.g. **config.properties**). An example of this file is as below:
```properties
### Schema related configurations - schema.xxx
schema.type = avro
schema.definition = file:///<path/to/avro/schema/definition/file>
### Pulsar client related configurations - client.xxx
client.connectionTimeoutMs = 5000
### Producer related configurations (global) - producer.xxx
producer.topicName = persistent://public/default/mynbtest
producer.producerName =
producer.sendTimeoutMs =
```
There are multiple sections in this file that correspond to different groups of configuration settings:
* **Schema related settings**:
* All settings under this section start with the **schema.** prefix.
* The NB Pulsar driver supports schema-based message publishing and
consuming. This section defines configuration settings that are
schema related.
* There are 2 valid options under this section.
* *schema.type*: Pulsar message schema type. When unset or set to
an empty string, Pulsar messages will be handled in raw *byte[]*
format. The other valid option is **avro**, in which case the Pulsar
message will follow the Avro schema format.
* *schema.definition*: This only applies when an Avro schema type
is specified. The value of this configuration is the (full) file
path that contains the Avro schema definition.
* **Pulsar Client related settings**:
* All settings under this section start with the **client.** prefix.
* This section defines all configuration settings related to
defining a PulsarClient object.
* See [Pulsar Doc Reference](https://pulsar.apache.org/docs/en/client-libraries-java/#default-broker-urls-for-standalone-clusters)
* **Pulsar Producer related settings**:
* All settings under this section start with the **producer.** prefix.
* This section defines all configuration settings related to
defining a Pulsar Producer object.
* See [Pulsar Doc Reference](https://pulsar.apache.org/docs/en/client-libraries-java/#configure-producer)
* **Pulsar Consumer related settings**:
* All settings under this section start with the **consumer.** prefix.
* This section defines all configuration settings related to
defining a Pulsar Consumer object.
* See [Pulsar Doc Reference](http://pulsar.apache.org/docs/en/client-libraries-java/#configure-consumer)
* **Pulsar Reader related settings**:
* All settings under this section start with the **reader.** prefix.
* This section defines all configuration settings related to
defining a Pulsar Reader object.
* See [Pulsar Doc Reference](https://pulsar.apache.org/docs/en/client-libraries-java/#reader)
In the future, when support for other types of Pulsar workloads is
added to the NB Pulsar driver, corresponding configuration
sections will be added to this file as well.
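As a rough sketch of how such prefix-grouped settings can be split out of the properties file (illustrative only; `GlobalConfSketch` and its methods are invented here and may differ from the driver's actual parsing code):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// Hypothetical sketch: split a config.properties-style file into per-prefix
// groups such as schema.*, client.*, producer.*, consumer.*, reader.*.
public class GlobalConfSketch {

    // Collect all entries whose key starts with the given prefix,
    // with the prefix stripped off.
    static Map<String, String> group(Properties props, String prefix) {
        Map<String, String> out = new HashMap<>();
        for (String key : props.stringPropertyNames()) {
            if (key.startsWith(prefix)) {
                out.put(key.substring(prefix.length()), props.getProperty(key).trim());
            }
        }
        return out;
    }

    // Convenience wrapper: parse properties text and return the client.* group.
    static Map<String, String> clientGroup(String propertiesText) {
        Properties p = new Properties();
        try {
            p.load(new StringReader(propertiesText));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return group(p, "client.");
    }
}
```

Keys in the returned group are prefix-free, so `client.connectionTimeoutMs` becomes `connectionTimeoutMs`, ready to be applied to the corresponding client builder.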
# 4. Global, Document, and Statement Level Configuration Items
The NB Pulsar driver configuration parameters can be set at 3 different
levels:
@ -213,13 +207,13 @@ levels:
schema.type=
```
* **document level**: parameters that are set within NB yaml file and under
the ***params*** section
the ***params*** section
```
params:
topic_uri: ...
```
* **statement level**: parameters that are set within NB yaml file, but
under different block statements
under different block statements
```
- name: producer-block
statements:
@ -230,15 +224,15 @@ under different block statements
**NOTE**: If one parameter is set at multiple levels (e.g. producer name),
the parameter at the lower level takes precedence.
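The precedence rule can be sketched as a small resolution helper (hypothetical code for illustration, not the driver's actual resolver):

```java
import java.util.Map;

// Sketch of the precedence rule described above: a statement level setting
// overrides a document level one, which overrides a global level one.
public class ParamPrecedence {
    static String resolve(String key,
                          Map<String, String> global,
                          Map<String, String> document,
                          Map<String, String> statement) {
        if (statement.containsKey(key)) return statement.get(key); // lowest level wins
        if (document.containsKey(key)) return document.get(key);
        return global.get(key);
    }
}
```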
## 1.4. Pulsar Driver Yaml File - Command Blocks
# 5. NB Pulsar Driver Yaml File - Command Blocks
### 1.4.1. Pulsar Admin API Command Block - Create Tenants
## 5.1. Pulsar Admin API Command Block - Create/Delete Tenants
This Pulsar Admin API Block is used to create or delete Pulsar tenants. It
has the following format.
Please note that when the document level parameter **admin_delop** is set to
true, this command block will delete Pulsar tenants instead. Similarly,
this applies to the other Admin API blocks for namespace and topic management.
```yaml
@ -266,7 +260,7 @@ In this command block, there is only 1 statement (s1):
* (Mandatory) **tenant** is the Pulsar tenant name to be created. It
can either be dynamically or statically bound.
### 1.4.2. Pulsar Admin API Command Block - Create Namespaces
## 5.2. Pulsar Admin API Command Block - Create/Delete Namespaces
This Pulsar Admin API Block is used to create Pulsar namespaces. It has the following format:
@ -289,7 +283,7 @@ In this command block, there is only 1 statement (s1):
* (Mandatory) **namespace** is the Pulsar namespace name to be created
under a tenant. It can be either statically or dynamically bound.
### 1.4.3. Pulsar Admin API Command Block - Create Topics (Partitioned or Regular)
## 5.3. Pulsar Admin API Command Block - Create/Delete Topics (Partitioned or Regular)
This Pulsar Admin API Block is used to create Pulsar topics. It has the following format:
@ -318,7 +312,7 @@ In this command block, there is only 1 statement (s1):
**NOTE**: The topic name is bound by the document level parameter "topic_uri".
### 1.4.4. Batch Producer Command Block
## 5.4. Batch Producer Command Block
Batch producer command block is used to produce a batch of messages all at
once by one NB cycle execution. A typical format of this command block is
@ -390,7 +384,7 @@ ratios: 1, <batch_num>, 1.
**NOTE**: the topic that the producer needs to publish messages to is
specified by the document level parameter ***topic_uri***.
### 1.4.5. Producer Command Block
## 5.5. Producer Command Block
This is the regular Pulsar producer command block that produces one Pulsar
message per NB cycle execution. A typical format of this command block is
@ -432,15 +426,16 @@ This command block only has 1 statements (s1):
of the generated message. It must be a JSON string that contains a
series of key-value pairs.
* (Mandatory) **msg_payload** specifies the payload of the generated
message
message. If the message schema type is specified as the Avro schema type,
then the message value must be in the proper Avro format.
**NOTE**: the topic that the producer needs to publish messages to is
specified by the document level parameter ***topic_uri***.
### 1.4.6. (Single-Topic) Consumer Command Block
## 5.6. (Single-Topic) Consumer Command Block
This is the regular Pulsar consumer command block that consumes one Pulsar
message from one single Pulsar topic per NB cycle execution. A typical
This is the regular Pulsar consumer command block that consumes messages
from a single Pulsar topic per NB cycle execution. A typical
format of this command block is as below:
```yaml
@ -463,18 +458,18 @@ This command block only has 1 statements (s1):
this statement
* (Mandatory) **subscription_name** specifies subscription name.
* (Optional) **subscription_type**, when provided, specifies
subscription type. Default to **exclusive** subscription type.
subscription type. Defaults to the **Exclusive** subscription type.
* (Optional) **consumer_name**, when provided, specifies the
associated consumer name.
**NOTE**: the single topic that the consumer needs to consume messages from
is specified by the document level parameter ***topic_uri***.
**NOTE**: the single topic that the consumer receives messages from is
specified by the document level parameter ***topic_uri***.
### 1.4.7. Reader Command Block
## 5.7. Reader Command Block
This is the regular Pulsar reader command block that reads one Pulsar
message per NB cycle execution. A typical format of this command block is
as below:
This is the regular Pulsar reader command block that reads messages from
one Pulsar topic per NB cycle execution. A typical format of this command
block is as below:
```yaml
- name: reader-block
@ -513,11 +508,11 @@ Reader reader = pulsarClient.newReader()
.create();
```
### 1.4.8. Multi-topic Consumer Command Block
## 5.8. Multi-topic Consumer Command Block
This is the regular Pulsar consumer command block that consumes one Pulsar
message from multiple Pulsar topics per NB cycle execution. A typical format
of this command block is as below:
This is the Pulsar consumer command block that consumes messages from
multiple Pulsar topics per NB cycle execution. A typical format of
this command block is as below:
```yaml
- name: multi-topic-consumer-block
@ -541,22 +536,23 @@ This command block only has 1 statements (s1):
* (Mandatory) **optype (msg-consume)** is the statement identifier for
this statement
* (Optional) **topic_names**, when provided, specifies multiple topic
names from which to consume messages for multi-topic message consumption.
names from which to consume messages.
* (Optional) **topics_pattern**, when provided, specifies pulsar
topic regex pattern for multi-topic message consumption
* (Mandatory) **subscription_name** specifies subscription name.
* (Optional) **subscription_type**, when provided, specifies
subscription type. Default to **exclusive** subscription type.
subscription type. Defaults to the **Exclusive** subscription type.
* (Optional) **consumer_name**, when provided, specifies the
associated consumer name.
**NOTE 1**: when both **topic_names** and **topics_pattern** are provided,
**NOTE 1**: if neither **topic_names** nor **topics_pattern** is provided,
the consumer topic name defaults to the document level parameter **topic_uri**.
Otherwise, the document level parameter **topic_uri** is ignored.
**NOTE 2**: when both **topic_names** and **topics_pattern** are provided,
**topic_names** takes precedence over **topics_pattern**.
**NOTE 2**: if both **topic_names** and **topics_pattern** are not provided,
the consumer topic name defaults to the document level parameter **topic_uri**.
### 1.4.9. End-to-end Message Processing Command Block
## 5.9. End-to-end Message Processing Command Block
The end-to-end message processing command block is used to simplify measuring
the end-to-end message processing (from being published to being consumed)
@ -600,19 +596,19 @@ ratios: 1, 1.
* (Optional) **ratio**, must be 1 when provided.
Otherwise, default to 1.
* Statement **s2** is used to consume the message that just got published
from the same topic
from the same topic
* (Mandatory) **optype (ec2-msg-proc-consume)** is the statement
identifier for this statement
* (Mandatory) **subscription_name** specifies subscription name.
* (Optional) **subscription_type**, when provided, specifies
subscription type. Default to **exclusive** subscription type.
subscription type. Defaults to the **exclusive** subscription type.
* (Optional) **ratio**, must be 1 when provided.
Otherwise, default to 1.
**NOTE**: the topic that the producer needs to publish messages to is
specified by the document level parameter ***topic_uri***.
## 1.5. Message Properties
# 6. Message Properties
In the producer command block, it is optional to specify message properties:
```
@ -630,7 +626,7 @@ contains a list of key value pairs. Otherwise, if it is not a valid
JSON string as expected, the driver will ignore it and treat the
message as having no properties.
## 1.6. Schema Support
# 7. Schema Support
Pulsar has built-in schema support. Other than primitive types, Pulsar
also supports complex types like **Avro**, etc. At the moment, the NB
@ -648,8 +644,8 @@ related settings as below:
```
Taking the previous Producer command block as an example, the **msg-value**
parameter has the value of a JSON string that follows the following Avro
schema definition:
parameter has the value of a JSON string that follows the specified Avro
schema definition, an example of which is as below:
```json
{
"type": "record",
@ -664,10 +660,10 @@ schema definition:
}
```
## 1.7. Measure End-to-end Message Processing Latency
# 8. Measure End-to-end Message Processing Latency
**e2e-msg-proc-block** measures the end-to-end message latency metrics. It
contains one message producing statement and one message consuming statement.
The built-in **e2e-msg-proc-block** measures the end-to-end message latency metrics.
It contains one message producing statement and one message consuming statement.
When the message published by the producer is received by the consumer,
the consumer calculates the time difference between when the message is received
and when it was published.
@ -675,8 +671,8 @@ and when the time is published.
The measured end-to-end message processing latency is captured as a histogram
metric named "e2e_msg_latency".
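The consumer-side calculation can be sketched as follows; the `e2e_publish_time` property name and the `E2eLatencySketch` helper are hypothetical stand-ins for whatever the driver actually stamps on the message:

```java
import java.util.Map;

// Hypothetical sketch of the end-to-end latency computation: the producer
// stamps the publish time on the message as a property, and the consumer
// subtracts it from the receive time.
public class E2eLatencySketch {
    static long latencyMs(Map<String, String> msgProperties, long receiveTimeMs) {
        long publishedMs = Long.parseLong(msgProperties.get("e2e_publish_time"));
        return receiveTimeMs - publishedMs; // value recorded into the "e2e_msg_latency" histogram
    }
}
```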
This command block uses one single machine to act as both a producer and a
consumer. We do so just for convenience purposes. In reality, we can use
This built-in command block uses a single machine to act as both a producer and
a consumer. We do so purely for convenience. In reality, we can use
**producer-block** and **consumer-block** command blocks on separate machines
to achieve the same goal, which is probably closer to the actual use case and
gives a more accurate measurement (avoiding the situation of always reading
@ -685,11 +681,11 @@ messages from the managed ledger cache).
One thing to remember, though: if we're using multiple machines to measure the
end-to-end message processing latency, we need to make sure:
1) The clocks of the two machines are synchronized with each other, e.g. through
the NTP protocol.
2) If there is some time lag in starting the consumer, we need to take that
into consideration when interpreting the end-to-end message processing latency.
## 1.8. Detect Message Out-of-order, Message Loss, and Message Duplication
# 9. Detect Message Out-of-order, Message Loss, and Message Duplication
In order to detect errors like message out-of-order and message loss through
the NB Pulsar driver, we need to set the following document level parameter
@ -701,44 +697,7 @@ params:
seq_tracking: "true"
```
For message duplication detection, if broker level message dedup configuration
is enabled ("brokerDeduplicationEnabled=true" in broker.conf), we also need to
enable this document level parameter:
```
params:
msg_dedup_broker: "true"
```
However, since message dedup can also be enabled or disabled at the namespace
or topic level, the NB Pulsar driver also checks the settings at those levels
through the API. The effective message dedup setting for a topic is determined
by the following rules:
* if topic level message dedup is not set, check namespace level setting
* if namespace level message dedup is not set, check broker level setting which
in turn is determined by the document level NB parameter **msg_dedup_broker**
* if message dedup is enabled at multiple levels, the priority sequence follows:
* topic level > namespace level > broker level
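The priority rules above amount to a simple fallback chain. This is a hypothetical illustration of the documented resolution order, not the driver's actual code:

```java
import java.util.Optional;

public class DedupResolver {
    /**
     * Resolve the effective message-dedup setting for a topic, following the
     * documented priority: topic level > namespace level > broker level.
     *
     * @param topicSetting     topic-level setting, empty if not set
     * @param namespaceSetting namespace-level setting, empty if not set
     * @param msgDedupBroker   broker-level setting, driven by the document-level
     *                         NB parameter msg_dedup_broker
     * @return whether message dedup is effectively enabled for the topic
     */
    public static boolean effectiveDedup(Optional<Boolean> topicSetting,
                                         Optional<Boolean> namespaceSetting,
                                         boolean msgDedupBroker) {
        // Topic level wins if set; otherwise fall back to namespace, then broker.
        return topicSetting.orElseGet(
            () -> namespaceSetting.orElse(msgDedupBroker));
    }
}
```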
The detection logic relies on the fact that the NB execution cycle number
increases monotonically by 1 for every cycle. When publishing a series of
messages, the current NB cycle number is attached as a message property, which
therefore also increases monotonically by 1.
When receiving the messages, if the sequence number stored in the message
property is not monotonically increasing, or if there is a gap larger than 1,
then it must be one of the following errors:
* If the current message sequence ID is less than the previous message sequence ID,
  it is a message out-of-order error, and a **PulsarMsgOutOfOrderException**
  is thrown.
* If the current message sequence ID is more than 1 greater than the previous
  message sequence ID, it is a message loss error, and a **PulsarMsgLossException**
  is thrown.
* If message dedup is enabled and the current message sequence ID is equal to the
  previous message sequence ID, it is a message duplication error, and a
  **PulsarMsgDuplicateException** is thrown.
In each case, a runtime error is thrown with a corresponding error message.
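The checks above can be sketched as follows. This is a hypothetical illustration of the documented logic; the real driver throws the named exceptions rather than returning status strings:

```java
public class SeqTracker {
    private long prevSeq = -1; // no message seen yet

    /**
     * Check an incoming message sequence ID against the previous one.
     *
     * @param seq          the sequence ID carried in the message property
     * @param dedupEnabled whether message dedup is effectively enabled
     * @return a status string naming the detected condition
     */
    public String check(long seq, boolean dedupEnabled) {
        String result;
        if (prevSeq < 0 || seq == prevSeq + 1) {
            result = "OK";               // first message, or monotonically +1
        } else if (seq < prevSeq) {
            result = "OUT_OF_ORDER";     // would throw PulsarMsgOutOfOrderException
        } else if (seq > prevSeq + 1) {
            result = "LOSS";             // gap > 1: would throw PulsarMsgLossException
        } else if (dedupEnabled) {
            result = "DUPLICATE";        // seq == prevSeq: PulsarMsgDuplicateException
        } else {
            result = "OK";               // duplicate, but dedup checking disabled
        }
        prevSeq = Math.max(prevSeq, seq);
        return result;
    }
}
```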
## 1.9. NB Activity Execution Parameters
# 10. NB Activity Execution Parameters
At the moment, the following NB Pulsar driver **specific** activity
parameters are supported:
@ -756,7 +715,7 @@ reference to NB documentation for more parameters
* cycles=<total_NB_cycle_execution_number>
* --report-csv-to <metrics_output_dir_name>
## 1.10. NB Pulsar Driver Execution Example
# 11. NB Pulsar Driver Execution Example
**NOTE**: in the following examples, the Pulsar service URL is
**pulsar://localhost:6650**; please change it accordingly for your own Pulsar
@ -769,7 +728,7 @@ environment.
```
2. Run Pulsar producer batch API to produce 1M messages with 2 NB threads.
**NOTE**: *seq=* must be set to **concat** for the batch API to work properly!
```bash
<nb_cmd> run driver=pulsar seq=concat tags=phase:batch-producer threads=2 cycles=1M web_url=http://localhost:8080 service_url=pulsar://localhost:6650 config=<dir>/config.properties yaml=<dir>/pulsar.yaml --report-csv-to <metrics_folder_path>
```
@ -780,8 +739,7 @@ environment.
<nb_cmd> run driver=pulsar tags=phase:consumer cycles=100 web_url=http://localhost:8080 service_url=pulsar://localhost:6650 config=<dir>/config.properties yaml=<dir>/pulsar.yaml
```
## 1.11. Appendix A. Template Global Setting File (config.properties)
# 12. Appendix A. Template Global Setting File (config.properties)
```properties
schema.type =
schema.definition =
@ -812,95 +770,3 @@ reader.receiverQueueSize =
reader.readerName =
reader.startMessagePos =
```
---
# 2. TODO : Design Revisit -- Advanced Driver Features
**NOTE**: The following text is based on the original multi-layer API
caching design which is not fully implemented at the moment. We need to
revisit the original design at some point in order to achieve maximum
testing flexibility.
To summarize, the original caching design has the following key
requirements:
* **Requirement 1**: Each NB Pulsar activity is able to launch and cache
multiple **client spaces**
* **Requirement 2**: Each client space can launch and cache multiple
Pulsar operators of the same type (producer, consumer, etc.)
* **Requirement 3**: The size of each Pulsar operator-specific cache
  space is configurable.
In the current implementation, only requirement 2 is implemented.
* For requirement 1, the current implementation only supports one client
space per NB Pulsar activity
* For requirement 3, the cache space size is not configurable (no limit at
the moment)
## 2.1. Other Activity Parameters
- **maxcached** - A default value to be applied to `max_clients`,
`max_producers`, `max_consumers`.
- default: `max_cached=100`
- **max_clients** - Clients cache size. This is the number of client
instances which are allowed to be cached in the NoSQLBench client
  runtime. The client cache automatically maintains unique
  client instances internally. default: _maxcached_
- **max_operators** - Producers/Consumers/Readers cache size (per client
instance). Limits the number of instances which are allowed to be cached
per client instance. default: _maxcached_
## 2.2. API Caching
This driver is tailored around the multi-tenancy and topic naming scheme
that is part of Apache Pulsar. Specifically, you can create an arbitrary
number of client instances, producers (per client), and consumers (per
client) depending on your testing requirements.
Further, the topic URI is composed from the provided qualifiers of
`persistence`, `tenant`, `namespace`, and `topic`, or you can provide a
fully-composed value in the `persistence://tenant/namespace/topic`
form.
### 2.2.1. Instancing Controls
Normative usage of the Apache Pulsar API follows a strictly enforced
binding of topics to producers and consumers. As well, clients may be
customized with different behavior for advanced testing scenarios. There
is a significant variety of messaging and concurrency schemes seen in
modern architectures. Thus, it is important that testing tools rise to the
occasion by letting users configure their testing runtimes to emulate
applications as they are found in practice. To this end, the NoSQLBench
driver for Apache Pulsar provides a set of controls within its op template
format which allow for flexible yet intuitive instancing in the client
runtime. This is enabled directly by using nominative variables for
instance names where needed. When the instance names are not provided for
an operation, defaults are used to emulate a simple configuration.
Since this is a new capability in a NoSQLBench driver, how it works is
explained below:
When a pulsar cycle is executed, the operation is synthesized from the op
template fields as explained below under _Op Fields_. This happens in a
specific order:
1. The client instance name is resolved. If a `client` field is provided,
this is taken as the client instance name. If not, it is set
to `default`.
2. The named client instance is fetched from the cache, or created and
cached if it does not yet exist.
3. The topic_uri is resolved. This is the value to be used with
`.topic(...)` calls in the API. The op fields below explain how to
control this value.
4. For _send_ operations, a producer is named and created if needed. By
default, the producer is named after the topic_uri above. You can
override this by providing a value for `producer`.
5. For _recv_ operations, a consumer is named and created if needed. By
default, the consumer is named after the topic_uri above. You can
override this by providing a value for `consumer`.
The most important detail for understanding the instancing controls is
that clients, producers, and consumers are all named and cached in the
specific order above.
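The resolution order above can be sketched roughly as follows (class and method names are hypothetical; the actual driver caches real Pulsar client and producer instances rather than placeholder objects):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class InstanceResolver {
    private final Map<String, Object> clients = new ConcurrentHashMap<>();
    private final Map<String, Object> producers = new ConcurrentHashMap<>();

    /** Steps 1-2: resolve the client name (defaulting to "default") and cache it. */
    public Object resolveClient(String clientField) {
        String name = (clientField != null) ? clientField : "default";
        // create-and-cache on first use; reuse the cached instance afterwards
        return clients.computeIfAbsent(name, n -> new Object() /* build client */);
    }

    /** Steps 3-4: producers default to being named after the topic_uri
     *  unless an explicit producer name is provided. */
    public Object resolveProducer(String producerField, String topicUri) {
        String name = (producerField != null) ? producerField : topicUri;
        return producers.computeIfAbsent(name, n -> new Object() /* build producer */);
    }
}
```

Because the names are resolved in this fixed order, two op templates that resolve to the same names share the same cached instances, which is what makes the instancing controls composable.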
View File
@ -7,7 +7,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -22,14 +22,14 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>drivers-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<scope>compile</scope>
</dependency>
View File
@ -7,7 +7,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -24,19 +24,19 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>drivers-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-stdout</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
</dependencies>
View File
@ -7,7 +7,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -22,13 +22,13 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>drivers-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
View File
@ -5,7 +5,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -23,13 +23,13 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>nb-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>virtdata-userlibs</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
View File
@ -5,7 +5,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -23,25 +23,25 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>nb-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>drivers-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>nb-annotations</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>virtdata-userlibs</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
View File
@ -64,7 +64,7 @@ public class WorkloadDesc implements Comparable<WorkloadDesc> {
if (!description.isEmpty()) {
// sb.append("# description:\n");
String formattedDesc = "# "+ description.split("\n")[0];
String formattedDesc = "# "+ description.split("[\n.;]")[0];
sb.append(formattedDesc).append("\n");
while (sb.toString().endsWith("\n\n")) {
sb.setLength(sb.length() - 1);
View File
@ -308,4 +308,19 @@ public class CommandTemplate {
}
return true;
}
/**
* This should only be used to provide a view of a field definition, never for actual use in a payload.
* @param varname The field name which you want to explain
* @return A string representation of the field name
*/
public String getFieldDescription(String varname) {
if (this.isDynamic(varname)) {
return "dynamic: " + this.dynamics.get(varname).toString();
} else if (this.isStatic(varname)) {
return "static: " + this.getStatic(varname);
} else {
return "UNDEFINED";
}
}
}
View File
@ -4,7 +4,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -23,13 +23,13 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-core</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-docker</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
</dependencies>
View File
@ -102,14 +102,16 @@ public class NBCLI {
String sessionName = SessionNamer.format(globalOptions.getSessionName());
loggerConfig
.setSessionName(sessionName)
.setConsoleLevel(globalOptions.getConsoleLogLevel())
.setConsolePattern(globalOptions.getConsoleLoggingPattern())
.setLogfileLevel(globalOptions.getScenarioLogLevel())
.getLoggerLevelOverrides(globalOptions.getLogLevelOverrides())
.setMaxLogs(globalOptions.getLogsMax())
.setLogsDirectory(globalOptions.getLogsDirectory())
.activate();
.setSessionName(sessionName)
.setConsoleLevel(globalOptions.getConsoleLogLevel())
.setConsolePattern(globalOptions.getConsoleLoggingPattern())
.setLogfileLevel(globalOptions.getScenarioLogLevel())
.setLogfilePattern(globalOptions.getLogfileLoggingPattern())
.getLoggerLevelOverrides(globalOptions.getLogLevelOverrides())
.setMaxLogs(globalOptions.getLogsMax())
.setLogsDirectory(globalOptions.getLogsDirectory())
.setAnsiEnabled(globalOptions.isEnableAnsi())
.activate();
ConfigurationFactory.setConfigurationFactory(loggerConfig);
logger = LogManager.getLogger("NBCLI");
@ -124,7 +126,7 @@ public class NBCLI {
}
logger.info("Running NoSQLBench Version " + new VersionInfo().getVersion());
logger.info("command-line: "+Arrays.stream(args).collect(Collectors.joining(" ")));
logger.info("command-line: " + Arrays.stream(args).collect(Collectors.joining(" ")));
logger.info("client-hardware: " + SystemId.getHostSummary());
boolean dockerMetrics = globalOptions.wantsDockerMetrics();
@ -135,10 +137,10 @@ public class NBCLI {
int mOpts = (dockerMetrics ? 1 : 0) + (dockerMetricsAt != null ? 1 : 0) + (reportGraphiteTo != null ? 1 : 0);
if (mOpts > 1 && (reportGraphiteTo == null || annotatorsConfig == null)) {
throw new BasicError("You have multiple conflicting options which attempt to set\n" +
" the destination for metrics and annotations. Please select only one of\n" +
" --docker-metrics, --docker-metrics-at <addr>, or other options like \n" +
" --report-graphite-to <addr> and --annotators <config>\n" +
" For more details, see run 'nb help docker-metrics'");
" the destination for metrics and annotations. Please select only one of\n" +
" --docker-metrics, --docker-metrics-at <addr>, or other options like \n" +
" --report-graphite-to <addr> and --annotators <config>\n" +
" For more details, see run 'nb help docker-metrics'");
}
String metricsAddr = null;
@ -148,13 +150,13 @@ public class NBCLI {
logger.info("Docker metrics is enabled. Docker must be installed for this to work");
DockerMetricsManager dmh = new DockerMetricsManager();
Map<String, String> dashboardOptions = Map.of(
DockerMetricsManager.GRAFANA_TAG, globalOptions.getDockerGrafanaTag(),
DockerMetricsManager.PROM_TAG, globalOptions.getDockerPromTag(),
DockerMetricsManager.TSDB_RETENTION, String.valueOf(globalOptions.getDockerPromRetentionDays())
DockerMetricsManager.GRAFANA_TAG, globalOptions.getDockerGrafanaTag(),
DockerMetricsManager.PROM_TAG, globalOptions.getDockerPromTag(),
DockerMetricsManager.TSDB_RETENTION, String.valueOf(globalOptions.getDockerPromRetentionDays())
);
dmh.startMetrics(dashboardOptions);
String warn = "Docker Containers are started, for grafana and prometheus, hit" +
" these urls in your browser: http://<host>:3000 and http://<host>:9090";
" these urls in your browser: http://<host>:3000 and http://<host>:9090";
logger.warn(warn);
metricsAddr = "localhost";
} else if (dockerMetricsAt != null) {
@ -164,8 +166,8 @@ public class NBCLI {
if (metricsAddr != null) {
reportGraphiteTo = metricsAddr + ":9109";
annotatorsConfig = "[{type:'log',level:'info'},{type:'grafana',baseurl:'http://" + metricsAddr + ":3000" +
"/'," +
"tags:'appname:nosqlbench',timeoutms:5000,onerror:'warn'}]";
"/'," +
"tags:'appname:nosqlbench',timeoutms:5000,onerror:'warn'}]";
} else {
annotatorsConfig = "[{type:'log',level:'info'}]";
}
@ -230,22 +232,22 @@ public class NBCLI {
logger.debug("user requests to copy out " + resourceToCopy);
Optional<Content<?>> tocopy = NBIO.classpath()
.prefix("activities")
.prefix(options.wantsIncludes())
.name(resourceToCopy).extension(RawStmtsLoader.YAML_EXTENSIONS).first();
.prefix("activities")
.prefix(options.wantsIncludes())
.name(resourceToCopy).extension(RawStmtsLoader.YAML_EXTENSIONS).first();
if (tocopy.isEmpty()) {
tocopy = NBIO.classpath()
.prefix().prefix(options.wantsIncludes())
.prefix(options.wantsIncludes())
.name(resourceToCopy).first();
.prefix().prefix(options.wantsIncludes())
.prefix(options.wantsIncludes())
.name(resourceToCopy).first();
}
Content<?> data = tocopy.orElseThrow(
() -> new BasicError(
"Unable to find " + resourceToCopy +
" in classpath to copy out")
() -> new BasicError(
"Unable to find " + resourceToCopy +
" in classpath to copy out")
);
Path writeTo = Path.of(data.asPath().getFileName().toString());
@ -285,7 +287,7 @@ public class NBCLI {
if (options.wantsTopicalHelp()) {
Optional<String> helpDoc = MarkdownDocInfo.forHelpTopic(options.wantsTopicalHelpFor());
System.out.println(helpDoc.orElseThrow(
() -> new RuntimeException("No help could be found for " + options.wantsTopicalHelpFor())
() -> new RuntimeException("No help could be found for " + options.wantsTopicalHelpFor())
));
System.exit(0);
}
@ -333,15 +335,15 @@ public class NBCLI {
}
for (
NBCLIOptions.LoggerConfigData histoLogger : options.getHistoLoggerConfigs()) {
NBCLIOptions.LoggerConfigData histoLogger : options.getHistoLoggerConfigs()) {
ActivityMetrics.addHistoLogger(sessionName, histoLogger.pattern, histoLogger.file, histoLogger.interval);
}
for (
NBCLIOptions.LoggerConfigData statsLogger : options.getStatsLoggerConfigs()) {
NBCLIOptions.LoggerConfigData statsLogger : options.getStatsLoggerConfigs()) {
ActivityMetrics.addStatsLogger(sessionName, statsLogger.pattern, statsLogger.file, statsLogger.interval);
}
for (
NBCLIOptions.LoggerConfigData classicConfigs : options.getClassicHistoConfigs()) {
NBCLIOptions.LoggerConfigData classicConfigs : options.getClassicHistoConfigs()) {
ActivityMetrics.addClassicHistos(sessionName, classicConfigs.pattern, classicConfigs.file, classicConfigs.interval);
}
View File
@ -34,10 +34,11 @@ public class NBCLIOptions {
private static final String METRICS_PREFIX = "--metrics-prefix";
private static final String ANNOTATE_EVENTS = "--annotate";
private static final String ANNOTATORS_CONFIG = "--annotators";
private static final String DEFAULT_ANNOTATORS = "all";
// Enabled if the TERM env var is provided
private final static String ANSI = "--ansi";
private final static String DEFAULT_CHART_HDR_LOG_NAME = "hdrdata-for-chart.log";
@ -81,6 +82,9 @@ public class NBCLIOptions {
private final static String REPORT_SUMMARY_TO_DEFAULT = "stdout:60,_LOGS_/_SESSION_.summary";
private static final String PROGRESS = "--progress";
private static final String WITH_LOGGING_PATTERN = "--with-logging-pattern";
private static final String LOGGING_PATTERN = "--logging-pattern";
private static final String CONSOLE_PATTERN = "--console-pattern";
private static final String LOGFILE_PATTERN = "--logfile-pattern";
private static final String LOG_HISTOGRAMS = "--log-histograms";
private static final String LOG_HISTOSTATS = "--log-histostats";
private static final String CLASSIC_HISTOGRAMS = "--classic-histograms";
@ -97,7 +101,8 @@ public class NBCLIOptions {
private static final String NASHORN_ENGINE = "--nashorn";
private static final String GRAALJS_COMPAT = "--graaljs-compat";
private static final String DEFAULT_CONSOLE_LOGGING_PATTERN = "%7r %-5level [%t] %-12logger{0} %msg%n%throwable";
private static final String DEFAULT_CONSOLE_PATTERN = "TERSE";
private static final String DEFAULT_LOGFILE_PATTERN = "VERBOSE";
// private static final String DEFAULT_CONSOLE_LOGGING_PATTERN = "%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n";
@ -128,7 +133,8 @@ public class NBCLIOptions {
private boolean wantsMarkerTypes = false;
private String[] rleDumpOptions = new String[0];
private String[] cyclelogImportOptions = new String[0];
private String consoleLoggingPattern = DEFAULT_CONSOLE_LOGGING_PATTERN;
private String consoleLoggingPattern = DEFAULT_CONSOLE_PATTERN;
private String logfileLoggingPattern = DEFAULT_LOGFILE_PATTERN;
private NBLogLevel logsLevel = NBLogLevel.INFO;
private Map<String, String> logLevelsOverrides = new HashMap<>();
private boolean enableChart = false;
@ -155,6 +161,7 @@ public class NBCLIOptions {
private final String hdrForChartFileName = DEFAULT_CHART_HDR_LOG_NAME;
private String dockerPromRetentionDays = "183d";
private String reportSummaryTo = REPORT_SUMMARY_TO_DEFAULT;
private boolean enableAnsi = System.getenv("TERM")!=null && !System.getenv("TERM").isEmpty();
public String getAnnotatorsConfig() {
return annotatorsConfig;
@ -177,6 +184,14 @@ public class NBCLIOptions {
this.showStackTraces=wantsStackTraces;
}
public boolean isEnableAnsi() {
return enableAnsi;
}
public String getLogfileLoggingPattern() {
return logfileLoggingPattern;
}
public enum Mode {
ParseGlobalsOnly,
ParseAllOptions
@ -271,6 +286,11 @@ public class NBCLIOptions {
}
arglist = argsfile.process(arglist);
break;
case ANSI:
arglist.removeFirst();
String doEnableAnsi = readWordOrThrow(arglist, "enable/disable ansi codes");
enableAnsi=doEnableAnsi.toLowerCase(Locale.ROOT).matches("enabled|enable|true");
break;
case DASH_V_INFO:
consoleLevel = NBLogLevel.INFO;
arglist.removeFirst();
@ -353,9 +373,20 @@ public class NBCLIOptions {
arglist.removeFirst();
logLevelsOverrides = parseLogLevelOverrides(readWordOrThrow(arglist, "log levels in name:LEVEL,... format"));
break;
case WITH_LOGGING_PATTERN:
case CONSOLE_PATTERN:
arglist.removeFirst();
consoleLoggingPattern = readWordOrThrow(arglist, "logging pattern");
consoleLoggingPattern =readWordOrThrow(arglist, "console pattern");
break;
case LOGFILE_PATTERN:
arglist.removeFirst();
logfileLoggingPattern =readWordOrThrow(arglist, "logfile pattern");
break;
case WITH_LOGGING_PATTERN:
case LOGGING_PATTERN:
arglist.removeFirst();
String pattern = readWordOrThrow(arglist, "console and logfile pattern");
consoleLoggingPattern = pattern;
logfileLoggingPattern = pattern;
break;
case SHOW_STACKTRACES:
arglist.removeFirst();
View File
@ -6,11 +6,14 @@ import io.nosqlbench.engine.api.scenarios.WorkloadDesc;
import java.util.List;
public class NBCLIScenarios {
public static void printWorkloads(boolean includeScenarios,
String... includes) {
public static void printWorkloads(
boolean includeScenarios,
String... includes
) {
List<WorkloadDesc> workloads = List.of();
try {
workloads= NBCLIScenarioParser.getWorkloadsWithScenarioScripts(true, includes);
workloads = NBCLIScenarioParser.getWorkloadsWithScenarioScripts(true, includes);
} catch (Exception e) {
throw new RuntimeException("Error while getting workloads:" + e.getMessage(), e);
@ -24,7 +27,7 @@ public class NBCLIScenarios {
}
System.out.println(
"## To include examples, add --include=examples\n" +
"## To include examples, add --include=examples\n" +
"## To copy any of these to your local directory, use\n" +
"## --include=examples --copy=examplename\n"
);
View File
@ -4,29 +4,25 @@ Help ( You're looking at it. )
--help
Short options, like '-v' represent simple options, like verbosity.
Using multiples increases the level of the option, like '-vvv'.
Short options, like '-v', represent simple options, like verbosity. Using multiples increases the
level of the option, like '-vvv'.
Long options, like '--help' are top-level options that may only be
used once. These modify general behavior, or allow you to get more
details on how to use PROG.
Long options, like '--help' are top-level options that may only be used once. These modify general
behavior, or allow you to get more details on how to use PROG.
All other options are either commands, or named arguments to commands.
Any single word without dashes is a command that will be converted
into script form. Any option that includes an equals sign is a
named argument to the previous command. The following example
is a commandline with a command *start*, and two named arguments
to that command.
All other options are either commands, or named arguments to commands. Any single word without
dashes is a command that will be converted into script form. Any option that includes an equals sign
is a named argument to the previous command. The following example is a commandline with a command
*start*, and two named arguments to that command.
PROG start driver=diag alias=example
### Discovery options ###
These options help you learn more about running PROG, and
about the plugins that are present in your particular version.
These options help you learn more about running PROG, and about the plugins that are present in your
particular version.
Get a list of additional help topics that have more detailed
documentation:
Get a list of additional help topics that have more detailed documentation:
PROG help topics
@ -56,11 +52,10 @@ Provide the metrics that are available for scripting
### Execution Options ###
This is how you actually tell PROG what scenario to run. Each of these
commands appends script logic to the scenario that will be executed.
These are considered as commands, can occur in any order and quantity.
The only rule is that arguments in the arg=value form will apply to
the preceding script or activity.
This is how you actually tell PROG what scenario to run. Each of these commands appends script logic
to the scenario that will be executed. These are considered commands and can occur in any order and
quantity. The only rule is that arguments in the arg=value form will apply to the preceding script
or activity.
Add the named script file to the scenario, interpolating named parameters:
@ -92,12 +87,35 @@ Specify an override for one or more classes:
--log-level-override com.foobarbaz:DEBUG,com.barfoobaz:TRACE
Specify the logging pattern:
Specify the logging pattern for console and logfile:
--with-logging-pattern '%date %level [%thread] %logger{10} [%file:%line] %msg%n'
--logging-pattern '%date %level [%thread] %logger{10} [%file:%line] %msg%n'
--logging-pattern 'TERSE'
( default: '%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n' )
( See https://logback.qos.ch/manual/layouts.html#ClassicPatternLayout for format options )
Specify the logging pattern for console only:
--console-pattern '%date %level [%thread] %logger{10} [%file:%line] %msg%n'
--console-pattern 'TERSE-ANSI'
Specify the logging pattern for logfile only:
--logfile-pattern '%date %level [%thread] %logger{10} [%file:%line] %msg%n'
--logfile-pattern 'VERBOSE'
# See https://logging.apache.org/log4j/2.x/manual/layouts.html#Pattern_Layout
# These shortcuts are allowed
TERSE %8r %-5level [%t] %-12logger{0} %msg%n%throwable
VERBOSE %d{DEFAULT}{GMT} [%t] %logger %-5level: %msg%n%throwable
TERSE-ANSI %8r %highlight{%-5level} %style{%C{1.} [%t] %-12logger{0}} %msg%n%throwable
VERBOSE-ANSI %d{DEFAULT}{GMT} [%t] %highlight{%logger %-5level}: %msg%n%throwable
# ANSI variants are auto promoted for console if --ansi=enable
# ANSI variants are auto demoted for logfile in any case
Explicitly enable or disable ANSI logging support:
(ANSI support is enabled if the TERM environment variable is defined)
--ansi=enabled
--ansi=disabled
Specify a directory and enable CSV reporting of metrics:
@ -133,6 +151,13 @@ Adjust the HDR histogram precision:
--hdr-digits 3
The default is 3 digits, which creates 1000 equal-width histogram buckets for every named metric in
every reporting interval. For longer-running tests or for tests which require a finer grain of
precision in metrics, you can set this up to 4 or 5. Note that this only sets the global default.
Each activity can also override this value with the hdr_digits parameter. Be aware that each
increase in this number multiplies the amount of detail tracked on the client by 10x, so use
Adjust the progress reporting interval:
--progress console:1m
@ -141,20 +166,18 @@ or
--progress logonly:5m
NOTE: The progress indicator on console is provided by default unless
logging levels are turned up or there is a script invocation on the
command line.
NOTE: The progress indicator on console is provided by default unless logging levels are turned up
or there is a script invocation on the command line.
If you want to add in classic time decaying histogram metrics for your
histograms and timers, you may do so with this option:
If you want to add in classic time decaying histogram metrics for your histograms and timers, you
may do so with this option:
--classic-histograms prefix
--classic-histograms 'prefix:.*' # same as above
--classic-histograms 'prefix:.*specialmetrics' # subset of names
Name the current session, for logfile naming, etc
By default, this will be "scenario-TIMESTAMP", and a logfile will be created
for this name.
Name the current session, for logfile naming, etc. By default, this will be "scenario-TIMESTAMP", and
a logfile will be created for this name.
--session-name <name>
@ -162,13 +185,12 @@ Enlist nosqlbench to stand up your metrics infrastructure using a local docker r
--docker-metrics
When this option is set, nosqlbench will start graphite, prometheus, and grafana automatically
on your local docker, configure them to work together, and point nosqlbench to send metrics to
the system automatically. It also imports a base dashboard for nosqlbench and configures grafana
snapshot export to share with a central DataStax grafana instance (grafana can be found on localhost:3000
When this option is set, nosqlbench will start graphite, prometheus, and grafana automatically on
your local docker, configure them to work together, and point nosqlbench to send metrics to the
system automatically. It also imports a base dashboard for nosqlbench and configures grafana
export to share with a central DataStax grafana instance (grafana can be found on localhost:3000
with the default credentials admin/admin).
### Console Options ###
Increase console logging levels: (Default console logging level is *warning*)
@ -179,9 +201,8 @@ Increase console logging levels: (Default console logging level is *warning*)
--progress console:1m (disables itself if -v options are used)
These levels affect *only* the console output level. Other logging level
parameters affect logging to the scenario log, stored by default in
logs/...
These levels affect *only* the console output level. Other logging level parameters affect logging
to the scenario log, stored by default in logs/...
Show version, long form, with artifact coordinates.
@ -189,10 +210,9 @@ Show version, long form, with artifact coordinates.
### Summary Reporting
The classic metrics logging format is used to report results into the
logfile for every scenario. This format is not generally human-friendly,
so a better summary report is provided by default to the console and/or a
specified summary file by default.
The classic metrics logging format is used to report results into the logfile for every scenario.
This format is not generally human-friendly, so a better summary report is provided by default to
the console and/or a specified summary file by default.
Examples:
@ -205,16 +225,14 @@ Examples:
# do both (the default)
--report-summary-to stdout:60,_LOGS_/_SESSION_.summary
Values of `stdout` or `stderr` send summaries directly to the console,
and any other pattern is taken as a file name.
Values of `stdout` or `stderr` send summaries directly to the console, and any other pattern is
taken as a file name.
You can use `_SESSION_` and `_LOGS_` to automatically name the file
according to the current session name and log directory.
You can use `_SESSION_` and `_LOGS_` to automatically name the file according to the current session
name and log directory.
The reason for the optional timing parameter is to allow for results of
short scenario runs to be squelched. Metrics for short runs are not
generally accurate nor meaningful. Spamming the console with boiler-plate
in such cases is undesirable. If the minimum session length is not
specified, it is assumed to be 0, meaning that a report will always show
on that channel.
The reason for the optional timing parameter is to allow for results of short scenario runs to be
squelched. Metrics for short runs are generally neither accurate nor meaningful. Spamming the
console with boiler-plate in such cases is undesirable. If the minimum session length is not
specified, it is assumed to be 0, meaning that a report will always show on that channel.
View File
@ -5,7 +5,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -21,7 +21,7 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
</dependencies>
View File
@ -5,7 +5,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -28,13 +28,13 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>drivers-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
@ -77,15 +77,15 @@
<artifactId>profiler</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.graalvm.tools</groupId>
<artifactId>chromeinspector</artifactId>
<scope>runtime</scope>
</dependency>
<!-- <dependency>-->
<!-- <groupId>org.graalvm.tools</groupId>-->
<!-- <artifactId>chromeinspector</artifactId>-->
<!-- <scope>runtime</scope>-->
<!-- </dependency>-->
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-clients</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<scope>compile</scope>
</dependency>
View File
@ -28,12 +28,31 @@ import org.apache.logging.log4j.Logger;
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.nio.charset.StandardCharsets;
import java.util.HashSet;
import java.util.Optional;
import java.util.Set;
import java.util.concurrent.TimeUnit;
public class ScenarioResult {
private final static Logger logger = LogManager.getLogger(ScenarioResult.class);
public static final Set<MetricAttribute> INTERVAL_ONLY_METRICS = Set.of(
MetricAttribute.MIN,
MetricAttribute.MAX,
MetricAttribute.MEAN,
MetricAttribute.STDDEV,
MetricAttribute.P50,
MetricAttribute.P75,
MetricAttribute.P95,
MetricAttribute.P98,
MetricAttribute.P99,
MetricAttribute.P999);
public static final Set<MetricAttribute> OVER_ONE_MINUTE_METRICS = Set.of(
MetricAttribute.MEAN_RATE,
MetricAttribute.M1_RATE,
MetricAttribute.M5_RATE,
MetricAttribute.M15_RATE
);
private final long startedAt;
private final long endedAt;
@ -60,12 +79,17 @@ public class ScenarioResult {
public String getSummaryReport() {
ByteArrayOutputStream os = new ByteArrayOutputStream();
PrintStream ps = new PrintStream(os);
ConsoleReporter consoleReporter = ConsoleReporter.forRegistry(ActivityMetrics.getMetricRegistry())
ConsoleReporter.Builder builder = ConsoleReporter.forRegistry(ActivityMetrics.getMetricRegistry())
.convertDurationsTo(TimeUnit.MICROSECONDS)
.convertRatesTo(TimeUnit.SECONDS)
.filter(MetricFilter.ALL)
.outputTo(ps)
.build();
.outputTo(ps);
Set<MetricAttribute> disabled = new HashSet<>(INTERVAL_ONLY_METRICS);
if (this.getElapsedMillis()<60000) {
disabled.addAll(OVER_ONE_MINUTE_METRICS);
}
builder.disabledMetricAttributes(disabled);
ConsoleReporter consoleReporter = builder.build();
consoleReporter.report();
ps.flush();
@ -106,6 +130,9 @@ public class ScenarioResult {
}
public void reportToLog() {
logger.debug("-- WARNING: Metrics which are taken per-interval (like histograms) will not have --");
logger.debug("-- active data on this last report. (The workload has already stopped.) Record --");
logger.debug("-- metrics to an external format to see values for each reporting interval. --");
logger.debug("-- BEGIN METRICS DETAIL --");
Log4JMetricsReporter reporter = Log4JMetricsReporter.forRegistry(ActivityMetrics.getMetricRegistry())
.withLoggingLevel(Log4JMetricsReporter.LoggingLevel.DEBUG)
@ -139,7 +166,7 @@ public class ScenarioResult {
});
printStream.println("-- BEGIN NON-ZERO metric counts (run longer for full report):");
printStream.print(sb.toString());
printStream.print(sb);
printStream.println("-- END NON-ZERO metric counts:");
}
View File
@ -28,9 +28,31 @@ import java.util.stream.Collectors;
//@Plugin(name = "CustomConfigurationFactory", category = ConfigurationFactory.CATEGORY)
//@Order(50)
// Can't use plugin injection, since we need a tailored instance before logging
// Can't use plugin injection, since we need a tailored instance before logging is fully initialized
/**
* This is a custom programmatic logger config handler which allows for a variety of
* logging features to be controlled at runtime.
*
* @see <a href="https://logging.apache.org/log4j/2.x/manual/layouts.html#Pattern_Layout">Pattern Layout</a>
*/
public class LoggerConfig extends ConfigurationFactory {
public static Map<String, String> STANDARD_FORMATS = Map.of(
"TERSE", "%8r %-5level [%t] %-12logger{0} %msg%n%throwable",
"VERBOSE", "%d{DEFAULT}{GMT} [%t] %logger %-5level: %msg%n%throwable",
"TERSE-ANSI", "%8r %highlight{%-5level} %style{%C{1.} [%t] %-12logger{0}} %msg%n%throwable",
"VERBOSE-ANSI", "%d{DEFAULT}{GMT} [%t] %highlight{%logger %-5level}: %msg%n%throwable"
);
/**
 * Some included libraries are spammy and interfere with normal diagnostic visibility, so
* we squelch them to some reasonable level so they aren't a nuisance.
*/
public static Map<String, Level> BUILTIN_OVERRIDES = Map.of(
"oshi.util", Level.INFO
);
/**
* ArgsFile
* Environment
@ -50,11 +72,17 @@ public class LoggerConfig extends ConfigurationFactory {
private String sessionName;
private int maxLogfiles = 100;
private String logfileLocation;
private boolean ansiEnabled;
public LoggerConfig() {
}
public LoggerConfig setAnsiEnabled(boolean ansiEnabled) {
this.ansiEnabled = ansiEnabled;
return this;
}
public LoggerConfig setConsoleLevel(NBLogLevel level) {
this.consoleLevel = level;
return this;
@ -97,20 +125,20 @@ public class LoggerConfig extends ConfigurationFactory {
builder.setStatusLevel(internalLoggingStatusThreshold);
builder.add(
builder.newFilter(
"ThresholdFilter",
Filter.Result.ACCEPT,
Filter.Result.NEUTRAL
).addAttribute("level", builderThresholdLevel)
builder.newFilter(
"ThresholdFilter",
Filter.Result.ACCEPT,
Filter.Result.NEUTRAL
).addAttribute("level", builderThresholdLevel)
);
// CONSOLE appender
AppenderComponentBuilder appenderBuilder =
builder.newAppender("console", "CONSOLE")
.addAttribute("target", ConsoleAppender.Target.SYSTEM_OUT);
builder.newAppender("console", "CONSOLE")
.addAttribute("target", ConsoleAppender.Target.SYSTEM_OUT);
appenderBuilder.add(builder.newLayout("PatternLayout")
.addAttribute("pattern", consolePattern));
.addAttribute("pattern", consolePattern));
// appenderBuilder.add(
// builder.newFilter("MarkerFilter", Filter.Result.DENY, Filter.Result.NEUTRAL)
@ -120,8 +148,8 @@ public class LoggerConfig extends ConfigurationFactory {
// Log4J internal logging
builder.add(builder.newLogger("org.apache.logging.log4j", Level.DEBUG)
.add(builder.newAppenderRef("console"))
.addAttribute("additivity", false));
.add(builder.newAppenderRef("console"))
.addAttribute("additivity", false));
if (sessionName != null) {
@ -135,51 +163,56 @@ public class LoggerConfig extends ConfigurationFactory {
// LOGFILE appender
LayoutComponentBuilder logfileLayout = builder.newLayout("PatternLayout")
.addAttribute("pattern", logfilePattern);
.addAttribute("pattern", logfilePattern);
String filebase = getSessionName().replaceAll("\\s", "_");
String logfilePath = loggerDir.resolve(filebase + ".log").toString();
this.logfileLocation = logfilePath;
String archivePath = loggerDir.resolve(filebase + "-TIMESTAMP.log.gz").toString()
.replaceAll("TIMESTAMP", "%d{MM-dd-yy}");
.replaceAll("TIMESTAMP", "%d{MM-dd-yy}");
ComponentBuilder triggeringPolicy = builder.newComponent("Policies")
.addComponent(builder.newComponent("CronTriggeringPolicy").addAttribute("schedule", "0 0 0 * * ?"))
.addComponent(builder.newComponent("SizeBasedTriggeringPolicy").addAttribute("size", "100M"));
.addComponent(builder.newComponent("CronTriggeringPolicy").addAttribute("schedule", "0 0 0 * * ?"))
.addComponent(builder.newComponent("SizeBasedTriggeringPolicy").addAttribute("size", "100M"));
AppenderComponentBuilder logsAppenderBuilder =
builder.newAppender("SCENARIO_APPENDER", RollingFileAppender.PLUGIN_NAME)
.addAttribute("fileName", logfilePath)
.addAttribute("filePattern", archivePath)
.addAttribute("append", false)
.add(logfileLayout)
.addComponent(triggeringPolicy);
builder.newAppender("SCENARIO_APPENDER", RollingFileAppender.PLUGIN_NAME)
.addAttribute("fileName", logfilePath)
.addAttribute("filePattern", archivePath)
.addAttribute("append", false)
.add(logfileLayout)
.addComponent(triggeringPolicy);
builder.add(logsAppenderBuilder);
rootBuilder.add(
builder.newAppenderRef("SCENARIO_APPENDER")
.addAttribute("level", Level.valueOf(getEffectiveFileLevel().toString()))
builder.newAppenderRef("SCENARIO_APPENDER")
.addAttribute("level", Level.valueOf(getEffectiveFileLevel().toString()))
);
}
rootBuilder.add(
builder.newAppenderRef("console")
.addAttribute("level",
Level.valueOf(consoleLevel.toString())
)
builder.newAppenderRef("console")
.addAttribute("level",
Level.valueOf(consoleLevel.toString())
)
);
builder.add(rootBuilder);
if (logLevelOverrides != null) {
logLevelOverrides.forEach((k, v) -> {
Level olevel = Level.valueOf(v);
builder.add(builder.newLogger(k, olevel)
.add(builder.newAppenderRef("console"))
.add(builder.newAppenderRef("SCENARIO_APPENDER"))
.addAttribute("additivity", true));
});
}
BUILTIN_OVERRIDES.forEach((k, v) -> {
builder.add(builder.newLogger(k, v)
.add(builder.newAppenderRef("console"))
.add(builder.newAppenderRef("SCENARIO_APPENDER"))
.addAttribute("additivity", true));
});
logLevelOverrides.forEach((k, v) -> {
Level olevel = Level.valueOf(v);
builder.add(builder.newLogger(k, olevel)
.add(builder.newAppenderRef("console"))
.add(builder.newAppenderRef("SCENARIO_APPENDER"))
.addAttribute("additivity", true));
});
BuiltConfiguration builtConfig = builder.build();
return builtConfig;
@ -209,7 +242,7 @@ public class LoggerConfig extends ConfigurationFactory {
if (!Files.exists(loggerDir)) {
try {
FileAttribute<Set<PosixFilePermission>> attrs = PosixFilePermissions.asFileAttribute(
PosixFilePermissions.fromString("rwxrwx---")
PosixFilePermissions.fromString("rwxrwx---")
);
Path directory = Files.createDirectory(loggerDir, attrs);
} catch (Exception e) {
@ -220,7 +253,19 @@ public class LoggerConfig extends ConfigurationFactory {
}
public LoggerConfig setConsolePattern(String consoleLoggingPattern) {
this.consolePattern = consoleLoggingPattern;
consoleLoggingPattern = (ansiEnabled && STANDARD_FORMATS.containsKey(consoleLoggingPattern + "-ANSI"))
    ? consoleLoggingPattern + "-ANSI" : consoleLoggingPattern;
this.consolePattern = STANDARD_FORMATS.getOrDefault(consoleLoggingPattern, consoleLoggingPattern);
return this;
}
public LoggerConfig setLogfilePattern(String logfileLoggingPattern) {
logfileLoggingPattern = (logfileLoggingPattern.endsWith("-ANSI") && STANDARD_FORMATS.containsKey(logfileLoggingPattern))
    ? logfileLoggingPattern.substring(0, logfileLoggingPattern.length() - "-ANSI".length()) : logfileLoggingPattern;
this.logfilePattern = STANDARD_FORMATS.getOrDefault(logfileLoggingPattern, logfileLoggingPattern);
return this;
}
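The suffix handling above (logfiles never use ANSI pattern variants) can be sketched in isolation. The class and method names below are hypothetical illustrations, and only two of the standard formats are reproduced:

```java
import java.util.Map;

public class AnsiDemotionSketch {
    static final Map<String, String> STANDARD_FORMATS = Map.of(
        "TERSE", "%8r %-5level [%t] %-12logger{0} %msg%n%throwable",
        "TERSE-ANSI", "%8r %highlight{%-5level} %style{%C{1.} [%t] %-12logger{0}} %msg%n%throwable"
    );

    // Strip a "-ANSI" suffix when the name is a known standard format,
    // then resolve any remaining shortcut name to its full pattern.
    static String demoteAnsi(String pattern) {
        if (pattern.endsWith("-ANSI") && STANDARD_FORMATS.containsKey(pattern)) {
            pattern = pattern.substring(0, pattern.length() - "-ANSI".length());
        }
        return STANDARD_FORMATS.getOrDefault(pattern, pattern);
    }

    public static void main(String[] args) {
        // "TERSE-ANSI" demotes to "TERSE", then resolves to the plain pattern
        System.out.println(demoteAnsi("TERSE-ANSI"));
        // Non-shortcut patterns pass through unchanged
        System.out.println(demoteAnsi("%msg%n"));
    }
}
```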
@ -263,9 +308,9 @@ public class LoggerConfig extends ConfigurationFactory {
}
List<File> toDelete = filesList.stream()
.sorted(fileTimeComparator)
.limit(remove)
.collect(Collectors.toList());
.sorted(fileTimeComparator)
.limit(remove)
.collect(Collectors.toList());
for (File file : toDelete) {
logger.info("removing extra logfile: " + file.getPath());
View File
@ -4,7 +4,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -56,7 +56,7 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
</dependencies>
View File
@ -4,7 +4,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -28,7 +28,7 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>docsys</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
</dependencies>
View File
@ -10,26 +10,29 @@ This is the same documentation you get in markdown format with the
---------------------------------------
### Command-Line Options ###
Help ( You're looking at it. )
--help
Short options, like '-v' represent simple options, like verbosity. Using multiples increases the level of the option,
like '-vvv'.
Short options, like '-v', represent simple options, like verbosity. Using multiples increases the
level of the option, like '-vvv'.
Long options, like '--help' are top-level options that may only be used once. These modify general behavior, or allow
you to get more details on how to use nosqlbench.
Long options, like '--help', are top-level options that may only be used once. These modify general
behavior, or allow you to get more details on how to use nosqlbench.
All other options are either commands, or named arguments to commands. Any single word without dashes is a command that
will be converted into script form. Any option that includes an equals sign is a named argument to the previous command.
The following example is a commandline with a command *start*, and two named arguments to that command.
All other options are either commands, or named arguments to commands. Any single word without
dashes is a command that will be converted into script form. Any option that includes an equals sign
is a named argument to the previous command. The following example is a commandline with a command
*start*, and two named arguments to that command.
./nb start driver=diag alias=example
### Discovery options ###
These options help you learn more about running nosqlbench, and about the plugins that are present in your particular
version.
These options help you learn more about running nosqlbench, and about the plugins that are
present in your particular version.
Get a list of additional help topics that have more detailed documentation:
@ -61,9 +64,10 @@ Provide the metrics that are available for scripting
### Execution Options ###
This is how you actually tell nosqlbench what scenario to run. Each of these commands appends script logic to the
scenario that will be executed. These are considered as commands, can occur in any order and quantity. The only rule is
that arguments in the arg=value form will apply to the preceding script or activity.
This is how you actually tell nosqlbench what scenario to run. Each of these commands appends
script logic to the scenario that will be executed. These are considered as commands and can occur
in any order and quantity. The only rule is that arguments in the arg=value form will apply to the
preceding script or activity.
Add the named script file to the scenario, interpolating named parameters:
@ -71,7 +75,7 @@ Add the named script file to the scenario, interpolating named parameters:
Add the named activity to the scenario, interpolating named parameters
run [arg=value]...
activity [arg=value]...
### General options ###
@ -95,12 +99,35 @@ Specify an override for one or more classes:
--log-level-override com.foobarbaz:DEBUG,com.barfoobaz:TRACE
Specify the logging pattern:
Specify the logging pattern for console and logfile:
--with-logging-pattern '%date %level [%thread] %logger{10} [%file:%line] %msg%n'
--logging-pattern '%date %level [%thread] %logger{10} [%file:%line] %msg%n'
--logging-pattern 'TERSE'
( default: '%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n' )
( See https://logback.qos.ch/manual/layouts.html#ClassicPatternLayout for format options )
Specify the logging pattern for console only:
--console-pattern '%date %level [%thread] %logger{10} [%file:%line] %msg%n'
--console-pattern 'TERSE-ANSI'
Specify the logging pattern for logfile only:
--logfile-pattern '%date %level [%thread] %logger{10} [%file:%line] %msg%n'
--logfile-pattern 'VERBOSE'
# See https://logging.apache.org/log4j/2.x/manual/layouts.html#Pattern_Layout
# These shortcuts are allowed
TERSE %8r %-5level [%t] %-12logger{0} %msg%n%throwable
VERBOSE %d{DEFAULT}{GMT} [%t] %logger %-5level: %msg%n%throwable
TERSE-ANSI %8r %highlight{%-5level} %style{%C{1.} [%t] %-12logger{0}} %msg%n%throwable
VERBOSE-ANSI %d{DEFAULT}{GMT} [%t] %highlight{%logger %-5level}: %msg%n%throwable
# ANSI variants are auto promoted for console if --ansi=enable
# ANSI variants are auto demoted for logfile in any case
Explicitly enable or disable ANSI logging support:
(ANSI support is enabled if the TERM environment variable is defined)
--ansi=enabled
--ansi=disabled
Specify a directory and enable CSV reporting of metrics:
@ -110,68 +137,71 @@ Specify the graphite destination and enable reporting
--report-graphite-to <addr>[:<port>]
Specify the interval for graphite or CSV reporting in seconds (default: 10)
Specify the interval for graphite or CSV reporting in seconds:
--report-interval <interval-seconds>
--report-interval 10
Specify the metrics name prefix for graphite reporting
Specify the metrics name prefix for graphite reporting:
--metrics-prefix <metrics-prefix>
Log all HDR histogram data to a file
Log all HDR histogram data to a file:
--log-histograms histodata.log
--log-histograms 'histodata.log:.*'
--log-histograms 'histodata.log:.*:1m'
--log-histograms 'histodata.log:.*specialmetrics:10s'
Log HDR histogram stats to a CSV file
Log HDR histogram stats to a CSV file:
--log-histostats stats.csv
--log-histostats 'stats.csv:.*' # same as above
--log-histostats 'stats.csv:.*:1m' # with 1-minute interval
--log-histostats 'stats.csv:.*specialmetrics:10s'
Adjust the progress reporting interval
Adjust the HDR histogram precision:
--progress console:10s
--hdr-digits 3
The default is 3 digits, which creates 1000 equal-width histogram buckets for every named metric in
every reporting interval. For longer running tests or for tests which require a finer grain of
precision in metrics, you can set this up to 4 or 5. Note that this only sets the global default.
Each activity can also override this value with the hdr_digits parameter. Be aware that each
increase in this number multiplies the amount of detail tracked on the client by 10x, so use
caution.
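As a back-of-the-envelope check of the 10x-per-digit growth described above (a sketch of the bucket count per metric per interval, not the exact HdrHistogram accounting):

```java
public class HdrDigitsSketch {
    public static void main(String[] args) {
        // Each additional significant digit multiplies tracked detail by 10x.
        for (int digits = 3; digits <= 5; digits++) {
            long buckets = (long) Math.pow(10, digits);
            System.out.println(digits + " digits -> ~" + buckets + " buckets per metric per interval");
        }
    }
}
```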
Adjust the progress reporting interval:
--progress console:1m
or
--progress logonly:5m
If you want to add in classic time decaying histogram metrics for your histograms and timers, you may do so with this
option:
NOTE: The progress indicator on console is provided by default unless logging levels are turned up
or there is a script invocation on the command line.
If you want to add in classic time decaying histogram metrics for your histograms and timers, you
may do so with this option:
--classic-histograms prefix
--classic-histograms 'prefix:.*' # same as above
--classic-histograms 'prefix:.*specialmetrics' # subset of names
Name the current session, for logfile naming, etc. By default, this will be "scenario-TIMESTAMP", and a logfile will be
created for this name.
Name the current session, for logfile naming, etc. By default, this will be "scenario-TIMESTAMP", and
a logfile will be created for this name.
--session-name <name>
If you want to control the number of significant digits in all of the HDR metrics, including histograms and timers, then
you can do so this way:
--hdr-digits 4
The default is 4 digits, which creates 10000 equal-sized histogram buckets for every named metric in every reporting
interval. For longer running tests or for tests which do not require this level of precision in metrics, you can set this
down to 3 or 2. Note that this only sets the global default. Each activity can also override this value with the
hdr_digits parameter.
Enlist engineblock to stand up your metrics infrastructure using a local docker runtime:
Enlist nosqlbench to stand up your metrics infrastructure using a local docker runtime:
--docker-metrics
When this option is set, engineblock will start graphite, prometheus, and grafana automatically on your local docker,
configure them to work together, and point engineblock to send metrics to the system automatically. It also imports a
base dashboard for engineblock and configures grafana snapshot export to share with a central DataStax grafana instance
(grafana can be found on localhost:3000 with the default credentials admin/admin).
When this option is set, nosqlbench will start graphite, prometheus, and grafana automatically on
your local docker, configure them to work together, and point nosqlbench to send metrics to the
system automatically. It also imports a base dashboard for nosqlbench and configures grafana
snapshot export to share with a central DataStax grafana instance (grafana can be found on
localhost:3000 with the default credentials admin/admin).
### Console Options ###
@ -183,9 +213,38 @@ Increase console logging levels: (Default console logging level is *warning*)
--progress console:1m (disables itself if -v options are used)
These levels affect *only* the console output level. Other logging level parameters affect logging to the scenario log,
stored by default in logs/...
These levels affect *only* the console output level. Other logging level parameters affect logging
to the scenario log, stored by default in logs/...
Show version, long form, with artifact coordinates.
--version
### Summary Reporting
The classic metrics logging format is used to report results into the logfile for every scenario.
This format is not generally human-friendly, so a better summary report is provided by default to
the console and/or a specified summary file.
Examples:
# report to console if session ran more than 60 seconds
--report-summary-to stdout:60
# report to auto-named summary file for every session
--report-summary-to _LOGS_/_SESSION_.summary
# do both (the default)
--report-summary-to stdout:60,_LOGS_/_SESSION_.summary
Values of `stdout` or `stderr` send summaries directly to the console, and any other pattern is
taken as a file name.
You can use `_SESSION_` and `_LOGS_` to automatically name the file according to the current session
name and log directory.
The reason for the optional timing parameter is to allow for results of short scenario runs to be
squelched. Metrics for short runs are generally neither accurate nor meaningful. Spamming the
console with boiler-plate in such cases is undesirable. If the minimum session length is not
specified, it is assumed to be 0, meaning that a report will always show on that channel.
View File
@ -4,7 +4,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -22,7 +22,7 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
</dependencies>
View File
@ -4,7 +4,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -35,7 +35,7 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-cli</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
</dependencies>
View File
@ -3,7 +3,7 @@
<groupId>io.nosqlbench</groupId>
<artifactId>mvn-defaults</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<packaging>pom</packaging>
<properties>
@ -362,22 +362,59 @@
<dependencies>
<!-- core logging API and runtime -->
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-api</artifactId>
<version>2.14.0</version>
<version>2.14.1</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-core</artifactId>
<version>2.14.0</version>
<version>2.14.1</version>
</dependency>
<!-- binding for routing slf4j API calls to log4j -->
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-slf4j-impl</artifactId>
<version>2.14.1</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-slf4j-impl</artifactId>
<version>2.14.0</version>
<artifactId>log4j-jcl</artifactId>
<version>2.14.1</version>
</dependency>
<!-- <dependency>-->
<!-- <groupId>org.apache.logging.log4j</groupId>-->
<!-- <artifactId>log4j-1.2-api</artifactId>-->
<!-- <version>2.14.0</version>-->
<!-- </dependency>-->
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>slf4j-impl</artifactId>
<version>2.0-alpha2</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.30</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<version>1.7.26</version>
</dependency>
<dependency>
View File
@ -5,7 +5,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
View File
@ -7,7 +7,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -32,7 +32,7 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>nb-annotations</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
View File
@ -137,9 +137,10 @@ public class SSLKsFactory implements NBMapConfigurable {
try (InputStream is = new ByteArrayInputStream(loadCertFromPem(new File(certFilePath)))) {
return cf.generateCertificate(is);
} catch (Exception e) {
throw new RuntimeException(String.format("Unable to load cert from %s. Please check.",
certFilePath),
e);
throw new RuntimeException(
String.format("Unable to load cert from %s. Please check.", certFilePath),
e
);
}
}).orElse(null);
View File
@ -5,7 +5,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -24,31 +24,31 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-rest</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-cli</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-docs</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-core</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>engine-extensions</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
@ -60,91 +60,91 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-web</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-kafka</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-stdout</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-diag</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-tcp</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-http</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-jmx</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-dsegraph-shaded</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-cql-shaded</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-cqld3-shaded</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-cqlverify</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-mongodb</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-pulsar</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-cockroachdb</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-jms</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
</dependencies>
@ -277,7 +277,7 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>driver-mongodb</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
</dependencies>
</profile>
View File
@ -7,7 +7,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>mvn-defaults</relativePath>
</parent>
View File
@ -7,7 +7,7 @@
<parent>
<groupId>io.nosqlbench</groupId>
<artifactId>mvn-defaults</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -23,14 +23,14 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<artifactId>nb-api</artifactId>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>virtdata-lang</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
View File
@ -7,7 +7,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -18,6 +18,7 @@
<dependency>
<groupId>org.antlr</groupId>
<artifactId>antlr4-runtime</artifactId>
</dependency>
</dependencies>

View File

@ -7,7 +7,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -20,7 +20,7 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>virtdata-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>

View File

@ -0,0 +1,17 @@
package io.nosqlbench.virtdata.library.basics.shared.from_long.to_time_types;

import io.nosqlbench.virtdata.api.annotations.ThreadSafeMapper;

import java.util.function.LongUnaryOperator;

/**
 * Provide the millisecond epoch time as given by <pre>{@code System.currentTimeMillis()}</pre>
 * CAUTION: This does not produce deterministic test data.
 */
@ThreadSafeMapper
public class CurrentEpochMillis implements LongUnaryOperator {
    @Override
    public long applyAsLong(long operand) {
        return System.currentTimeMillis();
    }
}
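The mapper above ignores its long input (the cycle value) and simply reads the wall clock. A minimal standalone sketch of the same behavior, without the nosqlbench annotations (the class and field names here are illustrative, not part of the library):

```java
import java.util.function.LongUnaryOperator;

public class CurrentEpochMillisDemo {
    // Same shape as the CurrentEpochMillis mapper above: the cycle value
    // passed in as 'operand' is ignored, and the current wall-clock time
    // in epoch milliseconds is returned instead.
    static final LongUnaryOperator CURRENT_EPOCH_MILLIS =
        operand -> System.currentTimeMillis();

    public static void main(String[] args) {
        long t1 = CURRENT_EPOCH_MILLIS.applyAsLong(0L);
        long t2 = CURRENT_EPOCH_MILLIS.applyAsLong(999L);
        // Different cycle inputs do not change the result deterministically;
        // both calls just read the clock, hence the CAUTION in the Javadoc.
        System.out.println(t1 > 0 && t2 > 0);
    }
}
```

Because the output depends on the clock rather than the cycle, replaying the same cycle range will not reproduce the same values.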

View File

@ -0,0 +1,17 @@
package io.nosqlbench.virtdata.library.basics.shared.from_long.to_time_types;

import io.nosqlbench.virtdata.api.annotations.ThreadSafeMapper;

import java.util.function.LongUnaryOperator;

/**
 * Provide the elapsed nano time as given by <pre>{@code System.nanoTime()}</pre>
 * Note that the reference point for this value is arbitrary (typically near JVM
 * start), so values are only meaningful as differences within the same process.
 * CAUTION: This does not produce deterministic test data.
 */
@ThreadSafeMapper
public class ElapsedNanoTime implements LongUnaryOperator {
    @Override
    public long applyAsLong(long operand) {
        return System.nanoTime();
    }
}
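As with the epoch-millis mapper, the input cycle is ignored here. A short sketch showing the intended use of `System.nanoTime()` values, i.e. as differences rather than absolute times (names are illustrative, not from the library):

```java
import java.util.function.LongUnaryOperator;

public class ElapsedNanoTimeDemo {
    // Same shape as the ElapsedNanoTime mapper above: the input cycle is
    // ignored and System.nanoTime() is returned. nanoTime() has an
    // arbitrary origin, so only deltas between calls are meaningful.
    static final LongUnaryOperator ELAPSED_NANOS =
        operand -> System.nanoTime();

    public static void main(String[] args) throws InterruptedException {
        long start = ELAPSED_NANOS.applyAsLong(0L);
        Thread.sleep(10); // stand-in for some measured work
        long end = ELAPSED_NANOS.applyAsLong(1L);
        // nanoTime() does not go backwards within a single JVM, so the
        // delta is non-negative and approximates the work duration.
        System.out.println(end - start >= 0);
    }
}
```

This is why such clock-based bindings are useful for timestamping generated operations (e.g. the payload round-trip-time instrumentation in this changeset) but, as the Javadoc warns, unsuitable where deterministic test data is required.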

View File

@ -4,7 +4,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -22,13 +22,13 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>virtdata-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>virtdata-lib-basics</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
</dependencies>

View File

@ -7,7 +7,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -20,13 +20,13 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>virtdata-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>virtdata-lib-basics</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>

View File

@ -4,7 +4,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -20,7 +20,7 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>virtdata-lib-basics</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
</dependencies>

View File

@ -7,7 +7,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -18,7 +18,7 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>virtdata-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>

View File

@ -4,7 +4,7 @@
<parent>
<artifactId>mvn-defaults</artifactId>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<relativePath>../mvn-defaults</relativePath>
</parent>
@ -18,36 +18,36 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>virtdata-realdata</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>virtdata-lib-realer</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>virtdata-api</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>virtdata-lib-random</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<artifactId>virtdata-lib-basics</artifactId>
</dependency>
<dependency>
<groupId>io.nosqlbench</groupId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
<artifactId>virtdata-lib-curves4</artifactId>
</dependency>
@ -55,7 +55,7 @@
<dependency>
<groupId>io.nosqlbench</groupId>
<artifactId>docsys</artifactId>
<version>4.15.64-SNAPSHOT</version>
<version>4.15.65-SNAPSHOT</version>
</dependency>
</dependencies>