script updates, content updates

This commit is contained in:
Jonathan Shook 2020-03-23 13:05:35 -05:00
parent 59bfae6743
commit 1f084daa10
20 changed files with 137 additions and 12 deletions

View File

@ -3,6 +3,8 @@ title: Advanced Testing
weight: 13
---
# Advanced Testing
:::info
Some of the features discussed here are only for advanced testing scenarios.
:::

View File

@ -3,6 +3,8 @@ title: Core Concepts
weight: 2
---
# Refined Core Concepts
The core concepts that NoSQLBench is built on have been scrutinized,
replaced, refined, and hardened through several years of use
by users with various needs and backgrounds.

View File

@ -3,6 +3,11 @@ title: High Fidelity Metrics
weight: 12
---
# High Fidelity Metrics
Since NoSQLBench has been built as a serious testing tool for all users,
some attention was necessary to the way metrics are used.
## Discrete Reservoirs
In NoSQLBench, we avoid the use of time-decaying metrics reservoirs.

View File

@ -3,6 +3,8 @@ title: NoSQLBench Showcase
weight: 10
---
# NoSQLBench Showcase
Since NoSQLBench is new on the scene in its current form, you may be wondering
why you would want to use it over any other tool. That is what this section is all
about.

View File

@ -3,6 +3,8 @@ title: Modular Architecture
weight: 11
---
# Modular Architecture
The internal architecture of NoSQLBench is modular throughout.
Everything from the scripting extensions to the data generation functions
is enumerated at compile time into a service descriptor, and then discovered

View File

@ -3,6 +3,8 @@ title: Portable Workloads
weight: 2
---
# Portable Workloads
All of the workloads that you can build with NoSQLBench are self-contained
in a workload file. This is a statement-oriented configuration file that
contains templates for the operations you want to run in a workload.

View File

@ -3,6 +3,8 @@ title: Scripting Environment
weight: 3
---
# Scripting Environment
The ability to write open-ended testing simulations is provided in
EngineBlock by means of a scripted runtime, where each scenario is
driven from a control script that can do anything the user wants.

View File

@ -1,8 +1,10 @@
---
title: Virtual DataSets
title: Virtual Datasets
weight: 1
---
# Virtual Datasets
The _Virtual Dataset_ capabilities within NoSQLBench allow you to
generate data on the fly. There are many reasons for using this technique
in testing, but it is often a topic that is overlooked or taken for granted.

View File

@ -3,6 +3,8 @@ title: 01 Commands
weight: 2
---
# Example Commands
Let's run a simple test against a cluster to establish some basic familiarity with NoSQLBench.
## Create a Schema
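A schema-creation step at this point might look like the following sketch; the workload yaml name, tag, and host value are illustrative placeholders, not taken from this page:

```
# hypothetical schema step: workload yaml, tag name, and host are placeholders
nb run type=cql yaml=baselines/cql-iot tags=phase:schema host=myhost cycles=1
```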

View File

@ -3,6 +3,8 @@ title: 02 Results
weight: 3
---
# Example Results
We just ran a very simple workload against our database. In that example, we saw that
nosqlbench writes to a log file, and it is in that log file that the most basic form of metrics is displayed.

View File

@ -3,6 +3,8 @@ title: 03 Metrics
weight: 4
---
# Example Metrics
A set of core metrics are provided for every workload that runs with nosqlbench,
regardless of the activity type and protocol used. This section explains each of
these metrics and shows an example of them from the log file.

View File

@ -3,6 +3,8 @@ title: Next Steps
weight: 5
---
# Next Steps
Now that you've run nosqlbench for the first time and seen what it does, you can
choose what level of customization you want for further testing.

View File

@ -3,6 +3,8 @@ title: Quick Start Example
weight: 20
---
# Quick Start Example
## Downloading
NoSQLBench is packaged directly as a Linux binary named `nb` and as
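Once you have the `nb` binary, a first sanity check might look like this (assuming the binary supports a `--version` flag, which is not shown in the text above):

```
# make the downloaded binary executable, then verify it starts
chmod +x nb
./nb --version
```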

View File

@ -3,6 +3,8 @@ title: NoSQLBench CLI Options
weight: 01
---
# The NoSQLBench Command Line
This is the same documentation you get in markdown format with the
`nb --help` command.

View File

@ -3,16 +3,28 @@ title: Grafana Metrics
weight: 2
---
# (docker-based) Grafana Metrics
# Grafana Metrics
nosqlbench comes with a built-in helper to get you up and running quickly
with client-side testing metrics.
This functionality is based on docker, with a built-in method for bringing up
a docker stack that is automated by NoSQLBench.
:::warning
This feature requires that you have docker running on the local system and that your user is in a group that is allowed to manage docker. Using the `--docker-metrics` command *will* attempt to manage docker on your local system.
This feature requires that you have docker running on the local system and that
your user is in a group that is allowed to manage docker.
Using the `--docker-metrics` command *will* attempt to manage docker
on your local system.
:::
To ask nosqlbench to stand up your metrics infrastructure using a local docker runtime, use this command line option with any other nosqlbench commands:
To ask nosqlbench to stand up your metrics infrastructure using a local docker runtime,
use this command line option with any other nosqlbench commands:
--docker-metrics
When this option is set, nosqlbench will start graphite, prometheus, and grafana automatically on your local docker, configure them to work together, and to send metrics the system automatically. It also imports a base dashboard for nosqlbench and configures grafana snapshot export to share with a central DataStax grafana instance (grafana can be found on localhost:3000 with the default credentials admin/admin).
When this option is set, nosqlbench will start graphite, prometheus, and grafana automatically
on your local docker, configure them to work together, and send metrics to the stack
automatically. It also imports a base dashboard for nosqlbench and configures grafana
snapshot export to share with a central DataStax grafana instance (grafana can be found
on localhost:3000 with the default credentials admin/admin).
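For example, the option can be combined with any workload invocation; the `diag` activity used elsewhere in these docs makes a low-risk smoke test (the cycle count is illustrative):

```
# stand up graphite/prometheus/grafana in docker, then run a diag workload
nb run type=diag cycles=1M --docker-metrics
```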

View File

@ -3,6 +3,8 @@ title: NoSQLBench Basics
weight: 30
---
# NoSQLBench Basics
This section covers the essential details that you'll need to
run nosqlbench in different ways.

View File

@ -3,9 +3,21 @@ title: Built-In Workloads
weight: 40
---
There are a few built-in workloads which you may want to run. These workloads can be run from a command without having to configure anything, or they can be tailored with their built-in parameters.
# Built-In Workloads
This section of the guidebook will explain each of them in detail.
There are a few built-in workloads which you may want to run.
These workloads can be run from a command without having to configure anything,
or they can be tailored with their built-in parameters.
There is now a way to list the built-in workloads:
`nb --list-workloads` will give you a list of all the pre-defined workloads
which have named scenarios built in.
## Common Built-Ins
This section of the guidebook will explain a couple of the common
scenarios in detail.
## Built-In Workload Conventions
@ -22,6 +34,7 @@ Each built-in contains the following tags that can be used to break the workload
### Parameters
Each built-in has a set of adjustable parameters which is documented below per workload. For example, the cql-iot workload has a `sources` parameter which determines the number of unique devices in the dataset.
Each built-in has a set of adjustable parameters which is documented below per workload. For example,
the cql-iot workload has a `sources` parameter which determines the number of unique devices in the dataset.
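As a usage sketch, such a parameter can be supplied on the command line when invoking the workload (the value shown is illustrative):

```
# run the cql-iot built-in with 1000 unique simulated devices
nb cql-iot sources=1000
```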

View File

@ -0,0 +1,61 @@
---
title: 10 Named Scenarios
weight: 10
---
# Named Scenarios
There is one final element of a workload yaml that you need to know about: _named scenarios_.
You can provide named scenarios for a workload like this:
```yaml
# contents of myworkloads.yaml
scenarios:
  default:
    - run type=diag cycles=10 alias=first-ten
    - run type=diag cycles=10..20 alias=second-ten
  longrun:
    - run type=diag cycles=10M
```
This provides a way to specify more detailed workflows that users may want
to run without them having to build up a command line for themselves.
There are two ways to invoke a named scenario.
```
# runs the scenario named 'default' if it exists, or throws an error if it does not.
nb myworkloads
# or
nb myworkloads default
# runs the named scenario 'longrun' if it exists, or throws an error if it does not.
nb myworkloads longrun
```
## Named Scenario Discovery
Only workloads which include named scenarios will be easily discoverable by users
who look for pre-baked scenarios.
## Parameter Overrides
You can override parameters that are provided by named scenarios. Any parameter
that you specify for the named scenario will override parameters of the same name
in the named scenario's script.
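For example, using the `myworkloads.yaml` scenarios shown earlier on this page, an override might look like this (the cycle count is illustrative):

```
# override the cycles parameter defined inside the 'longrun' named scenario
nb myworkloads longrun cycles=100M
```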
## Examples
```yaml
# example-scenarios.yaml
scenarios:
  default:
    - run cycles=3 alias=A type=stdout
    - run cycles=5 alias=B type=stdout
bindings:
  cycle: Identity()
  name: NumberNameToCycle()
statements:
  - cycle: "cycle {cycle}\n"
```

View File

@ -1,6 +1,6 @@
---
title: 10 YAML Diagnostics
weight: 10
title: YAML Diagnostics
weight: 99
---
## Diagnostics

View File

@ -5,6 +5,12 @@ set -x
NBJAR_VERSION=${NBJAR_VERSION:?NBJAR_VERSION must be specified}
echo "NBJAR_VERSION: ${NBJAR_VERSION}"
JARNAME="nb-${NBJAR_VERSION}.jar"
echo "linking $JARNAME to nb.jar"
(cd target ; ln -s $JARNAME nb.jar)
cd target
if [ -e "nb.jar" ]
then
echo "nb.jar link exists, skipping"
else
echo "linking $JARNAME to nb.jar"
ln -s $JARNAME nb.jar
fi