-
+
ResInsight
diff --git a/_includes/primary-nav-items.html b/_includes/primary-nav-items.html
index f70d16d541..33b96a15f8 100644
--- a/_includes/primary-nav-items.html
+++ b/_includes/primary-nav-items.html
@@ -14,4 +14,9 @@
Download
+
+
+
diff --git a/_posts/2017-05-31-resinsight-2017.05-released.md b/_posts/2017-05-31-resinsight-2017.05-released.md
new file mode 100644
index 0000000000..1631df432c
--- /dev/null
+++ b/_posts/2017-05-31-resinsight-2017.05-released.md
@@ -0,0 +1,14 @@
+---
+layout: news_item
+title: "ResInsight 2017.05 Released"
+date: "2017-05-31 16:00:00 +0200"
+author: jacobstoren
+version: 2017.05
+categories:
+ - release
+---
+We are happy to announce the release of ResInsight v2017.05.
+
+### Download
+Have a look at the [GitHub release-page](https://github.com/OPM/ResInsight/releases) to read the release-notes or download the new release from the [Download]({{site.baseurl}}/project/download) page.
+
diff --git a/docs/BatchCommands.md b/docs/BatchCommands.md
index 9e7b82d9a5..e567ff87d2 100644
--- a/docs/BatchCommands.md
+++ b/docs/BatchCommands.md
@@ -14,7 +14,7 @@ ResInsight stores data computed by statistics calculation in a cache file. When
- Open the project file used to produce statistics
- Select the statistics object in the project tree
-- Click the button "Edit (Will DELETE current result)"
+- Click the button **Edit (Will DELETE current result)**
- Save the project file
### Examples
diff --git a/docs/BuildInstructions.md b/docs/BuildInstructions.md
index 2911308af1..f6f00c0a9d 100644
--- a/docs/BuildInstructions.md
+++ b/docs/BuildInstructions.md
@@ -11,21 +11,25 @@ The source code is hosted at [GitHub](https://github.com/opm/resinsight)
## Dependencies and prerequisites
### Windows compiler
+
Visual Studio 2015 or later is supported.
### GCC compiler
-GCC version 4.9 or later is supported.
-### Boost
-[Boost](http://www.boost.org/users/history/) version 1.58 or later is supported. Earlier versions might work, but this has not been tested.
+GCC version 4.9 or later is supported. On RedHat Linux 6 you need to install devtoolset-3, and enable it with
+
+ source /opt/rh/devtoolset-3/enable
### Qt
-[Qt](http://download.qt.io/archive/qt/) version 4.7.3 or later is supported.
+
+[Qt4](http://download.qt.io/archive/qt/) version 4.6.2 or later is supported. Qt5 is not supported yet.
+On Windows we recommend Qt-4.8.7, while the default installation will do under Linux.
+
+You will need to patch the Qt sources in order to make them build with Visual Studio 2015, using this patch: [Qt-patch](https://github.com/appleseedhq/appleseed/wiki/Making-Qt-4.8.7-compile-with-Visual-Studio-2015)
### CMake
[CMake](https://cmake.org/download/) version 2.8 or later is supported.
-
## Build instructions
The ResInsight build may be configured in different ways, with optional support for Octave plugins, ABAQUS ODB API, and OpenMP. This is configured using options in CMake.
@@ -34,17 +38,18 @@ If you check the button 'Grouped' in the CMake GUI, the CMake variables are grou
- Open the CMake GUI
- Set the path to the source code
- Set the path to the build directory
-- Click "Configure" and select your preferred compiler
+- Click **Configure** and select your preferred compiler
- Set the build options and click "Configure" again (see ResInsight specific options below)
-- Click "Generate" to generate the makefiles or solution file and project files in the build directory
+- Click **Generate** to generate the makefiles or solution file and project files in the build directory
- Run the compiler using the generated makefiles or solution file/project files to build ResInsight
### Windows
ResInsight has been verified to build and run on Windows 7/8/10 using Microsoft Visual Studio 2015. Typical usage on Windows is to follow the build instructions above, and then open the generated solution file in Visual Studio to build the application.
+
### Linux
-ResInsight has been verified to build and run on RedHat Linux 6. Typical usage is to follow the build instructions above to build the makefiles. Then go to the build directory, and run:
+ResInsight has been verified to build and run on RedHat Linux 6, but you need to install and enable devtoolset-3 as described above. Typical usage is to follow the build instructions above to build the makefiles. Then go to the build directory, and run:
- make
- make install
@@ -71,7 +76,7 @@ You will find the ResInsight binary under the Install directory in your build di
| `RESINSIGHT_USE_OPENMP` | Enable OpenMP parallellization in the code |
### Optional - Octave plugins
-To be able to compile the Octave plugins, the path to the Octave development tool `mkoctfile` must be provided.
+To be able to compile the Octave plugins, the path to the Octave development tool _`mkoctfile`_ must be provided.
It is possible to build ResInsight without compiling the Octave plugins. This can be done by specifying blank for the Octave CMake options. The Octave plugin module will not be built, and CMake will show warnings like 'Failed to find mkoctfile'. This will not break the build or compilation of ResInsight.
@@ -85,19 +90,21 @@ ResInsight has been verified to build and run with Octave versions 3.4.3, 3.8.1,
| `RESINSIGHT_OCTAVE_PLUGIN_MKOCTFILE` | Location of Octave tool mkoctfile used to compile Octave plugins |
| `RESINSIGHT_OCTAVE_PLUGIN_QMAKE` | Location of Qt version to use when compiling Octave plugins. Must be compatible with Octave runtime. (Use the Qt version embedded in Octave. The qmake executable itself is not used, only the path to the directory.) |
-### Optional - ABAQUS ODB API
-ResInsight can be built with support for ABAQUS ODB files. This requires an installation of the ABAQUS ODB API from Simulia on the build computer. The path to the ABAQUS ODB API folder containing header files and library files must be specified. Leaving this option blank gives a build without ODB support. ResInsight has been built and tested with ABAQUS ODB API version 6.14-3 on Windows 7/8 and RedHat Linux 6.
-
-#### ABAQUS ODB API related CMake options for ResInsight
-
-| CMake Name | Description |
-|--------------|---------|
-| `RESINSIGHT_ODB_API_DIR` | Optional path to the ABAQUS ODB API from Simulia |
-
-### Dependencies for Debian based distributions
+#### Octave Dependencies for Debian Based Distributions
- sudo apt-get install git cmake build-essential octave octave-headers qt4-dev-tools
If you are running Ubuntu 12.10 or newer, you will need to replace octave-headers with liboctave-dev :
- sudo apt-get install git cmake build-essential octave liboctave-dev qt4-dev-tools
+
+### Optional - ABAQUS ODB API
+
+ResInsight can be built with support for ABAQUS ODB files. This requires an installation of the ABAQUS ODB API from Simulia on the build computer. The path to the ABAQUS ODB API folder containing header files and library files must be specified. Leaving this option blank gives a build without ODB support. ResInsight has been built and tested with ABAQUS ODB API version 6.14-3 on Windows 7/8 and RedHat Linux 6.
+
+#### ABAQUS ODB API related CMake options for ResInsight
+
+| CMake Name | Description |
+|--------------|---------|
+| `RESINSIGHT_ODB_API_DIR` | Optional path to the ABAQUS ODB API from Simulia |
+
diff --git a/docs/CaseGroupsAndStatistics.md b/docs/CaseGroupsAndStatistics.md
index 394fd9e506..63614e3686 100644
--- a/docs/CaseGroupsAndStatistics.md
+++ b/docs/CaseGroupsAndStatistics.md
@@ -28,13 +28,13 @@ An import dialog is opened:
ResInsight then creates a **Grid Case Group** for you, and populates its **Source Cases** with the Cases you selected. Then the first of those Cases are read completely, while the others are just scanned to verify that the Grids match and to detect changes in the Active Cells layout. This makes it quite fast to load even a quite large number of realizations.
### Manually
-A Grid Case Group can be created from the context menu available when right clicking a Result Case, Input Case or a different Grid Case Group. **Source Cases** can then be added by using the mouse to *drag and drop* cases with equal grids into the **Grid Case Group**'s **Source Case** folder.
+A Grid Case Group can be created from the context menu available when right-clicking a Result Case, Input Case or a different Grid Case Group. **Source Cases** can then be added by using the mouse to *drag and drop* cases with equal grids into the **Grid Case Group**'s **Source Case** folder.
This is useful if you want to create statistics based only on a subset of the source cases in an already created **Grid Case Group**.
**Drag and Drop** of cases will normally copy the cases to the new destination, but moving them is possible by pressing and holding the **Shift** key while dropping.
## Viewing special Source Cases
-To reduce the number of views, only a view for the first case is created automatically. If you want to inspect the results of a particular source case, right click the case and select **New view** from the context menu. A new 3D View will the be created on that particular case.
+To reduce the number of views, only a view for the first case is created automatically. If you want to inspect the results of a particular source case, right-click the case and select **New view** from the context menu. A new 3D View will then be created on that particular case.
How to limit system resource allocation
@@ -49,27 +49,27 @@ The properties of non-calculated and calculated **Statistics Case** is shown bel
 
-- **Compute**: Starts to calculate requested statistical Properties.
-- **Edit** : Deletes the calculated results, and makes the controls to edit the setup available.
-- **Summary of calculation setup** : Summarizes what to calculate
-- **Properties to consider**: These options makes it possible to select what Eclipse properties to include in the Statistical calculations. Adding variables increase the memory usage and the computational time.
-- **Percentile Setup**: Selects whether to calculate percentiles, what method and what percentile levels should be used. Turning this off speeds up the calculations.
-- **Well Data Source Case**: This option selects which set of **Simulation Wells** to be shown along with the statistical results. You can select one of the **Source Cases**.
+- **Compute** -- Starts to calculate requested statistical Properties.
+- **Edit** -- Deletes the calculated results, and makes the controls to edit the setup available.
+- **Summary of calculation setup** -- Summarizes what to calculate.
+- **Properties to consider** -- These options make it possible to select which Eclipse properties to include in the Statistical calculations. Adding variables increases the memory usage and the computational time.
+- **Percentile Setup** -- Selects whether to calculate percentiles, what method and what percentile levels should be used. Turning this off speeds up the calculations.
+- **Well Data Source Case** -- This option selects which set of **Simulation Wells** to be shown along with the statistical results. You can select one of the **Source Cases**.
#### Percentile Methods
Three Percentile methods are implemented:
-- **Interpolated Observation**
+- **Interpolated Observation** --
The values are sorted, and the two observations representing the probabilities closest to the percentile are interpolated to find the value for the percentile. This is the default method.
-- **Nearest Observation**
+- **Nearest Observation** --
The values are sorted, and the first observation representing a probability higher or equal to the percentile probability is selected as the value for the percentile. This method is by some considered to be statistically more puristic.
-- **Histogram based estimate**
+- **Histogram based estimate** --
A histogram is created and the percentile is calculated based on the histogram. This method will be faster when having a large number of realizations, because no value sorting is involved. You would however need several hundred realizations before this method should be considered.
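To make the difference between the first two methods concrete, here is a small Python sketch (illustrative only, not ResInsight's actual implementation; the function names are made up):

```python
def interpolated_observation(values, p):
    """Percentile by interpolating between the two closest observations (default method)."""
    v = sorted(values)
    pos = (len(v) - 1) * p / 100.0          # fractional index of the percentile
    lo = int(pos)
    hi = min(lo + 1, len(v) - 1)
    frac = pos - lo
    return v[lo] * (1.0 - frac) + v[hi] * frac

def nearest_observation(values, p):
    """Percentile as the first sorted observation whose cumulative probability >= p."""
    v = sorted(values)
    for i, x in enumerate(v):
        if (i + 1) * 100.0 / len(v) >= p:
            return x
    return v[-1]
```

For the sample [1, 2, 3, 4] and the 50th percentile, the interpolated method gives 2.5 while the nearest-observation method gives 2.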
### Viewing the results
-When the computation is complete, you have to create a 3D View on the **Statistics Case** to view the results. Use the Context menu available by right clicking the **Statistics Case** to create it.
+When the computation is complete, you have to create a 3D View on the **Statistics Case** to view the results. Use the Context menu available by right-clicking the **Statistics Case** to create it.
### Adding Statistics Cases
A new statistical calculation can be created by activating the context menu for **Derived Statistic->New Statistics Case**.
diff --git a/docs/CellResults.md b/docs/CellResults.md
new file mode 100644
index 0000000000..0fa86631ed
--- /dev/null
+++ b/docs/CellResults.md
@@ -0,0 +1,109 @@
+---
+layout: docs
+title: Cell Results
+permalink: /docs/cellresults/
+published: true
+---
+
+
+
+The main results to postprocess in ResInsight are Cell Results. A Cell Result is one value, or a small set of values per
+cell over a region of the grid. A Cell Result is also referred to as a *Property*.
+
+Cell Results are used in several operations and settings:
+
+- **Cell Colors**
+- **Cell Edge Result** (Eclipse Only)
+- **Separate Fault Result** (Eclipse Only)
+- **Property Filters**
+- **Well Log Extraction Curves**
+- **Cell Result Time History Curves**
+
+In the property panel of all those, the same options are used to define the Cell Result of interest.
+In the following we will describe these options.
+
+## Eclipse Result Types
+
+As shown in the picture below, there are six different result types:
+
+
+
+- **Dynamic** -- Time varying properties from the Eclipse simulation
+- **Static** -- Eclipse properties that do not vary with time. Some derived properties calculated by ResInsight are also present.
+- **Generated** -- Results generated by an Octave Script end up here
+- **Input Property** -- Directly imported Eclipse properties from ASCII files are shown in this category
+- **Formation Names** -- Lists only the Active Formation Names selected on the case. ( See [Formations]({{ site.baseurl }}/docs/formations) )
+- **Flow Diagnostics** -- Flow diagnostic results are derived from the flux field in the Eclipse result data file and are only
+ available if those results are present. This option is described in detail below.
+
+## Flow Diagnostic Results
+
+ResInsight has embedded Flow Diagnostics calculations made available using the **Flow Diagnostics** result type.
+These results make it easier to see how and where wells interact with the reservoir and each other.
+It is possible to select exactly which wells to investigate, and even the possible *opposite flow* part of the well.
+
+See also [Flow Diagnostics Plots]({{ site.baseurl }}/docs/flowdiagnosticsplots) and [Flow Characteristics Plot]({{ site.baseurl }}/docs/flowdiagnosticsplots#flow-characteristics-plot)
+
+### Method
+
+The calculations are performed by a library called [opm-flowdiagnostics](https://github.com/OPM/opm-flowdiagnostics) developed by [SINTEF Digital](http://www.sintef.no/sintef-ikt/#/). A more elaborate description of the technique and how it can be utilized can be found at SINTEF's web site [here](http://www.sintef.no/projectweb/mrst/modules/diagnostics/). The MRST tool described is a Matlab predecessor of the flow diagnostics calculations developed for ResInsight.
+
+The methodology is also described in:
+[The application of flow diagnostics for reservoir management](http://folk.ntnu.no/andreas/papers/diagnostics.pdf) SPE J., Vol. 20, No. 2, pp. 306-323, 2015. DOI: [10.2118/171557-PA](https://dx.doi.org/10.2118/171557-PA)
+
+### Cross Flow and Opposite Flow
+
+The *opposite flow* of a well denotes the flow that is opposite to the expected normal state of the well. E.g. parts of a producer might actually be injecting due to cross flow, and an injector could be producing in some sections.
+Each well is assigned an opposite flow name by adding "-XF" to the end of the name. "-XF" was chosen as a reference to Cross Flow.
+
+In this way, a producer will have two tracer names: The "well name" as a producer tracer, and "well name-XF" as an injector tracer.
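As a sketch, the naming scheme amounts to the following (a hypothetical helper, not ResInsight code; the well name is made up):

```python
def tracer_names(well_name):
    """Return the normal tracer name and the cross-flow ("-XF") tracer name for a well."""
    return well_name, well_name + "-XF"
```

For example, a producer named "B-2H" would get the producer tracer "B-2H" and the injector tracer "B-2H-XF".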
+
+### Defining results
+
+There are several options available to define the particular result you want to target, as shown below:
+
+
+
+There are two main selections you need to make: the tracers and the result property.
+- **Tracers** -- Option to select how/what tracers to use. Available options are:
+ - **All Injectors and Producers** -- Selects all the wells, including the opposite flow tracers
+ - **All Producers** -- Selects all producer tracers, including the opposite flow tracers of injectors.
+ - **All Injectors** -- Selects all injector tracers, including the opposite flow tracers of producers.
+ - **By Selection** -- Displays a list of all the tracers that can be selected freely, and a **Filter** field.
+ - The list of selectable tracers can be filtered using wild card search of their names.
+ - The tracers are sorted by their overall status as producer or injectors and prefixed depending on the status.
+ Injectors are prefixed with "I :", producers with "P :" and wells with varying state "I/P:".
+- **Result property** -- Displays a list of the available results:
+ - **Time Of Flight (Average)** -- The time for some fluid in the cell to reach a producer,
+ or the time it takes to reach the cell from an injector.
+ When selecting several tracers, the time of flight values from each of the tracers are weighted
+ by their cell fraction before they are averaged.
+ - **Tracer Cell Fraction (Sum)** -- The volume fraction of a cell occupied by the selected tracers.
+ The injector and producer tracers count as independent in this regard, so the sum of fractions for
+ all the producer tracers will be 1.0 and the same for the injector tracers. If both types of tracers
+ are selected, the total sum will normally reach 2.0.
+ - **Max Fraction Tracer** -- Shows which of the selected tracers has the largest fraction in each cell.
+ This is shown as a category result displaying a color for each tracer, and the names in the legend.
+ - **Injector Producer Communication** -- The communication in a cell between a set of producers and a set of injectors
+ is calculated as the sum of producer fractions multiplied by the sum of injector fractions in the cell.
+ This produces values between 0.0 and 1.0 where high values indicate that both the injectors and the producers
+ have a high influence.
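Two of the formulas above, the fraction-weighted time of flight and the injector-producer communication, can be sketched per cell in Python (illustrative only, not the actual opm-flowdiagnostics code):

```python
def averaged_time_of_flight(tofs, fractions):
    """Time of flight over several tracers, weighted by their cell fractions."""
    weighted = sum(t * f for t, f in zip(tofs, fractions))
    return weighted / sum(fractions)

def injector_producer_communication(producer_fractions, injector_fractions):
    """Sum of producer fractions multiplied by the sum of injector fractions."""
    return sum(producer_fractions) * sum(injector_fractions)
```

High communication values (close to 1.0) appear in cells where both the selected injectors and the selected producers have large fractions.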
+
+### On-Demand Calculation
+
+The flow diagnostics results are only calculated when asked for, and only for requested timesteps. This means that statistics based on all timesteps are not available for these results.
+
+
+## Geomechanical Results
+
+Geomechanical results are sorted in different **Result Position**s:
+- **Nodal** -- Results given a value per node in the grid
+- **Element Nodal** -- Results with values per element node
+- **Integration Point** -- Results with values per integration point. These are displayed in the same way as element nodal results.
+- **Element Nodal on Face** -- Results with values transformed to element faces or intersections.
+ See [Element Nodal on Face]({{ site.baseurl }}/docs/derivedresults#element-nodal-on-face) for more information
+- **Formation Names** -- Lists the **Active Formation Names** selected on the case. ( See [Formations]({{ site.baseurl }}/docs/formations) )
+
+### Relative Result Options
+
+This group of options controls time-lapse results to be calculated. ( See [Relative Results]({{ site.baseurl }}/docs/derivedresults#relative-results-time-lapse-results) for more information )
diff --git a/docs/DerivedResults.md b/docs/DerivedResults.md
index 4f0bb426f2..a1b3b56fb6 100644
--- a/docs/DerivedResults.md
+++ b/docs/DerivedResults.md
@@ -47,7 +47,7 @@ ResInsight calculates several of the presented geomechanical results based on th
### Relative Results (Time Lapse Results)
-ResInsight can calculate and display relative results, sometimes also reffered to as Time Lapse results.
+ResInsight can calculate and display relative results, sometimes also referred to as Time Lapse results.
When enabled, every result variable is calculated as :
Value'(t) = Value(t) - Value(BaseTime)
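Per cell, the rule above amounts to the following (illustrative Python, assuming one value per time step):

```python
def relative_results(values_by_time, base_index):
    """Time-lapse values: subtract the base time step value from every time step."""
    base = values_by_time[base_index]
    return [v - base for v in values_by_time]
```

With values [10.0, 12.5, 9.0] and the first time step as base, this yields [0.0, 2.5, -1.0].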
@@ -56,7 +56,7 @@ Enable the **Enable Relative Result** option in the **Relative Result Options**

-Each variable is then postfixed with "_D*TimeStepIndex*" to distinguish them from the native variables.
+Each variable is then post-fixed with "_D*TimeStepIndex*" to distinguish them from the native variables.
Note: Relative Results calculated based on Gamma values are calculated slightly differently:
@@ -99,7 +99,7 @@ The calculated result fields are:
#### Definitions of derived results
-In this text the label Sa and Ea will be used to denote the unchanged stress and strain tensor respectivly from the odb file.
+In this text the label Sa and Ea will be used to denote the unchanged stress and strain tensor respectively from the odb file.
Components with one subscript denotes the principal values 1, 2, and 3 which refers to the maximum, middle, and minimum principals respectively.
@@ -143,7 +143,7 @@ Gamma
ii = ST
ii/POR (i= 1,2,3)
Gamma
i = ST
i/POR
-In these calcualtioins we set Gamma to *undefined* if abs(POR) > 0.01 MPa.
+In these calculations we set Gamma to *undefined* if abs(POR) > 0.01 MPa.
##### SE - Effective Stress
@@ -190,7 +190,7 @@ ED = 2*(E1-E3)/3
##### Element Nodal On Face
For each face displayed, (might be an element face or an intersection/intersection box face),
-a coodinate system is established such that:
+a coordinate system is established such that:
- Ez is normal to the face, named N - Normal
- Ex is horizontal and in the plane of the face, named H - Horizontal
diff --git a/docs/ExportEclipseProperties.md b/docs/ExportEclipseProperties.md
index 99ed7b7be0..0292f15e97 100644
--- a/docs/ExportEclipseProperties.md
+++ b/docs/ExportEclipseProperties.md
@@ -5,15 +5,25 @@ permalink: /docs/exporteclipseproperties/
published: true
---
-Eclipse Properties can be exported to Eclipse ASCII files by activating the context
-menu on a **Cell Result** item in the **Project Tree**.
+Eclipse Properties can be exported to Eclipse ASCII files.
+This is particularly useful when a new property is generated using Octave.
+The generated property can be exported for further use in the simulator.
+
+### Export Command
+
+To export the property currently active in the 3D View, activate the context menu on a **Cell Result** item in the **Project Tree**.

-The command will export the property that currently is active in the 3D View.
+The following dialog will appear:
-This is particularly useful when a new property is generated using Octave.
-The generated property can be exported for further use in the simulator.
+
+
+- **Export File Name** -- The path to the exported file
+- **Eclipse Keyword** -- The keyword to use for the property in the Eclipse file
+- **Undefined Value** -- This value is written to the file for all values that are flagged as undefined in ResInsight
+
+### File format
The exported file has the following format, that matches the Eclipse input format:
diff --git a/docs/Faults.md b/docs/Faults.md
index cee5f5caf1..ba7b41bd9a 100644
--- a/docs/Faults.md
+++ b/docs/Faults.md
@@ -12,19 +12,19 @@ This section describes how Faults are detected and visualized. NNC's are a part
ResInsight always scans the grids for geometrical faults when they are loaded. When two opposite cell faces of I, J, K neighbor cells does not match geometrically, they are tagged.
-All the tagged cell faces are then compared to the faults possibly imported from the `*.DATA` file in order to group them. If a particular face is *not* found among the fault faces defined in the `*.DATA` file (or their opposite faces), the cell face is added to one of two predefined faults:
+All the tagged cell faces are then compared to the faults possibly imported from the _`*.DATA`_ file in order to group them. If a particular face is *not* found among the fault faces defined in the _`*.DATA`_ file (or their opposite faces), the cell face is added to one of two predefined faults:
-1. **Undefined grid faults**
-2. **Undefined grid faults With Inactive**
+- **Undefined grid faults**
+- **Undefined grid faults With Inactive**
The first fault is used if both the neighbor cells are active. If one or both of the neighbor cells are inactive, the second fault is used.
-These particular Faults will always be present, even when reading of fault information from the `*.DATA` file is disabled.
+These particular Faults will always be present, even when reading of fault information from the _`*.DATA`_ file is disabled.
### Information from `*.DATA`-files
#### Fault Information
-If enabled in **Preferences**, ResInsight will import fault information from the `*.DATA` files and use this information to group the cell faces into named items. The imported faults are ordered in ascending order based on their name.
+If enabled in **Preferences**, ResInsight will import fault information from the _`*.DATA`_ files and use this information to group the cell faces into named items. The imported faults are ordered in ascending order based on their name.
The DATA file is parsed for the FAULT keyword while respecting any INCLUDE and PATH keywords.
@@ -32,7 +32,7 @@ As import of faults can be time consuming, reading of faults can be disabled fro
#### NNC Data
-If enabled in **Preferences**, ResInsight will read Non Neighbor Connections from the Eclipse output file (`*.INIT`), and create explicit visualizations of those.
+If enabled in **Preferences**, ResInsight will read Non Neighbor Connections from the Eclipse output file (_`*.INIT`_), and create explicit visualizations of those.
The NNC's are sorted onto the Fault's and their visibility is controlled along with them.
## Fault visualization options
@@ -42,7 +42,7 @@ Faults can be hidden and shown in several ways.
- Checking or unchecking the checkbox in front of the fault will show or hide the fault.
- Visibility for multiple faults can be controlled at the same time by selecting multiple faults and use the context menu: **On**, **Off** and **Toggle**.
-- Right clicking a Fault in the 3D View will enable a context menu with a command to hide the fault.
+- Right-clicking a Fault in the 3D View will enable a context menu with a command to hide the fault.
### Fault color
Each named Fault is given a color on import. This color can be controlled by selecting the fault and edit its **Fault color** in the **Property Editor.**
@@ -66,28 +66,30 @@ By clicking the  **Fau

##### Fault labels
-- **Show labels**: Displays one label per fault with the name defined in the `*.DATA`-file
-- **Label color**: Defines the label color
+- **Show labels** -- Displays one label per fault with the name defined in the _`*.DATA`_ file
+- **Label color** -- Defines the label color
##### Fault options
-- **Show faults outside filters**: Turning this option on, will display faults outside the filter region, making the fault visualization completely ignore the Range and Property filters in action.
+- **Show faults outside filters** -- Turning this option on will display faults outside the filter region, making the fault visualization completely ignore the Range and Property filters in action.
##### Fault Face Visibility
This group of options controls the visibility of the fault faces. Since they work together, and in some cases are overridden by the system, they can be a bit confusing.
First of all. These options are only available in **Faults-only** visualization mode. ( See *Toolbar Control* above) When not in **Faults-Only** mode, ResInsight overrides the options, and the controls are inactive.
-Secondly: The option you would normally want to adjust is **Dynamic Face Selection** (See below).
+Secondly: The option you would normally want to adjust is **Dynamic Face Selection** ( See below ).
-- **Show defined faces**: Displays the fault cell faces that are defined on the Eclipse input file (`*.DATA`)
-- **Show opposite faces**: Displays the opposite fault cell faces from what is defined on the input file, based on IJK neighbors.
-
*These two options should normally be left **On**. They are useful when investigating the exact faults information provided on the `*.DATA` file. If you need to use them, it is normally wise to set the **Dynamic Face Selection** to "Show Both".*
-- **Dynamic Face Selection**: At one particular position on a fault there are usually two cells competing for your attention: The cell closer to you as the viewer, or the one further from you. When showing results, this becomes important because these two cell faces have different result property values, and thus color.
This option controls which of the two cell faces you actually can see: The one behind the fault, or the one in front of the fault. There is also an option of showing both, which will give you an undefined mixture, making it hard to be certain what you see.
This means that ResInsight turns on or off the faces based on your view position and this option to make sure that you always see the faces (and thus the result property) you request.
+- **Show defined faces** -- Displays the fault cell faces that are defined on the Eclipse input file (_`*.DATA`_)
+- **Show opposite faces** -- Displays the opposite fault cell faces from what is defined on the input file, based on IJK neighbors.
+ *These two options should normally be left **On**. They are useful when investigating the exact faults information provided on the `*.DATA` file. If you need to use them, it is normally wise to set the **Dynamic Face Selection** to "Show Both".*
+- **Dynamic Face Selection** -- At one particular position on a fault there are usually two cells competing for your attention: The cell closer to you as the viewer, or the one further from you. When showing results, this becomes important because these two cell faces have different result property values, and thus color.
+ This option controls which of the two cell faces you actually can see: The one behind the fault, or the one in front of the fault. There is also an option of showing both, which will give you an undefined mixture, making it hard to be certain what you see.
+ This means that ResInsight turns on or off the faces based on your view position and this option to make sure that you always see the faces (and thus the result property) you request.
##### NNC Visibility
-- **Show NNCs**: Toggles whether to display the Non Neighbor Connections, or not.
-- **Hide NNC geometry if no NNC result is available**: Automatically hides NNC geometry if no NNC results are available
+- **Show NNCs** -- Toggles whether to display the Non Neighbor Connections, or not.
+- **Hide NNC geometry if no NNC result is available** -- Automatically hides NNC geometry if no NNC results are available
The color of the NNC faces are set to be a bit lighter than their corresponding named fault, and can not be controlled directly.
@@ -95,14 +97,14 @@ The color of the NNC faces are set to be a bit lighter than their corresponding
## Fault Export
-Faults can be exported to separate files in the `*grdecl` file format. This is useful for example if you need a list of the geometrically detected faults that has not been covered by entries in the eclipse FAULTS keyword.
+Faults can be exported to separate files in the _`*grdecl`_ file format. This is useful, for example, if you need a list of the geometrically detected faults that have not been covered by entries in the Eclipse FAULTS keyword.
-To export some faults, select the faults you want to export in the **Project Tree**, right click them and select the command **Export Faults ...** from the context menu.
+To export some faults, select the faults you want to export in the **Project Tree**, right-click them and select the command **Export Faults ...** from the context menu.

-You are then prompted to select a destination folder. Each Fault is exported to a file named `Faults_
_.grdecl` and stored in the selected folder.
+You are then prompted to select a destination folder. Each Fault is exported to a file named _`Faults__.grdecl`_ and stored in the selected folder.
-The `fault name` of **Undefined Grid Faults** is simplified to "UNDEF", while **Undefinded Grid Faults With Inactive** is simplified to "UNDEF_IA". All other faults keep their original name.
+The fault name of **Undefined Grid Faults** is simplified to _`UNDEF`_, while **Undefined Grid Faults With Inactive** is simplified to _`UNDEF_IA`_. All other faults keep their original name.
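+
+The exported files use the Eclipse FAULTS keyword layout. A minimal sketch of what an exported file may contain (the fault names, I/J/K ranges, and face letters below are made up for illustration):
+
+```
+FAULTS
+ 'UNDEF'    10 10   1 15   1  5  'I' /
+ 'MAINFLT'   5  8  12 12   1  5  'J' /
+/
+```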
diff --git a/docs/Filters.md b/docs/Filters.md
index d2d0cdd161..8af8f9de93 100644
--- a/docs/Filters.md
+++ b/docs/Filters.md
@@ -6,13 +6,17 @@ published: true
---

-Cell Filters are used to control visibility of the cells in the 3D view. Three types of filters exists:
+Cell Filters are used to control visibility of the cells in the 3D view. Two types of filters exist:
-- **Range filter** : Extracts an IJK subset of the model.
-- **Property filter** : Extracts cells with a property value matching a value range.
-- **Well cell filter** : Extracts cells that are connected to a well.
+- **Range filter** -- Extracts an IJK subset of the model.
+- **Property filter** -- Extracts cells with a property value matching a value range.
-### Common properties for Range and Property Filters
+
+The visibility of cells connected to wells, and of fences based on these cells, can be controlled from **Simulation Wells**.
+
(Not applicable for Geomechanical cases)
+
+
+## Common properties for Range and Property Filters
Both filter types can be turned on or off using the toggle in the **Project Tree** and controlled from their corresponding **Property Editor**.
@@ -24,7 +28,8 @@ The **Exclude** setting is used to explicitly remove cells from the visualizatio
The **Include** setting behaves differently for Range filters and Property Filters but marks the cells as visible.
The icon in front of the filters shows a + or - sign to indicate the setting.
-### Range filters
+
+## Range filters
Range filters enable the user to define a set of visible regions in the 3D view based on IJK boxes.
Each *Include* range filter will *add more cells* to the visualization. The view will show the union of all the *Include* range filters.
@@ -39,36 +44,42 @@ Below is a snapshot of the **Property Editor** of the **Range Filter** :
- - **Filter Type** : The filter can either make the specified range visible ( *Include* ), or remove the range from the View ( *Exclude* ).
- - **Grid** : This option selects which of the grids the range is addressing.
- - **Apply to Subgrids** : This option tells ResInsight to use the visibility of the cells in the current grid to control the visibility of the cells in sub-LGR's. If this option is turned off, Sub LGR-cells is not included in this particular Range Filter.
+ - **Filter Type** -- The filter can either make the specified range visible ( *Include* ), or remove the range from the View ( *Exclude* ).
+ - **Grid** -- This option selects which of the grids the range is addressing.
+ - **Apply to Subgrids** -- This option tells ResInsight to use the visibility of the cells in the current grid to control the visibility of the cells in sub-LGRs. If this option is turned off, sub-LGR cells are not included in this particular Range Filter.
The **Start** and **Width** labels in front of the sliders feature a number in parentheses denoting the maximum available value.
The **Start** labels show the index of the start of the active cells.
The **Width** labels show the number of active cells from the start of the active cells.
-### Property filters
+## Property filters
**Property filters** apply to the results of the **Range filters** and limit the visible cells to the ones approved by the filter. For a cell to be visible it must be accepted by all the property filters.
+A new property filter can be made by activating the context menu on **Property Filters** or by right-clicking inside a 3D view. The new property filter is based on the currently viewed cell result by default.
+
+The name of the property filter is automatically set to *"propertyname (min .. max)"* as you edit the property filter.
+
+
+The context command **Apply As Cell Result** on a property filter sets the **Cell Color Result** to the same values as the selected property filter.
+
+
Below is a snapshot of the **Property Editor** of the **Property Filter**.
-#### Property value range
+### Property value range
The filter is based on a property value range (Min - Max). Cells in the range are either shown or hidden depending on the **Filter Type** (*Include*/*Exclude*). Exclude-filters remove the selected cells from the **View** even if some other filter includes them.
-A new property filter can be made by activating the context menu for **Property Filters**. The new property filter is based on the currently viewed cell result by default.
+#### Range Behavior for Flow Diagnostic results
+Normally the available range in the sliders is the max and min of all the values in all the timesteps. For Flow Diagnostics results, however, the available range is based on the current timestep.
-The name of the property filter is automatically set to *"propertyname (min .. max)"* as you edit the property filter.
+We still need to keep the range somewhat fixed while moving from timestep to timestep, so ResInsight tries to preserve the intention of your range settings as the available range changes. If either the max or min value is set to the limit, ResInsight will keep that setting at the limit even when the limit changes. If you set a specific value for the max or the min, that setting will keep its value, even if it happens to end up outside the available range at a time step.
-#### Category selection
-If the property is representing integer values or formation names, the property filter displays a list of available categories used to filter cells. The separate values can then be toggled on or off using the list in the Property Editor.
+### Category selection
+If the property represents integer values, well tracer names, or [ formation names ]({{ site.baseurl }}/docs/formations), the property filter displays a list of available categories used to filter cells. The separate values can then be toggled on or off using the list in the Property Editor.
If it is more convenient to filter the values using a value range, toggle the **Category Selection** option off.
-### Well cell filters
-Well cell filters are a special type of filters that are controlled from the **Simulation Wells** item. See [Simulation Wells]({{ site.baseurl }}/docs/simulationwells) for more details.
-They are not applicable for Geomechanical cases.
diff --git a/docs/FlowDiagnosticsPlots.md b/docs/FlowDiagnosticsPlots.md
new file mode 100644
index 0000000000..0ba406bcbd
--- /dev/null
+++ b/docs/FlowDiagnosticsPlots.md
@@ -0,0 +1,94 @@
+---
+layout: docs
+title: Flow Diagnostics Plots
+permalink: /docs/flowdiagnosticsplots/
+published: true
+---
+
+
+Flow Diagnostics Plots are managed from the **Project Tree** of the **Plot Main Window** in the folder **Flow Diagnostics Plots**. This folder contains a **Flow Characteristics Plot**, a default **Well Allocation Plot** and a **Stored Plots** folder containing stored **Well Allocation Plots**.
+
+
+
+Please refer to [Cell Results -> Flow Diagnostic Results]({{ site.baseurl }}/docs/cellresults#flow-diagnostic-results) for a description of the results and references to more information about the methodology.
+
+## Well Allocation Plots
+
+Well allocation plots show the flow along a specified well, along with either the phase distribution or the amount of support from/to other wells. The total phase or allocation is shown in the legend and as a pie chart, while the well flow is shown in a depth vs. flow graph.
+
+### Branches
+
+Each branch of the well will be assigned a separate **Track**. For normal wells this is based on the branch detection algorithm used for Well Pipe visualization, and will correspond to the pipe visualization with **Branch Detection** *On*. ( See [Well Pipe Geometry]({{ site.baseurl }}/docs/simulationwells#well-pipe-geometry) )
+Multi Segment Wells will be displayed according to their branch information, but tiny branches consisting of only one connection are lumped into the main branch to make the visualization more understandable. ( See [Dummy branches]({{ site.baseurl }}/docs/simulationwells#dummy-branches) )
+
+### Creating Well Allocation Plots
+
+To plot the Well allocation for a well, right-click the well in the **Project Tree** or in the **3D View** and invoke the command **Plot Well Allocation**.
+
+
+
+The command updates the default **Well Allocation Plot** with new values based on the selection and the settings in the active view. This plot can then be copied to the **Stored Plots** folder by the context command **Add Stored Well Allocation Plot**.
+
+### Options
+
+The **Legend**, **Total Allocation** pie chart, and the **Well Flow/Allocation** can be turned on or off from the toggles in the **Project Tree**. The other options are controlled from the property panel of a Well Allocation Plot:
+
+
+
+- **Name** -- Auto generated name used as plot title
+- **Show Plot Title** -- Toggles whether to show the title in the plot
+- **Plot Data** -- Options controlling when and what the plot is based on
+ - **Case** -- The case to plot data from
+ - **Time Step** -- The selected time step
+ - **Well** -- The simulation well to plot
+- **Options**
+ - **Plot Type**
+ - **Allocation** -- Plots *Reservoir well flow rates* along with how this well supports/is
+ supported by other wells.
+ ( This option is only available for cases with Flux results available. )
+ - **Well Flow** -- Plots *Surface Well Flow Rates* together with phase split between Oil, Gas, and Water.
+ - **Flow Type**
+ - **Accumulated** -- Plots an approximation of the accumulated flow along the well
+ - **Inflow Rates** -- Plots the rate of flow from the connection into the well
+ - **Group Small Contributions** -- Groups small well contributions into a group called **Other**
+ - **Threshold** -- Threshold used by the **Group Small Contributions** option.
+
+### Depth Settings
+
+The depth value in the plot can be controlled by selecting the **Accumulated Flow**/**Inflow Rates** item in the **Project Tree**. This item represents the Well-Log-like part of the Well Allocation Plot and its properties are shown below:
+
+
+
+- **Name** -- The plot name, updated automatically based on the **Flow Type** and well
+- **Depth Type**
+ - **Pseudo Length** -- Use the length along the visualized simulation well pipe as depth.
+ In this mode the curves are extended somewhat above zero depth keeping the curve
+ values constant. This is done to make it easier to see the final values of the curves relative to each other.
+ The depths are calculated with **Branch detection** *On* and using the **Interpolated** well pipe geometry.
+ ( See [Well Pipe Geometry]({{ site.baseurl }}/docs/simulationwells#well-pipe-geometry) )
+ - **TVD** -- Use True Vertical Depth on the depth-axis.
+ This will produce distorted plots for horizontal or near horizontal wells.
+ - **Connection Number** -- Use the number of connections counted from the top on the depth-axis.
+- **Visible Depth Range** -- These options control the depth zoom
+ - **Auto Scale** -- Toggles autoscale on/off. The plot is autoscaled when significant changes to its settings are made
+ - **Min**, **Max** -- Sets the visible depth range. These are updated when zooming using the mouse wheel etc.
+
+### Accessing the Plot Data
+
+The context command **Show Plot Data** shows a window containing the plot data in ASCII format. The content of this window is easy to copy and paste into Excel or other tools for further processing.
+
+It is also possible to save the ASCII data directly to a file by using the context command **Export Plot Data to Text File** on the **Accumulated Flow**/**Inflow Rates** item in the **Project Tree**.
+
+The total accumulation data can also be viewed in ASCII format by the command **Show Total Allocation Data**.
+
+## Flow Characteristics Plot
+
+This window displays three different graphs describing the overall behaviour of the reservoir for each timestep from a flow diagnostics point of view.
+
+The timesteps available are only those already calculated by the flow diagnostics solver. That means timesteps for which flow diagnostic results have been requested either by Cell Results, Well Allocation Plots, or Well Log Extraction Curves.
+
+
+
+- **Lorenz Coefficient** -- This plot displays the Lorenz coefficient for the complete reservoir for each calculated timestep. The time step color is used as a reference for the timestep in the other graphs.
+- **Flow Capacity vs Storage Capacity** -- This plot displays the F-phi curve of the reservoir, one curve for each calculated timestep.
+- **Sweep Efficiency** -- This plot displays one Sweep Efficiency curve for each calculated timestep.
diff --git a/docs/Formations.md b/docs/Formations.md
index c3a2799656..e83ca8c6d2 100644
--- a/docs/Formations.md
+++ b/docs/Formations.md
@@ -17,7 +17,7 @@ To use this functionality you will need to :
## Import of Formation Names files
Formation Names files can be imported by using the command: **File->Import->Import Formation Names**.
-The user is asked to select ```*.lyr``` files for import.
+The user is asked to select _`*.lyr`_ files for import.
The imported Formation Names files are then listed in the **Project Tree** in a folder named **Formations**.
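+
+A Formation Names file is a plain text file mapping formation names to K-layer ranges. A minimal sketch of the kind of content (the names and ranges are made up for illustration, and the exact syntax may vary between versions):
+
+```
+'Upper'   1 - 4
+'Middle'  5 - 10
+'Lower'  11 - 18
+```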
@@ -47,7 +47,7 @@ If the formation file is modified outside ResInsight, the formation data can be
The formations can be visualized as a result property in **Cell Results**, **Cell Edge Result**, and **Separate Fault Result**. When selected, a special legend displaying formation names is activated.
### Property filter based on formations
-Formation names are available in Property Filters as Result Type **Formation Names**. This makes it easy to filter geometry based on formation specfications.
+Formation names are available in Property Filters as Result Type **Formation Names**. This makes it easy to filter geometry based on formation specifications.
See [ Cell Filters ]({{ site.baseurl }}/docs/filters) for details.
diff --git a/docs/GettingStarted.md b/docs/GettingStarted.md
index edd42ae9c2..d84bd520bd 100644
--- a/docs/GettingStarted.md
+++ b/docs/GettingStarted.md
@@ -28,11 +28,12 @@ Each of the windows can also be closed freely, but if both are closed, ResInsigh
Each of the main windows has a central area and several docking windows surrounding it. The different docking
windows can be managed from the **Windows** menu or directly using the local menu bar of the docking window.
-- **Project Tree** - contains all application objects in a tree structure.
-- **Property Editor** - displays all properties for the selected object in the **Project Tree**
-- **Process Monitor** - displays output from Octave when executing Octave scripts
-- **Result Info** - displays info for the selected object in the 3D scene
-- **Result Plot** - displays curves based on result values for the selected cells in the 3D scene
+- **Project Tree** -- contains all application objects in a tree structure.
+- **Property Editor** -- displays all properties for the selected object in the **Project Tree**
+- **Process Monitor** -- displays output from Octave when executing Octave scripts
+- **Result Info** -- displays info for the selected object in the 3D scene
+- **Result Plot** -- displays curves based on result values for the selected cells in the 3D scene
+- **Messages** -- displays occasional info and warnings related to the operations executed.
Result Info and Result Plot are described in detail in [ Result Inspection ]({{ site.baseurl }}/docs/resultinspection)
@@ -56,9 +57,9 @@ Standard window management for applying minimized, normal and maximized state is
Commands to arrange the windows in the standard ways are available from the **Windows** menu
-- **Tile Windows** - distribute all open view windows to fill available view widget space
-- **Cascade Windows** - organize all open view windows sligthly offset on top of each other
-- **Close All Windows** - close all open view windows
+- **Tile Windows** -- distribute all open view windows to fill available view widget space
+- **Cascade Windows** -- organize all open view windows slightly offset on top of each other
+- **Close All Windows** -- close all open view windows
#### Editing 3D views and Plot Windows content
@@ -82,7 +83,7 @@ There are three different Eclipse Case types:
This is a Case based on the results of an Eclipse simulation, read from a grid file together with static and restart data. Multiple Cases can be selected and read from a folder.
##### Input case 
-This Case type is based on a `*.GRDECL` file, or a part of an Eclipse *Input* file. This Case type supports loading single ASCII files defining Eclipse Cell Properties, and also to export modified property sets to ASCII files.
+This Case type is based on a _`*.GRDECL`_ file, or a part of an Eclipse *Input* file. This Case type supports loading single ASCII files defining Eclipse Cell Properties, and also to export modified property sets to ASCII files.
Each of the Eclipse properties is listed as a separate entity in the **Project Tree**, and can be renamed and exported.
See [ Grid Import and Property Export ]({{ site.baseurl }}/docs/gridimportexport)
@@ -91,7 +92,7 @@ This is a Case type that belongs to a *Grid Case Group* and makes statistical ca
##### Summary Case 
-This is the case type listed in the Plot Main Window, and represents an `*.SMSPEC` file. These Cases are available for Summary Plotting. See [ Summary Plots ]({{ site.baseurl }}/docs/summaryplots).
+This is the case type listed in the Plot Main Window, and represents an _`*.SMSPEC`_ file. These Cases are available for Summary Plotting. See [ Summary Plots ]({{ site.baseurl }}/docs/summaryplots).
#### Geomechanical cases 
@@ -104,12 +105,12 @@ A **Grid Case Group** is a group of Eclipse **Result Cases** with identical grid
### The Project file and the Cache directory
-ResInsight stores all the views and settings in a Project File with the extension: `*.rsp`.
+ResInsight stores all the views and settings in a Project File with the extension: _`*.rsp`_.
This file only contains *references* to the real data files, and does not in any way copy the data itself. Data files generated by ResInsight are also referenced from the Project File.
-Statistics calculations, octave generated property sets, and SSI-hub imported well paths are saved to a folder named `_cache` in the same directory as the project file. If you need to move your project, make sure you move this folder along. If you do not, the calculations or well path import needs to be done again.
+Statistics calculations, Octave-generated property sets, and SSI-hub imported well paths are saved to a folder named _`_cache`_ in the same directory as the project file. If you need to move your project, make sure to move this folder along with it. If you do not, the calculations or well path import must be done again.
-The *.rsp -file is an XML file, and can be edited by any text editor.
+The _`*.rsp`_ file is an XML file, and can be edited by any text editor.
diff --git a/docs/GridImportExport.md b/docs/GridImportExport.md
index 450bd75458..d58988a042 100644
--- a/docs/GridImportExport.md
+++ b/docs/GridImportExport.md
@@ -5,33 +5,36 @@ permalink: /docs/gridimportexport/
published: true
---
-### Importing Eclipse cases
+## Importing Eclipse cases
ResInsight supports the following types of Eclipse input data:
-- `*.GRID` and `*.EGRID` files along with their `*.INIT` and restart files `*.XNNN` and `*.UNRST`.
-- Grid and Property data from `*.GRDECL` files.
+- _`*.GRID`_ and _`*.EGRID`_ files along with their _`*.INIT`_ and restart files _`*.XNNN`_ and _`*.UNRST`_.
+- Grid and Property data from _`*.GRDECL`_ files.
-#### Eclipse Results
-1. Select **File->Import->  Import Eclipse Case** and select an `*.EGRID` or `*.GRID` Eclipse file for import.
+### Eclipse Results
+1. Select **File->Import->  Import Eclipse Case** and select an _`*.EGRID`_ or _`*.GRID`_ Eclipse file for import.
2. The case is imported, and a view of the case is created
+The **Reload Case** command can be used to reload a previously imported case, to make sure it is up to date. This is useful if the grid or result files change while a ResInsight session is active.
+
You can select several grid files in one go by multiple selection of files (Ctrl + left mouse button, Shift + left mouse button).
-#### Eclipse ASCII input data
-1. Select **File->Import->  Import Input Eclipse Case** and select a `*.GRDECL` file.
-2. The case is imported, and a view of the case is created
-3. Right click the **Input Properties** in the generated **Input Case** and use the context menu to import additional Eclipse Property data files.
-#### Handling missing or wrong MAPAXES
+### Eclipse ASCII input data
+1. Select **File->Import->  Import Input Eclipse Case** and select a _`*.GRDECL`_ file.
+2. The case is imported, and a view of the case is created
+3. Right-click the **Input Properties** in the generated **Input Case** and use the context menu to import additional Eclipse Property data files.
+
+### Handling missing or wrong MAPAXES
The X and Y grid data can be negated in order to make the Grid model appear correctly in ResInsight. This functionality is accessible in the **Property Editor** for all Eclipse Case types as the toggle buttons **Flip X Axis** and **Flip Y Axis** as shown in the example below.
-### Importing ABAQUS odb cases
-When ResInsight is compiled with ABAQUS-odb support, `*.odb` files can be imported by selecting the command:
+## Importing ABAQUS odb cases
+When ResInsight is compiled with ABAQUS-odb support, _`*.odb`_ files can be imported by selecting the command:
**File->Import->  Import Geo Mechanical Model**
diff --git a/docs/Home.md b/docs/Home.md
index 7fb60311a3..49f5bf57c8 100644
--- a/docs/Home.md
+++ b/docs/Home.md
@@ -1,6 +1,6 @@
---
layout: docs
-title: ResInsight 2016.11
+title: ResInsight 2017.05
permalink: /docs/home/
published: true
---
@@ -12,20 +12,21 @@ The system also constitutes a framework for further development and can be exten
### Efficient User Interface
The user interface is tailored for efficient interpretation of reservoir simulation data with specialized visualizations of properties, faults and wells. It enables easy handling of a large number of realizations and calculation of statistics. To be highly responsive, ResInsight exploits multi-core CPUs and GPUs. Efficient plotting of well log plots and summary vectors is available through selected plotting features.
+### Flow Diagnostics
+Flow diagnostics calculations are embedded in the user interface and allow instant visualization of several well-based flow diagnostics properties, such as: time of flight, flooding and drainage regions, well pair communication, well tracer fractions, well allocation plots, and well communication lines. The calculations are performed by a library called [opm-flowdiagnostics](https://github.com/OPM/opm-flowdiagnostics) developed by [SINTEF Digital](http://www.sintef.no/sintef-ikt/#/). [More...]({{site.baseurl}}/docs/cellresults#method)
+
### Octave Integration
Integration with GNU Octave enables powerful and flexible result manipulation and computations. Derived results can be returned to ResInsight for further handling and visualization. Eventually, derived and computed properties can be directly exported to Eclipse input formats for further simulation cycles and parameter studies.
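+
+A minimal sketch of the kind of Octave script this enables (it assumes a running ResInsight session and the `riGetActiveCellProperty`/`riSetActiveCellProperty` functions from ResInsight's Octave interface; exact signatures may differ between versions):
+
+```
+% Read PRESSURE for all active cells and time steps from the active case
+P  = riGetActiveCellProperty("PRESSURE");   % matrix: activeCells x timeSteps
+% Compute the pressure change relative to the first time step
+DP = P - repmat(P(:, 1), 1, columns(P));
+% Send the derived property back to ResInsight for visualization
+riSetActiveCellProperty(DP, "PRESSURE_DELTA");
+```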
### Data support
The main input data is
-`*.GRID` and `*.EGRID` files along with their `*.INIT` and restart files `*.XNNN` and `*.UNRST`.
-Summary vectors can be imported from `*.SMSPEC` files.
-
+_`*.GRID`_ and _`*.EGRID`_ files along with their _`*.INIT`_ and restart files _`*.XNNN`_ and _`*.UNRST`_.
+Summary vectors can be imported from _`*.SMSPEC`_ files.
ResInsight also supports selected parts of Eclipse input files and can read grid
-information and corresponding cell property data sets from `*.GRDECL` files.
+information and corresponding cell property data sets from _`*.GRDECL`_ files.
+Well log data can be imported from _`*.LAS`_ files.
-Well log data can be imported from `*.LAS` files.
-
-ResInsight can also be built with support for Geomechanic models from ABAQUS in the `*.odb` file format.
+ResInsight can also be built with support for Geomechanical models from ABAQUS in the _`*.odb`_ file format.
### About
ResInsight has been co-developed by [Statoil ASA](http://www.statoil.com/), [Ceetron Solutions AS](http://www.ceetronsolutions.com/), and [Ceetron AS](http://ceetron.com/) with the aim to provide a versatile tool for professionals who need to visualize and process reservoir models.
diff --git a/docs/Installation.md b/docs/Installation.md
index 3290d93313..3c101e289a 100644
--- a/docs/Installation.md
+++ b/docs/Installation.md
@@ -15,8 +15,10 @@ published: true
3. Start ResInsight.exe
### Octave installation (optional)
-1. ResInsight is delivered with support for Octave 4.0.0 which can be downloaded here: [Octave-4.0.0](ftp://ftp.gnu.org/gnu/octave/windows/octave-4.0.0_0-installer.exe)
-2. Launch ResInsight.exe, open **Edit->Preferences**. On the **Octave** tab, enter the path to the Octave command line interpreter executable, usually `C:\Your\Path\To\Octave-x.x.x\bin\octave-cli.exe`
+1. Download [Octave-4.0.0](ftp://ftp.gnu.org/gnu/octave/windows/octave-4.0.0_0-installer.exe) and install it.
+2. Launch ResInsight.exe, open **Edit->Preferences**.
+3. On the **Octave** tab, enter the path to the Octave command line interpreter executable.
+ ( Usually _`C:\Your\Path\To\Octave-x.x.x\bin\octave-cli.exe`_ )
A binary package of ResInsight will normally not work with other Octave versions than the one it is compiled with.
diff --git a/docs/InstallationLinux.md b/docs/InstallationLinux.md
index 589b0e353d..6f2b9ca120 100644
--- a/docs/InstallationLinux.md
+++ b/docs/InstallationLinux.md
@@ -14,13 +14,18 @@ published: true
2. Extract content from TAR file
3. Start ./ResInsight
-#### Octave installation (optional)
+### Installation from binary packages on Linux
+ Binary packages for ResInsight are available as part of the distribution provided by the [OPM project](http://opm-project.org/?page_id=36)
+
+### Octave installation (optional)
The precompiled Octave support is only tested on RedHat 6 (ResInsight 1.3.2-dev and earlier was also tested on RedHat 5) and is not expected to work for other configurations, unless you build ResInsight yourself. See [Build Instructions]({{ site.baseurl }}/docs/buildinstructions)
1. Install Octave directly from the package manager in Linux. See the documentation for your particular distribution.
-2. Launch ResInsight, open **Edit->Preferences** and enter the path to the Octave command line interpreter executable, usually just 'octave'.
+2. Launch ResInsight and open **Edit->Preferences**.
+3. Enter the path to the Octave command line interpreter executable.
+ ( Usually just _`octave`_. )
-### Display application icons in GNOME
+### Display menu icons in GNOME
By default, icons are not visible in menus in the GNOME desktop environment. ResInsight has icons for many menu items, and icons can be made visible by issuing the following commands (tested on RHEL6):
```
diff --git a/docs/Intersections.md b/docs/Intersections.md
index c4a1d20a5a..9f0a45ed7c 100644
--- a/docs/Intersections.md
+++ b/docs/Intersections.md
@@ -15,7 +15,7 @@ Intersections are stored in a folder named **Intersections** in a **View** as sh
-### Curve Based **Intersections**
+## Curve Based **Intersections**
There are three types of curve based intersections: Well Path, Simulation Well, and Polyline intersections.
@@ -27,21 +27,19 @@ They can also be created from the context menu in the 3D view, as described belo
To be able to see the intersections in the 3D view, the grid cells can be hidden by disabling the Grids item in the Project Tree or activating the Hide Grid Cells toolbar button.
-#### Common Curve based Intersection Options
+### Common Curve based Intersection Options
The property panel of a Well Path based Intersection is shown below:
-Property | Description
----------------|------------
-Name | Automatically created based on the item specifying the intersection. The user can customize the name by editing, but will be updated if you change the well or well path.
-Intersecting Geometry | These options controls the curve to be used for the cross section, and depends on the type of intersection you choose.
-Direction | Horizontal, vertical or Defined by two points
-Extent length | Defines how far an intersection for Well Path or Simulation Well is extended at intersection ends
-Show Inactive Cells | Controls if inactive cells are included when creating the intersection geometry
+- **Name** -- Automatically created based on the item specifying the intersection. The user can customize the name by editing it, but it will be updated if you change the well or well path.
+- **Intersecting Geometry** -- These options control the curve to be used for the cross section, and depend on the type of intersection you choose.
+- **Direction** -- Horizontal, vertical, or defined by two points
+- **Extent length** -- Defines how far an intersection for Well Path or Simulation Well is extended at intersection ends
+- **Show Inactive Cells** -- Controls if inactive cells are included when creating the intersection geometry
-**Direction**
+#### Direction
The defined direction is used to extrude the curve, thereby creating a set of planes.
@@ -53,23 +51,23 @@ When **Defined by two points** is the active option, the user can define the dir
- To finish adding points, click the button **Stop picking points** in the **Property Editor**.
- The background color of the point list is then set to white.
-#### Well Path Intersection
+### Well Path Intersection
A new **Well Path** intersection can be created by right-clicking the well path in the 3D view or in the **Project Tree**.
When a well path intersection is created, the source well path can be changed by using the **Well Path** selection combo box in the **Property Editor**.
-#### Simulation Well Intersection
+### Simulation Well Intersection
A new **Simulation Well** intersection can be created by right-clicking the simulation well in the 3D view or in the **Project Tree**.
When a simulation well intersection is created, the source simulation well can be changed by using the **Simulation Well** selection combo box in the **Property Editor**.
-If the well contains more than one branch, the intersection geometry will be created for the selected brach in the **Branch** combo box.
+If the well contains more than one branch, the intersection geometry will be created for the selected branch in the **Branch** combo box.
-#### Polyline Intersection
+### Polyline Intersection
A new **Polyline** intersection can be created from the context menu in the 3D view. Then, by left-clicking on reservoir geometry, a polyline is created. The points are added to the point list in the **Property Editor**.
@@ -83,24 +81,22 @@ To append more points by clicking in the 3D view, push the button **Start pickin
The points in the list can be copied to clipboard using **CTRL-C** when keyboard focus is inside the point list. A new list of points can be pasted into the point list by using **CTRL-V**.
-### Intersection Box and Intersection Planes
+## Intersection Box and Intersection Planes
A new **Intersection Box** or **Intersection Plane** can be created from the context menu in the 3D view or the context menu in the **Project Tree**.
-The following table describes the properties for an **Intersection Box**:
+The following list describes the properties for an **Intersection Box**:
-Property | Description
----------------|------------
-Name | Automatically created based on the item specifying the intersection
-Box Type | Box or x-plane, y-plane or z-plane
-Show Inactive Cells | Controls if inactive cells are included when creating the intersection geometry
-X Coordinates | Coordinates for x range
-Y Coordinates | Coordinates for y range
-Depth | Coordinates for depth range
-XY Slider Step Size | Defines how much the value changes when the slider for XY values is changed, default value 1.0
-Depth Slider Step Size | Defines how much the value changes when the slider for depth values is changed, default value 0.5
+- **Name** -- Automatically created based on the item specifying the intersection
+- **Box Type** -- Box or x-plane, y-plane or z-plane
+- **Show Inactive Cells** -- Controls if inactive cells are included when creating the intersection geometry
+- **X Coordinates** -- Coordinates for x range
+- **Y Coordinates** -- Coordinates for y range
+- **Depth** -- Coordinates for depth range
+- **XY Slider Step Size** -- Defines how much the value changes when the slider for XY values is changed, default value 1.0
+- **Depth Slider Step Size** -- Defines how much the value changes when the slider for depth values is changed, default value 0.5
Direct interaction in a 3D view is activated when **Show 3D manipulator** is pressed. Handles are displayed at the sides of the intersection object, and interactive modification is done by dragging a handle in the 3D view.
diff --git a/docs/LinkedViews.md b/docs/LinkedViews.md
index cda01a7a9e..eaa7b73d84 100644
--- a/docs/LinkedViews.md
+++ b/docs/LinkedViews.md
@@ -5,26 +5,34 @@ permalink: /docs/linkedviews/
published: true
---
+
+
One or more views can be linked together to allow some settings, like camera position and range filters, to propagate from one view to another.
## Establish Linked Views
+
To establish a link between views, select  **Link Visible Views** from the View toolbar. This will open a dialog where the Master View is selected. When pressing Ok in this dialog, the **Linked Views** items are displayed in the top of the **Project Tree**.

## Linked view Options
+
When selecting a linked view in the project tree, the different options are available in the **Property Editor**.

#### Link Options
-- **Camera** Navigation in any of the views where this option is active will be applied to the other linked views with this option set.
-- **Time Step** Change of time step in any of the views where this option is active will be applied to the other linked views with this option set.
-- **Cell Color Result** Change of cell result in the master view will be applied to all dependent views where this option is active. **Cell Color Result** is only supported between views of the *same type*.
+
+- **Camera** -- Navigation in any of the views where this option is active will be applied to the other linked views with this option set.
+- **Show Cursor** -- Shows the position of the mouse cursor in the other views as a cross-hair
+- **Time Step** -- Change of time step in any of the views where this option is active will be applied to the other linked views with this option set.
+- **Cell Color Result** -- Change of cell result in the master view will be applied to all dependent views where this option is active. **Cell Color Result** is only supported between views of the *same type*.
+- **Legend Definition** -- Links the legend between views already linking the **Cell Results Color**
#### Link Cell Filters
-- **Range Filters** Range filters in master view will be applied to all dependent views where this option is active. Normally this is done by a direct copy, but if the master and dependent view is of different types (Eclipse and Geomechanical views) and the Eclipse case is within the bounds of the Geomechanical case, ResInsight tries to map the range filters to the corresponding cells in the other case.
-- **Property Filters** Property filters in master view will be applied to all dependent views where this option is active. This option is only enabled between views of the *same case*.
+
+- **Range Filters** -- Range filters in the master view will be applied to all dependent views where this option is active. Normally this is done by a direct copy, but if the master and dependent views are of different types (Eclipse and Geomechanical views) and the Eclipse case is within the bounds of the Geomechanical case, ResInsight tries to map the range filters to the corresponding cells in the other case.
+- **Property Filters** -- Property filters in master view will be applied to all dependent views where this option is active. This option is only enabled between views of the *same case*.
## Toggle Linking from the **Project Tree**
@@ -34,17 +42,17 @@ A linked view can temporarily be disabled by unchecking the linked view. To disa
Right-clicking one of the linked view entries in the **Project Tree** displays the following menu entries:
-- **Open All Linked Views** Open all the views which is part of the linked view group
-- **Delete All Linked Views** Delete the linked views group, and thereby unlink all the views.
-- **Delete** Remove an individual view from the group of linked views
+- **Open All Linked Views** -- Open all the views which are part of the linked view group
+- **Delete All Linked Views** -- Delete the linked views group, and thereby unlink all the views.
+- **Delete** -- Remove an individual view from the group of linked views
## 3D view Context menu
To activate the menu items for a linked view, right-click inside the 3D view anywhere outside the model.
Depending on whether the view is a dependent or an unlinked view, some of the following commands are available:
-- **Show Link Options** Activate the linked view item in the project tree, and show its properties.
-- **Set As Master View** Use the view as Master View
-- **Link View** Add the view to list of linked views
-- **Unlink View** Delete the view from list of linked views
+- **Show Link Options** -- Activate the linked view item in the project tree, and show its properties.
+- **Set As Master View** -- Use the view as Master View
+- **Link View** -- Add the view to the list of linked views
+- **Unlink View** -- Remove the view from the list of linked views
Master views have no available linking commands.
diff --git a/docs/OctaveInterface.md b/docs/OctaveInterface.md
index 51a535b635..474eccf488 100644
--- a/docs/OctaveInterface.md
+++ b/docs/OctaveInterface.md
@@ -5,42 +5,60 @@ permalink: /docs/octaveinterface/
published: true
---
-ResInsight provides a flexible interface to [Octave](http://www.gnu.org/software/octave/ "Octave").
-This includes a set of Octave functions that communicates with a running ResInsight session, features in ResInsight that makes it easy to manage and edit Octave scripts, and their execution using Octave.
+ResInsight provides a flexible interface to [Octave](http://www.gnu.org/software/octave/ "Octave") including:
+
+- Octave functions that communicate with a running ResInsight session
+- Features to simplify management and editing of Octave scripts from ResInsight
+- Commands to execute scripts using Octave
The Octave functions are documented in [Octave Interface Reference]({{ site.baseurl }}/docs/octaveinterfacereference).
-## Script management
+
+Note: The Octave interface does not support Geomechanical cases or Flow Diagnostics results.
+
+
+## Octave Script Management
Octave scripts are available in the **Scripts** folder in the **Project Tree**.

-This folder contains an entry for each of the directories you have added as a **Script Folder**. Each of the folder lists available `*.m` files and sub directories.
+This folder contains an entry for each of the directories you have added as a **Script Folder**. Each of the folders lists available _`*.m`_ files and subdirectories.
### Adding Script Folders
-You can add directories by right clicking the **Scripts** item to access the context menu.
+You can add directories by right-clicking the **Scripts** item to access the context menu.
Multiple standard script folder locations can also be defined in the field **Shared Script Folder(s)** in the **Preferences Dialog** (**Edit -> Preferences**).
-### Editing scripts
+### Editing Octave Scripts
To enable script editing from ResInsight you need to set up the path to a text editor in the **Script Editor** field in the **Preferences Dialog** (**Edit -> Preferences**)
When done, scripts can be edited using the context menu command **Edit** on the script item in the tree.
If you add a script file directly by creating a new file, the new script can be made visible in the user interface by activating **Refresh** in the context menu of a script folder.
-## Script execution
-Octave scripts can be executed with or without a selection of cases as context. The [Octave Interface Reference]({{ site.baseurl }}/docs/octaveinterfacereference) highlights in more depth how to design your Octave scripts to utilize these features.
+## Executing Octave Scripts
-### Without a case selection
-A script can be started by navigating to the script in the **Project Tree**, and selecting **Execute** from the context menu. The currently active case (The one with the active 3D View) will then be set as ResInsight's *Current Case*.
+ResInsight can be instructed to execute an Octave script either once, as a one-shot operation, or repeatedly, once for each selected case. The [Octave Interface Reference]({{ site.baseurl }}/docs/octaveinterfacereference) describes in more depth how to design your Octave scripts to utilize these features.
-### With a case selection
-One script can be executed on many cases by first selecting a set of cases, and then activating **Execute script** from the context menu for the case selection. The script is then executed once pr selected case. Each time ResInsight's *Current Case* is updated, making it accessible from the Octave script.
+### Executing a Script Once
+
+A script can be started by navigating to the script in the **Project Tree** and selecting **Execute** from the context menu. The currently active case (the one with the active 3D View) will then be set as ResInsight's *Current Case*, and the script is executed once.
+
+### Executing a Script for Each Selected Case
+
+One script can be executed on many cases by first selecting a set of cases, and then activating **Execute script** from the context menu on the case selection. The script is then executed once per selected case, setting ResInsight's *Current Case* each time.
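+
+As a sketch, a script meant for per-case execution can simply rely on the *Current Case* that ResInsight sets before each run. **riGetCurrentCase()** is documented in the Octave Interface Reference; the struct fields used below are assumptions based on that reference:
+
+```octave
+% Hypothetical per-case script: ResInsight runs this once for each selected
+% case, updating the Current Case before every invocation.
+Case = riGetCurrentCase();
+printf("Processing case '%s' (CaseId %d)\n", Case.CaseName, Case.CaseId);
+```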

-## Script Examples
+### Process Monitor
+
+When an Octave script is executed, the **Process Monitor** pops up and displays the output from Octave during script execution, as shown below:
+
+
+
+In addition to the output from the script, it prints a start and stop timestamp. The **Clear**-button deletes all the text in the monitor, and the **Stop**-button tries to kill the running Octave process.
+
+## Octave Script Examples
Here are some example scripts that illustrate the use of the Octave interface.
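+
+A minimal example in that spirit (hypothetical; it assumes the documented convention of omitting the CaseId parameter so the Current Case is used, and the **riGetActiveCellProperty** function from the Octave Interface Reference):
+
+```octave
+% Hypothetical example script: mean SOIL over all active cells, first time step.
+soil = riGetActiveCellProperty("SOIL");   % Current Case is used implicitly
+printf("Mean SOIL at first time step: %g\n", mean(soil(:, 1)));
+```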
diff --git a/docs/OctaveInterfaceReference.md b/docs/OctaveInterfaceReference.md
index f476cc5093..dad142e515 100644
--- a/docs/OctaveInterfaceReference.md
+++ b/docs/OctaveInterfaceReference.md
@@ -8,6 +8,10 @@ published: true
## Introduction
To identify a ResInsight case uniquely in the Octave script, an integer Id (CaseId) is used. This Id can be retrieved in several ways, but there are two main modes of operation regarding this for a particular octave script: Either the script is designed to work on a single case (the "Current Case"), or the script is designed to access the selection and traverse the cases by itself.
+
+Note: The Octave interface does not support Geomechanical cases or Flow Diagnostics results.
+
+
### Single case scripts
Single case scripts do not need to address cases explicitly, but work on what ResInsight considers to be the "Current Case". When the user selects several cases and executes a script on them, ResInsight loops over all cases in the selection, sets the current case and executes the script. All references to the "Current Case" from the script will then refer to the case currently being processed by ResInsight.
The Current Case can be accessed directly using **riGetCurrentCase()**, but the more direct way is to *omit the CaseId parameter* in the functions; the Current Case is then used automatically.
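+
+As a sketch, the two styles look like this (assuming **riGetActiveCellProperty** accepts an optional leading CaseId, as described in this reference, and a SOIL property is present):
+
+```octave
+% Explicit: fetch the Current Case and pass its Id.
+Case = riGetCurrentCase();
+soil = riGetActiveCellProperty(Case.CaseId, "SOIL");
+
+% Implicit: omit the CaseId parameter; the Current Case is used automatically.
+soil = riGetActiveCellProperty("SOIL");
+```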
diff --git a/docs/Preferences.md b/docs/Preferences.md
index dd8d829bc5..15502a5934 100644
--- a/docs/Preferences.md
+++ b/docs/Preferences.md
@@ -13,49 +13,57 @@ The preferences are not stored in the project files, but rather in a platform sp
## General - tab
-### Default settings - option group
+### Default settings
This group of options controls visual settings that will be used when creating new views.
- **Viewer Background**
-- **Gridlines** - Controls whether to show the gridlines by default.
+- **Gridlines** -- Controls whether to show the gridlines by default.
- **Mesh Color**
- **Mesh Color Along Faults**
- **Well Label Color**
-- **Font Size** - This font size is used for all labels shown in the 3D Views
+- **Font Size** -- This font size is used for all labels shown in the 3D Views
+- **Default Z Scale Factor** -- Default depth scale for grid models
-### 3D views - option group
-- **Navigation mode** - Defines how to use the mouse to interact with with the 3D model.
-- **Use shaders** - This option controls the use of OpenGL shaders. Should be left **On**. Available only for testing purposes.
-- **Show 3D Information** - Displays graphical resource usage as text in the 3D view.
+### 3D views
+- **Navigation mode** -- Defines how to use the mouse to interact with the 3D model.
+- **Use shaders** -- This option controls the use of OpenGL shaders. Should be left **On**. Available only for testing purposes.
+- **Show 3D Information** -- Displays graphical resource usage as text in the 3D view.
-### Behaviour when loading new case - option group
-- **Compute when loading new case** - If not present, compute DEPTH, DX, DY, DZ, TOP, BOTTOM when loading new case
-- **Load and show SOIL** - Control if SOIL is loaded and applied to grid
-- **Import faults/NNCs/advanced MSW data** - Disable import of data for a case to reduce case import time.
+### Other
-### SSIHUB - option group
+- **SSIHUB Address** -- Optional URL to the Statoil-internal web service used to import well paths.
+- **Show LAS Curve Without TVD Warning** -- Turns off the warning displayed when showing LAS curves in TVD mode
-- **ssihub Address** - Optional Url to Statoil internal web service used to import well paths.
+## Eclipse - tab
+
+
+
+### Behaviour When Loading Data
+
+- **Compute DEPTH Related Properties** -- If not present, compute DEPTH, DX, DY, DZ, TOP, BOTTOM when loading new cases
+- **Load and Show SOIL** -- Control if SOIL is loaded and applied to grid
+- **Import Faults/NNCs/Advanced MSW Data** -- Disable import of data for a case to reduce case import time.
+- **Fault Include File Absolute Path Prefix** -- Prefix used on Windows if fault files use absolute UNIX paths
## Octave - tab

-### Octave - option group
+### Octave
-- **Octave executable location** - Defines the binary file location for Octave. Usually without path on Linux, and including path on Windows.
-- **Show text header when executing scripts** - Enables the default output that octave outputs when started.
+- **Octave executable location** -- Defines the binary file location for Octave. Usually without path on Linux, and including path on Windows.
+- **Show text header when executing scripts** -- Enables the default output that octave outputs when started.
-### Script files - option group
+### Script files
-- **Shared Script Folder(s)** - Defines the search paths for octave scripts
-- **Script Editor** - The text editor to invoke when editing scripts
+- **Shared Script Folder(s)** -- Defines the search paths for octave scripts
+- **Script Editor** -- The text editor to invoke when editing scripts
## Summary - tab

-- **Create Summary Plots When Importing Eclipse Case** - Automatically import the summary case and display a plot if a `*.SMSPEC` file exists when importing an Eclipse binary case
-- **Default Vector Selection Filter** - Wildcard text defining the summary vector filter to be applied by default. Default string is `F*OPT`
+- **Create Summary Plots When Importing Eclipse Case** -- Automatically import the summary case and display a plot if a _`*.SMSPEC`_ file exists when importing an Eclipse binary case
+- **Default Vector Selection Filter** -- Wildcard text defining the summary vector filter to be applied by default. Default string is _`F*OPT`_
diff --git a/docs/RegressionTestSystem.md b/docs/RegressionTestSystem.md
index ce1f50ac1b..4babee4024 100644
--- a/docs/RegressionTestSystem.md
+++ b/docs/RegressionTestSystem.md
@@ -7,12 +7,15 @@ published: true
A regression tool for QA is built into ResInsight. This tool will do the following:
-1. Scan a directory for sub directories containing a **RegressionTest.rip** files.
+1. Scan a directory for subdirectories containing a **RegressionTest.rsp** file.
2. Each found project file will be opened, and all views in this project will be exported as snapshot images to file.
3. When snapshot images from all projects are completed, difference images based on generated and QA-approved images are computed.
4. Based on these three sets of images, an HTML report is created and automatically displayed.
-## Starting regression tests
+## Regression test files
+As the model size of some test files is quite large, the test data is located in a [separate repository](https://github.com/OPM/ResInsight-regression-test). In addition, some of the files are stored using [Git Large File Storage](https://git-lfs.github.com/).
+
+## How to run regression tests
To be able to run regression tests, you need the **compare** tool from the [ImageMagick suite](http://www.imagemagick.org/script/compare.php).
diff --git a/docs/ReservoirViews.md b/docs/ReservoirViews.md
index 2040a89b00..2a19a05540 100644
--- a/docs/ReservoirViews.md
+++ b/docs/ReservoirViews.md
@@ -12,7 +12,7 @@ published: true
Each item has a set of properties that can be edited in the **Property Editor**.
-Several views can be added to the same case by right clicking the case or a view and select **New View**. You can also **Copy** and then **Paste** a view into a Case. All the settings are then copied to the new view.
+Several views can be added to the same case by right-clicking the case or a view and selecting **New View**. You can also **Copy** and then **Paste** a view into a Case. All the settings are then copied to the new view.
Views of Eclipse models and Geomechanical models have a lot in common, but Eclipse views have some features that apply to Eclipse simulations only.
@@ -20,13 +20,13 @@ Views of Eclipse models and Geomechanical models has a lot in common, but Eclips
### View properties
-Grid appearance can be controlled from the **Property Editor** when a view is selected. This includes background color and z scaling. In addition, cell visibilty controls of inactive and invalid cells. 
+Grid appearance can be controlled from the **Property Editor** when a view is selected. This includes background color and z scaling. In addition, the visibility of inactive and invalid cells can be controlled.
Visibility of the grid box with labels displaying the coordinates for the reservoir can also be controlled using **Show Grid Box**.
### Cell Result 
-The **Cell Result** item defines which Eclipse property the 3D View uses for the main cell color. The property can be chosen in the property panel of the **Cell Result** item. The mapping between cell values and color is defined by the **Legend Definition**  along with some appearance settings on the Legend itself. (Number format etc.)
+The **Cell Result** item defines which Eclipse or Geomechanical property the 3D View uses for the main cell color. The property can be chosen in the property panel of the **Cell Result** item. The mapping between cell values and color is defined by the **Legend Definition**  along with some appearance settings on the Legend itself. (Number format etc.)
Please refer to [Result Color Legend]({{ site.baseurl }}/docs/resultcolorlegend) for details.
@@ -53,11 +53,13 @@ The **Histogram** shows a histogram of the complete time series of the currently

-**Statistics Time Range** controls if a single time step or all time steps are included when statistics is computed.
-**Statistics Cell Range** controls if visible cells or all active cells is included when statistics is computed.
+#### Statistics Options
+
+- **Statistics Time Range** -- Controls whether all time steps or only the current time step are included when statistics are computed. Flow Diagnostics results can only use the current time step option.
+- **Statistics Cell Range** -- Controls whether visible cells or all active cells are included when statistics are computed.
-The Text Box settings can be activated by clicking on the text info window in the 3D view.
+The Info Box settings can be activated by clicking on the Info Text in the 3D view.
### Grids 
diff --git a/docs/ResultColorLegend.md b/docs/ResultColorLegend.md
index 9a644895e3..0e3f5fafdd 100644
--- a/docs/ResultColorLegend.md
+++ b/docs/ResultColorLegend.md
@@ -9,23 +9,21 @@ The color mapping of the displayed cell result is controlled by the **Legend Def

+- **Number of levels** -- Defines the number of tick marks displayed next to the color legend
+- **Significant digits** -- Defines the number of significant digits in the number formatting
+- **Number format** -- Defines how the numbers are formatted
+- **Colors** -- Defines the color palette
-Item | Description
--------------------|------------
-Number of levels | Defines the number of tickmarks displayed next to the color legend
-Significant digits | Defines the number of significant digits in the number formatting
-Number format | Defines how the numbers are formatted
-Colors | Defines the color palette
-
-## Mapping
-- **Discrete Linear** - Legend divided into levels defined by **Number of levels**
-- **Continuous Linear** - Continuous legend with tickmark count defined by **Number of levels**
-- **Continuous Logarithmic** - Continuous logarithmic legend with tickmark count defined by **Number of levels**
-- **Discrete Logarithmic** - Logarithmic legend divided into levels defined by **Number of levels**
-- **Category** - Special legend with one level for each category, either integer values or formation names. Only available for result names ending with ```NUM``` or formation names.
-
-
-## Range type
-- **All Timesteps** - values for all time steps are used to find min and max value of the result values represented by the color legend
-- **Current Time Step** - use current (one) time step to find min and max values
-- **User Defined Range** - user specifies numeric values for min and max
+- **Mapping** -- This option defines how the values are mapped onto the color distribution
+ - **Discrete Linear** -- Legend divided into linear levels defined by **Number of levels**
+ - **Continuous Linear** -- Continuous linear legend with tick mark count defined by **Number of levels**
+ - **Continuous Logarithmic** -- Continuous logarithmic legend with tick mark count defined by **Number of levels**
+ - **Discrete Logarithmic** -- Logarithmic legend divided into levels defined by **Number of levels**
+ - **Category** -- Special legend with one level for each category, either integer values or formation names.
+ Only available for result names ending with _`NUM`_ or formation names.
+- **Range type**
+ - **All Timesteps** -- Values for all time steps are used to find the min and max of
+   the result values represented by the color legend.
+   (Not available for Flow Diagnostics results)
+ - **Current Time Step** -- Use the current (single) time step to find min and max values
+ - **User Defined Range** -- The user specifies numeric values for min and max
diff --git a/docs/ResultInspection.md b/docs/ResultInspection.md
index d92d958942..23aa704b30 100644
--- a/docs/ResultInspection.md
+++ b/docs/ResultInspection.md
@@ -6,25 +6,16 @@ published: true
---

-The results mapped on the 3D model can be inspected in detail by left clicking the interesting cells in the 3D view.
-The selected cells will be highlighted and text information extracted from the intersection point will be displayed in the docking window **Result Info**.
-
-{% comment %}  {% endcomment %}
-
-If a dynamic result is active, the result values of the selected cells for all time steps are displayed in the docking window **Result Plot** as one curve for each cell.
-
-Additional curves can be added to the plot if CTRL-key is pressed during picking. The different cells are highlighted in different colors, and the corresponding curve is colored using the same color.
-
-{% comment %}  {% endcomment %}
-
-To clear the cell-selection, left-click outside the visible geometry.
+The results mapped on the 3D model can be inspected in detail by left-clicking cells in the 3D view.
+The selected cells will be highlighted, text information displayed in the **Result Info** docking window, and the time-history values plotted in the **Result Plot**, if available.
Visibility of the docking windows can be controlled from the Windows menu.
## Result Info information
-Clicking on different type of geometry will display slightly different information as described in the following tables:
+
+Clicking cells displays slightly different information depending on the case type, as described in the following tables:
### Eclipse model
@@ -37,8 +28,6 @@ Formation names| Displays name of formation the cell is part of
### Geomechanical model
-When clicking in the 3D scene, the selected geometry will be an element.
-
Name | Description
-----------------------|------------
Closest result | Closest node ID and result value
@@ -46,3 +35,22 @@ Element | Element ID and IJK coordinate for the element
Intersection point | Location of left-click intersection of the geometry
Element result details | Lists all integration point IDs and results with associated node IDs and node coordinates
Formation names | Displays name of formation the cell is part of
+
+## Result Plot
+
+If a dynamic non-Flow Diagnostics result is active, the result values of the selected cells for all time steps are displayed in the docking window **Result Plot** as one curve for each cell.
+
+Additional curves can be added to the plot if the CTRL key is pressed during picking. The different cells are highlighted in different colors, and the corresponding curve is colored using the same color.
+
+To clear the cell-selection, left-click outside the visible geometry.
+
+### Adding the curves to a Summary plot
+
+The time history curves of the selected cells can be added to a Summary Plot by right-clicking in the **Result Plot** or in the 3D View.
+
+
+
+A dialog will appear, prompting you to select an existing plot or create a new one.
+
+
+
diff --git a/docs/SimulationWells.md b/docs/SimulationWells.md
index 5d36886427..dd6739ba1a 100644
--- a/docs/SimulationWells.md
+++ b/docs/SimulationWells.md
@@ -5,41 +5,111 @@ permalink: /docs/simulationwells/
published: true
---
-This item controls the overall settings for how wells in the Eclipse simulation are visualized.
-The wells are shown in two ways:
+
-1. A pipe through all cells with well connections
-2. By adding the well cells to the set of visible cells
+This section describes how wells defined in the simulation are displayed, and how to control the different aspects of their visualization.
-The latter is handled internally as a special range filter, and adds cells to the set of range filtered cells.
+## Commands
-The Property Editor of the **Simulation Wells** item is shown below:
+Several commands are available as context commands on a simulation well, either by right-clicking the well in the **3D View** or in the **Project Tree**.
-
+- **New Well Log Extraction Curve** -- Creates a new Well Log curve based on the selected well, the current time step and cell property.
+ ( See [Well Log Plots]({{ site.baseurl }}/docs/welllogsandplots) )
+- **New Intersection** -- Creates a new intersection based on the selected Simulation Well.
+ ( See [Intersections]({{ site.baseurl }}/docs/intersections) )
+- **Plot Production Rates** -- Creates a summary plot of the selected well's production rates, along with the bottom hole pressure.
+ ( See [Summary Plots]({{ site.baseurl }}/docs/summaryplots) )
+- **Plot Well Allocation** -- Creates or modifies the default Well Allocation Plot to show the well
+ allocation for the selected well. If the case has no fluxes, the well flow rates are shown instead.
+ ( See [ Flow Diagnostics Plots ]({{ site.baseurl }}/docs/flowdiagnosticsplots) )
+- **Show Contributing Wells** -- This command sets up a 3D View, adding filters and modifying the Cell Result based on Flow Diagnostics calculations, to show which regions and wells contribute to the selected well. It does the following:
+ - Adds a property filter of **Time Of Flight** to/from the selected well to show only the cells that contribute to/are influenced by the well.
+ - Sets the **Cell Result** to show **Tracer With Max Fraction** based on **All Injectors** or **All Producers** (the opposite of the selected well)
+ - Toggles the visibility of the other Simulation wells to show only wells contributing to/influenced by the selected well.
+
+## Overall Settings for Simulation Wells
+
+The Property Panel of the **Simulation Wells** item in the **Project Tree** contains options that are applied across all the wells. The visualization of each single well can be controlled by the options in the property panel of that particular well; these override the overall settings in the **Simulation Wells** item.
+
+If an option is overridden in any of the wells, this will be indicated in the corresponding top level toggle which will be partially checked. Toggling such a setting will overwrite the ones set on the individual level.
+
+The different parts of the **Simulation Wells** property panel are explained in the following.
+
+### Visibility
+
+
+
+These options control the visibility of different aspects of the simulation wells.
+
+- **Wells Through Visible Cells Only** -- This option will only show wells with connections to cells deemed visible by the combined result of **Range Filters** and **Property Filters**.
+- **Label** -- Controls visibility of well name labels in the 3D View
+- **Well head** -- Controls visibility of the arrow displaying the production status of the well
+- **Pipe** -- A symbolic pipe can be drawn between the well connection cells to illustrate the well. This option controls the visibility of the pipes.
+- **Spheres** -- This option toggles the visibility of spheres drawn at the center of each well connection cell.
+- **Communication Lines** -- Toggles the visibility of well communication lines.
+ These arrows show the communication between wells; broader arrows indicate a higher level of communication.
+ These arrows are based on Flow Diagnostics calculations, and are only available if the Eclipse results include fluxes.
+ Arrows representing communication in the opposite direction from what is expected (e.g. producers supporting another well due to cross flow) are displayed in a layer "under" the other arrows, to make them easier to see.
+### Well Cells and Fence
-- **Add cells to range filter** This option controls how the well cells
- (cells with connections to wells) are added to the set of range filtered cells.
- - *All On* will add the cells from all wells disregarding the individual settings on the well.
- - *All Off* will prevent any well cells to be added.
- - *Individually* will respect the individual settings for each well, and add the cells from the wells with this option set on.
-- **Use Well Fence** and
-- **Well Fence direction** Controls whether to add extensions of the well cells in the I, J or K direction to the set of range filtered cells
-- **Well head** These options control the appearance and position of the well labels and and symbols of the top of the well
-- **Global Well Pipe Visibility** Controls if and when to show the pipe representation of the wells. The options are:
- - *All On* will show the pipes from all wells disregarding the individual settings on the well.
- - *All Off* will hide all simulation well pipes.
- - *Individual* Will respect the individual settings for each well, and only show the well pipes from the wells with this option set on. See below.
- - *Visible Cells Filtered* This option will only show the pipes of wells that are connected to visible cells. That means the combined result of **Range Filters**, **Property Filters** and any **Well Range Filters**.
- *NOTE* : All Wells with **Well Range Filter** turned on will always be visible with this option selected.
-- **Pipe Radius Scale** Scaling the pipe radius by the average max cell size.
-- **Geometry based Branch detection** Applies only to ordinary wells (not MSW)
- and will detect that parts of a well really is a branch. Those well parts will
- be visualized as a branch starting at the well head instead of at the previous connected cell.
+
+
+- **Show Well Cells** -- This option toggles whether to add the well connection cells to the set of visible
+ cells. If no cell filters are active, toggling this option will conveniently hide all other cells,
+ displaying only the requested well cells.
+- **Show Well Cell Fence** and
+- **Well Fence direction** -- Control whether to add extensions of the well cells in the I, J or K direction to the set of visible cells
+
+
+### Size Scaling
+
+
+
+- **Well Head Scale** -- Scales the arrow displaying the production status of the well
+- **Pipe Radius Scale** -- Scales the pipe radius relative to the average I, J cell size.
+- **Sphere Radius Scale** -- Scales the connection cell sphere radius relative to the average I, J cell size.
+
+Open Simulation Wells are drawn with a slightly larger radius than closed wells. This makes open wells easier to see when they occupy the same cells as closed ones.
+
+### Colors
+
+
+
+- **Color Pipe Connections** -- Applies a red, green, blue, or gray color to the sections of the pipe touching connection cells, indicating the production status of each connection: gas injection, oil production, water injection, or closed, respectively.
+- **Label Color** -- Sets the well label color in the 3D view
+- **Unique Pipe Colors** -- Pressing this **Apply** button assigns a unique color to each well, overwriting their previous colors.
+- **Uniform Pipe Colors** -- Pressing this **Apply** button applies the displayed color to all the wells.
+
+### Well Pipe Geometry
+
+
+
+- **Type** -- Controls whether the pipe will go from cell center to cell center or in a smoother trajectory.
+- **Branch Detection** -- Enables splitting of wells into branches based on the positions of the connection cells. This option applies to ordinary wells only and has no effect on multi segment wells (MSW).
+
+### Advanced
+
+
+
+- **Well Cell Transparency** -- Controls the transparency level for the well cells
+- **Well Head Position** -- Controls the depth position of the wellhead. Either relative to the top of the active cells in the relevant IJ-column, or relative to the highest active cell overall.
+
+## Individual Simulation Well options
+
+
+
+Each of the wells has a set of individual settings corresponding to the settings on the global level. See the above documentation of *Overall Settings for Simulation Wells*.
+
+Except for the **Size Scaling**, these options will override the corresponding setting on the global level,
+and will result in a partially checked state on the corresponding toggle in the **Simulation Wells** property panel.
+The **Size Scaling** options, however, work relative to the scaling set on the top level.
## Well pipes of Multi Segment Wells
+ResInsight reads the MSW information in the result files and uses it to create a topologically correct visualization of the Multi Segment Well. Reading this information is somewhat time-consuming, and can be turned off in the [ Preferences ]({{ site.baseurl }}/docs/preferences).
+
### Geometry approximation
The pipe geometry generated for MSW's are based on the topology of the well (branch/segment structure) and the position of the cells being connected. The segment lengths are used as hints to place the branch points at sensible places. Thus the pipe geometry itself is not geometrically correct, but makes the topology of the well easier to see.
@@ -51,19 +121,3 @@ Often MSW's are modeled using a long stem without connections and a multitude of
### Picking reveals Segment/Branch info
Branch and segment info of a MSW-connected-Cell is shown in the **Result Info** window when picking a cell in the 3D View. This can be handy when relating the visualization to the input files.
-
-## Individual Simulation Well options
-
-Each of the wells can have some individual settings. These options works as specializations of the ones set on the global level (**Simulation Wells** See above) but will *only come into play when they are not ignored by the global settings*.
-
-This is particularly important to notice for the **Show Well Pipe** and **Range Filter** options. They will not have effect if the corresponding global settings in **Simulation Wells** allows them to.
-
-The properties of a single well are shown below.
-
-
-
-One option needs further explanation:
-
-- **Pipe Radius Scale** This option is a scale that is added to the "global" scale set in the **Simulation Wells** properties.
-
-
diff --git a/docs/Snapshots.md b/docs/Snapshots.md
index 6d8d805406..96ef72c681 100644
--- a/docs/Snapshots.md
+++ b/docs/Snapshots.md
@@ -4,8 +4,11 @@ title: Snapshots
permalink: /docs/snapshots/
published: true
---
+ResInsight has several commands to create snapshots conveniently: three commands that take snapshots of existing Plot and 3D Views directly, and a more advanced export command that can automatically modify Eclipse 3D Views before snapshotting them.
-ResInsight can take screen shots of your different 3D Views and Plot Windows directly. These commands are available from the toolbar and the menues in the respective main window.
+## Snapshots of Existing Views
+
+The commands to snapshot existing views and plots are available from the toolbar and the **Edit** and **File**->**Export** menus in the main windows.

@@ -21,9 +24,36 @@ Image export of the currently active 3D View or Plot Window can be launched from
If a project contains multiple 3D Views or Plot Windows, all of them can be exported in one go using **File -> Export -> Snapshot All Views To File**. This will either export all the 3D Views or all the Plot Windows, depending on whether you invoke the command in the 3D Main Window or the Plot Main Window.
-The files generated are stored in a folder named `snapshots` within the folder where the Project File resides.
+The files generated are stored in a folder named _`snapshots`_ within the folder where the Project File resides.
+
+## Advanced Snapshot Export 
+
+The **Advanced Snapshot Export** command is useful for exporting several images of a specified set of views while simultaneously changing some of their settings. By using this command it is easy to document all layers of a specific model, or generate images with identical setup across several different cases. It is also easy to export an image for each of the timesteps in a case, or even a combination of all these parameters.
+
+The **Advanced Snapshot Export** is available from the **File**->**Export** menu in the **3D Main Window**
+Invoking the command will display the following dialog:
+
+ 
+
+This table defines which 3D Views to modify, and how to modify them. Each row defines the modifications of a specific view, and for all the combinations a row specifies, a snapshot is generated.
+
+To edit a row, first activate it by toggling it on in the **Active** column, then double-click a cell to edit it.
+
+Options represented by columns:
+
+- **View** -- Selects the view to modify
+- **Result Type**, **Properties** -- Defines a list of Eclipse result properties to cycle through when creating snapshots. If properties from both the dynamic and the static list are needed, you must create a separate row for each.
+- **Start Time**, **End Time** -- Defines the timestep range to cycle through when creating snapshots
+- **Range Filter Slice**, **Range Start**, **Range End** -- Defines a range filter slice that will be added to the view, and then cycled from *Range Start* to *Range End* when creating snapshots.
+- **Cases** -- Defines the cases to cycle through while creating snapshots. Normally you cannot change which case a view displays, but this option does so temporarily.
+
+The number of exported snapshots from a row can easily end up being huge, so some caution is wise. The total number will be Properties * Timesteps * Range Steps * Cases.
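As an illustration of this multiplication (the counts below are hypothetical, not taken from a real project):

```python
# Hypothetical row in the Advanced Snapshot Export table:
properties = 3    # e.g. three result properties selected
timesteps = 10    # Start Time .. End Time spans 10 timesteps
range_steps = 5   # Range Start = 1, Range End = 5
cases = 2         # two cases selected

# Total snapshots exported for this single row:
total = properties * timesteps * range_steps * cases
print(total)  # 300
```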
+
+Rows can be created and deleted by right-clicking in the table. By default, five rows are created for convenience.
+
+The snapshots are generated and saved to the folder displayed in the **Export Folder** field when pressing the **Export** button. This might take quite some time, depending on your settings.
diff --git a/docs/SummaryPlots.md b/docs/SummaryPlots.md
index b982ef1e05..70be92a9ff 100644
--- a/docs/SummaryPlots.md
+++ b/docs/SummaryPlots.md
@@ -11,68 +11,68 @@ ResInsight can create summary plots based on vectors from SUMMARY files (`*.SMSP
When opening an Eclipse case in the 3D view, the associated summary file is opened automatically by default, and made available as a **Summary Case**.
Summary files can also be imported directly using the command: **File->Import->Import Summary Case**.
-When a summary case has been imported, a Summary Plot with a default **Curve Filter** is created.
-
-The default behaviours can be configured in the [ Preferences ]({{ site.baseurl }}/docs/preferences).
+When a summary case has been imported, a Summary Plot with a default **Curve Filter** is created. This default behaviour can be configured in the [ Preferences ]({{ site.baseurl }}/docs/preferences).
## Summary Plots
-A Summary Plot is a window displaying a graph in the main area of the Plot Mian Window. It can contain **Summary Curve Filters** and **Summary Curves** (See below) in addition to a title, left and right Y-Axis, and the Time-axis.
+A Summary Plot is a window displaying a graph in the main area of the **Plot Main Window**. It can contain **Summary Curve Filters** and **Summary Curves** (see below).
-A new plot can be created by using the context menu of a plot selecting  **New Summary Plot**.
+A new plot can be created from the context menu of a plot by selecting **New Summary Plot**.

Most of the settings for the Plot itself is controlled by its sub items in the Property Tree:
-- **Time Axis** - Controls the properties for the time axis (font size, title text, time range)
-- **Left Y-axis** - Controls the properties for the left Y-axis
-- **Right Y-axis** - Controls the properties for the right Y-axis
+- **Time Axis** -- Controls the properties for the time axis (font size, title text, time range)
+- **Left Y-axis** -- Controls the properties for the left Y-axis
+- **Right Y-axis** -- Controls the properties for the right Y-axis
+
+### Time Axis Properties
+
+
+
+- **Show Title** -- Toggles whether to show the axis title
+- **Title** -- A user defined name for the axis
+- **Title Position** -- Either *Center* or *At End*
+- **Font Size** -- The font size used for the dates/times shown at the ticks of the axis
+- **Time Mode** -- Option to show the time from Simulation Start, or as real date-times.
+- **Time Unit** -- The time unit used to display **Time From Simulation Start**
+- **Max**/**Min** -- The range of the visible time in the Plot in the appropriate time unit.
+ The format of dates is _`yyyy-mm-ddThh:mm:ssZ`_
### Y-axis properties

-| Parameter | Description |
-|-----------|-------------|
-| **Auto Title** | If enabled, the y-axis title is derived from the vectors associated with the axis. Names and unit are used. |
-| **Title** | If **Auto Title** is disabled, the plot title is set using this field |
-| **Title Position** | Controls the position of the title. Center or At End |
-| **Font Size** | Defines the font size used by the axis title |
-| **Max and Min** | Defines the visible y range |
-| **Number Format** | Defines how the legend numbers are formatted |
-| **Logarithmic Scale** | Draw plot curves using a logarithmic scale |
+- **Auto Title** -- If enabled, the y-axis title is derived from the vectors associated with the axis. Names and unit are used.
+- **Title** -- If **Auto Title** is disabled, the plot title is set using this field
+- **Title Position** -- Controls the position of the title. Center or At End
+- **Font Size** -- Defines the font size used by the axis title
+- **Logarithmic Scale** -- Draws plot curves using a logarithmic scale
+- **Number Format** -- Defines how the legend numbers are formatted
+- **Max and Min** -- Defines the visible y range
#### Number Format
-- **Auto** - Legend numbers are either using a scientific or decimal notation based on the number of digits of the value
-- **Decimal** - Legend numbers are displayed using decimal notation
-- **Scientific** - Legend numbers are displayed using scientific notation (ie. 1.2e+6)
-### Time Axis Properties
-
-
-| Parameter | Description |
-|-----------|-------------|
-| **Show Title** | Toggles wheter to show the axis title |
-| **Title** | A user defined name for the axis |
-| **Title Position** | Either *Center* or *At End* |
-| **Font Size** | The font Size used for the date/times shown at the ticks of the axis |
-| **Time Mode** | Option to show the time from Simulation Start, or as real date-times. |
-| | When activated, The **Time Unit** Option will appear, with options to show the relative time in different units|
-| **Max**/**Min** | The range of the visible time in the Plot in the appropriate time unit. The format of dates is yyyy-mm-ddThh:mm:ssZ |
+- **Auto** -- Legend numbers use either scientific or decimal notation depending on the number of digits in the value
+- **Decimal** -- Legend numbers are displayed using decimal notation
+- **Scientific** -- Legend numbers are displayed using scientific notation (e.g. 1.2e+6)
### Plot mouse interaction
-- **Value Tracking** - When the mouse cursor is close to a curve, the closest curve sample is highlighted and the curve sample value at this location is displayed in a tooltip.
-
-- **Selection** - Left mouse button click can be used to select several of the parts in the plot, and display them in the Property Editor:
+- **Value Tracking** -- When the mouse cursor is close to a curve, the closest curve sample is highlighted and the curve sample value at this location is displayed in a tooltip.
+- **Selection** -- Left mouse button click can be used to select several of the parts in the plot, and display them in the Property Editor:
- The closest curve
- Each of the Plot Axes
- The Plot itself if none of the above is hit and the Plot window is activated by the mouse click.
+- **Window Zoom** -- Window zoom is available by dragging the mouse when the left mouse button is pressed. Use  **Zoom All** to restore default zoom level.
+- **Wheel Zoom** -- The mouse wheel will zoom the plot in and out towards the current mouse cursor position
-- **Window Zoom** - Window zoom is available by dragging the mouse when the left mouse button is pressed. Use  **Zoom All** to restore default zoom level.
+### Accessing the Plot Data
+The context menu command **Show Plot Data** will show a window containing the plot data in ASCII format. The content of this window is easy to copy and paste into Excel or other tools for further processing.
+It is also possible to save the ASCII data to a file directly by using the context command **Export Plot Data to Text File** on the plot.
## Summary Curve Filter
@@ -82,41 +82,89 @@ A new curve filter can be created by using the context menu of a plot selecting

-The property panel is divided in three main groups of options:
+The property panel is divided in four main groups of options:
-- **Summary Vectors** - Selecting what vectors to create curves from
-- **Appearance Settings** - Options controlling how colors, symbols etc are assigned to the curves
-- **Curve Name Configuration** - Control how the curves are named
+- **Cases** -- Selecting the cases to extract data from
+- **Vector Selection** -- Selecting what vectors to create curves from
+- **Appearance Settings** -- Options controlling how colors, symbols etc are assigned to the curves
+- **Curve Name Configuration** -- Control how the curves are named
-In the following sections these groups of options are described in more detail.
+In addition you have the following options:
-In addition to the option groups there are three general items in the property panel:
+- **Axis** -- Controls whether the curves are to be associated with the left or right Y-Axis
+- **Auto Apply Changes** -- When toggled, changes in the property panel are instantly reflected in the generated and controlled curves
+- **Apply** -- Applies the settings, and thus generates and updates the controlled curves
-| Name | Description |
-|-----------|-------------|
-| **Axis** | Controls wether the curves are to be associated with the left or right Y-Axis |
-| **Auto Apply Changes** | When toggled, the changes in the property panel is instanly reflected in the generated and controlled curves |
-| **Apply** | Applies the settings, and thus generates and updates the controlled curves |
+In the following sections the option groups are described in more detail.
-### *Summary Vectors* - option group
+### Cases
+
+Selects the cases to be used when searching for data vectors. Several Cases can be selected at the same time and the filter will contain the union of all vectors in those cases. Curves will be created for each selected case for the selected set of vectors.
+
+### Vector Selection
This group of options is used to define the selection of summary vectors of interest. Several filtering tools are available to make this as convenient as possible.
-| Name | Description |
-|-----------|-------------|
-| **Cases** | Selects the cases to be used when searching for data vectors. Several Cases can be selected at the same time. If a particular case is not selected, unique vectors defined in this case will not appear in the filter. |
-| **Search** | This option controls the filtering mode. Several are available and controls witch search fields that are made available. This is described in more detail below.|
-| *unnamed list of vectors* | This list displays the set of vectors filtered by the search options. Use this to select which of the vectors you want as curves. **Ctrl-A** selects them all.|
+- **Search** -- This option controls the filtering mode. Several modes are available, each controlling which search fields are shown. The search modes are described below
+- *Options depending on Search Mode* -- Described below.
+- *list of vector names* -- This list displays the set of vectors filtered by the search options. Use this to select which of the vectors you want as curves. **Ctrl-A** selects them all.
-In the following, all the search fields are wildcard based text filters. An empty search string will match anything: any value or no value at all. A single `*` however, will only match something: There has to be some value for that particular quantity to make the filter match.
+In the following, all the search fields are wildcard-based text filters. An empty search string will match anything: any value, or no value at all. A single _`*`_, however, will only match something: there has to be some value for that particular quantity to make the filter match.
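These matching rules can be sketched in Python (an illustration of the described semantics only, not ResInsight's actual implementation), using the standard `fnmatch` module for the wildcard part:

```python
import fnmatch
from typing import Optional

def field_matches(pattern: str, value: Optional[str]) -> bool:
    """Illustrates the described search-field semantics."""
    if pattern == "":
        return True   # empty filter matches anything, even a missing value
    if value is None:
        return False  # any non-empty filter, including "*", needs a value
    return fnmatch.fnmatchcase(value, pattern)

print(field_matches("", None))         # True: empty matches "no value"
print(field_matches("*", None))        # False: "*" requires some value
print(field_matches("WBHP*", "WBHP"))  # True: ordinary wildcard match
```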
-|**Search** mode | Description |
-|----------------|--------------|
-| **All** | A wildcard search filter applied to the colon-separated string that describes the complete vector. Eg `"*:*, 55, *"` or `"WBHP:*"`. This mode is the default. |
-| **Field**, **Well**, **Group**, **Completion**, **Segment** ,**Block**, **Region**, **Region-Region**, **Lgr-Well**, **Lgr-Completion**, **Lgr-Block**, **Misc**, **Aquifer**, **Network** | These filter modes will only match vectors of the corresponding Eclipse output. The **Vector Name** field will match the name of the quantity itself, while the additional mode specific fields will match the item(s) beeing addressed. |
-| **All (Advanced)** | This is a complete combined search mode with all the different search options available to create advanced cross item type searches.|
+The **Vector Name** field will match the name of the quantity itself, while the additional mode specific fields will match the item(s) being addressed.
-### *Appearance Settings* - option group
+#### Search Modes with filter fields
+
+- **All** -- A wildcard search filter applied to the colon-separated string that describes the complete vector. E.g. _`"*:*, 55, *"`_ or _`"WBHP:*"`_. This mode is the default.
+ - **Filter** -- The actual filter text to apply
+- **Field** -- Select Field related vectors only
+ - **Vector name** -- Filter for Field related vector names
+- **Well** -- Select Well related vectors only
+ - **Vector name** -- Filter for Well related vector names
+ - **Well name** -- Well name filter
+- **Group** -- Select Group related vectors only
+ - **Vector name** -- Filter for Group related vector names
+ - **Group name** -- Group name filter
+- **Completion** -- Select Completion related vectors only
+  - **Vector name** -- Filter for Completion related vector names
+  - **Well name** -- Well name filter
+  - **I, J, K** -- Text based filter of the I, J, K value string of the completion. E.g. _`"18,*,*"`_ to find vectors with I = 18 only
+- **Segment** -- Select Segment related vectors only
+ - **Vector name** -- Filter for Segment related vector names
+ - **Well name** -- Well name filter
+ - **Segment number** -- Text based filter of the segment numbers
+- **Block** -- Select I, J, K - Block related vectors only
+ - **Vector name** -- Filter for cell Block related vector names
+ - **I, J, K** -- Text based filter of the I, J, K value string of the Block.
+- **Region** -- Select Region related vectors only
+ - **Vector name** -- Filter for Region related vector names
+ - **Region number** -- Text based filter of the Region numbers
+- **Region-Region** -- Select Region to Region related vectors only
+ - **Vector name** -- Filter for Region to Region related vector names
+ - **Region number** -- Text based filter of the first Region number
+ - **2. Region number** -- Text based filter of the second Region number
+- **Lgr-Well** -- Select Well in LGR related vectors only
+ - **Vector name** -- Filter for Well in Lgr related vector names
+ - **Well name** -- Well name filter
+ - **Lgr name** -- Lgr name filter
+- **Lgr-Completion** -- Select Completion in LGR related vectors only
+ - **Vector name** -- Filter for Well in Lgr related vector names
+ - **Well name** -- Well name filter
+ - **Lgr name** -- Lgr name filter
+ - **I, J, K** -- Text based filter of the I, J, K value string of the completion in the Lgr.
+- **Lgr-Block** -- Select I, J, K - Block in LGR related vectors only
+ - **Vector name** -- Filter for cell Block related vector names
+ - **Lgr name** -- Lgr name filter
+ - **I, J, K** -- Text based filter of the I, J, K value string of the Block in the Lgr.
+- **Misc** -- Select vectors in the Misc category only
+ - **Vector name** -- Filter for Misc category vector names
+- **Aquifer** -- Select Aquifer category vectors only
+ - **Vector name** -- Filter for Aquifer category vector names
+- **Network** -- Select Network category vectors only
+ - **Vector name** -- Filter for Network category vector names
+- **All (Advanced)** -- This is a complete combined search mode with all the different search options available to create advanced cross item type searches.
+
+### Appearance Settings
Curves created by a curve filter are assigned individual visual properties like colors and symbols in a systematic manner to make the plots easy to read. Different aspects of the vectors are assigned to different curve appearances. Eg. using symbols to distinguish cases, while using colors to distinguish quantity.
@@ -128,21 +176,33 @@ When set to **Auto** ResInsight assigns visual properties based on the present v
When disabling the **Auto** option, you can select which of the visual curve properties to use for which vector category. The vector Category that currently can be used is Case, Vector, Well, Group and Region. The visual properties supported types are Color, Symbols, Line Style, Gradient and Line Thickness.
-### *Curve Name Configuration* - option group
+### Curve Name Configuration
-The user can control the curve names by toggeling what part of the summary vector information to use in the name.
+The user can control the curve names by toggling what part of the summary vector information to use in the name.
+
+#### Contribute To Legend
+
+This option controls whether the curves created by the filter will be visible in the plot legend at all. In addition, curves with an empty name are removed from the legend.
## Summary Curve
A new curve can be created by using the context menu of a plot selecting  **New Summary Curve**.

-Many of the properties of a single curve is similar to the properties described for a curve filter. The appearance however, is controlled directly.
+Many of the properties of a single curve are similar to the properties described for a curve filter. There are some differences, however:
+### Appearance
+
+The curve's appearance is controlled directly, and not automatically as for **Curve Filters**.
-The appearance set on a curve in a curve filter will override the settings in the curvefilter until the curvefilter settings are applied again. Then the clocal changes on the curve are overwritten.
+The appearance set on a curve in a Curve Filter will override the settings in the Curve Filter until the Curve Filter settings are applied again. Then the local changes on the curve are overwritten.
+### Curve Name
+
+- **Contribute To Legend** -- This option controls whether the curve will be visible in the plot legend at all. A curve with an empty name will also be removed from the legend.
+- **Auto Name** -- If enabled, ResInsight will create a name for the curve automatically based on the settings in this option group.
+- **Curve Name** -- If **Auto Name** is off, you can enter any name here. If empty, the curve will be removed from the legend, but still visible in the plot.
## Copy and Paste
diff --git a/docs/WellLogsAndPlots.md b/docs/WellLogsAndPlots.md
index 78e235df1d..79050bf434 100644
--- a/docs/WellLogsAndPlots.md
+++ b/docs/WellLogsAndPlots.md
@@ -13,9 +13,16 @@ ResInsight can display well logs by extracting data from a simulation model alon
Well log plots can be created in several ways:
-1. Right click the empty area below all the items in the **Project Tree** and select **New Well Log Plot**. A plot is created with one **Track** and an empty **Curve**.
-2. Right click a wellpath, either in the **Project Tree** or in the 3D-view, and select either **New Well Log Extraction Curve**. A new plot with a single **Track** and a **Curve** is created. The curve is setup to match your selection (Well trajectory, active case and result)
-3. Right click a LAS-file channel in the **Project Tree** and select **Add to New Plot**. A new plot with a single **Track** and a **Curve** is created. The curve is setup to plot the selected LAS-file channel.
+1. Right-click a wellpath or a simulation well, either in the **Project Tree** or in the 3D-view.
+   Select **New Well Log Extraction Curve**.
+   A new plot with a single **Track** and a **Curve** is created. The curve is set up to match the
+   selected well trajectory, active case, and result.
+2. Right-click the empty area below all the items in the **Project Tree**.
+ Select **New Well Log Plot**.
+ A plot is created with one **Track** and an empty **Curve**.
+3. Right-click a LAS-file channel in the **Project Tree**.
+ Select **Add to New Plot**.
+ A new plot with a single **Track** and a **Curve** displaying the selected LAS-file channel is created.
Each **Well Log Plot** can contain several *Tracks*, and each **Track** can contain several **Curves**.
@@ -23,7 +30,11 @@ Each **Well Log Plot** can contain several *Tracks*, and each **Track** can cont
Tracks and Curves can be organized using drag and drop functionality in the **Project Tree**. Tracks can be moved from one plot to another, and you can alter the order in which they appear in the **Plot**. **Curves** can be moved from one **Track** to another.
-All the **Tracks** in the same plot always display the same depth range, and share the *True Veritcal Depth (TVD)* or *Measured Depth (MD)* setting. In the property panel of the plot, the exact depth range can be adjusted along with the depth type setting (TVD/MD).
+### Measured Depth (MD), True Vertical Depth (TVD) and Pseudo Length (PL)
+
+All the **Tracks** in the same plot always display the same depth range, and share the *True Vertical Depth (TVD)* or *Measured Depth (MD)* setting. In the property panel of the plot, the exact depth range can be adjusted along with the depth type setting (TVD/MD).
+
+**Simulation Wells**, however, use a *Pseudo Length* instead of the real *Measured Depth* when the depth type is MD, as the MD information is not available in the restart files. The *Pseudo Length* is a length along the coarsely interpolated visualization pipe, and serves only as a very coarse estimate of an MD-like depth. Pseudo Length is measured from the simulation well's first connection cell (well head connection) to the reservoir. This is very different from MD, which would start at RKB or at sea level.
### Depth unit
@@ -34,9 +45,15 @@ The depth unit can be set using the **Depth unit** option. Currently ResInsight
The visible depth range can be panned using the mouse wheel while the mouse pointer hovers over the plot.
Pressing and holding **CTRL** while using the mouse wheel will allow you to zoom in or out depth-wise, towards the mouse position.
+### Accessing the Plot Data
+
+The context menu command **Show Plot Data** will show a window containing the plot data in ASCII format. The content of this window is easy to copy and paste into Excel or other tools for further processing.
+
+It is also possible to save the ASCII data to a file directly by using the context command **Export Plot Data to Text File** on the plot.
+
## Tracks
-Tracks can be created by right clicking a **Well Log Plot** and select **New Track**
+Tracks can be created by right-clicking a **Well Log Plot** and selecting **New Track**

@@ -45,43 +62,51 @@ Logarithmic display is controlled using the **Logarithmic Scale** option.
## Curves
-Curves can be created by right clicking a **Track** in the **Project Tree**, or by the commands mentioned above.
+Curves can be created by right-clicking a **Track** in the **Project Tree**, or by the commands mentioned above.
There are two types of curves: *Well Log Extraction Curves* and *Well Log LAS Curves*.
Curve visual appearance is controlled in the **Appearance** section:
-- **Color** - Controls the color of the curve
-- **Thickness** - Number of pixels used to draw the curve
-- **Point style** - Defines the style used to draw the result points of the curve, select *None* to disable drawing of points
-- **Line style** - Defines the the style used to draw the curve, select *None* to disable line drawing
+- **Color** -- Controls the color of the curve
+- **Thickness** -- Number of pixels used to draw the curve
+- **Point style** -- Defines the style used to draw the result points of the curve; select *None* to disable drawing of points
+- **Line style** -- Defines the style used to draw the curve; select *None* to disable line drawing
### Well Log Extraction Curves
-Ectraction curves acts as an artifical well log curve. Instead of probing the real well, a simulation model is probed instead.
+Extraction curves act as artificial well log curves. Instead of probing the real well, a simulation model is probed.
-
-The property panel for a geomechanical model is shown below:
+The property panel for an Eclipse model is shown below:

The first group of options controls all the input needed to set up the data extraction for the curve: Well Path, Case, and result value. The selection of result value is somewhat different between geomechanical cases and Eclipse cases. In addition, you can select which time step to address if the selected property is a dynamic one.
-Placing keyboard focus in the Time Step drop-downbox will allow you to use the arrow keys to quickly step through the timesteps while watching the changes in the curve.
+Placing keyboard focus in the Time Step drop-down box will allow you to use the arrow keys or the mouse wheel to quickly step through the time steps while watching the changes in the curve.
-The disply name of a curve is normally generated automatically. The options grouped below **Auto Name** can be used to tailor the length and content of the curve name.
+The display name of a curve is normally generated automatically. The options grouped below **Auto Name** can be used to tailor the length and content of the curve name.
#### Curve extraction calculation
This section describes how the values are extracted from the grid when creating a Well Log Extraction Curve.
-Ectraction curves are calculated by finding the intersections between a well trajectory and the cell-faces in a particular grid model. Usually there are two intersections at nearly the same spot; the one leaving the previous cell, and the one entering the next one. At each intersection point the measured depth along the trajectory is interpolated from the trajectory data. The result value is retreived from the corresponding cell in different ways depending on the nature of the underlying result.
+Extraction curves are calculated by finding the intersections between a well trajectory and the cell-faces in a particular grid model. Usually there are two intersections at nearly the same spot; the one leaving the previous cell, and the one entering the next one. At each intersection point the measured depth along the trajectory is interpolated from the trajectory data. The result value is retrieved from the corresponding cell in different ways depending on the nature of the underlying result.
-For Eclipse results the cell face value is used directly. This is normally the same as the corresponding cell value, but if a **Directional combined results** is used, (See [ Derived Results ]({{ site.baseurl }}/docs/derivedresults) ) it will be that particular face's value.
+For Eclipse results, the cell face value is used directly. This is normally the same as the corresponding cell value, but if **Directional combined results** are used (see [Derived Results]({{ site.baseurl }}/docs/derivedresults)), it will be that particular face's value.
Abaqus results are interpolated across the intersected cell-face from the result values associated with the nodes of that face. This is also the case for integration point results, as they are directly associated with their corresponding element node in ResInsight.
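The interpolation step described above can be sketched as follows. This is a minimal illustration, not ResInsight code; all names are hypothetical. At each well-path/cell-face intersection, the measured depth is interpolated linearly between the two trajectory samples that bracket the intersection point.

```python
import math

def interpolate_md(p_prev, p_next, intersection):
    """Interpolate measured depth at a trajectory/cell-face intersection.

    p_prev, p_next: (x, y, z, md) trajectory samples bracketing the hit.
    intersection:   (x, y, z) point where the trajectory crosses a cell face.
    """
    seg_length = math.dist(p_prev[:3], p_next[:3])
    if seg_length == 0.0:
        return p_prev[3]
    # Fraction of the segment covered before the intersection
    frac = math.dist(p_prev[:3], intersection) / seg_length
    return p_prev[3] + frac * (p_next[3] - p_prev[3])
```

The result value at that depth is then looked up in the intersected cell, as described above for Eclipse and Abaqus results respectively.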
+#### Change Data Source for several curves
+
+It is possible to change either the Case or the Well Path of several Well Log Extraction curves in one go. To do this, select the curves to change, open the context menu, and select the command **Change Data Source**. The following dialog will appear:
+
+
+
+- **Case** -- Sets this case for all the curves
+- **Well Path** -- Applies this well path to all the curves. Will not affect curves using a Simulation Well.
+
### Well Log LAS Curves
LAS curves show the values of a particular channel in a LAS file.
@@ -110,11 +135,11 @@ If the LAS-file does not contain a well name, the file name is used instead.
### Exporting LAS-files
-A set of curves can be exported to LAS files by right clicking the curves, well log track, or well log plots and select **Export To LAS Files ...**. An export dialog is diplayed, allowing the user to configure how to export curve data.
+A set of curves can be exported to LAS files by right-clicking the curves, well log track, or well log plots and selecting **Export To LAS Files ...**. An export dialog is displayed, allowing the user to configure how to export curve data.

-- **Export Folder** - Location of the exported LAS files, one file per unique triplet of well path, case and time step
-- **Resample Curve Data** - If enabled, all curves are resampled at the given resample interval before export
-- **TVDRKB** - If enabled, TVDRKB for all curves based on the listed well paths are exported. If the difference field is blank, no TVDRKB values are exported.
+- **Export Folder** -- Location of the exported LAS files, one file per unique triplet of well path, case and time step
+- **Resample Curve Data** -- If enabled, all curves are resampled at the given resample interval before export
+- **TVDRKB** -- If enabled, TVDRKB for all curves based on the listed well paths are exported. If the difference field is blank, no TVDRKB values are exported.
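The **Resample Curve Data** option described above can be illustrated with a small sketch. This is not ResInsight code; it only shows the general idea of re-evaluating a curve on a regular depth grid by linear interpolation before the values are written to a LAS file.

```python
import bisect

def resample(depths, values, interval):
    """Resample (depth, value) samples onto a regular grid with step `interval`.

    `depths` must be sorted ascending; values outside the sampled range
    are clamped to the first/last value.
    """
    start, stop = depths[0], depths[-1]
    out_depths, out_values = [], []
    d = start
    while d <= stop + 1e-9:
        i = bisect.bisect_right(depths, d)
        if i == 0:
            v = values[0]
        elif i >= len(depths):
            v = values[-1]
        else:
            # Linear interpolation between the two bracketing samples
            d0, d1 = depths[i - 1], depths[i]
            w = (d - d0) / (d1 - d0)
            v = values[i - 1] * (1 - w) + values[i] * w
        out_depths.append(d)
        out_values.append(v)
        d += interval
    return out_depths, out_values
```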
diff --git a/docs/WellPaths.md b/docs/WellPaths.md
index c20d785235..32293fa615 100644
--- a/docs/WellPaths.md
+++ b/docs/WellPaths.md
@@ -15,14 +15,14 @@ The command **File -> Import -> Import Well Paths From File** will read the well
The supported ASCII format is quite flexible but the main requirements are:
-1. Each data line must contain four numbers: X Y TVD MD separated with white-space.
-2. Lines starting with "--" or "#" is considered to be comment lines
-3. A line starting with none-number-characters are used as a well name after the following rules:
- 1. If the line contains a pair of : ', `, ´, ’ or ‘ the text between the quotation marks is used as a well name.
- 2. If the line contains the case insensitive string "name" with an optional ":" after, the rest of the line is used as a well name.
- 3. If there are no quotes or "name"'s, the complete line is used as a well name.
- 4. If there are several consecutive name-like lines, only the last one will be used
-3. If a well name is found, a new well is created and the following data points ends up in it.
+- Each data line must contain four numbers, X Y TVD MD, separated by whitespace.
+- Lines starting with "--" or "#" are considered comment lines
+- A line starting with non-number characters is used as a well name according to the following rules:
+ - If the line contains a pair of _``` "'", "`", "´", "’" or "‘" ```_, the text between the quotation marks is used as the well name.
+ - If the line contains the case-insensitive string "name", optionally followed by ":", the rest of the line is used as the well name.
+ - If there are no quotes or "name" strings, the complete line is used as the well name.
+ - If there are several consecutive name-like lines, only the last one will be used
+- If a well name is found, a new well is created and the following data points are added to it.
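A minimal reader for this format could look like the sketch below. This is purely illustrative and not ResInsight's actual parser; it applies the rules above in order: skip comments, treat non-numeric lines as well names (quoted text or a "name" prefix taking precedence), and collect data lines under the current well.

```python
import re

def read_well_paths(lines):
    """Parse well path ASCII lines into {well_name: [(x, y, tvd, md), ...]}."""
    wells, name = {}, None
    for line in lines:
        line = line.strip()
        if not line or line.startswith(("--", "#")):
            continue  # comment line
        try:
            x, y, tvd, md = (float(p) for p in line.split())
        except ValueError:
            # Not four numbers: this is a name-like line. A quoted section
            # or a "name"/"name:" prefix takes precedence over the whole line.
            m = re.search(r"['`´’‘](.+?)['`´’‘]", line)
            if m:
                name = m.group(1)
            else:
                m = re.search(r"name\s*:?\s*(.*)", line, re.IGNORECASE)
                name = m.group(1).strip() if m else line
            continue
        wells.setdefault(name, []).append((x, y, tvd, md))
    return wells
```

Consecutive name-like lines simply overwrite `name`, so only the last one takes effect, matching the rule above.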
#### Example 1:
@@ -91,7 +91,7 @@ The visible wells are always shown in all the 3D Views in the complete project,

-- **Global well path visibility** This option forces the well paths on or off, ignoring the individual settings unless it is set to Individual.
-- **Clip Well Paths** This option hides the top of the Well Trajectories to avoid displaying the very long lines from the reservoir to the sea surface.
-- **Well Path clipping depth distance** This number is the distance from the top of the reservoir to the clipping depth.
+- **Global well path visibility** -- This option forces the well paths on or off, ignoring the individual settings unless it is set to Individual.
+- **Clip Well Paths** -- This option hides the top of the Well Trajectories to avoid displaying the very long lines from the reservoir to the sea surface.
+- **Well Path clipping depth distance** -- This number is the distance from the top of the reservoir to the clipping depth.
diff --git a/docs/odbSupport.md b/docs/odbSupport.md
index 903b65cbd9..80d9bce2ea 100644
--- a/docs/odbSupport.md
+++ b/docs/odbSupport.md
@@ -5,7 +5,7 @@ permalink: /docs/odbsupport/
published: true
---
-ResInsight can be built with support for reading and displaying geomechanical analysis models produced by ABAQUS in the '*.odb' format. This is only possible if you or your organization has a copy of the ODB-Api from Simulia, and a valid license to use it.
+ResInsight can be built with support for reading and displaying geomechanical analysis models produced by ABAQUS in the _`*.odb`_ format. This is only possible if you or your organization has a copy of the ODB API from Simulia, and a valid license to use it.
If you have, and would like to use these features, please see [ Build Instructions ]({{ site.baseurl }}/docs/buildinstructions) for a description of how to build ResInsight and how to include the support for odb-files.
diff --git a/images/3d_view_context_menu.png b/images/3d_view_context_menu.png
index b13991eeb2..dc0e02a882 100644
Binary files a/images/3d_view_context_menu.png and b/images/3d_view_context_menu.png differ
diff --git a/images/CaseProperties.png b/images/CaseProperties.png
index d40b3f47ee..8ee4db1d5f 100644
Binary files a/images/CaseProperties.png and b/images/CaseProperties.png differ
diff --git a/images/CellResultFlowDiagnostics.png b/images/CellResultFlowDiagnostics.png
new file mode 100644
index 0000000000..32a25f1705
Binary files /dev/null and b/images/CellResultFlowDiagnostics.png differ
diff --git a/images/CellResultTypes.png b/images/CellResultTypes.png
new file mode 100644
index 0000000000..0750eca5fc
Binary files /dev/null and b/images/CellResultTypes.png differ
diff --git a/images/CellResultsOverview.png b/images/CellResultsOverview.png
new file mode 100644
index 0000000000..86bdf15f1f
Binary files /dev/null and b/images/CellResultsOverview.png differ
diff --git a/images/DerivedRelativeResults.png b/images/DerivedRelativeResults.png
index 4ba6ada02c..094d7200ce 100644
Binary files a/images/DerivedRelativeResults.png and b/images/DerivedRelativeResults.png differ
diff --git a/images/EclipsePreferences.png b/images/EclipsePreferences.png
new file mode 100644
index 0000000000..4e994e7b2b
Binary files /dev/null and b/images/EclipsePreferences.png differ
diff --git a/images/ExecuteOctaveScriptOnSelectedCases.png b/images/ExecuteOctaveScriptOnSelectedCases.png
index 1e265210bf..c90eaa3462 100644
Binary files a/images/ExecuteOctaveScriptOnSelectedCases.png and b/images/ExecuteOctaveScriptOnSelectedCases.png differ
diff --git a/images/ExportProperty.png b/images/ExportProperty.png
index 4b459d0709..f38748d6fc 100644
Binary files a/images/ExportProperty.png and b/images/ExportProperty.png differ
diff --git a/images/ExportPropertyDialog.png b/images/ExportPropertyDialog.png
new file mode 100644
index 0000000000..515e8ee4ed
Binary files /dev/null and b/images/ExportPropertyDialog.png differ
diff --git a/images/FaultProperties.png b/images/FaultProperties.png
index d76188adcd..6d36fda11a 100644
Binary files a/images/FaultProperties.png and b/images/FaultProperties.png differ
diff --git a/images/FlowCharacteristicsPlot.png b/images/FlowCharacteristicsPlot.png
new file mode 100644
index 0000000000..df52eb50f8
Binary files /dev/null and b/images/FlowCharacteristicsPlot.png differ
diff --git a/images/FlowDiagnosticsPlotsOverview.png b/images/FlowDiagnosticsPlotsOverview.png
new file mode 100644
index 0000000000..055d05f17b
Binary files /dev/null and b/images/FlowDiagnosticsPlotsOverview.png differ
diff --git a/images/FlowDiagnosticsPlotsProjectTree.png b/images/FlowDiagnosticsPlotsProjectTree.png
new file mode 100644
index 0000000000..8adee2e39e
Binary files /dev/null and b/images/FlowDiagnosticsPlotsProjectTree.png differ
diff --git a/images/GeoMechCasePropertyPanel.png b/images/GeoMechCasePropertyPanel.png
index 237b906b86..46d7e073cc 100644
Binary files a/images/GeoMechCasePropertyPanel.png and b/images/GeoMechCasePropertyPanel.png differ
diff --git a/images/GeoMechCases24x24.png b/images/GeoMechCases24x24.png
index f5a61d5f06..4fd4e85903 100644
Binary files a/images/GeoMechCases24x24.png and b/images/GeoMechCases24x24.png differ
diff --git a/images/IntersectionPolyline.png b/images/IntersectionPolyline.png
index 48fe9b4f05..b37af0c38e 100644
Binary files a/images/IntersectionPolyline.png and b/images/IntersectionPolyline.png differ
diff --git a/images/IntersectionSimulationWellProperties.png b/images/IntersectionSimulationWellProperties.png
index cc9708487c..9cfe5846a4 100644
Binary files a/images/IntersectionSimulationWellProperties.png and b/images/IntersectionSimulationWellProperties.png differ
diff --git a/images/IntersectionWellPath.png b/images/IntersectionWellPath.png
index 994f664913..013af3b4d8 100644
Binary files a/images/IntersectionWellPath.png and b/images/IntersectionWellPath.png differ
diff --git a/images/LinkedViewsOverview.png b/images/LinkedViewsOverview.png
new file mode 100644
index 0000000000..6c846c36c0
Binary files /dev/null and b/images/LinkedViewsOverview.png differ
diff --git a/images/LinkedViewsProperties.png b/images/LinkedViewsProperties.png
index 9829070f17..73e7b054b7 100644
Binary files a/images/LinkedViewsProperties.png and b/images/LinkedViewsProperties.png differ
diff --git a/images/OctavePreferences.png b/images/OctavePreferences.png
index c6eba0d4ef..52a9c9426d 100644
Binary files a/images/OctavePreferences.png and b/images/OctavePreferences.png differ
diff --git a/images/Preferences.png b/images/Preferences.png
index 2d6a9da0e9..a9a024e626 100644
Binary files a/images/Preferences.png and b/images/Preferences.png differ
diff --git a/images/ProcessMonitor.png b/images/ProcessMonitor.png
new file mode 100644
index 0000000000..8bd0d0589f
Binary files /dev/null and b/images/ProcessMonitor.png differ
diff --git a/images/PropertyFilterProperties.png b/images/PropertyFilterProperties.png
index daaf3c71b8..c84412330c 100644
Binary files a/images/PropertyFilterProperties.png and b/images/PropertyFilterProperties.png differ
diff --git a/images/PropertyFilterWithCategories.png b/images/PropertyFilterWithCategories.png
index 23e5a837b8..4cafef16ec 100644
Binary files a/images/PropertyFilterWithCategories.png and b/images/PropertyFilterWithCategories.png differ
diff --git a/images/RegressionTestDialog.png b/images/RegressionTestDialog.png
index d9c4ebc9ec..51e8db827a 100644
Binary files a/images/RegressionTestDialog.png and b/images/RegressionTestDialog.png differ
diff --git a/images/ResInsightMainPlotMediumSize.png b/images/ResInsightMainPlotMediumSize.png
index d4ab2154e8..bcbe873eca 100644
Binary files a/images/ResInsightMainPlotMediumSize.png and b/images/ResInsightMainPlotMediumSize.png differ
diff --git a/images/ResInsightUIFullSize.png b/images/ResInsightUIFullSize.png
index 006086050f..8bb1100984 100644
Binary files a/images/ResInsightUIFullSize.png and b/images/ResInsightUIFullSize.png differ
diff --git a/images/ResInsightUIMediumSize.png b/images/ResInsightUIMediumSize.png
index 406df900f9..3fad77db12 100644
Binary files a/images/ResInsightUIMediumSize.png and b/images/ResInsightUIMediumSize.png differ
diff --git a/images/ResInsight_WellPathWithSimulationWell.png b/images/ResInsight_WellPathWithSimulationWell.png
index b9f6c2afa7..39a4d1aadc 100644
Binary files a/images/ResInsight_WellPathWithSimulationWell.png and b/images/ResInsight_WellPathWithSimulationWell.png differ
diff --git a/images/RestoreDown.PNG b/images/RestoreDown.PNG
index afa5277146..8209c6b10e 100644
Binary files a/images/RestoreDown.PNG and b/images/RestoreDown.PNG differ
diff --git a/images/ResultPlotToSummaryPlotCommand.png b/images/ResultPlotToSummaryPlotCommand.png
new file mode 100644
index 0000000000..4ff654bb7b
Binary files /dev/null and b/images/ResultPlotToSummaryPlotCommand.png differ
diff --git a/images/ResultPlotToSummaryPlotDialog.png b/images/ResultPlotToSummaryPlotDialog.png
new file mode 100644
index 0000000000..9f35b54847
Binary files /dev/null and b/images/ResultPlotToSummaryPlotDialog.png differ
diff --git a/images/SimulationWellContextMenu.png b/images/SimulationWellContextMenu.png
new file mode 100644
index 0000000000..023afad906
Binary files /dev/null and b/images/SimulationWellContextMenu.png differ
diff --git a/images/SimulationWells.png b/images/SimulationWells.png
new file mode 100644
index 0000000000..92184aba63
Binary files /dev/null and b/images/SimulationWells.png differ
diff --git a/images/SimulationWellsAdvancedProperties.png b/images/SimulationWellsAdvancedProperties.png
new file mode 100644
index 0000000000..0e566ed0fe
Binary files /dev/null and b/images/SimulationWellsAdvancedProperties.png differ
diff --git a/images/SimulationWellsColorsProperties.png b/images/SimulationWellsColorsProperties.png
new file mode 100644
index 0000000000..3844cbdd0f
Binary files /dev/null and b/images/SimulationWellsColorsProperties.png differ
diff --git a/images/SimulationWellsPipeGeometryProperties.png b/images/SimulationWellsPipeGeometryProperties.png
new file mode 100644
index 0000000000..ee0225d04d
Binary files /dev/null and b/images/SimulationWellsPipeGeometryProperties.png differ
diff --git a/images/SimulationWellsProperties.png b/images/SimulationWellsProperties.png
index 9d47cc4963..51989898fd 100644
Binary files a/images/SimulationWellsProperties.png and b/images/SimulationWellsProperties.png differ
diff --git a/images/SimulationWellsScalingProperties.png b/images/SimulationWellsScalingProperties.png
new file mode 100644
index 0000000000..c70fcc150f
Binary files /dev/null and b/images/SimulationWellsScalingProperties.png differ
diff --git a/images/SimulationWellsVisibilityProperties.png b/images/SimulationWellsVisibilityProperties.png
new file mode 100644
index 0000000000..3297288dbe
Binary files /dev/null and b/images/SimulationWellsVisibilityProperties.png differ
diff --git a/images/SimulationWellsWellCellsProperties.png b/images/SimulationWellsWellCellsProperties.png
new file mode 100644
index 0000000000..e3d87fddcf
Binary files /dev/null and b/images/SimulationWellsWellCellsProperties.png differ
diff --git a/images/SnapShotToolBar.png b/images/SnapShotToolBar.png
index 6b803a2877..2a29f13eb3 100644
Binary files a/images/SnapShotToolBar.png and b/images/SnapShotToolBar.png differ
diff --git a/images/SnapshotAdvancedExport.png b/images/SnapshotAdvancedExport.png
new file mode 100644
index 0000000000..77324d9292
Binary files /dev/null and b/images/SnapshotAdvancedExport.png differ
diff --git a/images/StatisticsCaseProperties.png b/images/StatisticsCaseProperties.png
index e31f00dd80..3fa67b0a39 100644
Binary files a/images/StatisticsCaseProperties.png and b/images/StatisticsCaseProperties.png differ
diff --git a/images/StatisticsCasePropertiesCalculated.png b/images/StatisticsCasePropertiesCalculated.png
index db72e512cf..dc7f0a6926 100644
Binary files a/images/StatisticsCasePropertiesCalculated.png and b/images/StatisticsCasePropertiesCalculated.png differ
diff --git a/images/SummaryCurveFilterAppearance.png b/images/SummaryCurveFilterAppearance.png
index 104b164035..09e5d6f4ab 100644
Binary files a/images/SummaryCurveFilterAppearance.png and b/images/SummaryCurveFilterAppearance.png differ
diff --git a/images/SummaryPlotTree.png b/images/SummaryPlotTree.png
index a614eb3e76..6a1e41c848 100644
Binary files a/images/SummaryPlotTree.png and b/images/SummaryPlotTree.png differ
diff --git a/images/SummaryPreferences.png b/images/SummaryPreferences.png
index cfd5451731..449625780b 100644
Binary files a/images/SummaryPreferences.png and b/images/SummaryPreferences.png differ
diff --git a/images/SummaryTimeAxisProperties.png b/images/SummaryTimeAxisProperties.png
index 4c54273da6..fd8da54007 100644
Binary files a/images/SummaryTimeAxisProperties.png and b/images/SummaryTimeAxisProperties.png differ
diff --git a/images/TrackProperties.png b/images/TrackProperties.png
index 788bfd8a02..d6708f9d57 100644
Binary files a/images/TrackProperties.png and b/images/TrackProperties.png differ
diff --git a/images/TreeViewToggle.png b/images/TreeViewToggle.png
index 29ff6ba8be..8c4ddab95b 100644
Binary files a/images/TreeViewToggle.png and b/images/TreeViewToggle.png differ
diff --git a/images/ViewProperties.png b/images/ViewProperties.png
index b54f1f68a3..a9730e374c 100644
Binary files a/images/ViewProperties.png and b/images/ViewProperties.png differ
diff --git a/images/ViewTree.png b/images/ViewTree.png
index bcccd97727..0e065eee1b 100644
Binary files a/images/ViewTree.png and b/images/ViewTree.png differ
diff --git a/images/WellAllocationProperties.png b/images/WellAllocationProperties.png
new file mode 100644
index 0000000000..f020e70ebc
Binary files /dev/null and b/images/WellAllocationProperties.png differ
diff --git a/images/WellAllocationWellLogProperties.png b/images/WellAllocationWellLogProperties.png
new file mode 100644
index 0000000000..7a7414488b
Binary files /dev/null and b/images/WellAllocationWellLogProperties.png differ
diff --git a/images/WellLogExtractionChangeDataSource.png b/images/WellLogExtractionChangeDataSource.png
new file mode 100644
index 0000000000..082c64669f
Binary files /dev/null and b/images/WellLogExtractionChangeDataSource.png differ
diff --git a/images/WellLogExtractionCurveProperties.png b/images/WellLogExtractionCurveProperties.png
index ae8940a9b8..c6b238c7a4 100644
Binary files a/images/WellLogExtractionCurveProperties.png and b/images/WellLogExtractionCurveProperties.png differ
diff --git a/images/WellLogLasCurveProperties.png b/images/WellLogLasCurveProperties.png
index 2fef9001f0..c7d16774d4 100644
Binary files a/images/WellLogLasCurveProperties.png and b/images/WellLogLasCurveProperties.png differ
diff --git a/images/WellLogPlotOverview.png b/images/WellLogPlotOverview.png
index 3b14cbba05..c5b33a99ae 100644
Binary files a/images/WellLogPlotOverview.png and b/images/WellLogPlotOverview.png differ
diff --git a/images/WellPathCollectionProperties.png b/images/WellPathCollectionProperties.png
index a8626cbf75..8dda4332a4 100644
Binary files a/images/WellPathCollectionProperties.png and b/images/WellPathCollectionProperties.png differ
diff --git a/images/WellProperties.png b/images/WellProperties.png
index 2af593ad16..b0f1590648 100644
Binary files a/images/WellProperties.png and b/images/WellProperties.png differ
diff --git a/images/formations_property_editor.PNG b/images/formations_property_editor.PNG
index 209d6e20d9..497b3d9b34 100644
Binary files a/images/formations_property_editor.PNG and b/images/formations_property_editor.PNG differ
diff --git a/images/legend_configuration.PNG b/images/legend_configuration.PNG
index af234cf62a..b4e618f914 100644
Binary files a/images/legend_configuration.PNG and b/images/legend_configuration.PNG differ
diff --git a/images/summary_curve_filter_properties.PNG b/images/summary_curve_filter_properties.PNG
index 41f48e3091..1075f9ddbe 100644
Binary files a/images/summary_curve_filter_properties.PNG and b/images/summary_curve_filter_properties.PNG differ
diff --git a/images/summary_curve_properties.png b/images/summary_curve_properties.png
index 68cae58562..e23a015b52 100644
Binary files a/images/summary_curve_properties.png and b/images/summary_curve_properties.png differ
diff --git a/images/summary_plot_yaxis_properties.png b/images/summary_plot_yaxis_properties.png
index c55774b2cf..55a1381e6d 100644
Binary files a/images/summary_plot_yaxis_properties.png and b/images/summary_plot_yaxis_properties.png differ
diff --git a/index.html b/index.html
index cb08f673ea..066f4127b9 100644
--- a/index.html
+++ b/index.html
@@ -21,8 +21,9 @@ overview: true
✓ Open source
✓ Efficient user interface
+ ✓ Handles large Eclipse cases
✓ Plotting of summary vectors
- ✓ Handles large Eclipse cases
+ ✓ Embedded Flow Diagnostics
diff --git a/js/lunr.js b/js/lunr.js
new file mode 100644
index 0000000000..d5d1fc0ff5
--- /dev/null
+++ b/js/lunr.js
@@ -0,0 +1,2787 @@
+/**
+ * lunr - http://lunrjs.com - A bit like Solr, but much smaller and not as bright - 2.0.3
+ * Copyright (C) 2017 Oliver Nightingale
+ * @license MIT
+ */
+
+;(function(){
+
+/**
+ * A convenience function for configuring and constructing
+ * a new lunr Index.
+ *
+ * A lunr.Builder instance is created and the pipeline setup
+ * with a trimmer, stop word filter and stemmer.
+ *
+ * This builder object is yielded to the configuration function
+ * that is passed as a parameter, allowing the list of fields
+ * and other builder parameters to be customised.
+ *
+ * All documents _must_ be added within the passed config function.
+ *
+ * @example
+ * var idx = lunr(function () {
+ * this.field('title')
+ * this.field('body')
+ * this.ref('id')
+ *
+ * documents.forEach(function (doc) {
+ * this.add(doc)
+ * }, this)
+ * })
+ *
+ * @see {@link lunr.Builder}
+ * @see {@link lunr.Pipeline}
+ * @see {@link lunr.trimmer}
+ * @see {@link lunr.stopWordFilter}
+ * @see {@link lunr.stemmer}
+ * @namespace {function} lunr
+ */
+var lunr = function (config) {
+ var builder = new lunr.Builder
+
+ builder.pipeline.add(
+ lunr.trimmer,
+ lunr.stopWordFilter,
+ lunr.stemmer
+ )
+
+ builder.searchPipeline.add(
+ lunr.stemmer
+ )
+
+ config.call(builder, builder)
+ return builder.build()
+}
+
+lunr.version = "2.0.3"
+/*!
+ * lunr.utils
+ * Copyright (C) 2017 Oliver Nightingale
+ */
+
+/**
+ * A namespace containing utils for the rest of the lunr library
+ */
+lunr.utils = {}
+
+/**
+ * Print a warning message to the console.
+ *
+ * @param {String} message The message to be printed.
+ * @memberOf Utils
+ */
+lunr.utils.warn = (function (global) {
+ /* eslint-disable no-console */
+ return function (message) {
+ if (global.console && console.warn) {
+ console.warn(message)
+ }
+ }
+ /* eslint-enable no-console */
+})(this)
+
+/**
+ * Convert an object to a string.
+ *
+ * In the case of `null` and `undefined` the function returns
+ * the empty string, in all other cases the result of calling
+ * `toString` on the passed object is returned.
+ *
+ * @param {Any} obj The object to convert to a string.
+ * @return {String} string representation of the passed object.
+ * @memberOf Utils
+ */
+lunr.utils.asString = function (obj) {
+ if (obj === void 0 || obj === null) {
+ return ""
+ } else {
+ return obj.toString()
+ }
+}
+/**
+ * A function to calculate the inverse document frequency for
+ * a posting. This is shared between the builder and the index
+ *
+ * @private
+ * @param {object} posting - The posting for a given term
+ * @param {number} documentCount - The total number of documents.
+ */
+lunr.idf = function (posting, documentCount) {
+ var documentsWithTerm = 0
+
+ for (var fieldName in posting) {
+ if (fieldName == '_index') continue // Ignore the term index, its not a field
+ documentsWithTerm += Object.keys(posting[fieldName]).length
+ }
+
+ return (documentCount - documentsWithTerm + 0.5) / (documentsWithTerm + 0.5)
+}
+
+/**
+ * A token wraps a string representation of a token
+ * as it is passed through the text processing pipeline.
+ *
+ * @constructor
+ * @param {string} [str=''] - The string token being wrapped.
+ * @param {object} [metadata={}] - Metadata associated with this token.
+ */
+lunr.Token = function (str, metadata) {
+ this.str = str || ""
+ this.metadata = metadata || {}
+}
+
+/**
+ * Returns the token string that is being wrapped by this object.
+ *
+ * @returns {string}
+ */
+lunr.Token.prototype.toString = function () {
+ return this.str
+}
+
+/**
+ * A token update function is used when updating or optionally
+ * when cloning a token.
+ *
+ * @callback lunr.Token~updateFunction
+ * @param {string} str - The string representation of the token.
+ * @param {Object} metadata - All metadata associated with this token.
+ */
+
+/**
+ * Applies the given function to the wrapped string token.
+ *
+ * @example
+ * token.update(function (str, metadata) {
+ * return str.toUpperCase()
+ * })
+ *
+ * @param {lunr.Token~updateFunction} fn - A function to apply to the token string.
+ * @returns {lunr.Token}
+ */
+lunr.Token.prototype.update = function (fn) {
+ this.str = fn(this.str, this.metadata)
+ return this
+}
+
+/**
+ * Creates a clone of this token. Optionally a function can be
+ * applied to the cloned token.
+ *
+ * @param {lunr.Token~updateFunction} [fn] - An optional function to apply to the cloned token.
+ * @returns {lunr.Token}
+ */
+lunr.Token.prototype.clone = function (fn) {
+ fn = fn || function (s) { return s }
+ return new lunr.Token (fn(this.str, this.metadata), this.metadata)
+}
+/*!
+ * lunr.tokenizer
+ * Copyright (C) 2017 Oliver Nightingale
+ */
+
+/**
+ * A function for splitting a string into tokens ready to be inserted into
+ * the search index. Uses `lunr.tokenizer.separator` to split strings, change
+ * the value of this property to change how strings are split into tokens.
+ *
+ * This tokenizer will convert its parameter to a string by calling `toString` and
+ * then will split this string on the character in `lunr.tokenizer.separator`.
+ * Arrays will have their elements converted to strings and wrapped in a lunr.Token.
+ *
+ * @static
+ * @param {?(string|object|object[])} obj - The object to convert into tokens
+ * @returns {lunr.Token[]}
+ */
+lunr.tokenizer = function (obj) {
+ if (obj == null || obj == undefined) {
+ return []
+ }
+
+ if (Array.isArray(obj)) {
+ return obj.map(function (t) {
+ return new lunr.Token(lunr.utils.asString(t).toLowerCase())
+ })
+ }
+
+ var str = obj.toString().trim().toLowerCase(),
+ len = str.length,
+ tokens = []
+
+ for (var sliceEnd = 0, sliceStart = 0; sliceEnd <= len; sliceEnd++) {
+ var char = str.charAt(sliceEnd),
+ sliceLength = sliceEnd - sliceStart
+
+ if ((char.match(lunr.tokenizer.separator) || sliceEnd == len)) {
+
+ if (sliceLength > 0) {
+ tokens.push(
+ new lunr.Token (str.slice(sliceStart, sliceEnd), {
+ position: [sliceStart, sliceLength],
+ index: tokens.length
+ })
+ )
+ }
+
+ sliceStart = sliceEnd + 1
+ }
+
+ }
+
+ return tokens
+}
+
+/**
+ * The separator used to split a string into tokens. Override this property to change the behaviour of
+ * `lunr.tokenizer` behaviour when tokenizing strings. By default this splits on whitespace and hyphens.
+ *
+ * @static
+ * @see lunr.tokenizer
+ */
+lunr.tokenizer.separator = /[\s\-]+/
+/*!
+ * lunr.Pipeline
+ * Copyright (C) 2017 Oliver Nightingale
+ */
+
+/**
+ * lunr.Pipelines maintain an ordered list of functions to be applied to all
+ * tokens in documents entering the search index and queries being ran against
+ * the index.
+ *
+ * An instance of lunr.Index created with the lunr shortcut will contain a
+ * pipeline with a stop word filter and an English language stemmer. Extra
+ * functions can be added before or after either of these functions or these
+ * default functions can be removed.
+ *
+ * When run the pipeline will call each function in turn, passing a token, the
+ * index of that token in the original list of all tokens and finally a list of
+ * all the original tokens.
+ *
+ * The output of functions in the pipeline will be passed to the next function
+ * in the pipeline. To exclude a token from entering the index the function
+ * should return undefined, the rest of the pipeline will not be called with
+ * this token.
+ *
+ * For serialisation of pipelines to work, all functions used in an instance of
+ * a pipeline should be registered with lunr.Pipeline. Registered functions can
+ * then be loaded. If trying to load a serialised pipeline that uses functions
+ * that are not registered an error will be thrown.
+ *
+ * If not planning on serialising the pipeline then registering pipeline functions
+ * is not necessary.
+ *
+ * @constructor
+ */
+lunr.Pipeline = function () {
+ this._stack = []
+}
+
+lunr.Pipeline.registeredFunctions = Object.create(null)
+
+/**
+ * A pipeline function maps lunr.Token to lunr.Token. A lunr.Token contains the token
+ * string as well as all known metadata. A pipeline function can mutate the token string
+ * or mutate (or add) metadata for a given token.
+ *
+ * A pipeline function can indicate that the passed token should be discarded by returning
+ * null. This token will not be passed to any downstream pipeline functions and will not be
+ * added to the index.
+ *
+ * Multiple tokens can be returned by returning an array of tokens. Each token will be passed
+ * to any downstream pipeline functions and all returned tokens will be added to the index.
+ *
+ * Any number of pipeline functions may be chained together using a lunr.Pipeline.
+ *
+ * @interface lunr.PipelineFunction
+ * @param {lunr.Token} token - A token from the document being processed.
+ * @param {number} i - The index of this token in the complete list of tokens for this document/field.
+ * @param {lunr.Token[]} tokens - All tokens for this document/field.
+ * @returns {(?lunr.Token|lunr.Token[])}
+ */
+
+/**
+ * Register a function with the pipeline.
+ *
+ * Functions that are used in the pipeline should be registered if the pipeline
+ * needs to be serialised, or a serialised pipeline needs to be loaded.
+ *
+ * Registering a function does not add it to a pipeline, functions must still be
+ * added to instances of the pipeline for them to be used when running a pipeline.
+ *
+ * @param {lunr.PipelineFunction} fn - The function to register.
+ * @param {String} label - The label to register this function with.
+ */
+lunr.Pipeline.registerFunction = function (fn, label) {
+ if (label in this.registeredFunctions) {
+ lunr.utils.warn('Overwriting existing registered function: ' + label)
+ }
+
+ fn.label = label
+ lunr.Pipeline.registeredFunctions[fn.label] = fn
+}
+
+/**
+ * Warns if the function is not registered as a Pipeline function.
+ *
+ * @param {lunr.PipelineFunction} fn - The function to check for.
+ * @private
+ */
+lunr.Pipeline.warnIfFunctionNotRegistered = function (fn) {
+ var isRegistered = fn.label && (fn.label in this.registeredFunctions)
+
+ if (!isRegistered) {
+ lunr.utils.warn('Function is not registered with pipeline. This may cause problems when serialising the index.\n', fn)
+ }
+}
+
+/**
+ * Loads a previously serialised pipeline.
+ *
+ * All functions to be loaded must already be registered with lunr.Pipeline.
+ * If any function from the serialised data has not been registered then an
+ * error will be thrown.
+ *
+ * @param {Object} serialised - The serialised pipeline to load.
+ * @returns {lunr.Pipeline}
+ */
+lunr.Pipeline.load = function (serialised) {
+ var pipeline = new lunr.Pipeline
+
+ serialised.forEach(function (fnName) {
+ var fn = lunr.Pipeline.registeredFunctions[fnName]
+
+ if (fn) {
+ pipeline.add(fn)
+ } else {
+ throw new Error('Cannot load unregistered function: ' + fnName)
+ }
+ })
+
+ return pipeline
+}
+
+/**
+ * Adds new functions to the end of the pipeline.
+ *
+ * Logs a warning if the function has not been registered.
+ *
+ * @param {lunr.PipelineFunction[]} functions - Any number of functions to add to the pipeline.
+ */
+lunr.Pipeline.prototype.add = function () {
+ var fns = Array.prototype.slice.call(arguments)
+
+ fns.forEach(function (fn) {
+ lunr.Pipeline.warnIfFunctionNotRegistered(fn)
+ this._stack.push(fn)
+ }, this)
+}
+
+/**
+ * Adds a single function after a function that already exists in the
+ * pipeline.
+ *
+ * Logs a warning if the function has not been registered.
+ *
+ * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline.
+ * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline.
+ */
+lunr.Pipeline.prototype.after = function (existingFn, newFn) {
+ lunr.Pipeline.warnIfFunctionNotRegistered(newFn)
+
+ var pos = this._stack.indexOf(existingFn)
+ if (pos == -1) {
+ throw new Error('Cannot find existingFn')
+ }
+
+ pos = pos + 1
+ this._stack.splice(pos, 0, newFn)
+}
+
+/**
+ * Adds a single function before a function that already exists in the
+ * pipeline.
+ *
+ * Logs a warning if the function has not been registered.
+ *
+ * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline.
+ * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline.
+ */
+lunr.Pipeline.prototype.before = function (existingFn, newFn) {
+ lunr.Pipeline.warnIfFunctionNotRegistered(newFn)
+
+ var pos = this._stack.indexOf(existingFn)
+ if (pos == -1) {
+ throw new Error('Cannot find existingFn')
+ }
+
+ this._stack.splice(pos, 0, newFn)
+}
+
+/**
+ * Removes a function from the pipeline.
+ *
+ * @param {lunr.PipelineFunction} fn The function to remove from the pipeline.
+ */
+lunr.Pipeline.prototype.remove = function (fn) {
+ var pos = this._stack.indexOf(fn)
+ if (pos == -1) {
+ return
+ }
+
+ this._stack.splice(pos, 1)
+}
+
+/**
+ * Runs the current list of functions that make up the pipeline against the
+ * passed tokens.
+ *
+ * @param {Array} tokens The tokens to run through the pipeline.
+ * @returns {Array}
+ */
+lunr.Pipeline.prototype.run = function (tokens) {
+ var stackLength = this._stack.length
+
+ for (var i = 0; i < stackLength; i++) {
+ var fn = this._stack[i]
+
+ tokens = tokens.reduce(function (memo, token, j) {
+ var result = fn(token, j, tokens)
+
+ if (result === void 0 || result === '') return memo
+
+ return memo.concat(result)
+ }, [])
+ }
+
+ return tokens
+}
+
+/**
+ * Convenience method for passing a string through a pipeline and getting
+ * strings out. This method takes care of wrapping the passed string in a
+ * token and mapping the resulting tokens back to strings.
+ *
+ * @param {string} str - The string to pass through the pipeline.
+ * @returns {string[]}
+ */
+lunr.Pipeline.prototype.runString = function (str) {
+ var token = new lunr.Token (str)
+
+ return this.run([token]).map(function (t) {
+ return t.toString()
+ })
+}
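+
+// Illustrative usage (not part of the upstream lunr source), assuming the
+// trimmer, stop word filter and stemmer defined later in this file:
+//
+//   var pipeline = new lunr.Pipeline
+//   pipeline.add(lunr.trimmer, lunr.stopWordFilter, lunr.stemmer)
+//   pipeline.runString("fishing")
+//   // => ["fish"]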
+
+/**
+ * Resets the pipeline by removing any existing processors.
+ *
+ */
+lunr.Pipeline.prototype.reset = function () {
+ this._stack = []
+}
+
+/**
+ * Returns a representation of the pipeline ready for serialisation.
+ *
+ * Logs a warning if the function has not been registered.
+ *
+ * @returns {Array}
+ */
+lunr.Pipeline.prototype.toJSON = function () {
+ return this._stack.map(function (fn) {
+ lunr.Pipeline.warnIfFunctionNotRegistered(fn)
+
+ return fn.label
+ })
+}
+/*!
+ * lunr.Vector
+ * Copyright (C) 2017 Oliver Nightingale
+ */
+
+/**
+ * A vector is used to construct the vector space of documents and queries. These
+ * vectors support operations to determine the similarity between two documents or
+ * a document and a query.
+ *
+ * Normally no parameters are required for initializing a vector, but in the case of
+ * loading a previously dumped vector the raw elements can be provided to the constructor.
+ *
+ * For performance reasons vectors are implemented with a flat array, where an element's
+ * index is immediately followed by its value. E.g. [index, value, index, value]. This
+ * allows the underlying array to be as sparse as possible and still offer decent
+ * performance when being used for vector calculations.
+ *
+ * @constructor
+ * @param {Number[]} [elements] - The flat list of element index and element value pairs.
+ */
+lunr.Vector = function (elements) {
+ this._magnitude = 0
+ this.elements = elements || []
+}
+
+
+/**
+ * Calculates the position within the vector to insert a given index.
+ *
+ * This is used internally by insert and upsert. If there are duplicate indexes then
+ * the position is returned as if the value for that index were to be updated, but it
+ * is the caller's responsibility to check whether there is a duplicate at that index.
+ *
+ * @param {Number} index - The index at which the element should be inserted.
+ * @returns {Number}
+ */
+lunr.Vector.prototype.positionForIndex = function (index) {
+ // For an empty vector the tuple can be inserted at the beginning
+ if (this.elements.length == 0) {
+ return 0
+ }
+
+ var start = 0,
+ end = this.elements.length / 2,
+ sliceLength = end - start,
+ pivotPoint = Math.floor(sliceLength / 2),
+ pivotIndex = this.elements[pivotPoint * 2]
+
+ while (sliceLength > 1) {
+ if (pivotIndex < index) {
+ start = pivotPoint
+ }
+
+ if (pivotIndex > index) {
+ end = pivotPoint
+ }
+
+ if (pivotIndex == index) {
+ break
+ }
+
+ sliceLength = end - start
+ pivotPoint = start + Math.floor(sliceLength / 2)
+ pivotIndex = this.elements[pivotPoint * 2]
+ }
+
+ if (pivotIndex == index) {
+ return pivotPoint * 2
+ }
+
+ if (pivotIndex > index) {
+ return pivotPoint * 2
+ }
+
+ if (pivotIndex < index) {
+ return (pivotPoint + 1) * 2
+ }
+}
+
+/**
+ * Inserts an element at an index within the vector.
+ *
+ * Does not allow duplicates; an error will be thrown if there is already an entry
+ * for this index.
+ *
+ * @param {Number} insertIdx - The index at which the element should be inserted.
+ * @param {Number} val - The value to be inserted into the vector.
+ */
+lunr.Vector.prototype.insert = function (insertIdx, val) {
+ this.upsert(insertIdx, val, function () {
+ throw "duplicate index"
+ })
+}
+
+/**
+ * Inserts or updates an existing index within the vector.
+ *
+ * @param {Number} insertIdx - The index at which the element should be inserted.
+ * @param {Number} val - The value to be inserted into the vector.
+ * @param {function} fn - A function that is called for updates, the existing value and the
+ * requested value are passed as arguments
+ */
+lunr.Vector.prototype.upsert = function (insertIdx, val, fn) {
+ this._magnitude = 0
+ var position = this.positionForIndex(insertIdx)
+
+ if (this.elements[position] == insertIdx) {
+ this.elements[position + 1] = fn(this.elements[position + 1], val)
+ } else {
+ this.elements.splice(position, 0, insertIdx, val)
+ }
+}
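+
+// Illustrative usage (not part of the upstream lunr source): upsert takes a
+// function that resolves a clash between the existing and the requested value.
+//
+//   var v = new lunr.Vector
+//   v.insert(5, 2)
+//   v.upsert(5, 3, function (existing, requested) { return existing + requested })
+//   v.toJSON()
+//   // => [5, 5]  (index 5 now holds 2 + 3)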
+
+/**
+ * Calculates the magnitude of this vector.
+ *
+ * @returns {Number}
+ */
+lunr.Vector.prototype.magnitude = function () {
+ if (this._magnitude) return this._magnitude
+
+ var sumOfSquares = 0,
+ elementsLength = this.elements.length
+
+ for (var i = 1; i < elementsLength; i += 2) {
+ var val = this.elements[i]
+ sumOfSquares += val * val
+ }
+
+ return this._magnitude = Math.sqrt(sumOfSquares)
+}
+
+/**
+ * Calculates the dot product of this vector and another vector.
+ *
+ * @param {lunr.Vector} otherVector - The vector to compute the dot product with.
+ * @returns {Number}
+ */
+lunr.Vector.prototype.dot = function (otherVector) {
+ var dotProduct = 0,
+ a = this.elements, b = otherVector.elements,
+ aLen = a.length, bLen = b.length,
+ aVal = 0, bVal = 0,
+ i = 0, j = 0
+
+ while (i < aLen && j < bLen) {
+ aVal = a[i], bVal = b[j]
+ if (aVal < bVal) {
+ i += 2
+ } else if (aVal > bVal) {
+ j += 2
+ } else if (aVal == bVal) {
+ dotProduct += a[i + 1] * b[j + 1]
+ i += 2
+ j += 2
+ }
+ }
+
+ return dotProduct
+}
+
+/**
+ * Calculates the cosine similarity between this vector and another
+ * vector.
+ *
+ * @param {lunr.Vector} otherVector - The other vector to calculate the
+ * similarity with.
+ * @returns {Number}
+ */
+lunr.Vector.prototype.similarity = function (otherVector) {
+ return this.dot(otherVector) / (this.magnitude() * otherVector.magnitude())
+}
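+
+// Illustrative usage (not part of the upstream lunr source): identical
+// vectors have a cosine similarity of 1.
+//
+//   var a = new lunr.Vector([0, 3, 2, 4]),
+//       b = new lunr.Vector([0, 3, 2, 4])
+//
+//   a.similarity(b)
+//   // => 1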
+
+/**
+ * Converts the vector to an array of the elements within the vector.
+ *
+ * @returns {Number[]}
+ */
+lunr.Vector.prototype.toArray = function () {
+ var output = new Array (this.elements.length / 2)
+
+ for (var i = 1, j = 0; i < this.elements.length; i += 2, j++) {
+ output[j] = this.elements[i]
+ }
+
+ return output
+}
+
+/**
+ * A JSON serializable representation of the vector.
+ *
+ * @returns {Number[]}
+ */
+lunr.Vector.prototype.toJSON = function () {
+ return this.elements
+}
+/* eslint-disable */
+/*!
+ * lunr.stemmer
+ * Copyright (C) 2017 Oliver Nightingale
+ * Includes code from - http://tartarus.org/~martin/PorterStemmer/js.txt
+ */
+
+/**
+ * lunr.stemmer is an english language stemmer, this is a JavaScript
+ * implementation of the PorterStemmer taken from http://tartarus.org/~martin
+ *
+ * @static
+ * @implements {lunr.PipelineFunction}
+ * @param {lunr.Token} token - The string to stem
+ * @returns {lunr.Token}
+ * @see {@link lunr.Pipeline}
+ */
+lunr.stemmer = (function(){
+ var step2list = {
+ "ational" : "ate",
+ "tional" : "tion",
+ "enci" : "ence",
+ "anci" : "ance",
+ "izer" : "ize",
+ "bli" : "ble",
+ "alli" : "al",
+ "entli" : "ent",
+ "eli" : "e",
+ "ousli" : "ous",
+ "ization" : "ize",
+ "ation" : "ate",
+ "ator" : "ate",
+ "alism" : "al",
+ "iveness" : "ive",
+ "fulness" : "ful",
+ "ousness" : "ous",
+ "aliti" : "al",
+ "iviti" : "ive",
+ "biliti" : "ble",
+ "logi" : "log"
+ },
+
+ step3list = {
+ "icate" : "ic",
+ "ative" : "",
+ "alize" : "al",
+ "iciti" : "ic",
+ "ical" : "ic",
+ "ful" : "",
+ "ness" : ""
+ },
+
+ c = "[^aeiou]", // consonant
+ v = "[aeiouy]", // vowel
+ C = c + "[^aeiouy]*", // consonant sequence
+ V = v + "[aeiou]*", // vowel sequence
+
+ mgr0 = "^(" + C + ")?" + V + C, // [C]VC... is m>0
+ meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$", // [C]VC[V] is m=1
+ mgr1 = "^(" + C + ")?" + V + C + V + C, // [C]VCVC... is m>1
+ s_v = "^(" + C + ")?" + v; // vowel in stem
+
+ var re_mgr0 = new RegExp(mgr0);
+ var re_mgr1 = new RegExp(mgr1);
+ var re_meq1 = new RegExp(meq1);
+ var re_s_v = new RegExp(s_v);
+
+ var re_1a = /^(.+?)(ss|i)es$/;
+ var re2_1a = /^(.+?)([^s])s$/;
+ var re_1b = /^(.+?)eed$/;
+ var re2_1b = /^(.+?)(ed|ing)$/;
+ var re_1b_2 = /.$/;
+ var re2_1b_2 = /(at|bl|iz)$/;
+ var re3_1b_2 = new RegExp("([^aeiouylsz])\\1$");
+ var re4_1b_2 = new RegExp("^" + C + v + "[^aeiouwxy]$");
+
+ var re_1c = /^(.+?[^aeiou])y$/;
+ var re_2 = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/;
+
+ var re_3 = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/;
+
+ var re_4 = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/;
+ var re2_4 = /^(.+?)(s|t)(ion)$/;
+
+ var re_5 = /^(.+?)e$/;
+ var re_5_1 = /ll$/;
+ var re3_5 = new RegExp("^" + C + v + "[^aeiouwxy]$");
+
+ var porterStemmer = function porterStemmer(w) {
+ var stem,
+ suffix,
+ firstch,
+ re,
+ re2,
+ re3,
+ re4;
+
+ if (w.length < 3) { return w; }
+
+ firstch = w.substr(0,1);
+ if (firstch == "y") {
+ w = firstch.toUpperCase() + w.substr(1);
+ }
+
+ // Step 1a
+ re = re_1a
+ re2 = re2_1a;
+
+ if (re.test(w)) { w = w.replace(re,"$1$2"); }
+ else if (re2.test(w)) { w = w.replace(re2,"$1$2"); }
+
+ // Step 1b
+ re = re_1b;
+ re2 = re2_1b;
+ if (re.test(w)) {
+ var fp = re.exec(w);
+ re = re_mgr0;
+ if (re.test(fp[1])) {
+ re = re_1b_2;
+ w = w.replace(re,"");
+ }
+ } else if (re2.test(w)) {
+ var fp = re2.exec(w);
+ stem = fp[1];
+ re2 = re_s_v;
+ if (re2.test(stem)) {
+ w = stem;
+ re2 = re2_1b_2;
+ re3 = re3_1b_2;
+ re4 = re4_1b_2;
+ if (re2.test(w)) { w = w + "e"; }
+ else if (re3.test(w)) { re = re_1b_2; w = w.replace(re,""); }
+ else if (re4.test(w)) { w = w + "e"; }
+ }
+ }
+
+ // Step 1c - replace suffix y or Y by i if preceded by a non-vowel which is not the first letter of the word (so cry -> cri, by -> by, say -> say)
+ re = re_1c;
+ if (re.test(w)) {
+ var fp = re.exec(w);
+ stem = fp[1];
+ w = stem + "i";
+ }
+
+ // Step 2
+ re = re_2;
+ if (re.test(w)) {
+ var fp = re.exec(w);
+ stem = fp[1];
+ suffix = fp[2];
+ re = re_mgr0;
+ if (re.test(stem)) {
+ w = stem + step2list[suffix];
+ }
+ }
+
+ // Step 3
+ re = re_3;
+ if (re.test(w)) {
+ var fp = re.exec(w);
+ stem = fp[1];
+ suffix = fp[2];
+ re = re_mgr0;
+ if (re.test(stem)) {
+ w = stem + step3list[suffix];
+ }
+ }
+
+ // Step 4
+ re = re_4;
+ re2 = re2_4;
+ if (re.test(w)) {
+ var fp = re.exec(w);
+ stem = fp[1];
+ re = re_mgr1;
+ if (re.test(stem)) {
+ w = stem;
+ }
+ } else if (re2.test(w)) {
+ var fp = re2.exec(w);
+ stem = fp[1] + fp[2];
+ re2 = re_mgr1;
+ if (re2.test(stem)) {
+ w = stem;
+ }
+ }
+
+ // Step 5
+ re = re_5;
+ if (re.test(w)) {
+ var fp = re.exec(w);
+ stem = fp[1];
+ re = re_mgr1;
+ re2 = re_meq1;
+ re3 = re3_5;
+ if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) {
+ w = stem;
+ }
+ }
+
+ re = re_5_1;
+ re2 = re_mgr1;
+ if (re.test(w) && re2.test(w)) {
+ re = re_1b_2;
+ w = w.replace(re,"");
+ }
+
+ // and turn initial Y back to y
+
+ if (firstch == "y") {
+ w = firstch.toLowerCase() + w.substr(1);
+ }
+
+ return w;
+ };
+
+ return function (token) {
+ return token.update(porterStemmer);
+ }
+})();
+
+lunr.Pipeline.registerFunction(lunr.stemmer, 'stemmer')
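+
+// Illustrative usage (not part of the upstream lunr source):
+//
+//   lunr.stemmer(new lunr.Token("fishing")).toString()
+//   // => "fish"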
+/*!
+ * lunr.stopWordFilter
+ * Copyright (C) 2017 Oliver Nightingale
+ */
+
+/**
+ * lunr.generateStopWordFilter builds a stopWordFilter function from the provided
+ * list of stop words.
+ *
+ * The built in lunr.stopWordFilter is built using this generator and can be used
+ * to generate custom stopWordFilters for applications or non English languages.
+ *
+ * @param {Array} stopWords - The list of stop words to filter out.
+ * @returns {lunr.PipelineFunction}
+ * @see lunr.Pipeline
+ * @see lunr.stopWordFilter
+ */
+lunr.generateStopWordFilter = function (stopWords) {
+ var words = stopWords.reduce(function (memo, stopWord) {
+ memo[stopWord] = stopWord
+ return memo
+ }, {})
+
+ return function (token) {
+ if (token && words[token.toString()] !== token.toString()) return token
+ }
+}
+
+/**
+ * lunr.stopWordFilter is an English language stop word list filter, any words
+ * contained in the list will not be passed through the filter.
+ *
+ * This is intended to be used in the Pipeline. If the token does not pass the
+ * filter then undefined will be returned.
+ *
+ * @implements {lunr.PipelineFunction}
+ * @param {lunr.Token} token - A token to check for being a stop word.
+ * @returns {lunr.Token}
+ * @see {@link lunr.Pipeline}
+ */
+lunr.stopWordFilter = lunr.generateStopWordFilter([
+ 'a',
+ 'able',
+ 'about',
+ 'across',
+ 'after',
+ 'all',
+ 'almost',
+ 'also',
+ 'am',
+ 'among',
+ 'an',
+ 'and',
+ 'any',
+ 'are',
+ 'as',
+ 'at',
+ 'be',
+ 'because',
+ 'been',
+ 'but',
+ 'by',
+ 'can',
+ 'cannot',
+ 'could',
+ 'dear',
+ 'did',
+ 'do',
+ 'does',
+ 'either',
+ 'else',
+ 'ever',
+ 'every',
+ 'for',
+ 'from',
+ 'get',
+ 'got',
+ 'had',
+ 'has',
+ 'have',
+ 'he',
+ 'her',
+ 'hers',
+ 'him',
+ 'his',
+ 'how',
+ 'however',
+ 'i',
+ 'if',
+ 'in',
+ 'into',
+ 'is',
+ 'it',
+ 'its',
+ 'just',
+ 'least',
+ 'let',
+ 'like',
+ 'likely',
+ 'may',
+ 'me',
+ 'might',
+ 'most',
+ 'must',
+ 'my',
+ 'neither',
+ 'no',
+ 'nor',
+ 'not',
+ 'of',
+ 'off',
+ 'often',
+ 'on',
+ 'only',
+ 'or',
+ 'other',
+ 'our',
+ 'own',
+ 'rather',
+ 'said',
+ 'say',
+ 'says',
+ 'she',
+ 'should',
+ 'since',
+ 'so',
+ 'some',
+ 'than',
+ 'that',
+ 'the',
+ 'their',
+ 'them',
+ 'then',
+ 'there',
+ 'these',
+ 'they',
+ 'this',
+ 'tis',
+ 'to',
+ 'too',
+ 'twas',
+ 'us',
+ 'wants',
+ 'was',
+ 'we',
+ 'were',
+ 'what',
+ 'when',
+ 'where',
+ 'which',
+ 'while',
+ 'who',
+ 'whom',
+ 'why',
+ 'will',
+ 'with',
+ 'would',
+ 'yet',
+ 'you',
+ 'your'
+])
+
+lunr.Pipeline.registerFunction(lunr.stopWordFilter, 'stopWordFilter')
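+
+// Illustrative usage (not part of the upstream lunr source): stop words are
+// dropped by returning undefined; other tokens pass through unchanged.
+//
+//   lunr.stopWordFilter(new lunr.Token("the"))     // => undefined
+//   lunr.stopWordFilter(new lunr.Token("whale"))   // => the token "whale"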
+/*!
+ * lunr.trimmer
+ * Copyright (C) 2017 Oliver Nightingale
+ */
+
+/**
+ * lunr.trimmer is a pipeline function for trimming non word
+ * characters from the beginning and end of tokens before they
+ * enter the index.
+ *
+ * This implementation may not work correctly for non-latin
+ * characters and should either be removed or adapted for use
+ * with languages with non-latin characters.
+ *
+ * @static
+ * @implements {lunr.PipelineFunction}
+ * @param {lunr.Token} token The token to pass through the filter
+ * @returns {lunr.Token}
+ * @see lunr.Pipeline
+ */
+lunr.trimmer = function (token) {
+ return token.update(function (s) {
+ return s.replace(/^\W+/, '').replace(/\W+$/, '')
+ })
+}
+
+lunr.Pipeline.registerFunction(lunr.trimmer, 'trimmer')
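+
+// Illustrative usage (not part of the upstream lunr source):
+//
+//   lunr.trimmer(new lunr.Token('"hello!"')).toString()
+//   // => "hello"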
+/*!
+ * lunr.TokenSet
+ * Copyright (C) 2017 Oliver Nightingale
+ */
+
+/**
+ * A token set is used to store the unique list of all tokens
+ * within an index. Token sets are also used to represent an
+ * incoming query to the index, this query token set and index
+ * token set are then intersected to find which tokens to look
+ * up in the inverted index.
+ *
+ * A token set can hold multiple tokens, as in the case of the
+ * index token set, or it can hold a single token as in the
+ * case of a simple query token set.
+ *
+ * Additionally token sets are used to perform wildcard matching.
+ * Leading, contained and trailing wildcards are supported, and
+ * from this edit distance matching can also be provided.
+ *
+ * Token sets are implemented as a minimal finite state automaton,
+ * where both common prefixes and suffixes are shared between tokens.
+ * This helps to reduce the space used for storing the token set.
+ *
+ * @constructor
+ */
+lunr.TokenSet = function () {
+ this.final = false
+ this.edges = {}
+ this.id = lunr.TokenSet._nextId
+ lunr.TokenSet._nextId += 1
+}
+
+/**
+ * Keeps track of the next, auto increment, identifier to assign
+ * to a new tokenSet.
+ *
+ * TokenSets require a unique identifier to be correctly minimised.
+ *
+ * @private
+ */
+lunr.TokenSet._nextId = 1
+
+/**
+ * Creates a TokenSet instance from the given sorted array of words.
+ *
+ * @param {String[]} arr - A sorted array of strings to create the set from.
+ * @returns {lunr.TokenSet}
+ * @throws Will throw an error if the input array is not sorted.
+ */
+lunr.TokenSet.fromArray = function (arr) {
+ var builder = new lunr.TokenSet.Builder
+
+ for (var i = 0, len = arr.length; i < len; i++) {
+ builder.insert(arr[i])
+ }
+
+ builder.finish()
+ return builder.root
+}
+
+/**
+ * Creates a token set from a query clause.
+ *
+ * @private
+ * @param {Object} clause - A single clause from lunr.Query.
+ * @param {string} clause.term - The query clause term.
+ * @param {number} [clause.editDistance] - The optional edit distance for the term.
+ * @returns {lunr.TokenSet}
+ */
+lunr.TokenSet.fromClause = function (clause) {
+ if ('editDistance' in clause) {
+ return lunr.TokenSet.fromFuzzyString(clause.term, clause.editDistance)
+ } else {
+ return lunr.TokenSet.fromString(clause.term)
+ }
+}
+
+/**
+ * Creates a token set representing a single string with a specified
+ * edit distance.
+ *
+ * Insertions, deletions, substitutions and transpositions are each
+ * treated as an edit distance of 1.
+ *
+ * Increasing the allowed edit distance will have a dramatic impact
+ * on the performance of both creating and intersecting these TokenSets.
+ * It is advised to keep the edit distance less than 3.
+ *
+ * @param {string} str - The string to create the token set from.
+ * @param {number} editDistance - The allowed edit distance to match.
+ * @returns {lunr.TokenSet}
+ */
+lunr.TokenSet.fromFuzzyString = function (str, editDistance) {
+ var root = new lunr.TokenSet
+
+ var stack = [{
+ node: root,
+ editsRemaining: editDistance,
+ str: str
+ }]
+
+ while (stack.length) {
+ var frame = stack.pop()
+
+ // no edit
+ if (frame.str.length > 0) {
+ var char = frame.str.charAt(0),
+ noEditNode
+
+ if (char in frame.node.edges) {
+ noEditNode = frame.node.edges[char]
+ } else {
+ noEditNode = new lunr.TokenSet
+ frame.node.edges[char] = noEditNode
+ }
+
+ if (frame.str.length == 1) {
+ noEditNode.final = true
+ } else {
+ stack.push({
+ node: noEditNode,
+ editsRemaining: frame.editsRemaining,
+ str: frame.str.slice(1)
+ })
+ }
+ }
+
+ // deletion
+ // can only do a deletion if we have enough edits remaining
+ // and if there are characters left to delete in the string
+ if (frame.editsRemaining > 0 && frame.str.length > 1) {
+ var char = frame.str.charAt(1),
+ deletionNode
+
+ if (char in frame.node.edges) {
+ deletionNode = frame.node.edges[char]
+ } else {
+ deletionNode = new lunr.TokenSet
+ frame.node.edges[char] = deletionNode
+ }
+
+ if (frame.str.length <= 2) {
+ deletionNode.final = true
+ } else {
+ stack.push({
+ node: deletionNode,
+ editsRemaining: frame.editsRemaining - 1,
+ str: frame.str.slice(2)
+ })
+ }
+ }
+
+ // deletion
+ // just removing the last character from the str
+ if (frame.editsRemaining > 0 && frame.str.length == 1) {
+ frame.node.final = true
+ }
+
+ // substitution
+ // can only do a substitution if we have enough edits remaining
+ // and if there are characters left to substitute
+ if (frame.editsRemaining > 0 && frame.str.length >= 1) {
+ if ("*" in frame.node.edges) {
+ var substitutionNode = frame.node.edges["*"]
+ } else {
+ var substitutionNode = new lunr.TokenSet
+ frame.node.edges["*"] = substitutionNode
+ }
+
+ if (frame.str.length == 1) {
+ substitutionNode.final = true
+ } else {
+ stack.push({
+ node: substitutionNode,
+ editsRemaining: frame.editsRemaining - 1,
+ str: frame.str.slice(1)
+ })
+ }
+ }
+
+ // insertion
+ // can only do insertion if there are edits remaining
+ if (frame.editsRemaining > 0) {
+ if ("*" in frame.node.edges) {
+ var insertionNode = frame.node.edges["*"]
+ } else {
+ var insertionNode = new lunr.TokenSet
+ frame.node.edges["*"] = insertionNode
+ }
+
+ if (frame.str.length == 0) {
+ insertionNode.final = true
+ } else {
+ stack.push({
+ node: insertionNode,
+ editsRemaining: frame.editsRemaining - 1,
+ str: frame.str
+ })
+ }
+ }
+
+ // transposition
+ // can only do a transposition if there are edits remaining
+ // and there are enough characters to transpose
+ if (frame.editsRemaining > 0 && frame.str.length > 1) {
+ var charA = frame.str.charAt(0),
+ charB = frame.str.charAt(1),
+ transposeNode
+
+ if (charB in frame.node.edges) {
+ transposeNode = frame.node.edges[charB]
+ } else {
+ transposeNode = new lunr.TokenSet
+ frame.node.edges[charB] = transposeNode
+ }
+
+ if (frame.str.length == 1) {
+ transposeNode.final = true
+ } else {
+ stack.push({
+ node: transposeNode,
+ editsRemaining: frame.editsRemaining - 1,
+ str: charA + frame.str.slice(2)
+ })
+ }
+ }
+ }
+
+ return root
+}
+
+/**
+ * Creates a TokenSet from a string.
+ *
+ * The string may contain one or more wildcard characters (*)
+ * that will allow wildcard matching when intersecting with
+ * another TokenSet.
+ *
+ * @param {string} str - The string to create a TokenSet from.
+ * @returns {lunr.TokenSet}
+ */
+lunr.TokenSet.fromString = function (str) {
+ var node = new lunr.TokenSet,
+ root = node,
+ wildcardFound = false
+
+ /*
+ * Iterates through all characters within the passed string
+ * appending a node for each character.
+ *
+ * As soon as a wildcard character is found then a self
+ * referencing edge is introduced to continually match
+ * any number of any characters.
+ */
+ for (var i = 0, len = str.length; i < len; i++) {
+ var char = str[i],
+ final = (i == len - 1)
+
+ if (char == "*") {
+ wildcardFound = true
+ node.edges[char] = node
+ node.final = final
+
+ } else {
+ var next = new lunr.TokenSet
+ next.final = final
+
+ node.edges[char] = next
+ node = next
+
+ // TODO: is this needed anymore?
+ if (wildcardFound) {
+ node.edges["*"] = root
+ }
+ }
+ }
+
+ return root
+}
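+
+// Illustrative usage (not part of the upstream lunr source): intersecting an
+// index token set with a wildcard query token set performs prefix matching.
+// Note the wildcard must be on the argument (query) side of intersect.
+//
+//   var index = lunr.TokenSet.fromArray(["cat", "catalog", "dog"]),
+//       query = lunr.TokenSet.fromString("cat*")
+//
+//   index.intersect(query).toArray()
+//   // contains "cat" and "catalog"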
+
+/**
+ * Converts this TokenSet into an array of strings
+ * contained within the TokenSet.
+ *
+ * @returns {string[]}
+ */
+lunr.TokenSet.prototype.toArray = function () {
+ var words = []
+
+ var stack = [{
+ prefix: "",
+ node: this
+ }]
+
+ while (stack.length) {
+ var frame = stack.pop(),
+ edges = Object.keys(frame.node.edges),
+ len = edges.length
+
+ if (frame.node.final) {
+ words.push(frame.prefix)
+ }
+
+ for (var i = 0; i < len; i++) {
+ var edge = edges[i]
+
+ stack.push({
+ prefix: frame.prefix.concat(edge),
+ node: frame.node.edges[edge]
+ })
+ }
+ }
+
+ return words
+}
+
+/**
+ * Generates a string representation of a TokenSet.
+ *
+ * This is intended to allow TokenSets to be used as keys
+ * in objects, largely to aid the construction and minimisation
+ * of a TokenSet. As such it is not designed to be a human
+ * friendly representation of the TokenSet.
+ *
+ * @returns {string}
+ */
+lunr.TokenSet.prototype.toString = function () {
+ // NOTE: Using Object.keys here as this.edges is very likely
+ // to enter 'hash-mode' with many keys being added
+ //
+ // avoiding a for-in loop here as it leads to the function
+ // being de-optimised (at least in V8). From some simple
+ // benchmarks the performance is comparable, but allowing
+ // V8 to optimize may mean easy performance wins in the future.
+
+ if (this._str) {
+ return this._str
+ }
+
+ var str = this.final ? '1' : '0',
+ labels = Object.keys(this.edges).sort(),
+ len = labels.length
+
+ for (var i = 0; i < len; i++) {
+ var label = labels[i],
+ node = this.edges[label]
+
+ str = str + label + node.id
+ }
+
+ return str
+}
+
+/**
+ * Returns a new TokenSet that is the intersection of
+ * this TokenSet and the passed TokenSet.
+ *
+ * This intersection will take into account any wildcards
+ * contained within the TokenSet.
+ *
+ * @param {lunr.TokenSet} b - Another TokenSet to intersect with.
+ * @returns {lunr.TokenSet}
+ */
+lunr.TokenSet.prototype.intersect = function (b) {
+ var output = new lunr.TokenSet,
+ frame = undefined
+
+ var stack = [{
+ qNode: b,
+ output: output,
+ node: this
+ }]
+
+ while (stack.length) {
+ frame = stack.pop()
+
+ // NOTE: As with the #toString method, we are using
+ // Object.keys and a for loop instead of a for-in loop
+ // as both of these objects enter 'hash' mode, causing
+ // the function to be de-optimised in V8
+ var qEdges = Object.keys(frame.qNode.edges),
+ qLen = qEdges.length,
+ nEdges = Object.keys(frame.node.edges),
+ nLen = nEdges.length
+
+ for (var q = 0; q < qLen; q++) {
+ var qEdge = qEdges[q]
+
+ for (var n = 0; n < nLen; n++) {
+ var nEdge = nEdges[n]
+
+ if (nEdge == qEdge || qEdge == '*') {
+ var node = frame.node.edges[nEdge],
+ qNode = frame.qNode.edges[qEdge],
+ final = node.final && qNode.final,
+ next = undefined
+
+ if (nEdge in frame.output.edges) {
+ // an edge already exists for this character
+ // no need to create a new node, just set the finality
+ // bit unless this node is already final
+ next = frame.output.edges[nEdge]
+ next.final = next.final || final
+
+ } else {
+ // no edge exists yet, must create one
+ // set the finality bit and insert it
+ // into the output
+ next = new lunr.TokenSet
+ next.final = final
+ frame.output.edges[nEdge] = next
+ }
+
+ stack.push({
+ qNode: qNode,
+ output: next,
+ node: node
+ })
+ }
+ }
+ }
+ }
+
+ return output
+}
+lunr.TokenSet.Builder = function () {
+ this.previousWord = ""
+ this.root = new lunr.TokenSet
+ this.uncheckedNodes = []
+ this.minimizedNodes = {}
+}
+
+lunr.TokenSet.Builder.prototype.insert = function (word) {
+ var node,
+ commonPrefix = 0
+
+ if (word < this.previousWord) {
+ throw new Error ("Out of order word insertion")
+ }
+
+ for (var i = 0; i < word.length && i < this.previousWord.length; i++) {
+ if (word[i] != this.previousWord[i]) break
+ commonPrefix++
+ }
+
+ this.minimize(commonPrefix)
+
+ if (this.uncheckedNodes.length == 0) {
+ node = this.root
+ } else {
+ node = this.uncheckedNodes[this.uncheckedNodes.length - 1].child
+ }
+
+ for (var i = commonPrefix; i < word.length; i++) {
+ var nextNode = new lunr.TokenSet,
+ char = word[i]
+
+ node.edges[char] = nextNode
+
+ this.uncheckedNodes.push({
+ parent: node,
+ char: char,
+ child: nextNode
+ })
+
+ node = nextNode
+ }
+
+ node.final = true
+ this.previousWord = word
+}
+
+lunr.TokenSet.Builder.prototype.finish = function () {
+ this.minimize(0)
+}
+
+lunr.TokenSet.Builder.prototype.minimize = function (downTo) {
+ for (var i = this.uncheckedNodes.length - 1; i >= downTo; i--) {
+ var node = this.uncheckedNodes[i],
+ childKey = node.child.toString()
+
+ if (childKey in this.minimizedNodes) {
+ node.parent.edges[node.char] = this.minimizedNodes[childKey]
+ } else {
+ // Cache the key for this node since
+ // we know it can't change anymore
+ node.child._str = childKey
+
+ this.minimizedNodes[childKey] = node.child
+ }
+
+ this.uncheckedNodes.pop()
+ }
+}
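The builder above turns a sorted word list into a minimal acyclic automaton by sharing identical sub-trees, keyed on a serialized form of each finished node. A standalone sketch of that idea (the `Node`, `buildMinimal` and `contains` names are hypothetical, not lunr's API):

```javascript
// Minimal-automaton sketch: insert words in sorted order, then de-duplicate
// finished suffix nodes by a string key so identical sub-trees are shared.
function Node () {
  this.final = false
  this.edges = {}
}

// Serialize a sub-tree into a key; equal keys mean interchangeable nodes.
Node.prototype.key = function () {
  var parts = [this.final ? "1" : "0"]
  for (var label in this.edges) {
    parts.push(label, this.edges[label].key())
  }
  return parts.join("")
}

function buildMinimal (sortedWords) {
  var root = new Node(),
      unchecked = [],  // nodes not yet minimised, deepest last
      minimized = {}   // serialized sub-tree -> shared node

  function minimize (downTo) {
    for (var i = unchecked.length - 1; i >= downTo; i--) {
      var entry = unchecked.pop(),
          k = entry.child.key()
      if (k in minimized) {
        entry.parent.edges[entry.char] = minimized[k]
      } else {
        minimized[k] = entry.child
      }
    }
  }

  var previous = ""
  sortedWords.forEach(function (word) {
    var prefix = 0
    while (prefix < word.length && prefix < previous.length &&
           word[prefix] == previous[prefix]) prefix++

    // everything deeper than the shared prefix can no longer change
    minimize(prefix)

    var node = unchecked.length ? unchecked[unchecked.length - 1].child : root
    for (var i = prefix; i < word.length; i++) {
      var next = new Node()
      node.edges[word[i]] = next
      unchecked.push({ parent: node, char: word[i], child: next })
      node = next
    }
    node.final = true
    previous = word
  })

  minimize(0)
  return root
}

// Membership test: walk the edges, accept only on a final node.
function contains (root, word) {
  var node = root
  for (var i = 0; i < word.length; i++) {
    node = node.edges[word[i]]
    if (!node) return false
  }
  return node.final
}
```

After `buildMinimal(["bat", "cat"])`, the "at" suffix exists only once: `root.edges["b"]` and `root.edges["c"]` point at the same node.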
+/*!
+ * lunr.Index
+ * Copyright (C) 2017 Oliver Nightingale
+ */
+
+/**
+ * An index contains the built index of all documents and provides a query interface
+ * to the index.
+ *
+ * Usually instances of lunr.Index will not be created using this constructor; instead
+ * lunr.Builder should be used to construct new indexes, or lunr.Index.load should be
+ * used to load previously built and serialized indexes.
+ *
+ * @constructor
+ * @param {Object} attrs - The attributes of the built search index.
+ * @param {Object} attrs.invertedIndex - An index of term/field to document reference.
+ * @param {Object} attrs.documentVectors - Document vectors keyed by document reference.
+ * @param {lunr.TokenSet} attrs.tokenSet - A set of all corpus tokens.
+ * @param {number} attrs.documentCount - The total number of documents held in the index.
+ * @param {number} attrs.averageDocumentLength - The average length of all documents in the index.
+ * @param {number} attrs.b - A parameter for the document scoring algorithm.
+ * @param {number} attrs.k1 - A parameter for the document scoring algorithm.
+ * @param {string[]} attrs.fields - The names of indexed document fields.
+ * @param {lunr.Pipeline} attrs.pipeline - The pipeline to use for search terms.
+ */
+lunr.Index = function (attrs) {
+ this.invertedIndex = attrs.invertedIndex
+ this.documentVectors = attrs.documentVectors
+ this.tokenSet = attrs.tokenSet
+ this.documentCount = attrs.documentCount
+ this.averageDocumentLength = attrs.averageDocumentLength
+ this.b = attrs.b
+ this.k1 = attrs.k1
+ this.fields = attrs.fields
+ this.pipeline = attrs.pipeline
+}
+
+/**
+ * A result contains details of a document matching a search query.
+ * @typedef {Object} lunr.Index~Result
+ * @property {string} ref - The reference of the document this result represents.
+ * @property {number} score - A number between 0 and 1 representing how similar this document is to the query.
+ * @property {lunr.MatchData} matchData - Contains metadata about this match including which term(s) caused the match.
+ */
+
+/**
+ * Although lunr provides the ability to create queries using lunr.Query, it also provides a simple
+ * query language which itself is parsed into an instance of lunr.Query.
+ *
+ * For programmatically building queries it is advised to directly use lunr.Query, the query language
+ * is best used for human entered text rather than program generated text.
+ *
+ * At its simplest a query can be just a single term, e.g. `hello`. Multiple terms are
+ * also supported and will be combined with OR, e.g. `hello world` will match documents that
+ * contain either 'hello' or 'world', though those that contain both will rank higher in the results.
+ *
+ * Wildcards can be included in terms to match one or more unspecified characters; these wildcards
+ * can be inserted anywhere within the term, and more than one wildcard can exist in a single term. Adding
+ * wildcards will increase the number of documents that will be found but can also have a negative
+ * impact on query performance, especially with wildcards at the beginning of a term.
+ *
+ * Terms can be restricted to specific fields, e.g. `title:hello`; only documents with the term
+ * hello in the title field will match this query. Using a field not present in the index will lead
+ * to an error being thrown.
+ *
+ * Modifiers can also be added to terms; lunr supports edit distance and boost modifiers on terms. A term
+ * boost will make documents matching that term score higher, e.g. `foo^5`. Edit distance is also supported
+ * to provide fuzzy matching, e.g. `hello~2` will match documents containing hello with an edit distance of 2.
+ * Avoid large values for edit distance to improve query performance.
+ *
+ * @typedef {string} lunr.Index~QueryString
+ * @example Simple single term query
+ * hello
+ * @example Multiple term query
+ * hello world
+ * @example term scoped to a field
+ * title:hello
+ * @example term with a boost of 10
+ * hello^10
+ * @example term with an edit distance of 2
+ * hello~2
+ */
+
+/**
+ * Performs a search against the index using lunr query syntax.
+ *
+ * Results will be returned sorted by their score, with the most relevant results
+ * returned first.
+ *
+ * For more programmatic querying use lunr.Index#query.
+ *
+ * @param {lunr.Index~QueryString} queryString - A string containing a lunr query.
+ * @throws {lunr.QueryParseError} If the passed query string cannot be parsed.
+ * @returns {lunr.Index~Result[]}
+ */
+lunr.Index.prototype.search = function (queryString) {
+ return this.query(function (query) {
+ var parser = new lunr.QueryParser(queryString, query)
+ parser.parse()
+ })
+}
+
+/**
+ * A query builder callback provides a query object to be used to express
+ * the query to perform on the index.
+ *
+ * @callback lunr.Index~queryBuilder
+ * @param {lunr.Query} query - The query object to build up.
+ * @this lunr.Query
+ */
+
+/**
+ * Performs a query against the index using the yielded lunr.Query object.
+ *
+ * If performing programmatic queries against the index, this method is preferred
+ * over lunr.Index#search so as to avoid the additional query parsing overhead.
+ *
+ * A query object is yielded to the supplied function which should be used to
+ * express the query to be run against the index.
+ *
+ * Note that although this function takes a callback parameter it is _not_ an
+ * asynchronous operation; the callback is just yielded a query object to be
+ * customized.
+ *
+ * @param {lunr.Index~queryBuilder} fn - A function that is used to build the query.
+ * @returns {lunr.Index~Result[]}
+ */
+lunr.Index.prototype.query = function (fn) {
+ // for each query clause
+ // * process terms
+ // * expand terms from token set
+ // * find matching documents and metadata
+ // * get document vectors
+ // * score documents
+
+ var query = new lunr.Query(this.fields),
+ matchingDocuments = Object.create(null),
+ queryVector = new lunr.Vector
+
+ fn.call(query, query)
+
+ for (var i = 0; i < query.clauses.length; i++) {
+ /*
+ * Unless the pipeline has been disabled for this term, which is
+ * the case for terms with wildcards, we need to pass the clause
+ * term through the search pipeline. A pipeline returns an array
+ * of processed terms. Pipeline functions may expand the passed
+ * term, which means we may end up performing multiple index lookups
+ * for a single query term.
+ */
+ var clause = query.clauses[i],
+ terms = null
+
+ if (clause.usePipeline) {
+ terms = this.pipeline.runString(clause.term)
+ } else {
+ terms = [clause.term]
+ }
+
+ for (var m = 0; m < terms.length; m++) {
+ var term = terms[m]
+
+ /*
+ * Each term returned from the pipeline needs to use the same query
+       * clause object, e.g. the same boost and/or edit distance. The
+ * simplest way to do this is to re-use the clause object but mutate
+ * its term property.
+ */
+ clause.term = term
+
+ /*
+ * From the term in the clause we create a token set which will then
+       * be used to intersect the index's token set to get a list of terms
+       * to look up in the inverted index
+ */
+ var termTokenSet = lunr.TokenSet.fromClause(clause),
+ expandedTerms = this.tokenSet.intersect(termTokenSet).toArray()
+
+ for (var j = 0; j < expandedTerms.length; j++) {
+ /*
+ * For each term calculate the score as the term relates to the
+ * query using the same calculation used to score documents during
+ * indexing. This score will be used to build a vector space
+ * representation of the query.
+ *
+         * Also need to discover the term's index to insert into the query
+ * vector at the right position
+ */
+ var expandedTerm = expandedTerms[j],
+ posting = this.invertedIndex[expandedTerm],
+ termIndex = posting._index,
+ idf = lunr.idf(posting, this.documentCount),
+ tf = 1,
+ score = idf * ((this.k1 + 1) * tf) / (this.k1 * (1 - this.b + this.b * (query.clauses.length / this.averageDocumentLength)) + tf)
+
+ /*
+ * Upserting the found query term, along with its term index
+ * into the vector representing the query. It is here that
+ * any boosts are applied to the score. They could have been
+ * applied when calculating the score above, but that expression
+ * is already quite busy.
+ *
+ * Using upsert because there could already be an entry in the vector
+ * for the term we are working with. In that case we just add the scores
+ * together.
+ */
+ queryVector.upsert(termIndex, score * clause.boost, function (a, b) { return a + b })
+
+ for (var k = 0; k < clause.fields.length; k++) {
+ /*
+ * For each field that this query term is scoped by (by default
+ * all fields are in scope) we need to get all the document refs
+ * that have this term in that field.
+ *
+ * The posting is the entry in the invertedIndex for the matching
+ * term from above.
+ */
+ var field = clause.fields[k],
+ fieldPosting = posting[field],
+ matchingDocumentRefs = Object.keys(fieldPosting)
+
+ for (var l = 0; l < matchingDocumentRefs.length; l++) {
+ /*
+ * All metadata for this term/field/document triple
+ * are then extracted and collected into an instance
+ * of lunr.MatchData ready to be returned in the query
+ * results
+ */
+ var matchingDocumentRef = matchingDocumentRefs[l],
+ documentMetadata, matchData
+
+ documentMetadata = fieldPosting[matchingDocumentRef]
+ matchData = new lunr.MatchData (expandedTerm, field, documentMetadata)
+
+ if (matchingDocumentRef in matchingDocuments) {
+ matchingDocuments[matchingDocumentRef].combine(matchData)
+ } else {
+ matchingDocuments[matchingDocumentRef] = matchData
+ }
+
+ }
+ }
+ }
+ }
+ }
+
+ var matchingDocumentRefs = Object.keys(matchingDocuments),
+ results = []
+
+ for (var i = 0; i < matchingDocumentRefs.length; i++) {
+ /*
+ * With all the matching documents found they now need
+ * to be sorted by their relevance to the query. This
+ * is done by retrieving the documents vector representation
+ * and then finding its similarity with the query vector
+ * that was constructed earlier.
+ *
+ * This score, along with the document ref and any metadata
+ * we collected into a lunr.MatchData instance are stored
+ * in the results array ready for returning to the caller
+ */
+ var ref = matchingDocumentRefs[i],
+ documentVector = this.documentVectors[ref],
+ score = queryVector.similarity(documentVector)
+
+ results.push({
+ ref: ref,
+ score: score,
+ matchData: matchingDocuments[ref]
+ })
+ }
+
+ return results.sort(function (a, b) {
+ return b.score - a.score
+ })
+}
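The score expression in the query loop above is a BM25-style weight; note that on the query side the query itself is treated as a document, with the clause count standing in for its length. A standalone sketch of the weighting, with a local `idf` that mirrors the usual BM25 formulation (an assumption here, since `lunr.idf` is defined elsewhere in the library):

```javascript
// Dampen the weight of terms that appear in many documents.
function idf (docsWithTerm, docCount) {
  var x = (docCount - docsWithTerm + 0.5) / (docsWithTerm + 0.5)
  return Math.log(1 + Math.abs(x))
}

// k1 controls how quickly repeated occurrences of a term saturate the score;
// b controls how strongly the score is normalised by document length
// relative to the average document length.
function bm25Weight (tf, length, avgLength, docsWithTerm, docCount, k1, b) {
  return idf(docsWithTerm, docCount) *
    ((k1 + 1) * tf) /
    (k1 * (1 - b + b * (length / avgLength)) + tf)
}
```

With the defaults used by lunr.Builder (k1 = 1.2, b = 0.75), rare terms outweigh common ones, repeated occurrences saturate rather than scale linearly, and longer-than-average documents are penalised.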
+
+/**
+ * Prepares the index for JSON serialization.
+ *
+ * The schema for this JSON blob will be described in a
+ * separate JSON schema file.
+ *
+ * @returns {Object}
+ */
+lunr.Index.prototype.toJSON = function () {
+ var invertedIndex = Object.keys(this.invertedIndex)
+ .sort()
+ .map(function (term) {
+ return [term, this.invertedIndex[term]]
+ }, this)
+
+ var documentVectors = Object.keys(this.documentVectors)
+ .map(function (ref) {
+ return [ref, this.documentVectors[ref].toJSON()]
+ }, this)
+
+ return {
+ version: lunr.version,
+ averageDocumentLength: this.averageDocumentLength,
+ b: this.b,
+ k1: this.k1,
+ fields: this.fields,
+ documentVectors: documentVectors,
+ invertedIndex: invertedIndex,
+ pipeline: this.pipeline.toJSON()
+ }
+}
+
+/**
+ * Loads a previously serialized lunr.Index
+ *
+ * @param {Object} serializedIndex - A previously serialized lunr.Index
+ * @returns {lunr.Index}
+ */
+lunr.Index.load = function (serializedIndex) {
+ var attrs = {},
+ documentVectors = {},
+ serializedVectors = serializedIndex.documentVectors,
+ documentCount = 0,
+ invertedIndex = {},
+ serializedInvertedIndex = serializedIndex.invertedIndex,
+ tokenSetBuilder = new lunr.TokenSet.Builder,
+ pipeline = lunr.Pipeline.load(serializedIndex.pipeline)
+
+ if (serializedIndex.version != lunr.version) {
+ lunr.utils.warn("Version mismatch when loading serialised index. Current version of lunr '" + lunr.version + "' does not match serialized index '" + serializedIndex.version + "'")
+ }
+
+ for (var i = 0; i < serializedVectors.length; i++, documentCount++) {
+ var tuple = serializedVectors[i],
+ ref = tuple[0],
+ elements = tuple[1]
+
+ documentVectors[ref] = new lunr.Vector(elements)
+ }
+
+ for (var i = 0; i < serializedInvertedIndex.length; i++) {
+ var tuple = serializedInvertedIndex[i],
+ term = tuple[0],
+ posting = tuple[1]
+
+ tokenSetBuilder.insert(term)
+ invertedIndex[term] = posting
+ }
+
+ tokenSetBuilder.finish()
+
+ attrs.b = serializedIndex.b
+ attrs.k1 = serializedIndex.k1
+ attrs.fields = serializedIndex.fields
+ attrs.averageDocumentLength = serializedIndex.averageDocumentLength
+
+ attrs.documentCount = documentCount
+ attrs.documentVectors = documentVectors
+ attrs.invertedIndex = invertedIndex
+ attrs.tokenSet = tokenSetBuilder.root
+ attrs.pipeline = pipeline
+
+ return new lunr.Index(attrs)
+}
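toJSON and load above exchange the inverted index as sorted [term, posting] tuples: sorting makes the serialized output deterministic, and on load the sorted term order lets the TokenSet builder rebuild the token set directly. A standalone sketch of that round trip (`serializeIndex`/`loadIndex` are hypothetical names, not lunr's API):

```javascript
// Serialize a term -> posting map as sorted [term, posting] pairs.
function serializeIndex (invertedIndex) {
  return Object.keys(invertedIndex).sort().map(function (term) {
    return [term, invertedIndex[term]]
  })
}

// Rebuild the map; the terms come back in sorted order, which is exactly
// what an incremental automaton builder (like lunr.TokenSet.Builder) needs.
function loadIndex (tuples) {
  var invertedIndex = {},
      sortedTerms = []
  tuples.forEach(function (tuple) {
    sortedTerms.push(tuple[0])
    invertedIndex[tuple[0]] = tuple[1]
  })
  return { invertedIndex: invertedIndex, sortedTerms: sortedTerms }
}
```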
+/*!
+ * lunr.Builder
+ * Copyright (C) 2017 Oliver Nightingale
+ */
+
+/**
+ * lunr.Builder performs indexing on a set of documents and
+ * returns instances of lunr.Index ready for querying.
+ *
+ * All configuration of the index is done via the builder, the
+ * fields to index, the document reference, the text processing
+ * pipeline and document scoring parameters are all set on the
+ * builder before indexing.
+ *
+ * @constructor
+ * @property {string} _ref - Internal reference to the document reference field.
+ * @property {string[]} _fields - Internal reference to the document fields to index.
+ * @property {object} invertedIndex - The inverted index maps terms to document fields.
+ * @property {object} documentTermFrequencies - Keeps track of document term frequencies.
+ * @property {object} documentLengths - Keeps track of the length of documents added to the index.
+ * @property {lunr.tokenizer} tokenizer - Function for splitting strings into tokens for indexing.
+ * @property {lunr.Pipeline} pipeline - The pipeline performs text processing on tokens before indexing.
+ * @property {lunr.Pipeline} searchPipeline - A pipeline for processing search terms before querying the index.
+ * @property {number} documentCount - Keeps track of the total number of documents indexed.
+ * @property {number} _b - A parameter to control field length normalization; setting this to 0 disables normalization, 1 fully normalizes field lengths. The default value is 0.75.
+ * @property {number} _k1 - A parameter to control how quickly an increase in term frequency results in term frequency saturation, the default value is 1.2.
+ * @property {number} termIndex - A counter incremented for each unique term, used to identify a term's position in the vector space.
+ * @property {array} metadataWhitelist - A list of metadata keys that have been whitelisted for entry in the index.
+ */
+lunr.Builder = function () {
+ this._ref = "id"
+ this._fields = []
+ this.invertedIndex = Object.create(null)
+ this.documentTermFrequencies = {}
+ this.documentLengths = {}
+ this.tokenizer = lunr.tokenizer
+ this.pipeline = new lunr.Pipeline
+ this.searchPipeline = new lunr.Pipeline
+ this.documentCount = 0
+ this._b = 0.75
+ this._k1 = 1.2
+ this.termIndex = 0
+ this.metadataWhitelist = []
+}
+
+/**
+ * Sets the document field used as the document reference. Every document must have this field.
+ * The type of this field in the document should be a string; if it is not it will be
+ * coerced into a string by calling toString.
+ *
+ * The default ref is 'id'.
+ *
+ * The ref should _not_ be changed during indexing; it should be set before any documents are
+ * added to the index. Changing it during indexing can lead to inconsistent results.
+ *
+ * @param {string} ref - The name of the reference field in the document.
+ */
+lunr.Builder.prototype.ref = function (ref) {
+ this._ref = ref
+}
+
+/**
+ * Adds a field to the list of document fields that will be indexed. Every document being
+ * indexed should have this field. Null values for this field in indexed documents will
+ * not cause errors but will limit the chance of that document being retrieved by searches.
+ *
+ * All fields should be added before adding documents to the index. Adding fields after
+ * a document has been indexed will have no effect on already indexed documents.
+ *
+ * @param {string} field - The name of a field to index in all documents.
+ */
+lunr.Builder.prototype.field = function (field) {
+ this._fields.push(field)
+}
+
+/**
+ * A parameter to tune the amount of field length normalisation that is applied when
+ * calculating relevance scores. A value of 0 will completely disable any normalisation
+ * and a value of 1 will fully normalise field lengths. The default is 0.75. Values of b
+ * will be clamped to the range 0 - 1.
+ *
+ * @param {number} number - The value to set for this tuning parameter.
+ */
+lunr.Builder.prototype.b = function (number) {
+ if (number < 0) {
+ this._b = 0
+ } else if (number > 1) {
+ this._b = 1
+ } else {
+ this._b = number
+ }
+}
+
+/**
+ * A parameter that controls the speed at which a rise in term frequency results in term
+ * frequency saturation. The default value is 1.2. Setting this to a higher value will give
+ * slower saturation levels; a lower value will result in quicker saturation.
+ *
+ * @param {number} number - The value to set for this tuning parameter.
+ */
+lunr.Builder.prototype.k1 = function (number) {
+ this._k1 = number
+}
+
+/**
+ * Adds a document to the index.
+ *
+ * Before adding documents to the index the index should have been fully set up, with the document
+ * ref and all fields to index already having been specified.
+ *
+ * The document must have a field matching the ref (by default this is 'id') and
+ * it should have all fields defined for indexing, though null or undefined values will not
+ * cause errors.
+ *
+ * @param {object} doc - The document to add to the index.
+ */
+lunr.Builder.prototype.add = function (doc) {
+ var docRef = doc[this._ref],
+ documentTerms = {}
+
+ this.documentCount += 1
+ this.documentTermFrequencies[docRef] = documentTerms
+ this.documentLengths[docRef] = 0
+
+ for (var i = 0; i < this._fields.length; i++) {
+ var fieldName = this._fields[i],
+ field = doc[fieldName],
+ tokens = this.tokenizer(field),
+ terms = this.pipeline.run(tokens)
+
+ // store the length of this field for this document
+ this.documentLengths[docRef] += terms.length
+
+ // calculate term frequencies for this field
+ for (var j = 0; j < terms.length; j++) {
+ var term = terms[j]
+
+ if (documentTerms[term] == undefined) {
+ documentTerms[term] = 0
+ }
+
+ documentTerms[term] += 1
+
+ // add to inverted index
+ // create an initial posting if one doesn't exist
+ if (this.invertedIndex[term] == undefined) {
+ var posting = Object.create(null)
+ posting["_index"] = this.termIndex
+ this.termIndex += 1
+
+ for (var k = 0; k < this._fields.length; k++) {
+ posting[this._fields[k]] = Object.create(null)
+ }
+
+ this.invertedIndex[term] = posting
+ }
+
+ // add an entry for this term/fieldName/docRef to the invertedIndex
+ if (this.invertedIndex[term][fieldName][docRef] == undefined) {
+ this.invertedIndex[term][fieldName][docRef] = Object.create(null)
+ }
+
+ // store all whitelisted metadata about this token in the
+ // inverted index
+ for (var l = 0; l < this.metadataWhitelist.length; l++) {
+ var metadataKey = this.metadataWhitelist[l],
+ metadata = term.metadata[metadataKey]
+
+ if (this.invertedIndex[term][fieldName][docRef][metadataKey] == undefined) {
+ this.invertedIndex[term][fieldName][docRef][metadataKey] = []
+ }
+
+ this.invertedIndex[term][fieldName][docRef][metadataKey].push(metadata)
+ }
+ }
+
+ }
+}
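The add method above builds postings shaped term → field → docRef. A standalone sketch of that structure (hypothetical `buildInvertedIndex` name; whitespace splitting stands in for lunr's tokenizer and pipeline, which is an assumption for illustration only):

```javascript
// Build a minimal inverted index: each term gets a posting with its position
// in the vector space (_index) and, per field, the document refs containing it.
function buildInvertedIndex (docs, fields, ref) {
  var invertedIndex = {},
      termIndex = 0

  docs.forEach(function (doc) {
    fields.forEach(function (field) {
      var terms = String(doc[field] || "").toLowerCase().split(/\s+/).filter(Boolean)

      terms.forEach(function (term) {
        // create an initial posting with a slot for every field
        if (!(term in invertedIndex)) {
          var posting = { _index: termIndex++ }
          fields.forEach(function (f) { posting[f] = {} })
          invertedIndex[term] = posting
        }

        // record that this document's field contains the term
        invertedIndex[term][field][doc[ref]] = {}
      })
    })
  })

  return invertedIndex
}
```

The empty object stored per docRef is where lunr's version accumulates whitelisted token metadata.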
+
+/**
+ * Calculates the average document length for this index
+ *
+ * @private
+ */
+lunr.Builder.prototype.calculateAverageDocumentLengths = function () {
+
+ var documentRefs = Object.keys(this.documentLengths),
+ numberOfDocuments = documentRefs.length,
+ allDocumentsLength = 0
+
+ for (var i = 0; i < numberOfDocuments; i++) {
+ var documentRef = documentRefs[i]
+ allDocumentsLength += this.documentLengths[documentRef]
+ }
+
+ this.averageDocumentLength = allDocumentsLength / numberOfDocuments
+}
+
+/**
+ * Builds a vector space model of every document using lunr.Vector
+ *
+ * @private
+ */
+lunr.Builder.prototype.createDocumentVectors = function () {
+ var documentVectors = {},
+ docRefs = Object.keys(this.documentTermFrequencies),
+ docRefsLength = docRefs.length
+
+ for (var i = 0; i < docRefsLength; i++) {
+ var docRef = docRefs[i],
+ documentLength = this.documentLengths[docRef],
+ documentVector = new lunr.Vector,
+ termFrequencies = this.documentTermFrequencies[docRef],
+ terms = Object.keys(termFrequencies),
+ termsLength = terms.length
+
+ for (var j = 0; j < termsLength; j++) {
+ var term = terms[j],
+ tf = termFrequencies[term],
+ termIndex = this.invertedIndex[term]._index,
+ idf = lunr.idf(this.invertedIndex[term], this.documentCount),
+ score = idf * ((this._k1 + 1) * tf) / (this._k1 * (1 - this._b + this._b * (documentLength / this.averageDocumentLength)) + tf),
+ scoreWithPrecision = Math.round(score * 1000) / 1000
+ // Converts 1.23456789 to 1.234.
+ // Reducing the precision so that the vectors take up less
+ // space when serialised. Doing it now so that they behave
+ // the same before and after serialisation. Also, this is
+ // the fastest approach to reducing a number's precision in
+ // JavaScript.
+
+ documentVector.insert(termIndex, scoreWithPrecision)
+ }
+
+ documentVectors[docRef] = documentVector
+ }
+
+ this.documentVectors = documentVectors
+}
+
+/**
+ * Creates a token set of all tokens in the index using lunr.TokenSet
+ *
+ * @private
+ */
+lunr.Builder.prototype.createTokenSet = function () {
+ this.tokenSet = lunr.TokenSet.fromArray(
+ Object.keys(this.invertedIndex).sort()
+ )
+}
+
+/**
+ * Builds the index, creating an instance of lunr.Index.
+ *
+ * This completes the indexing process and should only be called
+ * once all documents have been added to the index.
+ *
+ * @private
+ * @returns {lunr.Index}
+ */
+lunr.Builder.prototype.build = function () {
+ this.calculateAverageDocumentLengths()
+ this.createDocumentVectors()
+ this.createTokenSet()
+
+ return new lunr.Index({
+ invertedIndex: this.invertedIndex,
+ documentVectors: this.documentVectors,
+ tokenSet: this.tokenSet,
+ averageDocumentLength: this.averageDocumentLength,
+ documentCount: this.documentCount,
+ fields: this._fields,
+ pipeline: this.searchPipeline,
+ b: this._b,
+ k1: this._k1
+ })
+}
+
+/**
+ * Applies a plugin to the index builder.
+ *
+ * A plugin is a function that encapsulates custom behaviour to be applied
+ * when building the index; it can be used to customise or extend the
+ * behaviour of the index in some way.
+ *
+ * The plugin function will be called with the index builder as both its context
+ * and its first argument; any additional arguments passed to use are forwarded
+ * to the plugin.
+ *
+ * @param {Function} fn - The plugin to apply.
+ */
+lunr.Builder.prototype.use = function (fn) {
+ var args = Array.prototype.slice.call(arguments, 1)
+ args.unshift(this)
+ fn.apply(this, args)
+}
+/**
+ * Contains and collects metadata about a matching document.
+ * A single instance of lunr.MatchData is returned as part of every
+ * lunr.Index~Result.
+ *
+ * @constructor
+ * @property {object} metadata - A collection of metadata associated with this document.
+ * @see {@link lunr.Index~Result}
+ */
+lunr.MatchData = function (term, field, metadata) {
+ this.metadata = {}
+ this.metadata[term] = {}
+ this.metadata[term][field] = metadata
+}
+
+/**
+ * An instance of lunr.MatchData will be created for every term that matches a
+ * document. However only one instance is required in a lunr.Index~Result. This
+ * method combines metadata from another instance of lunr.MatchData with this
+ * object's metadata.
+ *
+ * @param {lunr.MatchData} otherMatchData - Another instance of match data to merge with this one.
+ * @see {@link lunr.Index~Result}
+ */
+lunr.MatchData.prototype.combine = function (otherMatchData) {
+ var terms = Object.keys(otherMatchData.metadata)
+
+ for (var i = 0; i < terms.length; i++) {
+ var term = terms[i],
+ fields = Object.keys(otherMatchData.metadata[term])
+
+ if (this.metadata[term] == undefined) {
+ this.metadata[term] = {}
+ }
+
+ for (var j = 0; j < fields.length; j++) {
+ var field = fields[j],
+ keys = Object.keys(otherMatchData.metadata[term][field])
+
+ if (this.metadata[term][field] == undefined) {
+ this.metadata[term][field] = {}
+ }
+
+ for (var k = 0; k < keys.length; k++) {
+ var key = keys[k]
+
+ if (this.metadata[term][field][key] == undefined) {
+ this.metadata[term][field][key] = otherMatchData.metadata[term][field][key]
+ } else {
+ this.metadata[term][field][key] = this.metadata[term][field][key].concat(otherMatchData.metadata[term][field][key])
+ }
+
+ }
+ }
+ }
+}
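combine above is a three-level merge: metadata is keyed term → field → metadata key, and leaf arrays from the other object are concatenated onto any existing arrays. A standalone sketch of the same merge on plain objects (`combineMetadata` is a hypothetical name):

```javascript
// Merge nested metadata (term -> field -> key -> array) from other into target.
function combineMetadata (target, other) {
  for (var term in other) {
    target[term] = target[term] || {}

    for (var field in other[term]) {
      target[term][field] = target[term][field] || {}

      for (var key in other[term][field]) {
        var existing = target[term][field][key]
        // concatenate leaf arrays, or adopt the other side's array wholesale
        target[term][field][key] = existing
          ? existing.concat(other[term][field][key])
          : other[term][field][key]
      }
    }
  }
  return target
}
```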
+/**
+ * A lunr.Query provides a programmatic way of defining queries to be performed
+ * against a {@link lunr.Index}.
+ *
+ * Prefer constructing a lunr.Query using the {@link lunr.Index#query} method
+ * so the query object is pre-initialized with the right index fields.
+ *
+ * @constructor
+ * @property {lunr.Query~Clause[]} clauses - An array of query clauses.
+ * @property {string[]} allFields - An array of all available fields in a lunr.Index.
+ */
+lunr.Query = function (allFields) {
+ this.clauses = []
+ this.allFields = allFields
+}
+
+/**
+ * A single clause in a {@link lunr.Query} contains a term and details on how to
+ * match that term against a {@link lunr.Index}.
+ *
+ * @typedef {Object} lunr.Query~Clause
+ * @property {string[]} fields - The fields in an index this clause should be matched against.
+ * @property {number} boost - Any boost that should be applied when matching this clause.
+ * @property {number} [editDistance] - Whether the term should have fuzzy matching applied, and how fuzzy the match should be.
+ * @property {boolean} [usePipeline] - Whether the term should be passed through the search pipeline.
+ */
+
+/**
+ * Adds a {@link lunr.Query~Clause} to this query.
+ *
+ * Unless the clause specifies the fields to be matched, all fields will be matched. In addition
+ * a default boost of 1 is applied to the clause.
+ *
+ * @param {lunr.Query~Clause} clause - The clause to add to this query.
+ * @returns {lunr.Query}
+ */
+lunr.Query.prototype.clause = function (clause) {
+ if (!('fields' in clause)) {
+ clause.fields = this.allFields
+ }
+
+ if (!('boost' in clause)) {
+ clause.boost = 1
+ }
+
+ if (!('usePipeline' in clause)) {
+ clause.usePipeline = true
+ }
+
+ this.clauses.push(clause)
+
+ return this
+}
+
+/**
+ * Adds a term to the current query; under the covers this will create a {@link lunr.Query~Clause}
+ * and add it to the list of clauses that make up this query.
+ *
+ * @param {string} term - The term to add to the query.
+ * @param {Object} [options] - Any additional properties to add to the query clause.
+ * @returns {lunr.Query}
+ */
+lunr.Query.prototype.term = function (term, options) {
+ var clause = options || {}
+ clause.term = term
+
+ this.clause(clause)
+
+ return this
+}
+lunr.QueryParseError = function (message, start, end) {
+ this.name = "QueryParseError"
+ this.message = message
+ this.start = start
+ this.end = end
+}
+
+lunr.QueryParseError.prototype = new Error
+lunr.QueryLexer = function (str) {
+ this.lexemes = []
+ this.str = str
+ this.length = str.length
+ this.pos = 0
+ this.start = 0
+}
+
+lunr.QueryLexer.prototype.run = function () {
+ var state = lunr.QueryLexer.lexText
+
+ while (state) {
+ state = state(this)
+ }
+}
+
+lunr.QueryLexer.prototype.emit = function (type) {
+ this.lexemes.push({
+ type: type,
+ str: this.str.slice(this.start, this.pos),
+ start: this.start,
+ end: this.pos
+ })
+
+ this.start = this.pos
+}
+
+lunr.QueryLexer.prototype.next = function () {
+ if (this.pos == this.length) {
+ return lunr.QueryLexer.EOS
+ }
+
+ var char = this.str.charAt(this.pos)
+ this.pos += 1
+ return char
+}
+
+lunr.QueryLexer.prototype.width = function () {
+ return this.pos - this.start
+}
+
+lunr.QueryLexer.prototype.ignore = function () {
+ if (this.start == this.pos) {
+ this.pos += 1
+ }
+
+ this.start = this.pos
+}
+
+lunr.QueryLexer.prototype.backup = function () {
+ this.pos -= 1
+}
+
+lunr.QueryLexer.prototype.acceptDigitRun = function () {
+ var char, charCode
+
+ do {
+ char = this.next()
+ charCode = char.charCodeAt(0)
+ } while (charCode > 47 && charCode < 58)
+
+ if (char != lunr.QueryLexer.EOS) {
+ this.backup()
+ }
+}
+
+lunr.QueryLexer.prototype.more = function () {
+ return this.pos < this.length
+}
+
+lunr.QueryLexer.EOS = 'EOS'
+lunr.QueryLexer.FIELD = 'FIELD'
+lunr.QueryLexer.TERM = 'TERM'
+lunr.QueryLexer.EDIT_DISTANCE = 'EDIT_DISTANCE'
+lunr.QueryLexer.BOOST = 'BOOST'
+
+lunr.QueryLexer.lexField = function (lexer) {
+ lexer.backup()
+ lexer.emit(lunr.QueryLexer.FIELD)
+ lexer.ignore()
+ return lunr.QueryLexer.lexText
+}
+
+lunr.QueryLexer.lexTerm = function (lexer) {
+ if (lexer.width() > 1) {
+ lexer.backup()
+ lexer.emit(lunr.QueryLexer.TERM)
+ }
+
+ lexer.ignore()
+
+ if (lexer.more()) {
+ return lunr.QueryLexer.lexText
+ }
+}
+
+lunr.QueryLexer.lexEditDistance = function (lexer) {
+ lexer.ignore()
+ lexer.acceptDigitRun()
+ lexer.emit(lunr.QueryLexer.EDIT_DISTANCE)
+ return lunr.QueryLexer.lexText
+}
+
+lunr.QueryLexer.lexBoost = function (lexer) {
+ lexer.ignore()
+ lexer.acceptDigitRun()
+ lexer.emit(lunr.QueryLexer.BOOST)
+ return lunr.QueryLexer.lexText
+}
+
+lunr.QueryLexer.lexEOS = function (lexer) {
+ if (lexer.width() > 0) {
+ lexer.emit(lunr.QueryLexer.TERM)
+ }
+}
+
+// This matches the separator used when tokenising fields
+// within a document. These should match otherwise it is
+// not possible to search for some tokens within a document.
+//
+// It is possible for the user to change the separator on the
+// tokenizer so it _might_ clash with any other of the special
+// characters already used within the search string, e.g. :.
+//
+// This means that it is possible to change the separator in
+// such a way that makes some words unsearchable using a search
+// string.
+lunr.QueryLexer.termSeparator = lunr.tokenizer.separator
+
+lunr.QueryLexer.lexText = function (lexer) {
+ while (true) {
+ var char = lexer.next()
+
+ if (char == lunr.QueryLexer.EOS) {
+ return lunr.QueryLexer.lexEOS
+ }
+
+ if (char == ":") {
+ return lunr.QueryLexer.lexField
+ }
+
+ if (char == "~") {
+ lexer.backup()
+ if (lexer.width() > 0) {
+ lexer.emit(lunr.QueryLexer.TERM)
+ }
+ return lunr.QueryLexer.lexEditDistance
+ }
+
+ if (char == "^") {
+ lexer.backup()
+ if (lexer.width() > 0) {
+ lexer.emit(lunr.QueryLexer.TERM)
+ }
+ return lunr.QueryLexer.lexBoost
+ }
+
+ if (char.match(lunr.QueryLexer.termSeparator)) {
+ return lunr.QueryLexer.lexTerm
+ }
+ }
+}
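The state functions above walk the query string character by character: ':' closes a FIELD, '~' and '^' close the pending TERM and start a digit run for EDIT_DISTANCE or BOOST, and the term separator closes a TERM. A compact standalone sketch of the same tokenisation (hypothetical `lexQuery` name; whitespace is assumed as the separator for illustration):

```javascript
// Tokenise a query string into { type, str } lexemes.
function lexQuery (str) {
  var lexemes = [],
      buffer = ""

  // emit the buffered characters as a lexeme, skipping empty buffers
  function flush (type) {
    if (buffer.length > 0) lexemes.push({ type: type, str: buffer })
    buffer = ""
  }

  // consume a run of digits following '~' or '^'
  function digitRun (i) {
    var start = i
    while (i < str.length && str[i] >= "0" && str[i] <= "9") i++
    lexemes.push({
      type: str[start - 1] == "~" ? "EDIT_DISTANCE" : "BOOST",
      str: str.slice(start, i)
    })
    return i
  }

  for (var i = 0; i < str.length; i++) {
    var char = str[i]
    if (char == ":") {
      flush("FIELD")
    } else if (char == "~" || char == "^") {
      flush("TERM")
      i = digitRun(i + 1) - 1
    } else if (/\s/.test(char)) {
      flush("TERM")
    } else {
      buffer += char
    }
  }
  flush("TERM")
  return lexemes
}
```

For example, `lexQuery("title:hello^10 world~2")` yields FIELD, TERM, BOOST, TERM and EDIT_DISTANCE lexemes, mirroring what the parser below consumes.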
+
+lunr.QueryParser = function (str, query) {
+ this.lexer = new lunr.QueryLexer (str)
+ this.query = query
+ this.currentClause = {}
+ this.lexemeIdx = 0
+}
+
+lunr.QueryParser.prototype.parse = function () {
+ this.lexer.run()
+ this.lexemes = this.lexer.lexemes
+
+ var state = lunr.QueryParser.parseFieldOrTerm
+
+ while (state) {
+ state = state(this)
+ }
+
+ return this.query
+}
+
+lunr.QueryParser.prototype.peekLexeme = function () {
+ return this.lexemes[this.lexemeIdx]
+}
+
+lunr.QueryParser.prototype.consumeLexeme = function () {
+ var lexeme = this.peekLexeme()
+ this.lexemeIdx += 1
+ return lexeme
+}
+
+lunr.QueryParser.prototype.nextClause = function () {
+ var completedClause = this.currentClause
+ this.query.clause(completedClause)
+ this.currentClause = {}
+}
+
+lunr.QueryParser.parseFieldOrTerm = function (parser) {
+ var lexeme = parser.peekLexeme()
+
+ if (lexeme == undefined) {
+ return
+ }
+
+ switch (lexeme.type) {
+ case lunr.QueryLexer.FIELD:
+ return lunr.QueryParser.parseField
+ case lunr.QueryLexer.TERM:
+ return lunr.QueryParser.parseTerm
+ default:
+ var errorMessage = "expected either a field or a term, found " + lexeme.type + " with value '" + lexeme.str + "'"
+ throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)
+ }
+}
+
+lunr.QueryParser.parseField = function (parser) {
+ var lexeme = parser.consumeLexeme()
+
+ if (lexeme == undefined) {
+ return
+ }
+
+ if (parser.query.allFields.indexOf(lexeme.str) == -1) {
+    var possibleFields = parser.query.allFields.map(function (f) { return "'" + f + "'" }).join(", "),
+ errorMessage = "unrecognised field '" + lexeme.str + "', possible fields: " + possibleFields
+
+ throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)
+ }
+
+ parser.currentClause.fields = [lexeme.str]
+
+ var nextLexeme = parser.peekLexeme()
+
+ if (nextLexeme == undefined) {
+ var errorMessage = "expecting term, found nothing"
+ throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)
+ }
+
+ switch (nextLexeme.type) {
+ case lunr.QueryLexer.TERM:
+ return lunr.QueryParser.parseTerm
+ default:
+      var errorMessage = "expecting term, found '" + nextLexeme.type + "'"
+ throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)
+ }
+}
+
+lunr.QueryParser.parseTerm = function (parser) {
+ var lexeme = parser.consumeLexeme()
+
+ if (lexeme == undefined) {
+ return
+ }
+
+ parser.currentClause.term = lexeme.str.toLowerCase()
+
+ if (lexeme.str.indexOf("*") != -1) {
+ parser.currentClause.usePipeline = false
+ }
+
+ var nextLexeme = parser.peekLexeme()
+
+ if (nextLexeme == undefined) {
+ parser.nextClause()
+ return
+ }
+
+ switch (nextLexeme.type) {
+ case lunr.QueryLexer.TERM:
+ parser.nextClause()
+ return lunr.QueryParser.parseTerm
+ case lunr.QueryLexer.FIELD:
+ parser.nextClause()
+ return lunr.QueryParser.parseField
+ case lunr.QueryLexer.EDIT_DISTANCE:
+ return lunr.QueryParser.parseEditDistance
+ case lunr.QueryLexer.BOOST:
+ return lunr.QueryParser.parseBoost
+ default:
+ var errorMessage = "Unexpected lexeme type '" + nextLexeme.type + "'"
+ throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)
+ }
+}
+
+lunr.QueryParser.parseEditDistance = function (parser) {
+ var lexeme = parser.consumeLexeme()
+
+ if (lexeme == undefined) {
+ return
+ }
+
+ var editDistance = parseInt(lexeme.str, 10)
+
+ if (isNaN(editDistance)) {
+ var errorMessage = "edit distance must be numeric"
+ throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)
+ }
+
+ parser.currentClause.editDistance = editDistance
+
+ var nextLexeme = parser.peekLexeme()
+
+ if (nextLexeme == undefined) {
+ parser.nextClause()
+ return
+ }
+
+ switch (nextLexeme.type) {
+ case lunr.QueryLexer.TERM:
+ parser.nextClause()
+ return lunr.QueryParser.parseTerm
+ case lunr.QueryLexer.FIELD:
+ parser.nextClause()
+ return lunr.QueryParser.parseField
+ case lunr.QueryLexer.EDIT_DISTANCE:
+ return lunr.QueryParser.parseEditDistance
+ case lunr.QueryLexer.BOOST:
+ return lunr.QueryParser.parseBoost
+ default:
+ var errorMessage = "Unexpected lexeme type '" + nextLexeme.type + "'"
+ throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)
+ }
+}
+
+lunr.QueryParser.parseBoost = function (parser) {
+ var lexeme = parser.consumeLexeme()
+
+ if (lexeme == undefined) {
+ return
+ }
+
+ var boost = parseInt(lexeme.str, 10)
+
+ if (isNaN(boost)) {
+ var errorMessage = "boost must be numeric"
+ throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end)
+ }
+
+ parser.currentClause.boost = boost
+
+ var nextLexeme = parser.peekLexeme()
+
+ if (nextLexeme == undefined) {
+ parser.nextClause()
+ return
+ }
+
+ switch (nextLexeme.type) {
+ case lunr.QueryLexer.TERM:
+ parser.nextClause()
+ return lunr.QueryParser.parseTerm
+ case lunr.QueryLexer.FIELD:
+ parser.nextClause()
+ return lunr.QueryParser.parseField
+ case lunr.QueryLexer.EDIT_DISTANCE:
+ return lunr.QueryParser.parseEditDistance
+ case lunr.QueryLexer.BOOST:
+ return lunr.QueryParser.parseBoost
+ default:
+ var errorMessage = "Unexpected lexeme type '" + nextLexeme.type + "'"
+ throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end)
+ }
+}
+
+ /**
+ * export the module via AMD, CommonJS or as a browser global
+ * Export code from https://github.com/umdjs/umd/blob/master/returnExports.js
+ */
+ ;(function (root, factory) {
+ if (typeof define === 'function' && define.amd) {
+ // AMD. Register as an anonymous module.
+ define(factory)
+ } else if (typeof exports === 'object') {
+ /**
+ * Node. Does not work with strict CommonJS, but
+     * only CommonJS-like environments that support module.exports,
+ * like Node.
+ */
+ module.exports = factory()
+ } else {
+ // Browser globals (root is window)
+ root.lunr = factory()
+ }
+ }(this, function () {
+ /**
+ * Just return a value to define the module export.
+ * This example returns an object, but the module
+ * can return a function as the exported value.
+ */
+ return lunr
+ }))
+})();
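The `QueryParser.prototype.parse` loop above is a trampoline: the current state function runs, consumes lexemes, and returns the next state function (or `undefined` to halt). A minimal, self-contained sketch of the same pattern follows; the names (`miniParse`, `parseClause`) are illustrative only and are not part of lunr:

```javascript
// Tiny trampoline-style parser in the shape of lunr.QueryParser:
// each state function consumes input and returns the NEXT state
// function, or undefined to stop the loop.
function miniParse(str) {
  var clauses = []
  var tokens = str.split(/\s+/).filter(Boolean)
  var idx = 0

  function parseClause() {
    if (idx >= tokens.length) return undefined // no more input: halt
    var token = tokens[idx++]
    var parts = token.split(":")
    if (parts.length === 2) {
      clauses.push({ field: parts[0], term: parts[1] }) // "field:term"
    } else {
      clauses.push({ term: token }) // bare term
    }
    return parseClause // hand the next state back to the trampoline
  }

  // Same loop shape as QueryParser.prototype.parse
  var state = parseClause
  while (state) {
    state = state()
  }
  return clauses
}
```

For example, `miniParse("title:foo bar")` yields one clause scoped to the `title` field and one unscoped clause, mirroring how the real parser turns `title:foo bar` into two query clauses.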
diff --git a/js/search.js b/js/search.js
new file mode 100644
index 0000000000..d59b1831a9
--- /dev/null
+++ b/js/search.js
@@ -0,0 +1,130 @@
+(function() {
+ function noResults() {
+ var searchResults = document.getElementById("search-results");
+    searchResults.innerHTML = "<p>No results found</p>";
+ }
+
+ function hilightText(regexes, text) {
+ for (var i = 0; i < regexes.length; i++) {
+      text = text.replace(regexes[i], "<strong>$1</strong>");
+ }
+ return text;
+ }
+
+ function matchRegexes(metadata) {
+ var regexes = [];
+ for (var kw in metadata) {
+ regexes.push(new RegExp("\\b("+kw+"\\w*)", "mig"));
+ }
+ return regexes;
+ }
+
+ function getMatchingSubtitle(item, from) {
+ var subtitle = $(from).prevAll(":header:first");
+ if (subtitle.length > 0) {
+ return {
+        content: "<h4>" + $(subtitle[0]).text() + "</h4>",
+ id: subtitle[0].id
+ };
+ }
+ if ($(from).parent().length === 0) {
+ return null;
+ } else {
+ return getMatchingSubtitle(item, $(from).parent()[0]);
+ }
+ }
+
+ function getMatchingText(regexes, item, paragraph, lastSubtitle) {
+ var matchingText = "";
+ var text = paragraph.text();
+ for (var j = 0; j < regexes.length; j++) {
+ var match = regexes[j].exec(text);
+ if (match) {
+ var subtitle = getMatchingSubtitle(item, paragraph);
+ if (subtitle !== null && lastSubtitle != subtitle.id) {
+ matchingText += subtitle.content;
+ lastSubtitle = subtitle.id;
+ }
+        matchingText += "<p>" + hilightText(regexes, text) + "</p>";
+ return {
+ content: matchingText,
+ lastSubtitle: lastSubtitle
+ };
+ }
+ }
+ return null;
+ }
+
+ function getItemText(item, metadata) {
+ var itemText = "";
+ var dummy = document.createElement("div");
+ dummy.innerHTML = item.html;
+ var paragraphs = $(dummy).find("p, li, div, td");
+ var lastSubtitle = null;
+ var regexes = matchRegexes(metadata);
+ for (var p = 0; p < paragraphs.length; p++) {
+ var matchingText = getMatchingText(regexes, item, $(paragraphs[p]), lastSubtitle);
+ if (matchingText !== null) {
+ itemText += matchingText.content;
+ lastSubtitle = matchingText.lastSubtitle;
+ }
+ }
+    itemText += "<hr>";
+ return itemText;
+ }
+
+ function displaySearchResults(term, results, store) {
+ if (results.length === 0) {
+ noResults();
+ } else {
+ var searchResults = document.getElementById("search-results");
+ var append = "";
+ for (var i = 0; i < results.length; i++) {
+ var item = store[results[i].ref];
+ append += getItemText(item, results[i].matchData.metadata);
+ }
+ searchResults.innerHTML = append;
+ }
+ }
+
+ function getQueryVariable(variable) {
+ var query = window.location.search.substring(1);
+ var vars = query.split("&");
+ for (var i = 0; i < vars.length; i++) {
+ var pair = vars[i].split("=");
+
+ if (pair[0] === variable) {
+ return decodeURIComponent(pair[1].replace(/\+/g, "%20"));
+ }
+ }
+ }
+
+ function createIndex() {
+ var index = lunr(function() {
+ this.field("title");
+ this.field("content");
+
+ for (var key in window.store) {
+ this.add({
+ "id": key,
+ "title": window.store[key].title,
+ "content": window.store[key].content
+ });
+ }
+ });
+
+ return index;
+ }
+
+ var searchTerm = getQueryVariable("q");
+ if (searchTerm) {
+ document.getElementById("search-title").innerHTML = searchTerm;
+
+ var index = createIndex();
+
+ var results = index.search(searchTerm);
+ displaySearchResults(searchTerm, results, window.store);
+ } else {
+ noResults();
+ }
+})();
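`getQueryVariable` above reads the `q` parameter from the page URL, decoding `+` as a space before percent-decoding. The same logic as a pure function over a query string, so the decoding can be exercised without a browser (the name `parseQueryVariable` is illustrative, not part of search.js):

```javascript
// Standalone version of search.js's getQueryVariable: look up one key
// in a query string and decode its value.
function parseQueryVariable(queryString, variable) {
  var vars = queryString.replace(/^\?/, "").split("&")
  for (var i = 0; i < vars.length; i++) {
    var pair = vars[i].split("=")
    if (pair[0] === variable) {
      // "+" encodes a space in form-encoded query strings;
      // decodeURIComponent alone would leave it untouched,
      // hence the replace first.
      return decodeURIComponent(pair[1].replace(/\+/g, "%20"))
    }
  }
  return undefined // key not present
}
```

For a search URL like `/search.html?q=hello+world`, this returns `"hello world"`, which search.js then feeds to `index.search(...)`.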
diff --git a/project/about.md b/project/about.md
index 7814a783f7..0aa83fb0fd 100644
--- a/project/about.md
+++ b/project/about.md
@@ -7,17 +7,16 @@ published: true
### Project organization
-The development of ResInsight was initiated by Statoil Petroleum AS in 2011 following a series of projects. ResInsight is still under active development, c.f. [ Roadmap ]({{ site.baseurl }}/project/roadmap). Ceetron Solutions AS is responsible for the software development work.
-
-ResInsight is part of the [Open Porous Media](http://opm-project.org/) project.
+The development of ResInsight was initiated by Statoil Petroleum AS in 2011 and has been followed up by a series of projects. ResInsight is still under active development, with Ceetron Solutions AS responsible for the software development work.
+ResInsight is a part of the [Open Porous Media](http://opm-project.org/) project.
### Licensing
The software is licensed under GPL 3+, see [Licensing details](https://github.com/OPM/ResInsight/blob/master/COPYING).
### Project hosting
-The software is hosted at [GitHub](https://github.com/OPM/ResInsight)
+The software is hosted at [GitHub](https://github.com/OPM/ResInsight), where the development progress can be monitored. We use the GitHub issue tracker extensively to organize our development process.
### Web site programming and design
The programming and design of this site is based on work by [Tom Preston-Werner](http://tom.preston-werner.com/). This is also the current theme of [Jekyll](http://jekyllrb.com/), the publishing engine used to produce content of [GitHub Pages](https://pages.github.com/).
diff --git a/project/contact.md b/project/contact.md
index 86f143a1c5..aa6def4b46 100644
--- a/project/contact.md
+++ b/project/contact.md
@@ -12,16 +12,3 @@ Phone : +47 73 60 43 00
e-mail : info@ceetronsolutions.com
Bug reports and general feature requests can be filed directly on [GitHub](https://github.com/OPM/ResInsight/issues?state=open)
-
-## Newsletter subscription
-By subscribing to the **ResInsight Newsletter** you will get notified when new releases are available. Please use the button below to send a request for subscription mail.
-
-
diff --git a/project/download.md b/project/download.md
index 0f96e41d3f..bd08dd1a3e 100644
--- a/project/download.md
+++ b/project/download.md
@@ -5,8 +5,8 @@ permalink: /project/download/
published: true
---
-Windows : [ResInsight-bin-2016.11.0-win64.zip](https://github.com/OPM/ResInsight/releases/download/v2016.11/ResInsight-bin-2016.11.0-win64.zip) (17.9 MB)
+Windows : [ResInsight-2017.05.1_oct-4.0.0_win64.zip](https://github.com/OPM/ResInsight/releases/download/v2017.05/ResInsight-2017.05.1_oct-4.0.0_win64.zip)
-Linux - RHEL6 : [ResInsight-bin-2016.11.0-el6.tar.gz](https://github.com/OPM/ResInsight/releases/download/v2016.11/ResInsight-bin-2016.11.0-el6.tar.gz) (16 MB)
+Linux - RHEL6 : [ResInsight-2017.05.1_oct-3.4.3_el6.tar.gz](https://github.com/OPM/ResInsight/releases/download/v2017.05/ResInsight-2017.05.1_oct-3.4.3_el6.tar.gz)
For older versions, releasenotes and source code, please visit [ResInsight on Github](https://github.com/OPM/ResInsight/releases/)
diff --git a/project/releasenotification.md b/project/releasenotification.md
new file mode 100644
index 0000000000..e771cf7d56
--- /dev/null
+++ b/project/releasenotification.md
@@ -0,0 +1,19 @@
+---
+layout: project
+title: Release Notification Subscription
+permalink: /project/releasenotification/
+published: true
+---
+
+By subscribing to the **Release Notification** list, you will be notified when new releases are available.
+Please use the button below to send a subscription request by e-mail.
+
+
diff --git a/project/roadmap.md b/project/roadmap.md
deleted file mode 100644
index ec682d8f63..0000000000
--- a/project/roadmap.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-layout: project
-title: Roadmap
-permalink: /project/roadmap/
-published: true
----
-
-
-
-
-## Ongoing activities
-The following main features are part of ongoing development activities:
-
-### Planned delivery Q1 2017
-
-* Implementation of selected features from [MRST](http://www.sintef.no/projectweb/mrst/) (developed by SINTEF)
-* New visualization features based on flow diagnostics data
-
-## Candidate features for future projects
-
-The following lists a few project propolsals that have been up for discussion
-
-* Visualization of seismic data
-* Support for selected well planning features
-* Improvements related to visualization of ABAQUS reservoir models
diff --git a/search.md b/search.md
new file mode 100644
index 0000000000..4f3b2b55eb
--- /dev/null
+++ b/search.md
@@ -0,0 +1,37 @@
+---
+layout: default
+title: ResInsight • Search
+overview: true
+---
+
+
+
+
+
+
+