The initial use case is calculating the phase-filled pore-volume
weighted average of the fluid mass densities per PVT region. This
value goes into calculating depth-corrected per-cell phase pressure
values such as the BPPO and BPPG summary vectors.
This class manages a single linear array which separately tracks the
averages' numerators and denominators as running sums per region and
region set. We pick this data structure to simplify the cross-rank
reduction needed in MPI parallel runs. Client code is expected to
add individual per-cell and per-phase contributions using the
addCell() member function and then call the accumulateParallel()
member to effect the cross-rank reduction. The averages will then
be available through the fieldValue() and value() member functions.
With the initial use case in mind, we track two different types of
average per phase: one weighted by the phase-filled pore volume and
one weighted by the total pore volume. The latter is the average we
would get if the phase saturation were one throughout the region,
and it serves as the fallback for the case of the phase saturation
being identically zero throughout the region.
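As a minimal, self-contained illustration of the numerator/denominator
layout and why a single flat array keeps the MPI reduction simple (the
struct below is invented for the example and is not the actual class):

    #include <vector>

    // One flat array holding numerator/denominator pairs per (region, phase),
    // so a single element-wise sum over the whole array across all ranks
    // completes the cross-rank reduction.
    struct RunningAverages
    {
        int numRegions{};
        int numPhases{};
        std::vector<double> sums; // [region][phase][{numerator, denominator}]

        RunningAverages(int nr, int np)
            : numRegions(nr), numPhases(np), sums(2 * nr * np, 0.0) {}

        void addCell(int region, int phase, double weight, double value)
        {
            double* s = &sums[2 * (region * numPhases + phase)];
            s[0] += weight * value; // numerator
            s[1] += weight;         // denominator
        }

        // In the real code the reduction would be an MPI all-reduce over sums.
        double value(int region, int phase) const
        {
            const double* s = &sums[2 * (region * numPhases + phase)];
            return (s[1] > 0.0) ? s[0] / s[1] : 0.0;
        }
    };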
preconditioner. Uses graph coloring to exploit
parallelism in the upper and lower triangular solves when
computing a diagonal approximate inverse of a
sparse matrix. Supports block sizes up to 3.
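One common way to expose that parallelism is level scheduling, where
each matrix row is assigned a level such that all rows within a level
can be solved concurrently. A minimal sketch of the level computation
for a lower triangular factor (illustrative only, not necessarily how
the preconditioner implements its coloring):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // deps[i] lists the column indices j < i with a nonzero entry A(i,j).
    // Rows sharing a level have no dependencies on each other and can be
    // processed in parallel during the triangular solve.
    std::vector<int> computeLevels(const std::vector<std::vector<int>>& deps)
    {
        std::vector<int> level(deps.size(), 0);
        for (std::size_t i = 0; i < deps.size(); ++i) {
            for (const int j : deps[i]) {
                level[i] = std::max(level[i], level[j] + 1);
            }
        }
        return level;
    }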
This makes them available for use in other places. The function
std::string to_string(const ConvergenceReport::WellFailure& wf) is new,
but uses the format already established.
Invokes Zoltan library and requires MPI. Client code constructs an
abstract connectivity graph by defining connections/edges through
the 'registerConnection()' member function. Client code may also
impose a restriction that certain cells/vertices be placed in the
same domain/block in the resulting partition. Finally, client code must supply a
callback function that defines globally unique cell/vertex/object
IDs, across all MPI ranks, for each vertex in the connectivity
graph.
Member function 'partitionElement()' forms the resulting partition
vector, the size of which is the total number of objects visible to
the local rank, typically the number of cells owned by the rank plus
the number of overlap cells, i.e., the size of the local grid view.
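A rough usage sketch of that interface; the free function below and
the argument list of partitionElement() are assumptions for
illustration only:

    #include <utility>
    #include <vector>

    // 'Partitioner' stands in for the class described above; the return type
    // and the numDomains argument are assumptions.
    template <class Partitioner, class ConnectionRange>
    std::vector<int> buildPartition(Partitioner& partitioner,
                                    const ConnectionRange& connections,
                                    const int numDomains)
    {
        for (const auto& [c1, c2] : connections) {
            // One edge of the abstract connectivity graph.
            partitioner.registerConnection(c1, c2);
        }

        // One domain/block ID per object visible to the local rank
        // (owned cells plus overlap cells).
        return partitioner.partitionElement(numDomains);
    }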
Implement calls to cuBLAS and cuSPARSE, and implement the necessary
CUDA kernels, to perform a single iteration of the Jacobi preconditioner.
Add tests that verify the new kernels and the preconditioner as a whole.
The preconditioner is verified on 2x2 and 3x3 blocks, which are
currently the only supported sizes; 1x1 blocks are not supported
because cuSPARSE does not support them.
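For reference, a single Jacobi iteration computes
x_new = D^{-1} (b - (A - D) x), where D is the (block) diagonal of A.
A scalar, CPU-side sketch of that update (the GPU path instead works on
the 2x2/3x3 blocks through cuBLAS/cuSPARSE and the new kernels):

    #include <cstddef>
    #include <vector>

    // One Jacobi iteration for a dense scalar matrix; illustrative only.
    void jacobiIteration(const std::vector<std::vector<double>>& A,
                         const std::vector<double>& b,
                         std::vector<double>& x)
    {
        const std::size_t n = b.size();
        std::vector<double> xNew(n, 0.0);

        for (std::size_t i = 0; i < n; ++i) {
            double sum = b[i];
            for (std::size_t j = 0; j < n; ++j) {
                if (j != i) {
                    sum -= A[i][j] * x[j];
                }
            }
            xNew[i] = sum / A[i][i]; // divide by the diagonal entry
        }

        x = xNew;
    }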
Added access to the DUNE mesh geometry and passed the data through to Damaris.
Updated the command line so users can specify Python or ParaView script names and other parameters that control Damaris:
- Simulation name
- Number of dedicated cores or dedicated nodes
- Shared memory region size
- Switch to turn off HDF5 output
- Damaris logging level
Step one of moving the Damaris calls out of the EclWriter class and into their own DamarisWriter class;
EclProblem now calls both writeOutput methods and passes in the data::Solution object;
added a fix for the first writeOutput() call not having PRESSURE data available;
data::Solution is now passed by rvalue reference into eclWriter::writeOutput();
a guard was added to prevent inclusion of damariswriter.hh
This commit adds a parallel calculation object derived from the serial
PAvgCalculator class. This parallel version is aware of MPI
communicators and knows how to aggregate contributions from wells that
might be distributed across ranks.
We also add a wrapper class, ParallelWBPCalculation, which knows how to
exchange information from PAvgCalculatorCollection objects on different
ranks and, especially, how to properly prune inactive cells/connections.
This commit adds a new container class,
ParallelPAvgDynamicSourceData
which inherits from PAvgDynamicSourceData and provides a parallel
view of source contributions. Member function
collectLocalSources
will call the user-provided source term evaluation function for each
source location in its purview--typically those locations owned by
the current MPI rank. Those values are then distributed to the other
MPI ranks through the member function synchroniseSources, which fills
the base class' 'src_' data member; the values thereby become
available to clients through read-only item spans.
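A rough sketch of that call sequence; the helper below is invented for
illustration and assumes the evaluation function was supplied when the
object was set up:

    // 'ParallelSourceData' stands in for ParallelPAvgDynamicSourceData;
    // the exact signatures may differ.
    template <class ParallelSourceData>
    void updateSources(ParallelSourceData& sources)
    {
        // Evaluate the source terms for the locations owned by this rank.
        sources.collectLocalSources();

        // Exchange the local values so every rank sees all contributions;
        // this fills the base class' 'src_' data member, after which
        // clients read the values through the read-only item spans.
        sources.synchroniseSources();
    }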
This commit enables outputting non-linear convergence metrics, i.e.,
the MB and CNV values, per phase, for each non-linear iteration in
each timestep. If the user passes the option value "iterations" to
the --extra-convergence-output command line option, this commit will
create a new output file, CASE.INFOITER, that holds
* report step
* time step within that report step
* elapsed time
* MB and CNV values per phase
* well convergence status
for each non-linear iteration.
We use an asynchronous file writing procedure and transfer ownership
of the report step's unprocessed convergence reports to this
procedure just before the end of
SimulatorFullyImplicitBlackoilEbos::runStep()
At that point, the convergence reports are about to go out of scope.
The asynchronous protocol uses a dedicated queue of output requests,
class ConvergenceReportQueue, into which the producer (i.e., member
function runStep()) inserts new convergence reports and from which
the output thread, ConvergenceOutputThread::writeASynchronous(),
retrieves those requests before writing the file data.
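A minimal sketch of that producer/consumer hand-off (this is the
general shape only, not the actual ConvergenceReportQueue):

    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <utility>
    #include <vector>

    template <typename Request>
    class OutputRequestQueue
    {
    public:
        // Producer side (e.g., runStep()): hand over a batch of requests.
        void push(std::vector<Request>&& requests)
        {
            {
                std::lock_guard<std::mutex> guard{this->mtx_};
                for (auto& r : requests) { this->queue_.push(std::move(r)); }
            }
            this->cv_.notify_one();
        }

        // Consumer side (output thread): block until a request is available.
        Request pop()
        {
            std::unique_lock<std::mutex> lock{this->mtx_};
            this->cv_.wait(lock, [this]() { return !this->queue_.empty(); });
            auto r = std::move(this->queue_.front());
            this->queue_.pop();
            return r;
        }

    private:
        std::mutex mtx_{};
        std::condition_variable cv_{};
        std::queue<Request> queue_{};
    };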
This commit introduces a new helper class,
ConvergenceOutputConfiguration
which parses comma separated option strings into a runtime
configuration object for whether to output additional convergence
information and, if so, what information to output.
Supported option string values are
* "none" -- Dont want any additional convergence output.
* "steps" -- Want additional convergence output pertaining to the
converged solution at the end of each timestep.
* "iterations" -- Want additional convergence output pertaining to each
non-linar ieration in each timestep.
Option value "none" overrides all other options. In other words, if the
user requests "none", then there will be no additional convergence
output, even if there are other options in the option string.
We add a new option, ExtraConvergenceOutput (command line option
--extra-convergence-output), which takes a string argument expected
to be a comma separated combination of these options. The default
value is "none". Finally, make the INFOSTEP file output conditional
on the user supplying "steps" as an argument to the new option.
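A minimal sketch of the parsing behaviour described above; the struct
and function below are invented for the example and do not reflect the
actual ConvergenceOutputConfiguration interface:

    #include <sstream>
    #include <string>

    struct ConvergenceOutputFlags
    {
        bool steps{false};
        bool iterations{false};
    };

    ConvergenceOutputFlags parseConvergenceOutput(const std::string& options)
    {
        ConvergenceOutputFlags flags{};
        std::istringstream input{options};
        std::string token{};
        bool none = false;

        while (std::getline(input, token, ',')) {
            if      (token == "none")       { none = true; }
            else if (token == "steps")      { flags.steps = true; }
            else if (token == "iterations") { flags.iterations = true; }
        }

        // "none" overrides everything else in the option string.
        return none ? ConvergenceOutputFlags{} : flags;
    }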
Damaris initialization is added after InitMpi but before starting the simulation. Damaris will dedicate a separate core for writing in
parallel and leave the rest of the cores for the simulator. The main changes are in main, where start_damaris is called, and in the eclwriter, where
we use Damaris to output the PRESSURE field. To test Damaris one can use --enable-damaris-output=true, and to use parallel HDF5 one can use
--enable-async-damaris-output=true (false is the default choice).
To limit the overload set, we put the packing details for
specific types in separate structs.
Emit a compiler error if an unsupported type is given;
it is better to detect this at compile time rather than at run time.
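A sketch of that idea; the templates below are illustrative, not the
actual opm-simulators packing code:

    #include <cstddef>
    #include <type_traits>

    // Helper so the static_assert only fires when the primary template is
    // actually instantiated.
    template <class T>
    struct AlwaysFalse : std::false_type {};

    // Primary template: using an unsupported type is a compile-time error.
    template <class T, class Enable = void>
    struct Packing
    {
        static_assert(AlwaysFalse<T>::value,
                      "Packing is not supported for this type");
    };

    // Example specialization: trivially copyable types are packed byte-wise.
    template <class T>
    struct Packing<T, std::enable_if_t<std::is_trivially_copyable_v<T>>>
    {
        static std::size_t packSize(const T&) { return sizeof(T); }
    };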
This has already led to some confusion. Move some of the code
upstream to opm-models and remove the rest of the duplicated code.
The remainder of MatrixBlock.hpp is renamed to SmallDenseMatrixUtils.hpp.
I chose to split this into a separate _impl file because the code is so
generic that there may be downstream users who want to use it on other
matrix types than the ones we use in opm-simulators.
This commit introduces a new helper class
Opm::EclInterRegFlowMapSingleFIP
that wraps an object of type
Opm::data::InterRegFlowMap
along with the MPI rank's notion of a FIP region array definition
(e.g., the local FIPNUM array). The new single-FIP flow map is
responsible for accumulating local contributions to the inter-region
flows defined by that FIP array. In the case of connections between
MPI ranks, the rank that owns the lowest region ID accumulates the
associated flow rates.
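A small sketch of that ownership rule; the function and parameter
names below are invented for illustration:

    // Decide whether the current rank accumulates the flow across a single
    // cell-to-cell connection; 'connectsToOtherRank' means the other cell is
    // owned by a different MPI rank.
    bool accumulateOnThisRank(const int localRegion,
                              const int remoteRegion,
                              const bool connectsToOtherRank)
    {
        if (!connectsToOtherRank) {
            return true;                   // purely local connection
        }

        // Cross-rank connection: the rank owning the lowest region ID wins.
        // (Equal region IDs produce no inter-region flow in the first place.)
        return localRegion < remoteRegion;
    }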
Add unit tests to exercise the new class, including simulating
multiple MPI ranks whose contributions are communicated to a single
I/O rank for summary output purposes.
The headers BlackoilWellModel.hpp, StandardWell.hpp, and WellInterface.hpp
all include various GasLift*.hpp headers directly. That means that any
client code that uses those well-related headers will need to have the
GasLift* headers available too.
Also sorts the headers under opm/simulators/wells to make them easier
to read.
Introduces a gaslift debugging variable in ALQState in WellState. This
variable will persist between timesteps, in contrast to debugging
variables defined in GasLiftSingleWell, GasLiftGroupState, or GasLiftStage2.
Currently only an integer variable debug_counter is added to ALQState,
which can be used as follows: First debugging is switched on globally
for BlackOilWellModel, GasLiftSingleWell, GasLiftGroupState, and
GasLiftStage2 by setting glift_debug to a true value in BlackOilWellModelGeneric.
Then, the following debugging code can be added to e.g. one of
GasLiftSingleWell, GasLiftGroupState, or GasLiftStage2:
    auto count = debugUpdateGlobalCounter_();
    if (count == some_integer) {
        displayDebugMessage_("stop here");
    }
Here, the integer "some_integer" is typically determined by looking at
the debugging output of a previous run. This is possible since the
call to debugUpdateGlobalCounter_() prints the current value of the
counter and then increments it by one, and these values are easy to
recognize in the debug output. If you find a place in the output that
looks suspect, take note of the counter value around that point and
insert it as "some_integer". After recompiling the code with the
desired value for "some_integer", it is easy to set a breakpoint in
GDB at the line
    displayDebugMessage_("stop here");
shown in the above snippet. This should make it easier to quickly set
a breakpoint in GDB at a given time and point in the simulation.