This commit adds a parallel calculation object derived from the serial
PAvgCalculator class. This parallel version is aware of MPI
communicators and knows how to aggregate contributions from wells that
might be distributed across ranks.
We also add a wrapper class, ParallelWBPCalculation, which knows how to
exchange information between PAvgCalculatorCollection objects on different
ranks and, especially, how to properly prune inactive cells/connections.
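As a rough illustration of the aggregation idea (a minimal sketch only;
apart from PAvgCalculator itself, the class and member names below are
hypothetical and not the actual opm-simulators interface):

    // Sketch: a communicator-aware calculator that sums per-connection
    // contributions for a distributed well across all ranks.
    #include <mpi.h>
    #include <vector>

    class ParallelPAvgCalculatorSketch // : public PAvgCalculator
    {
    public:
        explicit ParallelPAvgCalculatorSketch(MPI_Comm comm)
            : comm_(comm)
        {}

        // Aggregate local contributions in place across all ranks
        // that hold pieces of the same well.
        void aggregateContributions(std::vector<double>& localContrib) const
        {
            MPI_Allreduce(MPI_IN_PLACE, localContrib.data(),
                          static_cast<int>(localContrib.size()),
                          MPI_DOUBLE, MPI_SUM, comm_);
        }

    private:
        MPI_Comm comm_;
    };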
This commit adds a new container class,
ParallelPAvgDynamicSourceData,
which inherits from PAvgDynamicSourceData and provides a parallel
view of source contributions. Member function
collectLocalSources
will call the user-provided source term evaluation function for each
source location in its purview, typically those locations owned by
the current MPI rank. Those values are distributed to the other MPI
ranks through member function synchroniseSources, which fills the
base class' 'src_' data member, making the values available to
clients through read-only item spans.
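In outline, a client would use the pair of member functions roughly as
follows (a usage sketch only; 'srcData' and the evaluator signature
shown here are assumptions, not the actual interface):

    // Sketch: evaluate the sources owned by this rank, then make all
    // values available on every rank.
    srcData.collectLocalSources([](std::size_t sourceLocation, auto sourceTerm)
    {
        // Evaluate the source term contributions for one of the
        // locations owned by the current MPI rank.
    });

    // Exchange the locally computed values between ranks, filling the
    // base class' 'src_' data member.
    srcData.synchroniseSources();

    // Afterwards, read-only item spans are valid on every rank.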
This commit enables outputting non-linear convergence metrics, i.e.,
the MB and CNV values, per phase, for each non-linear iteration in
each timestep. If the user passes the option value "iterations" to
the --extra-convergence-output command line option, this commit will
create a new output file, CASE.INFOITER, that holds
* report step
* time step within that report step
* elapsed time
* MB and CNV values per phase
* well convergence status
for each non-linear iteration.
We use an asynchronous file writing procedure and transfer ownership
of the report step's unprocessed convergence reports to this
procedure just before the end of
SimulatorFullyImplicitBlackoilEbos::runStep()
at which point the convergence reports are about to go out of scope.
The asynchronous protocol uses a dedicated queue of output requests,
class ConvergenceReportQueue, into which the producer, i.e., member
function runStep(), inserts new convergence reports, and from which
the output thread, ConvergenceOutputThread::writeASynchronous(),
retrieves those requests before writing the file data.
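Schematically, the hand-off between runStep() and the output thread is
a standard producer/consumer queue (a minimal sketch under that
assumption; this is not the actual ConvergenceReportQueue interface):

    // Sketch: the producer (runStep()) pushes requests; the consumer
    // (the output thread) pops them and writes the file data.
    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <utility>

    template <typename Request>
    class OutputRequestQueue
    {
    public:
        void push(Request req)      // producer side
        {
            {
                std::lock_guard<std::mutex> lock(mtx_);
                q_.push(std::move(req));
            }
            cv_.notify_one();
        }

        Request pop()               // consumer side, blocks if empty
        {
            std::unique_lock<std::mutex> lock(mtx_);
            cv_.wait(lock, [this]() { return !q_.empty(); });
            Request req = std::move(q_.front());
            q_.pop();
            return req;
        }

    private:
        std::mutex mtx_;
        std::condition_variable cv_;
        std::queue<Request> q_;
    };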
This commit introduces a new helper class,
ConvergenceOutputConfiguration
which parses comma-separated option strings into a runtime
configuration object for whether to output additional convergence
information and, if so, what information to output.
Supported option string values are
* "none" -- Dont want any additional convergence output.
* "steps" -- Want additional convergence output pertaining to the
converged solution at the end of each timestep.
* "iterations" -- Want additional convergence output pertaining to each
non-linar ieration in each timestep.
Option value "none" overrides all other options. In other words, if the
user requests "none", then there will be no additional convergence
output, even if there are other options in the option string.
We add a new option, ExtraConvergenceOutput (command line option
--extra-convergence-output), which takes a string argument expected
to be a comma-separated combination of these options. The default
value is "none". Finally, we make the INFOSTEP file output
conditional on the user supplying "steps" as an argument to the new
option.
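A minimal sketch of the intended parsing behaviour, including the
"none" override (illustrative names only; the actual
ConvergenceOutputConfiguration class may differ in detail):

    #include <sstream>
    #include <stdexcept>
    #include <string>

    struct ConvOutputFlags { bool steps = false; bool iterations = false; };

    ConvOutputFlags parseConvOutput(const std::string& optionValue)
    {
        ConvOutputFlags flags;
        bool none = false;

        std::istringstream tokens(optionValue);
        for (std::string opt; std::getline(tokens, opt, ','); ) {
            if      (opt == "none")       { none = true; }
            else if (opt == "steps")      { flags.steps = true; }
            else if (opt == "iterations") { flags.iterations = true; }
            else {
                throw std::invalid_argument("Unknown option: " + opt);
            }
        }

        // "none" overrides all other options in the option string.
        return none ? ConvOutputFlags{} : flags;
    }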
Damaris initialization is added after InitMpi but before starting the
simulation. Damaris will dedicate a separate core to writing in
parallel and leave the rest of the cores to the simulator. The main
changes are in main, where Damaris is started, and in eclwriter, where
we use Damaris to output the PRESSURE field. To test Damaris, one can
use --enable-damaris-output=true; to use parallel HDF5, one can use
--enable-async-damaris-output=true (false is the default).
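For example, assuming a standard parallel invocation of the flow
binary (the launcher and case name are illustrative; only the
--enable-damaris-output flag comes from this change):

    mpirun -np 8 flow CASE.DATA --enable-damaris-output=true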
To limit the number of overloads, we put packing details for specific
types in separate structs. We emit a compiler error if an unsupported
type is given; it is better to detect this at compile time rather than
at run time.
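A minimal sketch of the pattern (illustrative names, not the actual
serialization code):

    // Sketch: per-type packing details live in separate structs; an
    // unsupported type triggers a compile-time error on instantiation.
    #include <cstring>
    #include <vector>

    template <class T>
    struct Packing
    {
        // Dependent false: fires only if this primary template is used.
        static_assert(!sizeof(T), "Packing not supported for this type");
    };

    template <>
    struct Packing<int>
    {
        static void pack(int value, std::vector<char>& buffer)
        {
            const auto pos = buffer.size();
            buffer.resize(pos + sizeof(value));
            std::memcpy(buffer.data() + pos, &value, sizeof(value));
        }
    };

    // Packing<double>, Packing<std::string>, ... follow the same
    // pattern; Packing<SomeUnsupportedType>::pack(...) fails to compile.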
This duplication has already led to some confusion. We move some of
the code upstream to opm-models and remove the rest of the duplicated
code. The remainder of MatrixBlock.hpp is renamed to
SmallDenseMatrixUtils.hpp.
I chose to split this into a separate _impl file because the code is
so generic that downstream users may want to use it on other matrix
types than those we use in opm-simulators.
This commit introduces a new helper class
Opm::EclInterRegFlowMapSingleFIP
that wraps an object of type
Opm::data::InterRegFlowMap
along with the MPI rank's notion of a FIP region array definition
(e.g., the local FIPNUM array). The new single-FIP flow map is
responsible for accumulating local contributions to the inter-region
flows defined by that FIP array. In the case of connections between
MPI ranks, the rank that owns the lowest region ID accumulates the
associated flow rates.
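A rough sketch of the accumulation rule for a single connection
(hypothetical names; the real EclInterRegFlowMapSingleFIP interface
differs):

    // Sketch: for a connection whose two cells may live on different
    // MPI ranks, only the rank owning the cell with the lower region
    // ID accumulates the connection's flow rate.
    struct CellSide
    {
        int  region;         // FIP region ID (e.g., from FIPNUM)
        bool ownedByMyRank;  // true if this cell is owned locally
    };

    bool iAccumulate(const CellSide& a, const CellSide& b)
    {
        // Purely local connection: always accumulate here.
        if (a.ownedByMyRank && b.ownedByMyRank) {
            return true;
        }

        // Cross-rank connection: lower region ID decides ownership.
        const CellSide& low = (a.region <= b.region) ? a : b;
        return low.ownedByMyRank;
    }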
Add unit tests to exercise the new class, including simulating
multiple MPI ranks whose contributions are communicated to a single
I/O rank for summary output purposes.
The headers BlackoilWellModel.hpp, StandardWell.hpp, and WellInterface.hpp
all include various GasLift*.hpp headers directly. That means that any
client code that uses those well-related headers will need to have the
GasLift* headers available too.
Also sorts the headers under opm/simulators/wells to make them easier
to read.
Introduces a gaslift debugging variable in ALQState in WellState. This
variable will persist between timesteps, in contrast to debugging
variables defined in GasLiftSingleWell, GasLiftGroupState, or
GasLiftStage2.
Currently only an integer variable debug_counter is added to ALQState,
which can be used as follows: First, debugging is switched on globally
for BlackOilWellModel, GasLiftSingleWell, GasLiftGroupState, and
GasLiftStage2 by setting glift_debug to a true value in
BlackOilWellModelGeneric. Then the following debugging code can be
added to, e.g., one of GasLiftSingleWell, GasLiftGroupState, or
GasLiftStage2:
    auto count = debugUpdateGlobalCounter_();
    if (count == some_integer) {
        displayDebugMessage_("stop here");
    }
Here, the integer "some_integer" is typically determined by looking at
the debugging output of a previous run. This works because the call to
debugUpdateGlobalCounter_() prints the current value of the counter
and then increments it by one, and these values are easy to recognize
in the debug output. If you find a place in the output that looks
suspect, take note of the counter value around that point and insert
it for "some_integer". After recompiling the code with the desired
value for "some_integer", it is then easy to set a breakpoint in GDB
at the line

    displayDebugMessage_("stop here");

shown in the snippet above. This should improve the ability to
quickly set a breakpoint in GDB at a given time and point in the
simulation.
Split into API-specific classes for CUDA/OpenCL. This is done because
1) it is cleaner, and
2) it avoids pulling OpenCL code into CUDA classes, which leads to
clashes between NVIDIA headers and opencl.hpp.
There are still too many API-specific details in the interfaces
between the BDA components to work through a virtual interface, so we
still have to cast to the relevant implementation in various places.
The class ISTLSolverEbos has all the features of the removed class and
is not much more complex. flow_blackoil_dunecpr is the only program
using it, and is itself redundant.