The initial use case is calculating the phase-filled pore-volume-weighted
average of the fluid mass densities per PVT region. This value feeds
into the calculation of depth-corrected per-cell phase pressures such
as the BPPO and BPPG summary vectors.
This class manages a single linear array which separately tracks the
averages' numerators and denominators as running sums per region and
region set. We pick this data structure to simplify the cross-rank
reduction needed in MPI parallel runs. Client code is expected to
add individual per-cell and per-phase contributions using the
addCell() member function and then call the accumulateParallel()
member to effect the cross-rank reduction. The averages will then
be available through the fieldValue() and value() member functions.
In further support of the initial use case, we track two different
types of average per phase: one for the phase-filled volume and one for
the pore-volume-filled volume. The latter is the average we would get
if the phase saturation were one throughout the region, and it serves
as the fallback for the case of the phase saturation being identically
zero throughout the region.
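
As a minimal illustration of the layout described above, the
hypothetical sketch below keeps a single flat array of running sums
with four slots per region: numerator/denominator pairs for both the
phase-filled and the pore-volume-filled average. All names are
illustrative and do not reflect the actual class interface.

    #include <cstddef>
    #include <vector>

    // Illustrative sketch only.  One flat array holds all running sums so
    // that a single element-wise cross-rank sum (e.g., MPI_Allreduce with
    // MPI_SUM over buffer()) reduces every average at once.
    class RegionAvgSketch {
    public:
        explicit RegionAvgSketch(std::size_t numRegions)
            : sums_(4 * numRegions, 0.0) {}

        // Add one cell's contribution for a single phase.
        void addCell(std::size_t region, double sat, double poreVol, double density)
        {
            double* s = &sums_[4 * region];
            s[0] += sat * poreVol * density; // phase-filled numerator
            s[1] += sat * poreVol;           // phase-filled denominator
            s[2] += poreVol * density;       // pore-volume numerator (fallback)
            s[3] += poreVol;                 // pore-volume denominator (fallback)
        }

        // The buffer a parallel run would sum across ranks before
        // reading any averages.
        std::vector<double>& buffer() { return sums_; }

        // Phase-filled average, falling back to the pore-volume-filled
        // average when the phase saturation is identically zero.
        double value(std::size_t region) const
        {
            const double* s = &sums_[4 * region];
            if (s[1] > 0.0) { return s[0] / s[1]; }
            return (s[3] > 0.0) ? s[2] / s[3] : 0.0;
        }

    private:
        std::vector<double> sums_;
    };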
We fix the shared-library build by passing the module version as a
string to the constructors of LogOutputHelper and
EclGenericOutputBlackoilModel instead of calling moduleVersionName() in
LogOutputHelper. That way moduleVersionName() is no longer needed by
libopmsimulators, and compilation works again for people requesting
shared libraries via CMake's BUILD_SHARED_LIBS variable.
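
A hedged sketch of the shape of this change, with simplified,
illustrative constructor signatures (the real parameter lists differ):

    #include <string>
    #include <utility>

    // Sketch only.  The version string is injected by the caller instead
    // of being looked up via moduleVersionName() inside LogOutputHelper,
    // which removes the dependency on the generated version header from
    // libopmsimulators.
    class LogOutputHelper {
    public:
        explicit LogOutputHelper(std::string moduleVersion)
            : moduleVersion_(std::move(moduleVersion)) {}

    private:
        std::string moduleVersion_;
    };

    // At the call site in the application layer, where the generated
    // header is available:
    //
    //     LogOutputHelper helper { moduleVersionName() };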
This commit adds a parallel calculation object derived from the serial
PAvgCalculator class. This parallel version is aware of MPI
communicators and knows how to aggregate contributions from wells that
might be distributed across ranks.
We also add a wrapper class, ParallelWBPCalculation, which knows how to
exchange information from PAvgCalculatorCollection objects on different
ranks and, in particular, how to properly prune inactive cells and
connections.
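
The following hypothetical sketch shows the general pattern, assuming a
flat buffer of per-well partial sums and the standard MPI C API;
PAvgCalculatorSketch and its members are illustrative rather than the
real PAvgCalculator interface.

    #include <mpi.h>
    #include <vector>

    // Stand-in for the serial calculator's accumulated contributions.
    class PAvgCalculatorSketch {
    protected:
        std::vector<double> contributions_; // partial sums on this rank
    };

    // MPI-aware variant: after every rank has added its local
    // contributions, an element-wise sum leaves each rank with the fully
    // aggregated values, even for wells distributed across ranks.
    class ParallelPAvgCalculatorSketch : public PAvgCalculatorSketch {
    public:
        explicit ParallelPAvgCalculatorSketch(MPI_Comm comm) : comm_(comm) {}

        void collectGlobalContributions()
        {
            MPI_Allreduce(MPI_IN_PLACE, contributions_.data(),
                          static_cast<int>(contributions_.size()),
                          MPI_DOUBLE, MPI_SUM, comm_);
        }

    private:
        MPI_Comm comm_;
    };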
Commit e5e7ff7287 introduced a dependency on a generated header in a
header file. This is problematic in super-builds, as there is no
explicit dependency from the simulator objects to these generated
headers.
The Ninja generator is smart enough to figure this out across the
subdirectories, but the Make generator is not. Hence we explicitly add
a dependency on the opmcommon target in this case. Ideally we would
depend only on the generated header, to allow compiling opmcommon and
the simulator objects in parallel; however, as far as I know, there is
no way to depend on OUTPUT targets across subdirectories.
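
A sketch of the kind of CMake logic involved; the target names are
illustrative and the real build scripts may differ:

    # Sketch only.  The Makefile generator does not track dependencies on
    # files generated in another subdirectory, so we fall back to a
    # coarse-grained dependency on the whole opmcommon target.
    if(NOT CMAKE_GENERATOR STREQUAL "Ninja")
      add_dependencies(simulator_objects opmcommon)
    endif()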
This commit adds a new container class,
ParallelPAvgDynamicSourceData
which inherits from PAvgDynamicSourceData and provides a parallel
view of source contributions. Member function
collectLocalSources
will call the user-provided source term evaluation function for each
source location in its purview, typically those locations owned by the
current MPI rank. Those values will then be distributed to the other
MPI ranks through member function synchroniseSources, which fills the
base class' 'src_' data member and makes the values available to
clients through read-only item spans.
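
A hypothetical sketch of this collect/synchronise pattern, assuming
each source location is owned by exactly one rank; the names and
signatures are illustrative and do not reflect the real
ParallelPAvgDynamicSourceData interface.

    #include <mpi.h>
    #include <cstddef>
    #include <functional>
    #include <utility>
    #include <vector>

    class ParallelSourceDataSketch {
    public:
        using Eval = std::function<double(std::size_t)>;

        ParallelSourceDataSketch(MPI_Comm comm,
                                 std::vector<std::size_t> myLocations,
                                 std::size_t totalLocations)
            : comm_(comm)
            , myLocations_(std::move(myLocations))
            , src_(totalLocations, 0.0)
        {}

        // Evaluate the user-provided source term only for the locations
        // owned by this rank.
        void collectLocalSources(const Eval& eval)
        {
            for (const auto loc : myLocations_) {
                src_[loc] = eval(loc);
            }
        }

        // Make all local contributions globally visible.  A sum works
        // because each location is written by exactly one rank.
        void synchroniseSources()
        {
            MPI_Allreduce(MPI_IN_PLACE, src_.data(),
                          static_cast<int>(src_.size()),
                          MPI_DOUBLE, MPI_SUM, comm_);
        }

    private:
        MPI_Comm comm_;
        std::vector<std::size_t> myLocations_;
        std::vector<double> src_; // mirrors the base class' 'src_' member
    };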
We rely on a parallel HDF5 library for a parallel build of OPM Flow;
otherwise compilation will fail. Hence, if this is a parallel build but
only a serial HDF5 library is found, we now issue an informative
warning and deactivate HDF5-based parallel restart.
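
A sketch of the configure-time check, assuming the standard variables
set by CMake's FindMPI and FindHDF5 modules; the message text and the
exact condition are illustrative:

    # Sketch only.  FindHDF5 sets HDF5_IS_PARALLEL when the discovered
    # library was built with MPI support.
    if(MPI_FOUND AND HDF5_FOUND AND NOT HDF5_IS_PARALLEL)
      message(WARNING
        "This is a parallel build, but only a serial HDF5 library was "
        "found.  Deactivating HDF5 support and parallel restart.")
      set(HDF5_FOUND OFF)
    endif()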
Closes OPM/opm-common#3405