This commit adds a new public member function
SatfuncConsistencyChecks<>::collectFailures(root, comm)
which aggregates consistency check violations from all ranks in the
MPI communication object 'comm' onto rank 'root' of 'comm'. This
amounts to summing the total number of violations from all ranks and
potentially resampling the failure points for reporting purposes.
To this end, extract the body of function processViolation() into a
general helper which performs reservoir sampling and records point
IDs, and which uses a callback function to populate the check values
associated with a single failed check. Re-implement the original
function in terms of this helper by wrapping exportCheckValues() in
a lambda function. Extract similar helpers for numPoints() and
anyFailedChecks(), and add a new helper function
SatfuncConsistencyChecks<>::incorporateRankViolations()
which brings sampled points from an MPI rank into the 'root' rank's
internal data structures.
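The sampling pattern of the new helper can be pictured with the
self-contained sketch below. Apart from processViolation() and the
callback idea, every name and type here is illustrative only and does
not reflect the actual SatfuncConsistencyChecks<> internals.

    #include <cstddef>
    #include <random>
    #include <vector>

    struct ViolationSample
    {
        std::size_t count = 0;              // total violations seen so far
        std::vector<std::size_t> pointID;   // IDs of the sampled points
        std::vector<double> checkValues;    // values for each sampled point
    };

    template <typename PopulateCheckValues>
    void processViolation(ViolationSample&  violation,
                          const std::size_t numSamplePoints,
                          const std::size_t numCheckValues,
                          const std::size_t pointID,
                          std::mt19937&     rng,
                          PopulateCheckValues&& populateCheckValues)
    {
        // Reservoir sampling: keep the first 'numSamplePoints' points
        // unconditionally, then replace an existing sample with
        // probability numSamplePoints / (count + 1).
        const auto i = violation.count++;

        auto slot = std::size_t{0};
        if (i < numSamplePoints) {
            slot = i;
            violation.pointID.push_back(pointID);
            violation.checkValues.resize(violation.pointID.size() * numCheckValues);
        }
        else {
            slot = std::uniform_int_distribution<std::size_t>{0, i}(rng);
            if (slot >= numSamplePoints) { return; }

            violation.pointID[slot] = pointID;
        }

        // Callback populates the check values for this particular point.
        populateCheckValues(&violation.checkValues[slot * numCheckValues]);
    }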
One caveat applies here. Our current approach to collecting check
failures implies that calling member function reportFailures() is
safe only on the 'root' process in a parallel run. On the other
hand, the functions anyFailedChecks() and anyFailedCriticalChecks() are
safe, and guaranteed to return the same answer, on all MPI ranks.
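For illustration, a calling sequence consistent with the above might
look as follows; reportFailuresOnRoot() is a hypothetical wrapper
around the class's reporting interface, shown only to highlight where
the root-only restriction applies.

    const auto root = 0;

    checks.collectFailures(root, comm);

    // anyFailedChecks() is safe, and identical, on every rank.
    if (checks.anyFailedChecks() && (comm.rank() == root)) {
        // Reporting must only happen on the root process.
        reportFailuresOnRoot(checks);
    }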
On a final note, the internal helper functions are at present mostly
implemented in terms of non-owning pointers. I intend to switch to
using 'std::span<>' once we enable C++20 mode.
This commit adds CMake functionality that can
hipify the cuistl framework to support AMD GPUs.
Some tests have been written because HIP does not
mirror CUDA exactly.
CONVERT_CUDA_TO_HIP is the new CMake option.
The required CMake version is increased to 3.21,
the first version to support HIP as a language.
A macro adds a layer of indirection so that only
the cuistl files that have actually changed are
re-hipified.
Some BDA code is extracted to make sure CUDA
is not accidentally included.
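For reference, the configuration side of this might look roughly as
follows; only the CONVERT_CUDA_TO_HIP name and the 3.21 requirement
come from this commit, the remaining details are assumptions.

    # Hipification is opt-in and needs CMake's native HIP support.
    cmake_minimum_required(VERSION 3.21)

    option(CONVERT_CUDA_TO_HIP "Hipify the cuistl framework for AMD GPUs" OFF)

    if(CONVERT_CUDA_TO_HIP)
      enable_language(HIP)
    endif()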
The initial use case is calculating the phase-filled pore-volume
weighted average of the fluid mass densities per PVT region. This
value goes into calculating depth-corrected per-cell phase pressure
values such as the BPPO and BPPG summary vectors.
This class manages a single linear array which separately tracks the
averages' numerators and denominators as running sums per region and
region set. We pick this data structure to simplify the cross-rank
reduction needed in MPI parallel runs. Client code is expected to
add individual per-cell and per-phase contributions using the
addCell() member function and then call the accumulateParallel()
member to effect the cross-rank reduction. The averages will then
be available through the fieldValue() and value() member functions.
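The layout can be pictured with the minimal stand-in class below; it
is not the actual implementation, but it shows why a single linear
array of running sums makes the cross-rank reduction trivial.

    #include <cstddef>
    #include <vector>

    class RunningRegionAverages     // illustrative stand-in only
    {
    public:
        explicit RunningRegionAverages(const std::size_t numRegions)
            : x_(2 * numRegions, 0.0)
        {}

        // One call per cell and phase: accumulate numerator/denominator.
        void addCell(const std::size_t region,
                     const double      weight,
                     const double      value)
        {
            x_[2*region + 0] += weight * value;
            x_[2*region + 1] += weight;
        }

        // In the real class this is a single element-wise MPI sum over
        // the flat array, e.g. comm.sum(x_.data(), x_.size()).
        void accumulateParallel() {}

        double value(const std::size_t region) const
        {
            return x_[2*region + 0] / x_[2*region + 1];
        }

    private:
        std::vector<double> x_;     // [num_0, den_0, num_1, den_1, ...]
    };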
As a further view towards the initial use case, we track two
different types of average per phase: one weighted by the
phase-filled pore volume and one weighted by the total pore volume.
The latter is the
average we would get for the case of the phase saturation being one
throughout the region. This alternative value is the fallback
option for the case of the phase saturation being identically zero
throughout the region.
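In formulas, writing s_c for the phase saturation, pv_c for the pore
volume, and rho_c for the fluid mass density of cell c in region R
(the notation is chosen here for exposition only), the two averages
are

    phase-filled:  sum_{c in R} s_c * pv_c * rho_c  /  sum_{c in R} s_c * pv_c
    pore-volume:   sum_{c in R}       pv_c * rho_c  /  sum_{c in R}       pv_c

and the second expression is the value reported when every s_c in R
is zero.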
We accomplish that by passing the module version as a string to the
constructors of LogOutputHelper and EclGenericOutputBlackoilModel
instead of calling moduleVersionName() in LogOutputHelper. That way
moduleVersionName() is no longer needed by libopmsimulators, and
compilation works again for people requesting shared libraries via
CMake's BUILD_SHARED_LIBS variable.
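Schematically, the call site now does something along these lines;
the constructor parameter list is abbreviated and only the trailing
version string is the point of the example.

    const auto version = std::string { Opm::moduleVersionName() };

    // LogOutputHelper no longer calls moduleVersionName() itself; the
    // version string is simply handed in at construction time.
    auto logHelper = LogOutputHelper<double>(/* other arguments, */ version);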
This commit adds a parallel calculation object derived from the serial
PAvgCalculator class. This parallel version is aware of MPI
communicators and knows how to aggregate contributions from wells that
might be distributed across ranks.
We also add a wrapper class, ParallelWBPCalculation, which knows how to
exchange information between PAvgCalculatorCollection objects on different
ranks and, especially, how to properly prune inactive cells/connections.
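At its core, the cross-rank aggregation is an element-wise sum over
the communicator. The fragment below illustrates that pattern only;
it is not the actual ParallelPAvgCalculator code, and the helper name
is invented for the illustration.

    #include <opm/simulators/utils/ParallelCommunication.hpp>

    #include <vector>

    // Sum locally accumulated well contributions across all ranks.
    void sumAcrossRanks(const Opm::Parallel::Communication& comm,
                        std::vector<double>&                localContrib)
    {
        comm.sum(localContrib.data(),
                 static_cast<int>(localContrib.size()));
    }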
Commit e5e7ff7287 introduced a dependency on a generated header in a
header file. This is problematic for super-builds, as there is no
explicit dependency from the simulator objects to these generated
headers.
The Ninja generator is smart enough to figure this out across the
subdirectories, but the Make generator is not. Hence we explicitly
add a dependency on the opmcommon target in this case. Ideally we
would only depend on the generated header, to allow compiling
opmcommon and the simulator objects in parallel; however, as far as
I know, there is no way to depend on OUTPUT targets across
subdirectories.
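Concretely, the workaround amounts to something along these lines in
the simulator's CMake code (the target names are assumptions here,
not copied from the build system):

    # Make the simulator objects wait for opmcommon so that the
    # generated headers exist before they are compiled.
    if(TARGET opmcommon)
      add_dependencies(opmsimulators opmcommon)
    endif()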
This commit adds a new container class,
ParallelPAvgDynamicSourceData
which inherits from PAvgDynamicSourceData and provides a parallel
view of source contributions. Member function
collectLocalSources
will call the user-provided source term evaluation function for each
source location in its purview, typically those locations owned by
the current MPI rank. Those values are then distributed to the other
MPI ranks through member function synchroniseSources(), which fills
the base class's 'src_' data member and makes them available to
clients through read-only item spans.
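A typical usage cycle, in which everything except the two member
function names is a paraphrase rather than the actual interface,
would be:

    // 'comm' is the MPI communicator, 'sourceLocations' the global
    // source positions, and 'evalSourceTerm' the user-provided
    // evaluation callable (argument lists are paraphrased).
    auto srcData = ParallelPAvgDynamicSourceData {
        comm, sourceLocations, evalSourceTerm
    };

    // Whenever new source values are needed:
    srcData.collectLocalSources();   // evaluate locations owned by this rank
    srcData.synchroniseSources();    // fill base class 'src_' on all ranks

    // Values are subsequently available through the read-only item
    // spans of the PAvgDynamicSourceData base class.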