This commit adds a new public member function
SatfuncConsistencyChecks<>::collectFailures(root, comm)
which aggregates consistency check violations from all ranks in the
MPI communication object 'comm' onto rank 'root' of 'comm'. This
amounts to summing the total number of violations from all ranks and
potentially resampling the failure points for reporting purposes.
To this end, extract the body of function processViolation() into a
general helper which performs reservoir sampling, records point IDs,
and uses a callback function to populate the check values associated
with a single failed check. Re-implement the original
function in terms of this helper by wrapping exportCheckValues() in
a lambda function. Extract similar helpers for numPoints() and
anyFailedChecks(), and add a new helper function
SatfuncConsistencyChecks<>::incorporateRankViolations()
which brings sampled points from an MPI rank into the 'root' rank's
internal data structures.
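As a rough illustration, the split might look like the following
self-contained sketch, in which the class name, member layout, and
signatures are assumptions made purely for illustration and do not
reproduce the actual internals:

    #include <cstddef>
    #include <functional>
    #include <vector>

    template <typename Scalar>
    class ViolationRecorderSketch
    {
    public:
        struct Check
        {
            virtual ~Check() = default;
            virtual void exportCheckValues(Scalar* values) const = 0;
        };

        // Original entry point, re-expressed as a thin wrapper: wrap
        // exportCheckValues() in a lambda and defer to the shared helper.
        void processViolation(const Check& currentCheck, const std::size_t pointID)
        {
            this->processViolationImpl(pointID,
                [&currentCheck](Scalar* exportedCheckValues)
                { currentCheck.exportCheckValues(exportedCheckValues); });
        }

    private:
        // Extracted helper: records the point ID and invokes the callback
        // to populate the check values of the sampled slot.  The reservoir
        // sampling itself is elided here (every point is kept) to keep the
        // sketch short.
        void processViolationImpl(const std::size_t pointID,
                                  const std::function<void(Scalar*)>& populateCheckValues)
        {
            const auto slotStart = this->checkValues_.size();

            this->pointIDs_.push_back(pointID);
            this->checkValues_.resize(slotStart + NumCheckValues);

            populateCheckValues(this->checkValues_.data() + slotStart);
        }

        // Arbitrary number of values per failed check, for illustration only.
        static constexpr std::size_t NumCheckValues = 4;

        std::vector<std::size_t> pointIDs_{};
        std::vector<Scalar> checkValues_{};
    };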
One caveat applies here. Our current approach to collecting check
failures implies that calling member function reportFailures() is
safe only on the 'root' process in a parallel run. On the other
hand, the functions anyFailedChecks() and anyFailedCriticalChecks() are
safe, and guaranteed to return the same answer, on all MPI ranks.
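In other words, the intended calling pattern in a parallel run is
roughly the following; the wrapper function, its parameters, and the
severity type are hypothetical, and only the use of collectFailures(),
reportFailures(), and the anyFailed*Checks() queries mirrors the
description above:

    // Hypothetical wrapper illustrating the intended parallel usage.
    template <typename Checks, typename Comm, typename Severity>
    void collectAndReport(Checks& checks, const Comm& comm, const Severity level)
    {
        // Aggregate failure counts onto rank 0 and resample the points
        // used for reporting.
        checks.collectFailures(0, comm);

        // reportFailures() is only safe on the chosen root rank.
        if ((comm.rank() == 0) && checks.anyFailedChecks()) {
            checks.reportFailures(level);
        }

        // anyFailedCriticalChecks() returns the same answer on every rank,
        // so all ranks can take the same decision about stopping the run.
        if (checks.anyFailedCriticalChecks()) {
            // e.g., throw on all ranks to stop the simulation.
        }
    }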
On a final note, the internal helper functions are at present mostly
implemented in terms of non-owning pointers. I intend to switch to
using 'std::span<>' once we enable C++20 mode.
The intention is that this will ultimately replace the existing
RelpermDiagnostics component which does not really work in parallel
and which does not report enough context to help diagnose underlying
issues. For now, though, we just add the shell of a new set of
checks and hook that up to the build.
Class SatfuncConsistencyChecks<Scalar> manages a configurable set of
consistency checks, the implementations of which must publicly
derive from SatfuncConsistencyChecks<Scalar>::Check. Client code
will configure a set of checks by first calling
SatfuncConsistencyChecks<Scalar>::resetCheckSet()
then register individual checks by calling
SatfuncConsistencyChecks<Scalar>::addCheck()
and finally build requisite internal structures by calling
SatfuncConsistencyChecks<Scalar>::finaliseCheckSet()
Client code will then run the checks by calling
SatfuncConsistencyChecks<Scalar>::checkEndpoints()
typically in a loop. Class SatfuncConsistencyChecks<Scalar> will
count consistency check failures and attribute these to each
individual check as needed. We also maintain separate counts for
"Standard" and "Critical" failures. The former will typically
generate warnings while the latter will typically cause the
simulation run to stop. Individual checks get to decide whether their
failures are "Critical", and client code gets to decide how to respond
to "Critical" failures.
Member function SatfuncConsistencyChecks<Scalar>::reportFailures()
will generate a textual report of the known set of consistency check
failures at a given severity level.
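Putting the above together, the configuration and checking sequence
might look roughly like this; the concrete check implementation, its
construction, and the shape of the end-point data are assumptions, and
only the ordering of the calls is taken from the description above:

    #include <memory>

    // CheckImpl stands in for a concrete check type publicly deriving
    // from SatfuncConsistencyChecks<Scalar>::Check; the end-point range
    // is likewise a placeholder.
    template <typename Checks, typename CheckImpl, typename EndPointRange>
    void runConsistencyChecks(Checks& checks, const EndPointRange& endPoints)
    {
        checks.resetCheckSet();

        // Register the individual checks for this run.
        checks.addCheck(std::make_unique<CheckImpl>());

        // Build the internal structures needed to run the registered checks.
        checks.finaliseCheckSet();

        // Run the configured checks, typically in a loop over sampling points.
        for (const auto& [pointID, endPoint] : endPoints) {
            checks.checkEndpoints(pointID, endPoint);
        }
    }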
As an internal implementation detail, SatfuncConsistencyChecks uses
"reservoir sampling"
(https://en.wikipedia.org/wiki/Reservoir_sampling) to track details
about individual failed checks. We maintain at most a fixed number
of individual points (constructor argument).
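For reference, a minimal, self-contained sketch of that sampling scheme
(Algorithm R) is shown below; the class and member names are
illustrative only, and the real implementation additionally stores the
check values associated with each sampled point:

    #include <cstddef>
    #include <random>
    #include <vector>

    class FailedPointSample
    {
    public:
        explicit FailedPointSample(const std::size_t maxPoints)
            : maxPoints_{maxPoints}
        {}

        void observeFailure(const std::size_t pointID)
        {
            ++this->numSeen_;

            if (this->sample_.size() < this->maxPoints_) {
                // Reservoir not yet full: always keep the point.
                this->sample_.push_back(pointID);
                return;
            }

            // Reservoir full: keep the new point with probability
            // maxPoints/numSeen, replacing a uniformly chosen entry.
            std::uniform_int_distribution<std::size_t> dist{0, this->numSeen_ - 1};
            if (const auto slot = dist(this->rng_); slot < this->maxPoints_) {
                this->sample_[slot] = pointID;
            }
        }

        const std::vector<std::size_t>& sample() const { return this->sample_; }

    private:
        std::size_t maxPoints_;
        std::size_t numSeen_{0};
        std::vector<std::size_t> sample_{};
        std::mt19937 rng_{};
    };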
Nearly all exceptions thrown when computing well potentials will not
abort the simulator but instead result in timestep chops. Hence those
should not be counted as errors (e.g. by calling the OPM_*THROW*
macros) and reported as such in the PRT file.
This change will cause at least two more occurrences (in
MSWellHelpers) to be treated as problems. To this end, we added a new
helper function.
Basic support for this keyword was added in commit
OPM/opm-common@5e3e20c552
and this commit enables running models which use that basic support.
Advanced uses, such as including user-defined arguments for the
multipliers, will still be rejected at the input level.
This PR switches to directly calling the SummaryState constructor
which is aware of the value of undefined UDQs (OPM/opm-common#4052).
While here, also sort headers, split some long lines, and prefer
initialisation lists to constructor body assignments.
This commit implements the parallel version of
EclipseState::computeFipRegionStatistics()
which computes a FIPRegionStatistics object for the current run's
fluid-in-place regions. The object construction uses an MPI-aware
reduction process to compute the maximum region IDs across all MPI
ranks.
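A rough sketch of that reduction step is shown below; the vector layout
and the use of a raw MPI_Allreduce() call are assumptions made for
illustration, while the actual code presumably goes through the
project's communication utilities:

    #include <mpi.h>

    #include <vector>

    // Each rank passes its locally observed maximum region ID per FIP
    // region set; after the max reduction every rank holds the global
    // maxima, and hence agrees on the number of regions.
    std::vector<int>
    globalMaxRegionIDs(std::vector<int> localMaxRegionID, MPI_Comm comm)
    {
        MPI_Allreduce(MPI_IN_PLACE, localMaxRegionID.data(),
                      static_cast<int>(localMaxRegionID.size()),
                      MPI_INT, MPI_MAX, comm);

        return localMaxRegionID;
    }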
While here, also unconditionally form the statistics object as part
of the EclWriter's constructor to ensure that all ranks participate
in the process. The initial approach of constructing the object on
first use is not robust in parallel. We may, however, wish to compute
these statistics only when needed. If so, that will be the subject of
follow-up work.