The intention is that this will ultimately replace the existing
RelpermDiagnostics component which does not really work in parallel
and which does not report enough context to help diagnose underlying
issues. For now, though, we just add the shell of a new set of
checks and hook that up to the build.
Class SatfuncConsistencyChecks<Scalar> manages a configurable set of
consistency checks, the implementations of which must publicly
derive from SatfuncConsistencyChecks<Scalar>::Check. Client code
will configure a set of checks by first calling
SatfuncConsistencyChecks<Scalar>::resetCheckSet()
then register individual checks by calling
SatfuncConsistencyChecks<Scalar>::addCheck()
and finally build requisite internal structures by calling
SatfuncConsistencyChecks<Scalar>::finaliseCheckSet()
Client code will then run the checks by calling
SatfuncConsistencyChecks<Scalar>::checkEndpoints()
typically in a loop. Class SatfuncConsistencyChecks<Scalar> will
count consistency check failures and attribute these to each
individual check as needed. We also maintain separate counts for
"Standard" and "Critical" failures. The former will typically
generate warnings while the latter will typically cause the
simulation run to stop. Individual checks decide whether a failure is
"Critical", and client code decides how to respond to "Critical"
failures.
Member function SatfuncConsistencyChecks<Scalar>::reportFailures()
will generate a textual report of the known set of consistency check
failures at a given severity level.
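A minimal usage sketch of the sequence described above; the concrete
check type, the end-point data type, and the exact member signatures
below are assumptions for illustration, not the final interface:

```cpp
// Sketch only: MyEndpointOrderCheck, EndPointData, and Severity are
// hypothetical names (header includes for SatfuncConsistencyChecks
// itself are omitted here).
#include <memory>
#include <vector>

template <typename Scalar>
void runSatfuncChecks(SatfuncConsistencyChecks<Scalar>&        checks,
                      const std::vector<EndPointData<Scalar>>& cellEndPoints)
{
    // Start a fresh check set.
    checks.resetCheckSet();

    // Register individual checks.  Each one derives publicly from
    // SatfuncConsistencyChecks<Scalar>::Check.
    checks.addCheck(std::make_unique<MyEndpointOrderCheck<Scalar>>()); // hypothetical check

    // Build the requisite internal structures.
    checks.finaliseCheckSet();

    // Run the configured checks, typically in a loop over cells/regions.
    for (const auto& endPoints : cellEndPoints) {
        checks.checkEndpoints(endPoints);
    }

    // Report "Standard" failures as warnings; "Critical" failures would
    // typically make the client code stop the run instead.
    checks.reportFailures(Severity::Standard); // hypothetical severity tag
}
```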
As an internal implementation detail, SatfuncConsistencyChecks uses
"reservoir sampling"
(https://en.wikipedia.org/wiki/Reservoir_sampling) to track details
about individual failed checks. We maintain at most a fixed number
of individual points (constructor argument).
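For context, a minimal self-contained illustration of reservoir
sampling (Algorithm R), which keeps at most a fixed number of items
from a stream with equal probability; this shows the technique only
and is not the actual implementation in the class:

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Keep at most maxPoints items from a stream of unknown length,
// each retained with equal probability (Algorithm R).
class ReservoirSample
{
public:
    explicit ReservoirSample(std::size_t maxPoints)
        : maxPoints_(maxPoints)
    {}

    void observe(int pointID)
    {
        ++seen_;
        if (sample_.size() < maxPoints_) {
            sample_.push_back(pointID);
            return;
        }
        // Replace an existing sample point with probability maxPoints_/seen_.
        std::uniform_int_distribution<std::size_t> dist(0, seen_ - 1);
        if (const auto j = dist(rng_); j < maxPoints_) {
            sample_[j] = pointID;
        }
    }

    const std::vector<int>& points() const { return sample_; }

private:
    std::size_t maxPoints_{};
    std::size_t seen_{};
    std::vector<int> sample_{};
    std::mt19937 rng_{};
};
```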
The simulation will just chop the time step and continue.
Note that the error count in the PRT file is used by engineers to
decide whether a simulation was successful. Hence the error count
should not be increased here.
Nearly all exceptions thrown when computing well potentials will not
abort the simulator but result in time-step chops. Hence those should
not be counted as errors (which happens, e.g., when the OPM_*THROW*
macros are used) or be reported as errors in the PRT file.
This change will cause at least two more occurrences (in
MSWellHelpers) to be treated as problems. For this we added a new
helper function.
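Schematically, the pattern is to catch such failures, log them as
problems rather than errors, and let the caller chop the time step.
The helper below is illustrative only, using a generic logging
callback; it is not the actual function added to MSWellHelpers:

```cpp
#include <exception>
#include <functional>
#include <string>
#include <vector>

// Illustrative sketch.  A failure while computing well potentials is
// logged as a "problem" (which does not increase the PRT file error
// count) and rethrown so that the caller can chop the time step and
// continue rather than abort the simulator.
template <typename Scalar, typename Computation>
std::vector<Scalar>
tryComputeWellPotentials(Computation&& compute,
                         const std::function<void(const std::string&)>& logProblem)
{
    try {
        return compute();
    }
    catch (const std::exception& e) {
        logProblem(std::string{"Failed to compute well potentials: "} + e.what());
        throw; // caller responds with a time-step chop, not a simulator abort
    }
}
```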
- Only output or restart solution tracers for gas/oil tracers with DISGAS/VAPOIL enabled (no solution tracers in the water phase!).
- Initial tracer concentrations (free/solution) are set to zero if TBLK/TVDP is not given.
- Do not calculate mass transfer between free and solution tracers if it is not necessary.
- Calculate well rates using updated tracer concentrations.
If we use transmissibilities for load balancing, then we calculate
transmissibilities twice: first on the global grid before load
balancing and then on the local grid after that. This is the
default. In this case all warnings will be shown correctly when
calculating the global transmissibilities.
If the user requests the same weights for all faces (command line
parameter --edge-weights-method=0), then the transmissibilities are only
calculated on the loadbalanced grid. Unfortunately, in this case only
rank 0 will issue warnings for its part of the grid, including the
false positives mentioned below.
Due to load balancing many NNCs might be stored on another process,
but we still use all EDITNNC entries when computing transmissibilities
locally. Hence when applying EDITNNC on the loadbalanced grid we
will issue warnings for cases where there are no problems (e.g. an NNC
between two overlap cells).
With this PR we will only warn when computing the transmissibilities
for the first time. For the default settings this will remove spurious
and duplicate warnings.
Note that for --edge-weights-method=0 nothing changes and we will still
see warnings only from the first rank, including spurious ones.
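Schematically, the new behaviour amounts to enabling warnings only for
the first transmissibility calculation. All names in the sketch below
are illustrative and do not reflect the actual interface:

```cpp
// Illustrative pseudo-interface only; Simulator and its members are
// placeholders for the real setup code.
void setupTransmissibilities(Simulator& sim, bool transmissibilityWeights)
{
    if (transmissibilityWeights) {
        // Default case: compute on the global grid first, with warnings on.
        sim.updateTransmissibilities(/*issueWarnings=*/true);
        sim.loadBalance();
        // Recompute on the local grid with warnings off, so EDITNNC applied
        // to overlap/ghost cells does not produce spurious/duplicate messages.
        sim.updateTransmissibilities(/*issueWarnings=*/false);
    } else {
        // --edge-weights-method=0: only the post-loadbalance calculation
        // exists, so rank 0 still reports warnings, including false positives.
        sim.loadBalance();
        sim.updateTransmissibilities(/*issueWarnings=*/true);
    }
}
```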
This is a follow-up to the fix in #5414.
The comment said that the ordering of the compressed cell indices is
coherent with the cartesian index. This is not the case in parallel,
where cells in the overlap/ghost region might be ordered last (default).
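A small sketch of the safer pattern: build an explicit
cartesian-to-compressed lookup instead of assuming that the compressed
ordering follows the cartesian one (the globalCell naming below is an
assumption for illustration):

```cpp
#include <cstddef>
#include <unordered_map>
#include <vector>

// globalCell[compressed] == cartesian index of that cell.  In parallel the
// compressed ordering is not coherent with the cartesian one (overlap/ghost
// cells may be ordered last), so an explicit map is needed.
std::unordered_map<int, int>
cartesianToCompressed(const std::vector<int>& globalCell)
{
    std::unordered_map<int, int> mapping;
    mapping.reserve(globalCell.size());
    for (std::size_t compressed = 0; compressed < globalCell.size(); ++compressed) {
        mapping.emplace(globalCell[compressed], static_cast<int>(compressed));
    }
    return mapping;
}
```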