If we use transmissibilities for load balancing, then we calculate
transmissibilities twice: first on the global grid before load
balancing, and then on the local grid after that. This is the
default. In this case all warnings will be shown correctly when
calculating the global transmissibilities.
If the user requests the same weights for all faces (command line
parameter --edge-weights-method=0), then the transmissibilities are only
calculated on the load-balanced grid. Unfortunately, in this case only
rank 0 will issue warnings for its part, including the false positives
mentioned below.
Due to load balancing many NNCs might be stored on another process,
but we still use all EDITNNC entries when computing transmissibilities
locally. Hence, when applying EDITNNC on the load-balanced grid we
will issue warnings for cases where there are no problems (e.g. an NNC
between two overlap cells).
With this PR we will only warn when computing the transmissibilities
for the first time. For the default settings this will remove spurious
and duplicate warnings.
Note that for --edge-weights-method=0 nothing changes: we will still
see warnings only from the first rank, including spurious ones.
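
A minimal sketch of the warn-once behaviour, using hypothetical names
that do not match the actual Transmissibility implementation:

    // Hypothetical sketch: report warnings only for the first
    // transmissibility calculation.
    class Transmissibility
    {
    public:
        void update()
        {
            // Only the first calculation (on the global grid for the
            // default settings) reports warnings; later recalculations
            // on the load-balanced grid stay silent to avoid the false
            // positives described above.
            this->compute_(/* issueWarnings = */ !this->computedOnce_);
            this->computedOnce_ = true;
        }

    private:
        void compute_(bool issueWarnings);
        bool computedOnce_ = false;
    };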
This is a follow-up to the fix in #5414.
The comment said that the ordering of the compressed index of cells is
coherent with the Cartesian index. This is not the case in parallel,
where cells in the overlap/ghost region might be ordered last (default).
This PR switches to calling the SummaryState constructor which is
aware of the value of undefined UDQs (OPM/opm-common#4052) directly.
While here, also sort headers, split some long lines, and prefer
initialisation lists to constructor body assignments.
Commit 0aaa69c6e (PR #5330) was a little too eager in its effort to
handle UDQ ASSIGN operations after action processing[%]. In
particular, the assignments, which alter the internal structures of
the SummaryState and UDQState objects, would happen prior to writing
summary files. In turn, this would make it appear as if the
assignment happened too early. This commit defers UDQ assignments
triggered by action processing until FlowProblem<>::endEpisode() for
two reasons:
1. The problem originally addressed in 0aaa69c6e only presented
when the assignment was triggered on the final time step of an
episode (report step), so handling this situation here is a
more targeted approach.
2. Member function FlowProblem<>::endEpisode() is called after we
write the summary file output so any alterations to the
internal structures of the SummaryState will not be visible in
the summary output until the next time step. This is the
expected behaviour.
[%] Insufficient testing by: [at]bska.
If an action block happens to run a UDQ ASSIGN operation and,
furthermore, happens to run at the last time step of an episode then
the "clear pending assignments" behaviour of the ScheduleState copy
constructor leads to not performing the UDQ assignment at all. This
commit works around this problem by invoking the action handler's
UDQ assignment function after processing all active action blocks.
The underlying problem has been present since at least Pull Request
OPM/opm-common#3587 which introduced the "clear pending assignments"
behaviour of ScheduleState's copy constructor.
In particular, be consistent about four-space indent levels and add
braces to a number of single-statement control blocks. Add a few
blank lines for readability.
While here, also mark a number of objects as 'const'.
from AluGridVanguard and CpGridVanguard. These functions only exist
because of a typo in the function name; it is another function,
releaseGlobalTransmissibilities(), that actually gets used. Removing
the unused function, which has no effect, was suggested by Håkon
Hægland.
Also remove the following member variables from
FlowGenericProblem because they are no longer in use:
krnumx_, krnumy_, krnumz_;
imbnumx_, imbnumy_, imbnumz_;
This commit includes the fraction of pore-volume whose CNV targets
are violated as a new per-iteration quantity in the INFOITER file
(--output-extra-convergence-info=iteration), with the column header
"CnvErrPvFrac". We collect the values which are already calculated
in
BlackoilModel<>::getReservoirConvergence()
and store these as a pair of numerator and denominator in the
ConvergenceReport class. Note that we need both the numerator and
the denominator in order to aggregate contributions from multiple
ranks.
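
To illustrate why both parts are needed, here is a hedged sketch of
the cross-rank aggregation; the names are illustrative and do not
reflect the actual ConvergenceReport interface:

    #include <mpi.h>

    struct CnvPvSplit
    {
        double violatedPv = 0.0; // pore-volume with violated CNV targets
        double totalPv    = 0.0; // total pore-volume on this rank
    };

    CnvPvSplit globalCnvPvSplit(const CnvPvSplit& local, MPI_Comm comm)
    {
        // Fractions cannot be summed across ranks, so we reduce the
        // numerator and the denominator separately.
        double in[2]  = { local.violatedPv, local.totalPv };
        double out[2] = { 0.0, 0.0 };
        MPI_Allreduce(in, out, 2, MPI_DOUBLE, MPI_SUM, comm);
        return { out[0], out[1] };
    }

The "CnvErrPvFrac" column then reports the ratio of the two reduced
values.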
While here, also make a few more objects 'const' and calculate
column widths directly instead of the maximum number of characters
in writeConvergenceHeader().
The 'interiorBorder' category is *probably* equivalent to the
'interior' category for codimension zero elements, but it's better
to be safe than sorry. We don't want to accumulate pore-volume
contributions twice.
This commit implements the parallel version of
EclipseState::computeFipRegionStatistics()
which computes a FIPRegionStatistics object for the current run's
fluid-in-place regions. The object construction uses an MPI-aware
reduction process to compute the maximum region IDs across all MPI
ranks.
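
A hedged sketch of the reduction step; the function name is
hypothetical, but the technique matches the description above:

    #include <mpi.h>

    #include <algorithm>
    #include <vector>

    int globalMaxRegionID(const std::vector<int>& localRegionIDs,
                          MPI_Comm comm)
    {
        int localMax = 0;
        if (! localRegionIDs.empty()) {
            localMax = *std::max_element(localRegionIDs.begin(),
                                         localRegionIDs.end());
        }

        // Collective call: every rank must participate.
        int globalMax = 0;
        MPI_Allreduce(&localMax, &globalMax, 1, MPI_INT, MPI_MAX, comm);
        return globalMax;
    }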
While here, also unconditionally form the statistics object as part
of the EclWriter's constructor to ensure that all ranks participate
in the process. The initial approach of constructing the object on
first use is not robust in parallel. We may however wish to compute
these statistics only when needed. If so, that will be the subject
of follow-up work.
In the current approach, the full list of summary vectors,
especially those that are defined at the region level, is not known
until we've processed all UDQ and/or ACTIONX definitions. As a
quick solution to this, switch to using the 'SummaryConfig' object
that's defined in the EclipseIO container instead of the object that
gets constructed when reading the input files.
It is likely that we will have to rethink and refactor this
construction process later.
and add a dedicated header. This way we don't need to pull in
FlowMain.hpp for the prototypes, which avoids pulling in the entire
simulator machinery just to build some simple utility functions.
The initial use case is calculating the phase-filled pore-volume
weighted average of the fluid mass densities per PVT region. This
value goes into calculating depth-corrected per-cell phase pressure
values such as the BPPO and BPPG summary vectors.
This class manages a single linear array which separately tracks the
averages' numerators and denominators as running sums per region and
region set. We pick this data structure to simplify the cross-rank
reduction needed in MPI parallel runs. Client code is expected to
add individual per-cell and per-phase contributions using the
addCell() member function and then call the accumulateParallel()
member to effect the cross-rank reduction. The averages will then
be available through the fieldValue() and value() member functions.
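
A simplified sketch of this layout, assuming a single region set and
phase; addCell(), accumulateParallel(), and value() follow the
description above, while everything else is illustrative:

    #include <mpi.h>

    #include <cstddef>
    #include <vector>

    class RegionAverage
    {
    public:
        explicit RegionAverage(const std::size_t numRegions)
            : sums_(2 * numRegions, 0.0)
        {}

        void addCell(const std::size_t region,
                     const double weight, const double value)
        {
            this->sums_[2*region + 0] += weight * value; // numerator
            this->sums_[2*region + 1] += weight;         // denominator
        }

        void accumulateParallel(MPI_Comm comm)
        {
            // A single reduction over one contiguous array covers all
            // numerators and denominators at once.
            MPI_Allreduce(MPI_IN_PLACE, this->sums_.data(),
                          static_cast<int>(this->sums_.size()),
                          MPI_DOUBLE, MPI_SUM, comm);
        }

        double value(const std::size_t region) const
        {
            // No zero-denominator fallback in this sketch; the real
            // class handles that case as described below.
            return this->sums_[2*region + 0] / this->sums_[2*region + 1];
        }

    private:
        std::vector<double> sums_; // interleaved {numerator, denominator}
    };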
As a further view towards the initial use case, we track two
different types of average per phase: one for the phase-filled
volume and one for the pore-volume filled volume. The latter is the
average we would get for the case of the phase saturation being one
throughout the region. This alternative value is the fallback
option for the case of the phase saturation being identically zero
throughout the region.
We accomplish that by passing the module version as a string to the
constructors of LogOutputHelper and EclGenericOutputBlackoilModel
instead of calling moduleVersionName() in LogOutputHelper. That way
moduleVersionName is not needed by libopmsimulators anymore and
compilation works again for people requesting shared libraries via
CMake's BUILD_SHARED_LIBS variable.
This commit ensures that we have backing support for region IDs up
to and including NTFIP (TABDIMS(5), REGDIMS(1)). The existing setup
would fail (segmentation violation) in the case of a summary vector
of the form
ROFT
36 31 /
/
when the maximum FIPNUM value was 30. We nevertheless support
maximum FIPNUM values exceeding NTFIP.
We add a new optional parameter to the EclInterRegionFlowMap
constructor. The parameter allows client code to specify a
"minimum maximum" region ID that all ranks must support. This value
will be enforced during parallel aggregation.
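
The sizing rule amounts to the following, sketched with hypothetical
names:

    #include <algorithm>

    // Backing storage must cover both the declared table size (NTFIP)
    // and the largest region ID actually present on any rank.
    int requiredRegionCount(const int maxRegionIdOnAnyRank, const int ntfip)
    {
        return std::max(maxRegionIdOnAnyRank, ntfip);
    }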
This makes them available for use in other places. The function
std::string to_string(const ConvergenceReport::WellFailure& wf) is new,
but uses the format already established.
There is no need to restrict PRT file flow reports to those wells
which are active on the current rank since the SummaryState object
holds information for all wells active at the current report step.
Up to now this information is only output to standard out.
To help with debugging and replicating problems (e.g. in case of
crashes) without saved standard output, we now also print the
information about MPI processes and OpenMP threads to the PRT file.
This commit splits the production, injection, and cumulative
*Report_() functions into three logically distinct parts, one for
outputting the report header (begin*Report_()), one for outputting a
single report record (output*ReportRecord_()), and one for ending
the report (end*Report_()). This simplifies the logic of the
*Record_() member functions since they no longer need to infer the
context, which is already available in the caller, and can perform a
narrower task than before.
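
The resulting call structure looks roughly as follows; the function
names follow the pattern described above, while the record source is
illustrative:

    #include <string>
    #include <vector>

    void beginProductionReport_();
    void outputProductionReportRecord_(const std::string& well);
    void endProductionReport_();

    void outputProductionReport_(const std::vector<std::string>& wells)
    {
        beginProductionReport_();

        // Each record is emitted without needing to re-infer the
        // report context.
        for (const auto& well : wells) {
            outputProductionReportRecord_(well);
        }

        endProductionReport_();
    }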
With this separation we're also able to remove the dashed lines
which would previously separate each report record, thereby creating
PRT file report sheets which have a more expected layout.
Moreover, as an aid to future maintenance, we also factor out common
code for the well and group cases in each *Record_() function.
Finally, fix a unit conversion problem in the report values for
cumulative gas quantities. The sheet header states that we should
be outputting values in 'MM' prefixed units, but we were only
scaling the gas values by a factor of 1000 instead of 1000*1000. In
other words, the injection and production gas values in the
cumulative sheet were off by a factor of 1000.
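
For reference, the corrected scaling amounts to (names illustrative):

    // 'MM' = 1e6. The previous code divided by 1000 only.
    double toMMPrefixedUnits(const double surfaceVolume)
    {
        return surfaceVolume / (1000.0 * 1000.0);
    }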
The StreamLog::addMessageUnconditionally() member function will end
each message with a newline (std::endl) so we should not add such
newlines ourselves. The extra newline characters produce spurious
blank lines in the report sheets, e.g., for the "PRODUCTION REPORT".
This commit removes the last newline character from each report
request, thus deferring that responsibility to OpmLog::note()
instead. Doing so, however, means we have to take a little more care
with the first line of each report lest we create report sheets
which are smushed together.
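
For illustration, a report message should therefore be formed along
these lines (the report text is made up):

    #include <opm/common/OpmLog/OpmLog.hpp>

    #include <sstream>

    void emitReport()
    {
        std::ostringstream report;
        report << "PRODUCTION REPORT\n"  // internal line breaks are fine
               << ":  WELL  :  ...";     // but no trailing newline here

        // StreamLog::addMessageUnconditionally() appends std::endl.
        Opm::OpmLog::note(report.str());
    }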
This commit adds support for loading a three-column NLDD
partitioning scheme of the form
MPI_Rank Cartesian_Index NLDD_Domain_ID
from a text file. In an MPI run it is assumed that the first column
holds integers in the range 0..MPI_Size()-1, and typically that each
such integer is listed at least once. In a sequential run, the MPI
rank column is ignored.
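
A minimal parsing sketch for this format; the names are illustrative
and the actual loader performs more validation:

    #include <fstream>
    #include <string>
    #include <vector>

    struct PartitionLine { int mpiRank; int cartIdx; int domainId; };

    std::vector<PartitionLine> loadPartition(const std::string& fname)
    {
        std::vector<PartitionLine> lines;
        std::ifstream in { fname };

        PartitionLine p{};
        while (in >> p.mpiRank >> p.cartIdx >> p.domainId) {
            lines.push_back(p);
        }

        return lines;
    }

In an MPI run we would then keep only those rows whose first column
matches the current rank.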
With this scheme we can load the same partition files that we write,
for increased repeatability and determinism, and we can also
experiment with externally generated NLDD partitions.
This commit adds a new (hidden) debugging option,
DebugEmitCellPartition (--debug-emit-cell-partition)
which, when set, will cause each rank to write a three-column text
file of the form
MPI_Rank Cartesian_Index NLDD_Domain_ID
into the directory
partition/CaseName
of the run's output directory. That file will be named according to
the process' MPI rank, so the first column will be the same as the
file name.
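
Sketched with hypothetical names, each rank writes its file roughly
as follows:

    #include <fstream>
    #include <string>
    #include <vector>

    struct CellPartition { int cartesianIndex; int domainId; };

    void emitCellPartition(const std::string& outputDir,
                           const std::string& caseName,
                           const int rank,
                           const std::vector<CellPartition>& cells)
    {
        // One file per rank, named by the rank itself, so the file
        // name matches the first column of every line.
        std::ofstream os { outputDir + "/partition/" + caseName
                           + '/' + std::to_string(rank) };

        for (const auto& cell : cells) {
            os << rank << ' ' << cell.cartesianIndex << ' '
               << cell.domainId << '\n';
        }
    }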
The option is primarily intended for debugging the NLDD partitioning
scheme, so is mostly reserved for runs with low MPI sizes (e.g.,
less than 20).
While here, also make the MPIPartitionFromFile helper class aware of
this format so that we can use concatenated output files as an input
to the MPI partitioning algorithm for repeatability.
This commit introduces new, experimental support for loading a
partitioning of the cells from a text file. The name of the file is
passed into the simulator using the new, hidden, command line option
--external-partition=filename
and we perform some basic checking that the number of elements in the
partition matches the number of cells in the CpGrid object.
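
That basic check amounts to something like the following hedged
sketch (names are hypothetical):

    #include <cstddef>
    #include <stdexcept>
    #include <vector>

    template <class Grid>
    void checkExternalPartition(const std::vector<int>& partition,
                                const Grid& grid)
    {
        // CpGrid::size(0) is the number of codimension-zero entities,
        // i.e., the number of cells.
        if (partition.size() != static_cast<std::size_t>(grid.size(0))) {
            throw std::invalid_argument {
                "Number of entries in external partition file does "
                "not match number of cells in the grid"
            };
        }
    }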
This commit switches the current implementation of
'partitionCellsZoltan()', i.e., 'partitionCells("zoltan", ...)' into
using the MPI-aware ParallelNLDDPartitioningZoltan utility. In
doing so we make 'partitionCellsZoltan()' private since its
availability is not guaranteed. We also slightly reorder the
parameters and switch from passing a "Grid" to passing a
"GridView" as an argument to partitionCells(), and specialise this
function for the known grid views in OPM Flow.
We extract the Zoltan-related parameters out to an Entity-dependent
helper structure and move the complexity of forming this type to a
new helper function, BlackoilModelEbosNldd::partitionCells().