We accomplish that by passing the module version as a string to the
constructors of LogOutputHelper and EclGenericOutputBlackoilModel
instead of calling moduleVersionName() in LogOutputHelper. That way
moduleVersionName() is no longer needed by libopmsimulators, and
compilation works again for people requesting shared libraries via
CMake's BUILD_SHARED_LIBS variable.
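A minimal sketch of the shape of the change (member and parameter
names are assumptions, not the actual signatures):

    #include <string>
    #include <utility>

    class LogOutputHelper {
    public:
        // The caller resolves the version string once, in a place
        // where linking against moduleVersionName() is unproblematic,
        // and simply hands it down.
        explicit LogOutputHelper(std::string moduleVersion)
            : moduleVersion_(std::move(moduleVersion))
        {}

    private:
        std::string moduleVersion_; // used when formatting headers
    };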
If the function 'coarsen' for the coarser level accepts the
respective flag, it is called with it, and the matrices in the AMG
hierarchy are constructed with deterministic indices. If the flag is
not supported yet, 'coarsen' is called without it, and the matrices
in the AMG hierarchy are constructed with non-deterministic indices.
Add a DILU preconditioner. Uses graph coloring to exploit
parallelism in the upper and lower triangular solves when applying a
diagonal approximate inverse of a sparse matrix. Supports block
sizes up to 3.
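For reference, the standard DILU construction keeps the off-diagonal
blocks of A and computes only a diagonal D satisfying
D_ii = A_ii - sum_{j<i} A_ij * D_jj^{-1} * A_ji, so that
M = (D + L) D^{-1} (D + U) has the same diagonal as A; applying
M^{-1} is exactly the pair of triangular solves that the graph
coloring parallelizes.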
If the function 'setUseFixedOrder' is defined, it is called, and the
matrices in the AMG hierarchy are constructed with deterministic
indices. If it is not defined yet, it is not called, and the
matrices in the AMG hierarchy are constructed with non-deterministic
indices.
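A sketch of the compile-time detection, assuming C++20
requires-expressions are available; the actual code may use a
different detection mechanism, and the object carrying
setUseFixedOrder is assumed here to be the coarsening criterion:

    template <class Criterion>
    void enableDeterministicIndices(Criterion& criterion)
    {
        // Only call setUseFixedOrder() when the installed Dune
        // version actually provides it; otherwise keep the old,
        // non-deterministic behaviour.
        if constexpr (requires { criterion.setUseFixedOrder(true); }) {
            criterion.setUseFixedOrder(true);
        }
    }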
This commit captures the rock compaction transmissibility multiplier
in the 'PerfData' structure during the non-linear iterations and
communicates the converged value back to the output layer through a
new data member in the data::Connection structure,

    double data::Connection::compact_mult
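The shape of the new output field, sketched (the default value and
the surrounding members are assumptions):

    namespace data {
        struct Connection {
            // Converged rock compaction transmissibility multiplier,
            // captured from PerfData for the output layer.
            double compact_mult{1.0};
            // ... existing members elided ...
        };
    } // namespace data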
In particular, we don't need the porosity value anymore now that the
"getDFactor()" function takes only the component density of gas at
surface conditions, the current dynamic gas viscosity at reservoir
conditions and a Connection object.
While here, split a few long lines and make more objects 'const' for
better maintainability.
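The reduced interface, sketched with assumed parameter names and
types:

    class Connection; // connection object from the output layer

    // D-factor from the gas component's surface density, the dynamic
    // gas viscosity at reservoir conditions, and the connection
    // itself; the porosity parameter is gone.
    double getDFactor(const double gasSurfaceDensity,
                      const double gasReservoirViscosity,
                      const Connection& connection);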
This commit ensures that we have backing support for region IDs up
to and including NTFIP (TABDIMS(5), REGDIMS(1)). The existing setup
would fail (segmentation violation) in the case of a summary vector
of the form
ROFT
36 31 /
/
when the maximum FIPNUM value was 30. We nevertheless support
maximum FIPNUM values exceeding NTFIP.
We add a new optional parameter to the EclInterRegionFlowMap
constructor. The parameter allows client code to specify a
"minimum maximum" region ID that all ranks must support. This value
will be enforced during parallel aggregation.
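A sketch of the idea, stripped down to the new parameter (its name
is an assumption):

    #include <cstddef>

    class EclInterRegionFlowMap {
    public:
        // minimumMaxRegionID: smallest maximum region ID that every
        // rank must support (e.g., NTFIP), enforced during parallel
        // aggregation so per-rank buffers line up even when a rank's
        // local cells reference fewer regions.
        explicit EclInterRegionFlowMap(std::size_t minimumMaxRegionID = 0)
            : minimumMaxRegionID_(minimumMaxRegionID)
        {}

    private:
        std::size_t minimumMaxRegionID_;
    };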
This makes them available for use in other places. The function
std::string to_string(const ConvergenceReport::WellFailure& wf) is new,
but uses the format already established.
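A sketch of a call site using the new helper (the include paths and
the wellFailures() accessor are assumed):

    #include <opm/common/OpmLog/OpmLog.hpp>
    #include <opm/simulators/timestepping/ConvergenceReport.hpp>

    void logWellFailures(const ConvergenceReport& report)
    {
        // Previously the message was formatted inline at each site.
        for (const auto& wf : report.wellFailures()) {
            OpmLog::debug(to_string(wf));
        }
    }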
There is no need to restrict PRT file flow reports to those wells
which are active on the current rank since the SummaryState object
holds information for all wells active at the current report step.
Up to now this information was only output to standard output.
To help with debugging and reproducing issues (e.g. in case of
crashes) when standard output was not saved, we now also print the
information about MPI processes and OMP threads to the PRT file.
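A minimal sketch of how the same text can reach the PRT file via the
logging system (exact wording and the helper name are assumptions):

    #include <opm/common/OpmLog/OpmLog.hpp>
    #include <fmt/format.h>
    #include <omp.h>

    void logParallelEnvironment(const int mpiSize)
    {
        // OpmLog is wired to both standard output and the PRT file,
        // so one call covers both destinations.
        OpmLog::note(fmt::format("Number of MPI processes: {}",
                                 mpiSize));
        OpmLog::note(fmt::format("Threads per MPI process: {}",
                                 omp_get_max_threads()));
    }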
That way, in-place quantities will go through the short-to-canonical
name translation in FieldProps regardless of their origin, e.g.,
RPTSOL/RPTSCHED or explicit SUMMARY section vectors.
This commit splits the production, injection, and cumulative
*Report_() functions into three logically distinct parts: one for
outputting the report header (begin*Report_()), one for outputting a
single report record (output*ReportRecord_()), and one for ending
the report (end*Report_()). This simplifies the logic of the
*Record_() member functions since they no longer need to infer
context that is already available in the caller, and they can
perform a narrower task than before.
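The resulting call pattern, sketched for the production case (the
three helper names follow the commit's naming; everything else is
illustrative):

    #include <string>
    #include <vector>

    void beginProductionReport_();                          // header
    void outputProductionReportRecord_(const std::string&); // one row
    void endProductionReport_();                            // footer

    void outputProductionReport_(const std::vector<std::string>& names)
    {
        beginProductionReport_();
        for (const auto& name : names) { // wells and groups alike
            outputProductionReportRecord_(name);
        }
        endProductionReport_();
    }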
With this separation we're also able to remove the dashed lines
which would previously separate each report record, thereby creating
PRT file report sheets with a more conventional layout.
Moreover, as an aid to future maintenance, we also factor out common
code for the well and group cases in each *Record_() function.
Finally, fix a unit conversion problem in the report values for
cumulative gas quantities. The sheet header states that we should
be outputting values in 'MM' prefixed units, but we were only
scaling the gas values by a factor of 1000 instead of 1000*1000. In
other words, the injection and production gas values in the
cumulative sheet were off by a factor of 1000.
The StreamLog::addMessageUnconditionally() member function will end
each message with a newline (std::endl) so we should not add such
newlines ourselves. The extra newline characters produce spurious
blank lines in the report sheets, e.g., for the "PRODUCTION REPORT".
This commit removes the last newline character from each report
request, thus deferring that responsibility to OpmLog::note()
instead. Doing so, however, means we have to take a little more
care with the first line of each report lest we create report sheets
which are smushed together.
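A sketch of the adjustment at each report-request site:

    #include <opm/common/OpmLog/OpmLog.hpp>
    #include <string>

    void emitReport(std::string report)
    {
        // StreamLog::addMessageUnconditionally() terminates every
        // message with its own newline, so strip the one we built.
        if (!report.empty() && report.back() == '\n') {
            report.pop_back();
        }

        OpmLog::note(report);
    }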
For this particular model, WetGasPVT::saturationPressure threw
because convergence in the Newton solver was not reached within 20
iterations. Unfortunately, the exception was only seen on one MPI
rank while the others continued.

With this commit we communicate the problem and throw on all MPI
processes. The time step will be cut as a result.
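A minimal sketch of that pattern with plain MPI (the helper name is
illustrative; the actual code may go through the project's
communicator wrappers):

    #include <mpi.h>
    #include <stdexcept>

    // Turn a rank-local failure into a global one: every rank learns
    // whether any rank failed, then all ranks throw together so the
    // time step is cut consistently.
    void rethrowCollectively(const bool localFailure, MPI_Comm comm)
    {
        int local = localFailure ? 1 : 0;
        int global = 0;
        MPI_Allreduce(&local, &global, 1, MPI_INT, MPI_MAX, comm);

        if (global > 0) {
            throw std::runtime_error(
                "Failure on (at least) one MPI rank");
        }
    }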
Previously, if the problem occurred on an MPI process with a rank
other than zero, the logging would not be seen (at least not in the
output files). Now, together with the previous commit, the problem
should be logged along with the well name and the calling method.
For multi-segment wells the underlying call to
MultisegmentWell::updateWellStateWithTarget (at least if
updateWellStateWithTHPTargetProd is called for a producer under THP
control) might throw, as there might be a singular matrix during the
solve needed in MultisegmentWell::iterateWellEqWithControl.

Previously, if that happened, the MPI process where it happened
would stop the nonlinear iteration as failed and try again with a
chopped time step. The others might go on with the current time step
and we would see MPI errors about truncated messages.
Now we communicate any exception happening during this part of
WellModel::updateAndCommunicate and all processes will stop the
nonlinear iteration as failed and chop the time step.
We sometimes experience singular matrices when solving multisegment
wells. In that case (here during
BlackoilModelEbos::assembleReservoir <-
BlackoilModelEbos::initialLinearization <-
BlackoilModelEbos::nonlinearIterationNewton) an exception is thrown
when updating the controls of a well.

The problem here is that this exception only happens on one
process. That one goes to the catch block in
NonLinearSolverEbos::step, marks the nonlinear solve as failed and
cuts the time step. The others move on to the collective
communication below. Sooner or later all ranks end up in a
non-matching collective communication with different data types and
we get an MPI error about a truncated message.

Now all processes will throw, terminate the nonlinear solver and cut
the time step, as they should.
At the early stage of computeWellRatesWithBhpIterations the
perforation rates are not updated, and it is not sensible to update
the explicit quantities based on the inconsistent well rates and
perforation rates; it is better to keep the original explicit
quantities for consistency. Furthermore, it can be dangerous to
update the explicit quantities based on the irrelevant perforation
rates, since the resulting ratios can be very undesirable due to
crossflow.
Using bool here is at least frowned upon. To be honest, I have no
idea what happens underneath if we pass a bool. In contrast to other
POD types we do not associate it with a builtin MPI type (I am not
even sure which one would apply). Hence we probably create a custom
type for sending and receiving. That should work, but I have no idea
what will be used for summation.
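One safer pattern is to convert to int at the boundary, where both
the MPI datatype and the reduction semantics are unambiguous; a
minimal sketch:

    #include <mpi.h>

    // Count the ranks on which the condition holds: MPI_INT with
    // MPI_SUM is well defined, whereas summing over a custom type
    // built around bool is not.
    int countRanksWhere(const bool condition, MPI_Comm comm)
    {
        int local = condition ? 1 : 0;
        int total = 0;
        MPI_Allreduce(&local, &total, 1, MPI_INT, MPI_SUM, comm);
        return total;
    }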
BTW: I am debugging a case that previously crashed and now suddenly
works, and this seems to be the only relevant change I made in the
meantime.
Implement graph coloring to expose rows in level sets that can be
executed in parallel during the sparse triangular solves.

Add a copy of the A matrix that is reordered to ensure contiguous
memory reads when traversing the matrix in level-set order.

TODO: add the number of available threads as a constructor argument
to DILU.
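A sketch of the apply-side loop structure this enables, assuming the
coloring has produced level sets and the reordered matrix stores
each level's rows contiguously (all names are illustrative):

    #include <cstddef>
    #include <vector>

    // CSR storage of the strictly lower triangle of the reordered
    // matrix; rows inside one level set are independent, so each
    // level is an OpenMP parallel loop, while levels run in order.
    struct LowerCSR {
        std::vector<std::size_t> rowPtr, col;
        std::vector<double>      val;
    };

    void lowerSolveByLevels(const LowerCSR& L,
                            const std::vector<double>& Dinv,
                            const std::vector<std::size_t>& levelStart,
                            std::vector<double>& x) // in: rhs, out: y
    {
        for (std::size_t lvl = 0; lvl + 1 < levelStart.size(); ++lvl) {
            const std::size_t begin = levelStart[lvl];
            const std::size_t end   = levelStart[lvl + 1];
    #pragma omp parallel for
            for (std::size_t row = begin; row < end; ++row) {
                double sum = x[row];
                for (std::size_t k = L.rowPtr[row];
                     k < L.rowPtr[row + 1]; ++k) {
                    // referenced columns lie in earlier, finished levels
                    sum -= L.val[k] * x[L.col[k]];
                }
                x[row] = Dinv[row] * sum;
            }
        }
    }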
This commit adds support for loading a three-column NLDD
partitioning scheme of the form
MPI_Rank Cartesian_Index NLDD_Domain_ID
from a text file. In an MPI run it is assumed that the first column
holds integers in the range 0..MPI_Size()-1, and typically that each
such integer is listed at least once. In a sequential run, the MPI
rank column is ignored.
With this scheme we can load the same partition files that we write,
for increased repeatability and determinism, and we can also
experiment with externally generated NLDD partitions.
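A minimal parsing sketch for that file format (names are
illustrative; error handling and the per-rank filtering are elided):

    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    struct PartitionRow {
        int rank, cartesianIndex, domainId;
    };

    // Read "MPI_Rank Cartesian_Index NLDD_Domain_ID" triplets, one
    // per line; in an MPI run the caller keeps only the rows whose
    // rank column matches the current rank.
    std::vector<PartitionRow> loadPartition(const std::string& fname)
    {
        std::vector<PartitionRow> rows;
        std::ifstream in(fname);
        std::string line;
        while (std::getline(in, line)) {
            std::istringstream ss(line);
            PartitionRow r{};
            if (ss >> r.rank >> r.cartesianIndex >> r.domainId) {
                rows.push_back(r);
            }
        }
        return rows;
    }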