Previously, if the problem occurred on an MPI process with a rank other
than zero, the logging would not be seen (at least not in the output
files). Now, together with the previous commit, the problem is
logged together with the well name and the calling method.
For multisegment wells, the underlying call to
MultisegmentWell::updateWellStateWithTarget (at least when
updateWellStateWithTHPTargetProd is called for a producer under THP
control) might throw, as the solve needed in
MultisegmentWell::iterateWellEqWithControl can encounter a singular matrix.
Previously, if that happened, the MPI process where it occurred would
mark the nonlinear iteration as failed and retry with a chopped time
step, while the others might go on with the current time step, and we
would see MPI errors about truncated messages.
Now we communicate any exception that happens during this part of
WellModel::updateAndCommunicate, and all processes mark the
nonlinear iteration as failed and chop the time step.
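A minimal sketch of the general pattern, assuming plain MPI calls (the actual implementation inside WellModel::updateAndCommunicate may differ): each rank records whether it threw, the flags are combined with an all-reduce, and every rank then fails the nonlinear iteration together.

```cpp
// Illustrative pattern only, not the OPM helper itself.
#include <mpi.h>
#include <stdexcept>

template <class Body>
void runAndSynchronizeExceptions(MPI_Comm comm, Body&& body)
{
    int failed = 0;
    try {
        body();                      // e.g. the well state update
    } catch (const std::exception&) {
        failed = 1;                  // remember the local failure
    }

    int anyFailed = 0;
    MPI_Allreduce(&failed, &anyFailed, 1, MPI_INT, MPI_MAX, comm);

    if (anyFailed) {
        // Every rank throws, so all of them leave the nonlinear
        // iteration and chop the time step consistently.
        throw std::runtime_error("Well update failed on at least one process");
    }
}
```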
We sometimes experience singular matrices when solving multisegment
wells. In that case (here during
BlackoilModelEbos::assembleReservoir <-
BlackoilModelEbos::initialLinearization <-
BlackoilModelEbos::nonlinearIterationNewton) an exception is thrown
when updating the controls of a well.
The problem is that this exception happens on only one
process. That process goes to the catch block in
NonLinearSolverEbos::step, marks the nonlinear solve as failed and
cuts the time step, while the others move on to the collective
communication below. Eventually they all end up in a mismatched
collective communication with different data types, and we get an MPI
error that a message was truncated.
Now all processes throw, terminate the nonlinear solver, and cut
the time step, as they should.
at the early stage of computeWellRatesWithBhpIterations. The perforation rates are not updated
at this point, and it is not sensible to update based on the inconsistent well rates and
perforation rates; keeping the original explicit quantities gives better consistency.
Furthermore, it can be dangerous to update the explicit quantities based on the
irrelevant perforation rates, since the ratios can become very undesirable due to crossflow.
Using bool here is at least frowned upon. To be honest, I have no idea
what happens underneath if we pass a bool. In contrast to other
POD types, we do not associate it with a builtin MPI type (I am not
even sure which one to use). Hence we probably create a custom type for
sending and receiving. That should work, but I have no idea what will
be used for the summation.
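As a hedged illustration of the alternative that sidesteps the question: reduce an int (0/1) instead of a bool, for which MPI has a builtin datatype and well-defined summation semantics.

```cpp
#include <mpi.h>

// Count on how many ranks a condition holds. Using int avoids relying on
// how a custom MPI datatype for bool would behave under MPI_SUM.
int countRanksWhere(bool localCondition, MPI_Comm comm)
{
    int local = localCondition ? 1 : 0;
    int global = 0;
    MPI_Allreduce(&local, &global, 1, MPI_INT, MPI_SUM, comm);
    return global;
}
```

Counting with an int also keeps the semantics obvious when more than one rank reports the condition.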
BTW: I am debugging a case that previously crashed and now suddenly
works, and this seems to be the only relevant change I made in the
meantime.
Implement graph coloring to expose rows in level sets that can be
executed in parallel during the sparse triangular solves.
Add a copy of the A matrix that is reordered to ensure contiguous memory reads
when traversing the matrix in level-set order.
TODO: add the number of available threads as a constructor argument in DILU
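A small sketch of the level-set idea behind this, assuming a CRS representation of the strictly lower triangular dependencies (names are illustrative, not the DILU code itself): a row's level is one more than the maximum level of the rows it depends on, and all rows in the same level can be solved in parallel.

```cpp
#include <algorithm>
#include <vector>

// rowPtr/colIdx describe the strictly lower triangular dependencies in CRS
// format. Rows sharing a level have no mutual dependencies and can be
// processed concurrently in the triangular solve.
std::vector<int> computeLevelSets(const std::vector<int>& rowPtr,
                                  const std::vector<int>& colIdx)
{
    const int n = static_cast<int>(rowPtr.size()) - 1;
    std::vector<int> level(n, 0);
    for (int row = 0; row < n; ++row) {
        int lvl = 0;
        for (int k = rowPtr[row]; k < rowPtr[row + 1]; ++k) {
            lvl = std::max(lvl, level[colIdx[k]] + 1);
        }
        level[row] = lvl;
    }
    return level;
}
```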
This commit adds support for loading a three-column NLDD
partitioning scheme of the form
MPI_Rank Cartesian_Index NLDD_Domain_ID
from a text file. In an MPI run it is assumed that the first column
holds integers in the range 0..MPI_Size()-1, and typically that each
such integer is listed at least once. In a sequential run, the MPI
rank column is ignored.
With this scheme we can load the same partition files that we write,
for increased repeatability and determinism, and we can also
experiment with externally generated NLDD partitions.
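A minimal sketch of reading that three-column format (helper name illustrative); in an MPI run only the rows whose first column equals the local rank would typically be kept.

```cpp
#include <fstream>
#include <string>
#include <tuple>
#include <vector>

// Each line: MPI_Rank  Cartesian_Index  NLDD_Domain_ID
std::vector<std::tuple<int, int, int>>
loadThreeColumnPartition(const std::string& fileName)
{
    std::vector<std::tuple<int, int, int>> rows;
    std::ifstream input(fileName);
    int rank = 0, cartIdx = 0, domain = 0;
    while (input >> rank >> cartIdx >> domain) {
        rows.emplace_back(rank, cartIdx, domain);
    }
    return rows;
}
```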
This commit adds a new (hidden) debugging option,
DebugEmitCellPartition (--debug-emit-cell-partition)
which, when set, will cause each rank to write a three-column text
file of the form
MPI_Rank Cartesian_Index NLDD_Domain_ID
into the directory
partition/CaseName
of the run's output directory. That file will be named according to
the process' MPI rank, so the first column will be the same as the
file name.
The option is primarily intended for debugging the NLDD partitioning
scheme, so is mostly reserved for runs with low MPI sizes (e.g.,
less than 20).
While here, also make the MPIPartitionFromFile helper class aware of
this format so that we can use concatenated output files as an input
to the MPI partitioning algorithm for repeatability.
This commit introduces new, experimental support for loading a
partitioning of the cells from a text file. The name of the file is
passed into the simulator using the new, hidden, command line option
--external-partition=filename
and we perform some basic checking that the number of elements in the
partition matches the number of cells in the CpGrid object.
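The basic check might look roughly like the following sketch (hypothetical helper, not the actual implementation):

```cpp
#include <stdexcept>
#include <string>
#include <vector>

// Reject an external partition whose size does not match the grid.
void checkPartitionSize(const std::vector<int>& partition,
                        std::size_t numCells,
                        const std::string& fileName)
{
    if (partition.size() != numCells) {
        throw std::invalid_argument(
            "External partition file '" + fileName + "' has " +
            std::to_string(partition.size()) + " entries, but the grid has " +
            std::to_string(numCells) + " cells");
    }
}
```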
The well dfactor is scaled by the well index.
If positive, the connection dfactor is treated as a well factor
and also scaled. If negative, the connection dfactor is not scaled.
This commit switches the current implementation of
'partitionCellsZoltan()', i.e., 'partitionCells("zoltan", ...)' into
using the MPI-aware ParallelNLDDPartitioningZoltan utility. In
doing so we make 'partitionCellsZoltan()' private since its
availability is not guaranteed. We also slightly reorder the
parameters and switch from passing a "Grid" into passing a
"GridView" as an argument to partitionCells(), and specialise this
function for the known grid views in OPM Flow.
We extract the Zoltan-related parameters out to an Entity-dependent
helper structure and move the complexity of forming this type to a
new helper function, BlackoilModelEbosNldd::partitionCells().
Invokes the Zoltan library and requires MPI. Client code constructs an
abstract connectivity graph by defining connections/edges through
the 'registerConnection()' member function. May also impose a
restriction that certain cells/vertices be placed in the same
domain/block in the resulting partition. Client code must supply a
callback function that defines globally unique cell/vertex/object
IDs, across all MPI ranks, for each vertex in the connectivity
graph.
Member function 'partitionElement()' forms the resulting partition
vector, the size of which is the total number of objects visible to
the local rank: typically the number of cells owned by the rank plus
the number of overlap cells, i.e., the size of the local grid view.
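A hypothetical usage sketch of the workflow described above; the member signatures are assumptions based on this description, not the actual interface.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Register the connectivity graph edge by edge, then ask for the partition
// vector. Zoltan runs across all MPI ranks; the result has one entry per
// object visible to the local rank (owned plus overlap cells).
template <class Partitioner>
std::vector<int>
partitionGraph(Partitioner& partitioner,
               const std::vector<std::pair<std::size_t, std::size_t>>& edges,
               int numDomains)
{
    for (const auto& [c1, c2] : edges) {
        partitioner.registerConnection(c1, c2);
    }
    return partitioner.partitionElement(numDomains);
}
```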
This should replace OPM_DEFLOG_THROW in places where the problem
category is more appropriate than the error category.
In this commit, uses of OPM_DEFLOG_THROW have been replaced whenever
the exception class used was NumericalProblem.
This commit adds a new flag data member,
wellStructureChangedDynamically_
to the generic black-oil well model. This flag captures the
well_structure_changed
value from the 'SimulatorUpdate' structure in the updateEclWells()
member function. Then, in BlackoilWellModel::beginTimeStep(), we
key a well structure update off this flag when set. This, in turn,
enables creating or opening wells as a result of an ACTIONX block
updating the structure in the middle of a report step.
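A hedged sketch of the flag's life cycle as described above; the class shape and the rebuild callback are stand-ins, not the actual BlackoilWellModel code.

```cpp
#include <functional>

class WellModelSketch
{
public:
    void updateEclWells(bool wellStructureChanged)
    {
        // Capture 'well_structure_changed' from the SimulatorUpdate.
        wellStructureChangedDynamically_ = wellStructureChanged;
    }

    void beginTimeStep(const std::function<void()>& rebuildWellStructure)
    {
        if (wellStructureChangedDynamically_) {
            // An ACTIONX block created or opened wells in the middle of a
            // report step; rebuild the well structure before continuing.
            rebuildWellStructure();
            wellStructureChangedDynamically_ = false;
        }
    }

private:
    bool wellStructureChangedDynamically_ = false;
};
```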
to get more output information and also to avoid harsh termination due
to asserts.
They are technically numerical problems rather than programming mistakes.
checking that LIFTOPT is active before gliftBeginTimeStepWellTestUpdateALQ and before attempting to optimize the ALQ
in gliftBeginTimeStepWellTestIterateWellEquations.
Otherwise, if the increment is 0 (inactive), the function
gliftBeginTimeStepWellTestIterateWellEquations
might return undesired results or enter an endless loop.
we set the THP to zero in the WellState. The previous logic, which tied
this to THP constraints, does not hold in multiple situations:
1. a VFP table is specified and we need the THP value for output purposes;
2. a network is involved and we need the THP value for constraint checks.
Implement calls to cuBLAS and cuSparse and implement the necessary
CUDA kernels to perform a single iteration of the Jacobi preconditioner.
Add tests that verify the new kernels and the preconditioner in its entirety.
The preconditioner is verified on 2x2 and 3x3 blocks, which are currently
the only supported sizes; 1x1 blocks are not supported because cuSparse
does not support them.
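As a CPU reference for the arithmetic the GPU path computes (the real code works blockwise on 2x2 and 3x3 blocks via cuSparse/cuBLAS and custom kernels; this scalar sketch only pins down the update):

```cpp
#include <vector>

// One (scalar) Jacobi sweep,
//   x_new = x + D^{-1} (b - A x),
// with A in CRS format and the diagonal passed separately.
std::vector<double> jacobiIteration(const std::vector<int>&    rowPtr,
                                    const std::vector<int>&    colIdx,
                                    const std::vector<double>& values,
                                    const std::vector<double>& diag,
                                    const std::vector<double>& b,
                                    const std::vector<double>& x)
{
    const int n = static_cast<int>(b.size());
    std::vector<double> xNew(n);
    for (int row = 0; row < n; ++row) {
        double residual = b[row];
        for (int k = rowPtr[row]; k < rowPtr[row + 1]; ++k) {
            residual -= values[k] * x[colIdx[k]];   // r = b - A x
        }
        xNew[row] = x[row] + residual / diag[row];  // x + D^{-1} r
    }
    return xNew;
}
```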
The NLDD solver would always take one global non-linear (Newton)
iteration before starting the local non-linear iterations. This
commit introduces a new command-line parameter,
--nldd-num-initial-newton-iter (NlddNumInitialNewtonIter)
which allows the user to configure this value at runtime. The
default value, 1, preserves the current behaviour.
Due to issues with anonymous enums, newer versions of {fmt} (v10) do not seem to handle the implicit conversion from an enum value to `int`. Hence we need a cast for `fmt::format` calls involving `Matrix::block_matrix::rows` (or `cols`) to compile. This seems to be relevant only for one part of the code.
For an isolated example, see https://github.com/kjetilly/fmt_fails_with_enum
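A minimal sketch of the kind of cast that is needed; the enum here is a stand-in for the `rows`/`cols` enumerators mentioned above.

```cpp
#include <fmt/core.h>
#include <string>

// Unscoped (anonymous) enum, as used for block dimensions in the matrix type.
struct Block {
    enum { rows = 3, cols = 3 };
};

std::string describeBlock()
{
    // With {fmt} v10 the enum value is no longer implicitly formatted as an
    // int, so cast explicitly.
    return fmt::format("block size {}x{}",
                       static_cast<int>(Block::rows),
                       static_cast<int>(Block::cols));
}
```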
We only broke out of the loop on rank 0 and not on the other ranks. Hence
all other processes kept iterating beyond the maximum number of
allowed iterations. This led to hard-to-find crashes because of
non-matching MPI communication.
when local_well_solver_control_switching_ is off,
and incorporating comments regarding resetting wellStatus_ in StandardWell within
the function iterateWellEqWithSwitching.
When we do the local solve for the well equations, the control/status will be
updated during the iteration process, such that the converged well gets the
correct control/status with respect to the current reservoir state.
Various changes in other parts of the code were made to make the
function work as intended.
This commit switches the region set tag matching algorithm to using
unique prefixes. This enables the simulator to recognise that the
region set name
FIPUNI
should match up with the user defined region set 'FIPUNIT'. In the
current master sources, the above summary vector would produce a
diagnostic message saying that the region set 'FIPUNI' (without the
final 'T') does not exist.
To this end, we instruct the ParallelEclipseState to always pass the
six character substring beginning with 'FIP' for FIP-like region
arrays and defer to the rank-0 EclipseState/FieldProps mechanism to
match this prefix with its canonical region set.
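A hedged sketch of the unique-prefix matching (illustrative, not the actual ParallelEclipseState code):

```cpp
#include <optional>
#include <string>
#include <vector>

// Given the (possibly truncated) six-character "FIP..." name from a summary
// vector, find the canonical region set whose name starts with that prefix,
// accepting the match only if it is unique.
std::optional<std::string>
matchRegionSet(const std::string& prefix,
               const std::vector<std::string>& canonicalRegionSets)
{
    std::optional<std::string> match;
    for (const auto& name : canonicalRegionSets) {
        if (name.compare(0, prefix.size(), prefix) == 0) {
            if (match) {
                return std::nullopt;   // ambiguous prefix
            }
            match = name;
        }
    }
    return match;
}
```

With this, matchRegionSet("FIPUNI", {"FIPNUM", "FIPUNIT"}) yields "FIPUNIT", while an ambiguous prefix is rejected.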
Moved the Damaris command line parameter accessors to Main.cpp and fixed support for both Python and ParaView Python scripts, as they cannot both be present in the same simulation (it seems to be an initialization conflict or double initialization);
added access to the DUNE mesh geometry and passing the data through to Damaris;
updated the command line so users can specify Python or ParaView script names and other parameters that control Damaris:
- Simulation name
- Number of dedicated cores or dedicated nodes
- Shared memory region size
- Switch to turn off HDF5 output
- Damaris logging level
Previously, we did a global summation of the size of the
well_perf_data vector to determine the number of perforations
of a well. In the case of distributed wells this will try to access
more perforations than are stored for the well in well_perf_data, and hence
might use data from cells that are actually not perforated by this
well. Note that for wells that are not distributed the code worked, as the
summation has no effect.
This commit changes this to only query the perforations on the
local process, which should be enough to fix the problem.
In addition it removes the computation of connpos, which is never used.
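In code terms, the change amounts to something like the following sketch (types simplified stand-ins, not the actual well model code):

```cpp
#include <cstddef>
#include <vector>

struct PerforationData { int cell_index; double connection_factor; };

// The number of local perforations comes directly from the locally stored
// well_perf_data. Previously a global sum of the sizes was used as the local
// count, which is only correct for non-distributed wells.
std::size_t
numLocalPerforations(const std::vector<PerforationData>& well_perf_data)
{
    return well_perf_data.size();
}
```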
Step one of moving the Damaris calls out of the EclWriter class and into their own DamarisWriter class;
EclProblem now calls both writeOutput methods and passes in the data::Solution object;
added a fix for the first writeOutput() call not having PRESSURE data available;
data::Solution is now passed by rvalue reference into eclWriter::writeOutput();
a guard was added to prevent inclusion of damariswriter.hh.
it was introduced back then for some purpose. That purpose might not
apply anymore due to other developments, and some issues with the
approach were reported for certain situations.