We sometimes experience singular matrices when solving multisegment
wells. In that case (here during
BlackoilModelEbos::assembleReservoir <-
BlackoilModelEbos::initialLinearization <-
BlackoilModelEbos::nonlinearIterationNewton) an exception is thrown
when updating the controls of a well.
The problem is that this exception is thrown on only one
process. That process goes to the catch block in
NonLinearSolverEbos::step, marks the nonlinear solve as failed and
cuts the time step, while the others move on to the collective
communication below. Eventually all processes end up in mismatched
collective calls with different data types and we get an MPI error
reporting a truncated message.
With this change, all processes throw, terminate the nonlinear solver
and cut the time step, as they should.
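For reference, the general shape of this pattern looks roughly like
the sketch below. The names (runWithSynchronisedFailure, the exact
exception type) are placeholders for illustration, not the actual OPM
code:

    // Catch the local exception, agree on the failure collectively, then
    // have every rank throw so all of them cut the time step together.
    #include <mpi.h>
    #include <functional>
    #include <stdexcept>

    void runWithSynchronisedFailure(const std::function<void()>& localWork,
                                    MPI_Comm comm)
    {
        int localFailure = 0;
        try {
            localWork();            // e.g. the local part of assembleReservoir
        }
        catch (const std::exception&) {
            localFailure = 1;       // remember the failure, do not bail out yet
        }

        // Agree on the failure before entering any other collective call.
        int globalFailure = 0;
        MPI_Allreduce(&localFailure, &globalFailure, 1, MPI_INT, MPI_MAX, comm);

        if (globalFailure != 0) {
            // Every rank throws, so every rank reaches the catch block in
            // NonLinearSolverEbos::step and cuts the time step.
            throw std::runtime_error("assembly failed on at least one process");
        }
    }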
Using bool here is at least frowned upon. To be honest, I have no idea
what happens underneath if we pass a bool. In contrast to other POD
types, we do not associate it with a builtin MPI type (I am not even
sure which one we would use). Hence we probably create a custom type
for sending and receiving. That should work, but I have no idea what
will be used for the summation.
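For illustration only, one way to sidestep the question would be to
convert the bool to an int before the reduction and use a builtin MPI
type with a well-defined reduction operation. This is a sketch of that
idea, not necessarily what the code does:

    #include <mpi.h>

    // Reduce a per-rank bool as an int so we stay on MPI_INT with a
    // well-defined logical-or reduction instead of a custom bool type.
    bool anyRankFailed(bool localFailed, MPI_Comm comm)
    {
        int local = localFailed ? 1 : 0;
        int global = 0;
        MPI_Allreduce(&local, &global, 1, MPI_INT, MPI_LOR, comm);
        return global != 0;
    }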
BTW: I am debugging a case that previously crashed and now suddenly
works, and this seems to be the only relevant change I made in the
meantime.
This commit introduces new, experimental support for loading a
partitioning of the cells from a text file. The name of the file is
passed to the simulator using the new, hidden command line option
--external-partition=filename
and we perform a basic check that the number of elements in the
partition matches the number of cells in the CpGrid object.
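For illustration, the loading and checking could look roughly like the
hypothetical helper below; the actual implementation in the commit may
differ (names and error handling are assumptions):

    #include <cstddef>
    #include <fstream>
    #include <stdexcept>
    #include <string>
    #include <vector>

    // Read one block/domain ID per cell from a plain text file and check
    // that the number of entries matches the number of grid cells.
    std::vector<int> loadExternalPartition(const std::string& filename,
                                           std::size_t numCells)
    {
        std::ifstream in(filename);
        if (!in) {
            throw std::runtime_error("Cannot open partition file " + filename);
        }

        std::vector<int> partition;
        partition.reserve(numCells);
        int block = 0;
        while (in >> block) {
            partition.push_back(block);
        }

        if (partition.size() != numCells) {
            throw std::runtime_error("Partition file " + filename + " has " +
                                     std::to_string(partition.size()) +
                                     " entries, expected " +
                                     std::to_string(numCells));
        }

        return partition;
    }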
The well dfactor is scaled by the well index.
If positive, the connection dfactor is treated as a well factor
and also scaled. If negative, the connection dfactor is not scaled.
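As a purely hypothetical transcription of that rule into code (the
scaling factor, the function name and the handling of the negative
sign are all assumptions; the actual well model is authoritative):

    #include <cmath>

    // D-factor rule as described above: a positive connection value is
    // treated as a well factor and scaled, a negative one is used unscaled
    // (assuming the sign only acts as a flag), otherwise the well-level
    // dfactor is scaled.
    double effectiveDFactor(double wellDFactor,
                            double connDFactor,
                            double wellIndexScale)
    {
        if (connDFactor > 0.0) {
            return connDFactor * wellIndexScale;
        }
        if (connDFactor < 0.0) {
            return std::abs(connDFactor);
        }
        return wellDFactor * wellIndexScale;
    }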
Add a method in EclWriter to enable this. It is called the first time
WriteOutput is invoked, since that happens after the initial
conditions have been applied, which is required to get the proper
output. This also fixes a long-standing issue where the initial FIP
state was taken after the first time step.
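The underlying idea is the usual "run once on the first output call"
pattern; a minimal sketch with assumed names, not the EclWriter code
itself:

    // Defer the setup until the first writeOutput() call, i.e. after the
    // initial conditions have been applied.
    class OutputWriterSketch
    {
    public:
        void writeOutput()
        {
            if (!initialOutputDone_) {
                captureInitialFip_();   // e.g. record the initial FIP state
                initialOutputDone_ = true;
            }
            // ... write the regular report step output ...
        }

    private:
        void captureInitialFip_() { /* placeholder for the deferred work */ }

        bool initialOutputDone_ = false;
    };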
This commit switches the current implementation of
'partitionCellsZoltan()', i.e., 'partitionCells("zoltan", ...)', to
using the MPI-aware ParallelNLDDPartitioningZoltan utility. In
doing so we make 'partitionCellsZoltan()' private since its
availability is not guaranteed. We also slightly reorder the
parameters and switch from passing a "Grid" to passing a
"GridView" as an argument to partitionCells(), and specialise this
function for the known grid views in OPM Flow.
We extract the Zoltan-related parameters into an Entity-dependent
helper structure and move the complexity of forming this type to a
new helper function, BlackoilModelEbosNldd::partitionCells().
Invokes the Zoltan library and requires MPI. Client code constructs an
abstract connectivity graph by defining connections/edges through
the 'registerConnection()' member function. May also impose a
restriction that certain cells/vertices be placed in the same
domain/block in the resulting partition. Client code must supply a
callback function that defines globally unique cell/vertex/object
IDs, across all MPI ranks, for each vertex in the connectivity
graph.
Member function 'partitionElement()' forms the resulting partition
vector, the size of which is the total number of objects visible to
the local rank (typically the number of cells owned by the rank plus
the number of overlap cells, i.e., the size of the local grid view).
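A hypothetical usage sketch is given below. Only registerConnection()
and partitionElement() are named above; the constructor arguments,
the callback type, the include path and the exact signatures are
assumptions for illustration only:

    #include <mpi.h>
    #include <cstddef>
    #include <utility>
    #include <vector>
    // #include the ParallelNLDDPartitioningZoltan header here (path omitted)

    std::vector<int>
    buildPartitionSketch(MPI_Comm comm,
                         std::size_t numLocalCells,
                         int numDomains,
                         const std::vector<std::pair<int,int>>& edges,
                         const std::vector<long long>& globalCellId)
    {
        // Callback mapping a local cell index to a globally unique ID,
        // consistent across all MPI ranks.
        auto globalId = [&globalCellId](int localCell) {
            return globalCellId[localCell];
        };

        // Constructor arguments are assumed for this sketch.
        ParallelNLDDPartitioningZoltan partitioner(comm, numLocalCells, globalId);

        // Define the abstract connectivity graph, one edge at a time.
        for (const auto& [c1, c2] : edges) {
            partitioner.registerConnection(c1, c2);
        }

        // Resulting vector holds one domain/block ID per locally visible
        // cell (owned + overlap), i.e. the size of the local grid view.
        return partitioner.partitionElement(numDomains);   // assumed signature
    }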