For this particular model, WetGasPVT::saturationPressure threw because
the Newton solver did not converge within 20 iterations. Unfortunately,
the exception was only seen on one MPI rank and the others continued.
With this commit we communicate the problem and throw on all MPI
processes. The time step will be cut as a result.
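The general pattern is sketched below (illustrative only; the helper
name and the use of raw MPI calls are assumptions, not the actual OPM
code): catch locally, agree globally, then throw on every rank.

    #include <mpi.h>
    #include <stdexcept>

    // Sketch: run a step that may throw on some ranks and turn a local
    // failure into a failure on every rank of the communicator.
    template <class Func>
    void runAndRethrowOnAllRanks(MPI_Comm comm, Func&& func)
    {
        int localFailure = 0;
        try {
            func();           // e.g. the PVT evaluation that may not converge
        } catch (const std::exception&) {
            localFailure = 1; // remember the failure, do not rethrow yet
        }

        int globalFailure = 0; // 1 if any rank failed
        MPI_Allreduce(&localFailure, &globalFailure, 1, MPI_INT, MPI_MAX, comm);

        if (globalFailure) {
            // Every rank throws, so the time step is cut consistently everywhere.
            throw std::runtime_error("Exception occurred on at least one MPI rank");
        }
    }
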
For multi-segment wells the underlying call to
MultisegmentWell::updateWellStateWithTarget (at least if
updateWellStateWithTHPTargetProd is called for a producer under THP
control) might throw, as the solve needed in
MultisegmentWell::iterateWellEqWithControl can encounter a singular
matrix.
Previously, if that happened, the MPI process where it happened would
abort the nonlinear iteration as failed and retry with a chopped time
step. The others might go on with the current time step and we would
see MPI errors about truncated messages.
Now we communicate any exception happening during this part of
WellModel::updateAndCommunicate and all processes will abort the
nonlinear iteration as failed and chop the time step.
We sometimes experience singular matrices when solving multisegment
wells. In that case (here during
BlackoilModelEbos::assembleReservoir <-
BlackoilModelEbos::initialLinearization <-
BlackoilModelEbos::nonlinearIterationNewton) an exception is thrown
when updating the controls of a well.
The problem is that this exception only happens on one process. That
process goes to the catch block in NonLinearSolverEbos::step, marks the
nonlinear solve as failed and cuts the time step. The others move on to
the collective communication below. Sooner or later all ranks end up in
non-matching collective communication with different data types and we
get an MPI error about a truncated message.
Now all processes throw, terminate the nonlinear solver and cut the
time step, as it should be.
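Sketched with a Dune-style collective communication object (the names
and call site below are illustrative, not the actual code), the fix
amounts to:

    #include <stdexcept>

    // Sketch: every rank agrees on whether the local well-control update
    // threw before anyone proceeds to the collective communication below.
    template <class Comm, class UpdateFunc>
    void updateControlsCollectively(const Comm& comm, UpdateFunc&& updateWellControls)
    {
        int exceptionThrown = 0;
        try {
            updateWellControls();   // may hit a singular matrix and throw
        } catch (const std::exception&) {
            exceptionThrown = 1;
        }

        // comm.max() is collective, so it also acts as the synchronization
        // point that was missing before.
        if (comm.max(exceptionThrown)) {
            // Thrown on every rank: NonLinearSolverEbos::step catches it
            // everywhere, marks the solve as failed and cuts the time step.
            throw std::runtime_error("Well-control update failed on some rank");
        }
    }
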
Using bool here is at least frowned upon. To be honest, I have no idea
what happens underneath if we pass a bool. In contrast to other POD
types we do not associate it with a builtin MPI datatype (I am not even
sure what one would use). Hence we probably create a custom type for
sending and receiving. That should work, but I have no idea what will
be used for the summation.
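For illustration, a sketch of why an int flag is the safer choice
(plain MPI; the function and variable names are made up):

    #include <mpi.h>

    // Sketch: reduce an error flag as int rather than bool.
    // 'errorOccurred' stands for whatever local failure we want to report.
    int countFailedRanks(MPI_Comm comm, bool errorOccurred)
    {
        // With an int, the MPI datatype and the meaning of the reduction
        // are unambiguous.
        int localFlag = errorOccurred ? 1 : 0;
        int globalCount = 0;
        MPI_Allreduce(&localFlag, &globalCount, 1, MPI_INT, MPI_SUM, comm);
        return globalCount; // number of ranks that saw the error
    }

    // With bool there is no such obvious mapping (MPI_CXX_BOOL only
    // arrived with MPI-3.0), so generic wrapper code would have to
    // register a custom datatype, and what the summation then does is
    // exactly the open question raised above.
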
BTW: I am debugging a case that previously crashed and now suddenly
works, and this seems to be the only relevant change I made in the
meantime.
This commit adds a new flag data member,
wellStructureChangedDynamically_,
to the generic black-oil well model. This flag captures the
well_structure_changed
value from the 'SimulatorUpdate' structure in the updateEclWells()
member function. Then, in BlackoilWellModel::beginTimeStep(), we
key a well structure update off this flag when set. This, in turn,
enables creating or opening wells as a result of an ACTIONX block
updating the structure in the middle of a report step.
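A self-contained toy version of the flag handoff (the real logic lives
in BlackoilWellModelGeneric/BlackoilWellModel; the rebuild call below is
a placeholder):

    struct SimulatorUpdate { bool well_structure_changed = false; };

    class WellModelSketch {
    public:
        void updateEclWells(const SimulatorUpdate& su)
        {
            // capture the ACTIONX-triggered structure change for later
            wellStructureChangedDynamically_ = su.well_structure_changed;
        }

        void beginTimeStep()
        {
            if (wellStructureChangedDynamically_) {
                // (re)create the well objects so wells opened or created
                // by ACTIONX mid report step actually show up in the model
                rebuildWellStructure();                   // placeholder
                wellStructureChangedDynamically_ = false; // handled
            }
        }

    private:
        void rebuildWellStructure() { /* stands in for the real update */ }
        bool wellStructureChangedDynamically_ = false;
    };
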
We only broke out of the loop on rank 0 and not on the other
ranks. Hence all other processes kept iterating beyond the maximum
number of allowed iterations. This led to hard-to-find crashes because
of non-matching MPI communication.
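Schematically (the rank-0-only condition below stands in for whatever
the break was accidentally nested under; this is not the literal code):

    for (int iter = 0; ; ++iter) {
        doOneIteration();

        // Before: only rank 0 ever left the loop.
        //   if (isRankZero && iter >= max_iter)
        //       break;

        // After: every rank evaluates the same termination condition, so
        // all ranks leave the loop together and the collective MPI calls
        // that follow stay matched.
        if (iter >= max_iter)
            break;
    }
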
When we do the local solve for the well equations, the control/status
will be updated during the iteration process, so that a converged well
gets the correct control/status with respect to the current reservoir
state.
Various changes in other parts of the code were made to make the
function work as intended.
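The idea, as a rough sketch (the function names are placeholders, not
the actual interface):

    // Sketch of the local well solve with per-iteration control/status
    // updates, so the converged well carries the right control.
    template <class Reservoir, class WellState>
    void solveWellLocally(const Reservoir& reservoir_state, WellState& well_state,
                          const int max_well_iterations)
    {
        for (int it = 0; it < max_well_iterations; ++it) {
            assembleWellEquations(reservoir_state, well_state);
            solveAndUpdateWellState(well_state);

            // Switch control (e.g. BHP <-> rate) or status (open/stop) if
            // the current reservoir state demands it, before checking
            // convergence.
            updateWellControlAndStatus(reservoir_state, well_state);

            if (wellEquationsConverged(well_state))
                break;
        }
    }
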
Previously, we did a global summation of the size of the
well_perf_data vector to determine the number of perforations of a
well. In the case of distributed wells this will try to access more
perforations than are stored for the well in well_perf_data and hence
might use data from cells that are actually not perforated by this
well. Note that for wells that are not distributed the code worked, as
the summation has no effect.
This commit changes this to only query perforations on the local
process. This should be enough to fix the problem.
In addition it removes the computation of connpos, which is never used.
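Schematically (the names approximate the description above; this is
not the literal diff):

    // Before: summing the local sizes gives the *global* perforation
    // count; using it to index the *local* well_perf_data runs past the
    // end for a distributed well.
    //   int num_perfs = well_perf_data[w].size();
    //   num_perfs = comm.sum(num_perfs);      // wrong for local indexing
    //
    // After: only the perforations stored on this process are queried.
    const int num_local_perfs = well_perf_data[w].size();
    for (int perf = 0; perf < num_local_perfs; ++perf) {
        // ... use only locally stored perforation data ...
    }
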
In preparation for adding support for opening/creating wells or
groups in the middle of a report step. This is needed if an
ACTIONX block runs something like WELOPEN or WELSPECS/COMPDAT.
This commit activates the support for calculating WBPn summary
result values per well in parallel. To effect the calculation we
add two new data members in BlackoilWellModelGeneric:
- conn_idx_map_:
Maps a well's connection index (0..getConnections().size() - 1) to
connections on the current rank. Its local() function returns
negative one (-1) if the connection is not on the current rank and a
non-negative value otherwise. The global() function maps well
connections on the current rank to the global connection ID of the
well, effectively the reverse of local(). Finally, the open()
function maps well connections on the current rank to open/flowing
connections on the current rank, returning negative one if the
connection is not flowing. See the sketch after this list.
- wbpCalculationService:
Parallel collection of WBPn calculation objects that knows how
to exchange source and result information between all ranks in a
communicator. Also handles distributed wells.
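A simplified sketch of the conn_idx_map_ interface described above
(not the actual class):

    #include <vector>

    class ConnIndexMapSketch {
    public:
        // global connection index (0..getConnections().size()-1) -> local
        // connection index on this rank, or -1 if not on this rank.
        int local(const int global_conn) const { return local_[global_conn]; }

        // local connection index on this rank -> global connection index;
        // effectively the inverse of local().
        int global(const int local_conn) const { return global_[local_conn]; }

        // local connection index on this rank -> index among the open/
        // flowing connections on this rank, or -1 if not flowing.
        int open(const int local_conn) const { return open_[local_conn]; }

    private:
        std::vector<int> local_;   // size: all connections of the well
        std::vector<int> global_;  // size: connections on this rank
        std::vector<int> open_;    // size: connections on this rank
    };
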
We furthermore need a way to compute connection-level fluid mixture
density values. For the standard well class we add a way to access
the StandardWellConnection's 'perf_densities_' values. However,
since these are defined for open/flowing connections only, this
means we're not able to fully meet the requirements of the
WELL/ALL
WPAVE depth correction procedure for standard wells. The
multi-segmented well type, on the other hand, uses the fluid mixture
density in the associated well segment and is therefore well defined
for ALL connections. OPEN well connections are supported for both
well types.