A correct MPI program should call MPI_Finalize before returning from
main. As MPI_Finalize is invoked in the destructor of MainObject, we
need to make sure that its destructor runs before the return
statement. We do that manually by resetting the unique_ptr that we
now use to store the MainObject.
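A minimal sketch of the idea, assuming a MainObject whose destructor
calls MPI_Finalize (the actual class interface in the code base may
differ):

```cpp
#include <memory>
#include <mpi.h>

// Hypothetical stand-in for the real MainObject: its destructor
// finalizes MPI, so it has to be destroyed before main() returns.
class MainObject
{
public:
    MainObject(int argc, char** argv) { MPI_Init(&argc, &argv); }
    ~MainObject() { MPI_Finalize(); }
    int run() { return 0; /* actual simulation work goes here */ }
};

int main(int argc, char** argv)
{
    auto mainObject = std::make_unique<MainObject>(argc, argv);
    const int status = mainObject->run();

    // Destroy MainObject explicitly so that MPI_Finalize runs before
    // we leave main().
    mainObject.reset();
    return status;
}
```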
Instead of unconditionally issuing MPI_Abort when we encounter a
fatal exception, we try to test whether all processes have
experienced this exception. If that is the case, we terminate
normally with an exit code that signals an error. We still use
MPI_Abort if not all processes hit an exception, as this is the only
way to make sure that the program aborts.
This approach also works around issues in some MPI implementations
that might not correctly return the error code.
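A sketch of the kind of check this describes; it assumes every rank
eventually calls this function (healthy ranks with thisRankFailed ==
0), and the names are illustrative rather than the actual ones in the
code base:

```cpp
#include <cstdlib>
#include <mpi.h>

// Called once per rank at a common error-handling point.
// 'thisRankFailed' is 1 if this rank caught a fatal exception,
// 0 otherwise.
int handleFatalError(int thisRankFailed)
{
    int allRanksFailed = 0;
    // Did every rank experience the fatal error?
    MPI_Allreduce(&thisRankFailed, &allRanksFailed, 1, MPI_INT,
                  MPI_LAND, MPI_COMM_WORLD);

    if (allRanksFailed) {
        // Everyone failed: terminate normally, signalling the error
        // only through the exit code.
        MPI_Finalize();
        return EXIT_FAILURE;
    }

    // Only some ranks failed: aborting is the only reliable way to
    // bring the whole job down.
    MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
    return EXIT_FAILURE; // not reached
}
```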
Multiple messages like this are gone now:
```
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[smaug.dr-blatt.de:129359] 1 more process has sent help message help-mpi-api.txt / mpi-abort
[smaug.dr-blatt.de:129359] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
```
But we still see something like this:
```
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[35057,1],0]
Exit code: 1
--------------------------------------------------------------------------
```
This is in preparation for adding support for opening/creating wells
or groups in the middle of a report step. That is needed if an
ACTIONX block runs something like WELOPEN or WELSPECS/COMPDAT.
Changes
```
Program threw an exception: [/home/mblatt/src/dune/opm/opm-simulators/opm/simulators/timestepping/AdaptiveTimeSteppingEbos.hpp:586] Solver failed to converge after cutting timestep 11 times.
```
to
```
Simulation aborted: Solver failed to converge after cutting timestep 11 times.
```
This seems more user-friendly.
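Roughly the kind of top-level handler this implies; the exception
type and function names below are placeholders, not the actual ones
in the code base:

```cpp
#include <cstdlib>
#include <exception>
#include <iostream>
#include <stdexcept>

// Placeholder for an exception type carrying a user-facing message
// without the internal [file:line] prefix.
struct SimulationAborted : std::runtime_error
{
    using std::runtime_error::runtime_error;
};

int runAndReport(int (*simulate)())
{
    try {
        return simulate();
    }
    catch (const SimulationAborted& e) {
        // Friendlier wording, no source location.
        std::cerr << "Simulation aborted: " << e.what() << '\n';
    }
    catch (const std::exception& e) {
        // Anything unexpected still reports the full details.
        std::cerr << "Program threw an exception: " << e.what() << '\n';
    }
    return EXIT_FAILURE;
}
```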
We emit a warning if the model uses connection flag 'ALL', but
continue the run. This behaviour is still being debated and we
may decide to halt the run in this situation.
This commit activates the support for calculating WBPn summary
result values per well in parallel. To this end, we add two new data
members in BlackoilWellModelGeneric:
- conn_idx_map_:
Maps a well's connection index (0 .. getConnections().size() - 1) to
connections on the current rank. Its local() function returns
negative 1 (-1) if the connection is not on the current rank, and a
non-negative local index otherwise. The global() function maps well
connections on the current rank to the global connection ID for each
well, effectively the reverse of local(). Finally, the open()
function maps well connections on the current rank to open/flowing
connections on the current rank, returning negative 1 if the
connection is not flowing. (A sketch of this mapping follows the
list.)
- wbpCalculationService:
Parallel collection of WBPn calculation objects that knows how
to exchange source and result information between all ranks in a
communicator. Also handles distributed wells.
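A rough sketch of the mapping that conn_idx_map_ provides for a
single well; the type below is illustrative only, and its name and
construction are not those of the actual implementation:

```cpp
#include <cstddef>
#include <vector>

// Illustrative connection index map for a single well.  Indices into
// 'local_' are the well's global connection indices
// (0 .. getConnections().size() - 1).
class ConnIndexMap
{
public:
    // -1 if the connection is not on the current rank, otherwise the
    // connection's index among this rank's connections.
    int local(const std::size_t globalConnIdx) const
    { return this->local_[globalConnIdx]; }

    // Reverse of local(): global connection ID of each connection on
    // the current rank.
    std::size_t global(const std::size_t localConnIdx) const
    { return this->global_[localConnIdx]; }

    // -1 if the connection on the current rank is not open/flowing,
    // otherwise its index among this rank's open connections.
    int open(const std::size_t localConnIdx) const
    { return this->open_[localConnIdx]; }

private:
    std::vector<int> local_;          // global -> local (or -1)
    std::vector<std::size_t> global_; // local  -> global
    std::vector<int> open_;           // local  -> open  (or -1)
};
```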
We furthermore need a way to compute connection-level fluid mixture
density values. For the standard well class we add a way to access
the StandardWellConnection's 'perf_densities_' values. However,
since these are defined for open/flowing connections only, we are
not able to fully meet the requirements of the WELL/ALL
WPAVE depth correction procedure for standard wells. The
multi-segmented well type, on the other hand, uses the fluid mixture
density in the associated well segment and is therefore well defined
for ALL connections. OPEN well connections are supported for both
well types.
This commit adds a parallel calculation object derived from the serial
PAvgCalculator class. This parallel version is aware of MPI
communicators and knows how to aggregate contributions from wells that
might be distributed across ranks.
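Conceptually, the parallel calculator only needs to sum each rank's
partial accumulations over the communicator before the averages are
formed. A minimal sketch of that idea, with placeholder class and
member names rather than the actual PAvgCalculator interface:

```cpp
#include <mpi.h>
#include <vector>

// Placeholder standing in for the serial calculator: accumulates
// weighted pressure contributions from the rank's local connections.
class LocalPressureAverage
{
public:
    void addContribution(double weight, double pressure)
    {
        this->terms_[0] += weight * pressure; // weighted pressure sum
        this->terms_[1] += weight;            // weight sum
    }

    double average() const
    {
        return (this->terms_[1] > 0.0)
            ? this->terms_[0] / this->terms_[1] : 0.0;
    }

protected:
    std::vector<double> terms_{0.0, 0.0};
};

// Parallel variant: aware of the communicator of the ranks sharing
// the well, and sums the partial terms across those ranks before the
// average is formed.
class ParallelPressureAverage : public LocalPressureAverage
{
public:
    explicit ParallelPressureAverage(MPI_Comm comm) : comm_(comm) {}

    void collectGlobalContributions()
    {
        MPI_Allreduce(MPI_IN_PLACE, this->terms_.data(),
                      static_cast<int>(this->terms_.size()),
                      MPI_DOUBLE, MPI_SUM, this->comm_);
    }

private:
    MPI_Comm comm_;
};
```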
We also add a wrapper class, ParallelWBPCalculation, which knows how to
exchange information from PAvgCalculatorCollection objects on different
ranks and, especially, how to properly prune inactive cells/connections.
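One way to realise the pruning part, as a sketch under the assumption
that each rank can flag which of a well's connections sit in cells
that are active locally (this is not the actual ParallelWBPCalculation
interface):

```cpp
#include <mpi.h>
#include <vector>

// 'isActiveHere[i]' is 1 if connection i of the well lies in a cell
// that is active on this rank, 0 otherwise.  A connection is kept if
// any rank owns an active cell for it; all others are pruned from the
// WBP calculation.
std::vector<int> globallyActiveConnections(std::vector<int> isActiveHere,
                                           MPI_Comm comm)
{
    MPI_Allreduce(MPI_IN_PLACE, isActiveHere.data(),
                  static_cast<int>(isActiveHere.size()),
                  MPI_INT, MPI_LOR, comm);
    return isActiveHere;
}
```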