Currently, execute() calls runSimulator() to run the simulation. When
the Python step_init() is implemented (see a later commit), it will
instead call an executeStepInit() that will need to do the same
initialization as in execute() except that it should call a
runSimulatorInit() instead of runSimulator(). To avoid duplicating code
between execute() and executeStepInit(), this commit refactors execute()
into a common execute_() method.
For some of the scaling approaches the wrong matrix (a dereferenced
nullptr) would have been used, which should have resulted in
segmentation faults. With this commit we add a method getMatrix() that
returns the correct matrix and use that for the scaling.
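A minimal sketch of such a selector, with matrix_ and noGhostMat_
standing for the two candidate members mentioned in this series (the
actual types and ownership in OPM differ):

```cpp
#include <memory>

template <class Matrix>
struct OperatorDataSketch {
    std::unique_ptr<Matrix> matrix_;     // may be null in the no-ghost case
    std::unique_ptr<Matrix> noGhostMat_; // optimized matrix without ghost rows

    // Return whichever matrix is actually populated, so the scaling
    // code can never dereference a null pointer.
    const Matrix& getMatrix() const
    {
        return noGhostMat_ ? *noGhostMat_ : *matrix_;
    }
};
```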
Somehow that approach went missing in action, and the matrix is now
always the same as the assembled one. Hence the member is not needed
anymore, and we remove it to prevent confusion.
If the user requested ILU0, the uninitialized value caused an
arbitrary (quite high) fill-in level to be used, which stalled the
computation and exhausted memory when running in parallel.
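The fix boils down to giving the value a deterministic default; a
generic illustration (the member name here is hypothetical):

```cpp
// Without the initializer, the fill-in level holds whatever garbage is
// in memory, so a requested ILU0 could silently become ILU(k) for a
// huge k, stalling the run and exhausting memory.
struct PreconditionerParamsSketch {
    int iluFillinLevel = 0; // fix: default-initialize instead of leaving it undefined
};
```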
NOTE: this pull request depends on #2555, which should be merged first.
A rewrite of the outdated PR #2543.
Refactors flow_ebos_blackoil.cpp such that we can choose not to execute
the whole simulation via the flowEbosBlackoilMain() function but
instead only run the initialization by calling
flowEbosBlackoilMainInit(). This is necessary to implement a Python
step() method that can advance the simulator one report step at a time.
Also adds a method initFlowEbosBlackoil() to Main.hpp that can be used
directly from the Python interface's BlackOilSimulator object to gain
access to the FlowMainEbos object before it has initialized the
simulation main loop.
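A sketch of the intended call pattern (the signatures below are
assumptions for illustration; the real declarations in Main.hpp and
flow_ebos_blackoil.cpp differ in detail):

```cpp
// Signatures assumed for illustration only.
int flowEbosBlackoilMain(int argc, char** argv);
int flowEbosBlackoilMainInit(int argc, char** argv);

// The classic entry point still runs everything; the *Init() variant
// stops once the simulator is set up, so a caller (e.g. the Python
// BlackOilSimulator) can advance report steps itself.
int runBlackoilCase(int argc, char** argv, bool initOnly)
{
    return initOnly ? flowEbosBlackoilMainInit(argc, argv)
                    : flowEbosBlackoilMain(argc, argv);
}
```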
In this case matrix_ is a nullptr and noGhostMat_ is the optimized
matrix to use. Before this commit we experienced a segmentation fault
as the nullptr was dereferenced and passed to the solver.
Previously, all ranks printed linear solver statistics in verbose
mode, which cluttered the output in parallel runs. Now only rank 0
prints them, as it should.
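The usual pattern for this kind of fix, sketched generically (not the
exact OPM code):

```cpp
#include <iostream>

// Only rank 0 reports the statistics in verbose mode; all other ranks
// stay silent so parallel runs are no longer cluttered.
void printLinearSolverStats(int rank, bool verbose, int iterations)
{
    if (rank != 0 || !verbose)
        return;
    std::cout << "Linear solver converged after " << iterations
              << " iterations\n";
}
```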
Previously, one had to call a separate binary called
flow_blackoil_dunecpr. Unfortunately, that was only built if users
requested that both the tests (`-DBUILD_TESTING=ON`) and the flow
variants (`-DBUILD_FLOW_VARIANTS=ON`) should be built. In addition,
it used a slightly different nonlinear solver implementation.
With this commit, flow can be asked to either use
CPR (`--use-cpr=true --matrix-add-well-contributions=true`) or to use
the flexible solvers by setting the `--linear-solver-configuration`
option. In all other cases the usual solver implementations are still
used.
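For example, a CPR run can then be requested directly from the main
binary, e.g. `flow CASE.DATA --use-cpr=true
--matrix-add-well-contributions=true` (with CASE.DATA a placeholder
for the input deck).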
Note that the flexible solvers still need
`--matrix-add-well-contributions=true` and hence cannot cope with
multi-segment wells.
The default was 0 for flow and 3 for flow_blackoil_dunecpr. Now we
always use 3. This is safe, as the value is not used for any solvers
other than the flexible ones.
as there already is a specialization for an arbitrary Block, and that
one leads to compile errors because we redefine it in OPM (and we miss
some other specializations for Opm::MatrixBlock, e.g. isNumber).
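The clash can be reproduced with generic stand-ins (these names are
not the real Dune or OPM declarations):

```cpp
// The library already ships the catch-all partial specialization for an
// arbitrary Block; repeating it on our side is an ill-formed
// redefinition, so OPM should only add what is missing for its own
// concrete type (e.g. an isNumber-style trait for Opm::MatrixBlock).
template <class Block> struct Wrapper {};

template <class T>     struct IsSpecial                 { static constexpr bool value = false; };
template <class Block> struct IsSpecial<Wrapper<Block>> { static constexpr bool value = true;  };

// Declaring IsSpecial<Wrapper<Block>> again here would fail to compile:
// "redefinition of 'IsSpecial<Wrapper<Block>>'".
```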