While the printed number of "Non linear iterations" was correct in a
strict sense, it was very confusing for anyone working on the
linearization code, because the last Newton iteration of each time step
is linearized but not solved for (the solution is thus not updated, so
it does not count as a "non linear iteration"). Counting linear solves
makes sense for large problems, where the total runtime is completely
dominated by the performance of the linear solver, but smaller
problems exhibit the opposite behavior (i.e., their runtime is
typically dominated by the linearization procedure), so for them one is
more interested in the number of linearizations, not the number of
linear solves.
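A minimal sketch of the loop shape that motivates this, with
hypothetical method names standing in for the actual model interface:
the convergence check requires one final linearization that is never
followed by a linear solve, so counting only linear solves under-reports
the assembly work.

    // Sketch only: Model's methods are assumed, not the real interface.
    struct IterationReport { int linearizations = 0; int linear_solves = 0; };

    template <class Model>
    IterationReport newtonLoop(Model& model, int max_iter, double tol)
    {
        IterationReport report;
        for (int it = 0; it <= max_iter; ++it) {
            model.linearize();               // assemble Jacobian and residual
            ++report.linearizations;
            if (model.residualNorm() < tol) {
                break;                       // converged: linearized, but not solved
            }
            model.solveAndUpdate();          // linear solve + Newton update
            ++report.linear_solves;
        }
        return report;                       // on convergence: linearizations == linear_solves + 1
    }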
The order of an unordered_map is quite unpredictable (the same order
only holds with the same hash function, comparison operator, and
insertion order). Therefore we cannot assume that the global
SimulationDataContainer uses the same order for the cell data as the
local one (as was assumed before this commit). We can, however, assume
that the local one uses the same order on every process.
Before this commit, data got mixed up (e.g. gasoilratio with surfacevol)
when gathering local data for writing eclipse files on the master
process. This commit fixes that: instead of iterating over the cell data
of the global state when writing the received data, we again iterate
over the cell data of the local state and simply use the key to request
the correct data for writing from the global state.
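A simplified sketch of the key-based lookup; the container type and the
writer callback are stand-ins, not the actual SimulationDataContainer or
output-writer interfaces.

    #include <string>
    #include <unordered_map>
    #include <vector>

    using CellData = std::unordered_map<std::string, std::vector<double>>;

    // Loop over the *local* keys and fetch the matching global column by name,
    // so the iteration order of the global container never matters.
    template <class ColumnWriter>
    void writeGatheredData(const CellData& localState, const CellData& globalState,
                           ColumnWriter&& writeColumn)
    {
        for (const auto& entry : localState) {
            const std::string& key = entry.first;                     // e.g. "GASOILRATIO"
            const std::vector<double>& column = globalState.at(key);  // matched by key, not by position
            writeColumn(key, column);
        }
    }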
Previously, the call was made after the grid was distributed. This meant
that each process wrote the file, but with only its own cells active,
i.e. only a part of the whole domain. With this commit we make the
writeInit call before distributing the grid and make sure that only one
process calls it.
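A minimal sketch of the intended call order; all type and function names
here are assumed rather than taken from the actual interfaces.

    // writeInit() runs before the grid is distributed, and on one process only,
    // so the output describes the whole domain instead of a single partition.
    template <class Grid, class Writer, class Comm>
    void initializeOutputThenDistribute(Grid& grid, Writer& writer, const Comm& comm)
    {
        if (comm.rank() == 0) {
            writer.writeInit(grid);     // sees the full, undistributed grid
        }
        comm.barrier();                 // everyone waits until the file exists
        distributeGrid(grid, comm);     // load balancing happens only afterwards
    }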
Models may need a more detailed picture of where they are in the
simulation. Note that since the timer objects are available at every
call site, this is also not a very deep change.
At the moment, for the ParallelDebugOutput, we pass a dummy
dynamic_list_econ_limited; it is not clear yet how this will affect
parallel runs. The basic problem is what happens when initializing the
globalWellState_ if no state information can be found for a well in the
Wells*. If some defaulted values are used, then there is no big problem here.
Since each well is handled by only one process, the output process does
not see all wells, and consequently some well switching information was
never printed in a parallel run. With this commit the well switching
message is therefore printed regardless of the process on which it appears.
This is used to compute the Euclidean product for the saturations.
These are ordered in an interleaved manner (all saturations for the cell
with index 0, then all for index 1, ...). Up to now the implementation
assumed a different ordering: blockwise (all saturations for phase 0 first,
then all saturations for phase 1, ...).
With this commit the computation uses the right assumption.
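For illustration, a small sketch contrasting the two layouts (function
and variable names are made up): the data is stored interleaved as
s[cell*numPhases + phase], while the old code indexed it blockwise as
s[phase*numCells + cell].

    #include <cstddef>
    #include <vector>

    // Euclidean product over saturations stored interleaved: s[cell*np + phase].
    double saturationProduct(const std::vector<double>& x,
                             const std::vector<double>& y,
                             std::size_t numCells, std::size_t numPhases)
    {
        double sum = 0.0;
        for (std::size_t cell = 0; cell < numCells; ++cell) {
            for (std::size_t phase = 0; phase < numPhases; ++phase) {
                const std::size_t i = cell * numPhases + phase;   // interleaved index
                // blockwise (wrong for this data) would be: phase * numCells + cell
                sum += x[i] * y[i];
            }
        }
        return sum;
    }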
Using &stdwells.wells() triggers an assertion for null pointers when
compiled without -DNDEBUG, but it was used nevertheless. That prevented
running models without wells.
The wells pointer might be null, yet we needed to access its number of
phases in the constructor to store it. With this commit we drop that
stored copy and simply ask the well struct whenever we need the number
of phases. Of course, the code using it needs to check that there are
wells, but that is already done in most parts of opm-simulators
(MultiSegmentWells and Solvent are/might be an exception).
When there are no wells we cannot call numPhases() on them, as it
produces a floating point exception. Since we do not use that
information in this case anyway, we simply use -1 instead to prevent the call.
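A minimal sketch of that guard, with a stub standing in for the real
Wells struct: ask for the phase count on demand and fall back to the -1
sentinel when there are no wells.

    struct Wells { int number_of_phases; /* ... */ };   // stand-in for the real C struct

    inline int numPhasesOrSentinel(const Wells* wells)
    {
        // Do not dereference a possibly-null pointer just to cache the phase
        // count; return the -1 sentinel instead. Callers that actually need
        // the value must check for wells first.
        return wells ? wells->number_of_phases : -1;
    }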
This commit adds sequential solvers, including a simulator variant
using them (flow_sequential.cpp) with an integration test (running
SPE1, same as for fully implicit).
The sequential code is capable of running several (but not all) test
cases without tuning or special parameters, but reducing ds_max a bit
(from the default 0.2 to, say, 0.1) helps with transport solver
convergence. The Norne model runs fine (especially with a little tuning).
A parameter iterate_to_fully_implicit (defaulting to false) is available;
when set, the simulator will iterate with alternating pressure and
transport solves towards the fully implicit solution. Although that
takes a lot of extra time, it serves as a correctness check.
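A rough sketch of what iterate_to_fully_implicit does conceptually;
solver and state types, method names, and the change() measure are
placeholders, not the actual interfaces.

    template <class PressureSolver, class TransportSolver, class State>
    void sequentialStep(PressureSolver& pressure, TransportSolver& transport,
                        State& state, bool iterate_to_fully_implicit,
                        int max_outer = 20, double tol = 1e-6)
    {
        pressure.solve(state);
        transport.solve(state);
        if (!iterate_to_fully_implicit) {
            return;                              // plain sequential splitting
        }
        for (int it = 0; it < max_outer; ++it) {
            const State previous = state;
            pressure.solve(state);
            transport.solve(state);
            if (change(previous, state) < tol) { // hypothetical change measure
                break;                           // close enough to the fully implicit answer
            }
        }
    }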
Performance is not competitive with fully implicit at this point:
essentially both the pressure and transport models inherit the fully
implicit model and do a lot of double (or triple) work. The point has
been to establish a proof of concept and baseline for further
experiments, without disturbing the base model too much (or at all, if
possible).
Changes to existing code have been minimized by merging most such
changes as smaller PRs already; the only remaining such change is to
NewtonIterationBlackoilInterleaved. Admittedly, that code (to solve
the pressure system with AMG) is not ideal, because it duplicates
similar code in CPRPreconditioner.hpp and is not parallel. I propose
to address this later by refactoring the "solve elliptic system" code
from CPRPreconditioner into a separate class that can be used from
here as well.
The changes are:
- Make the WellOps struct public (needed by transport solver).
- Make it possible to store and retrieve total reservoir volume
perforation fluxes with getStoredWellPerforationFluxes(), controlled
by a flag set by setStoreWellPerforationFluxesFlag(), defaulting to
false (needed by pressure solver).
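A possible usage sketch for the new hooks; the exact signatures and the
consuming call on the transport side are assumptions, not taken from the
actual code.

    template <class PressureModel, class TransportModel>
    void passPerforationFluxes(PressureModel& pressureModel, TransportModel& transportModel)
    {
        pressureModel.setStoreWellPerforationFluxesFlag(true);
        // ... run the pressure solve ...
        const auto& fluxes = pressureModel.getStoredWellPerforationFluxes();
        transportModel.setWellPerforationFluxes(fluxes);   // hypothetical consumer on the transport side
    }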