It uses ebos for the linearization of the mass balance equations and the
current flow code from opm-simulators for everything else. Currently, the
results match the ones from plain `flow` for SPE1, SPE9 and Norne, but
performance is not yet optimal: on SPE9, converting from and to the legacy
data structures takes about a third as much time as the actual mass
balance assembly. Nevertheless, `flow_ebos` is almost as fast as plain
`flow` for SPE9. (For Norne, `flow_ebos` is about 15% slower, even
though the results match quite closely; the reason is that it requires
more iterations, although it is not yet clear why.)
I.e., the contents of the Opm::details namespace as well as the
IterationReport and DefaultBlackoilSolutionState classes. The purpose of
this is to share the code between the existing flow variants and flow_ebos.
While the printed number of "Non linear iterations" was correct in a
strict sense, it was very confusing when working on the linearization
code, because the last Newton iteration of each time step was linearized
but not solved for (the solution was thus not updated, hence it did not
count as a "non linear iteration"). Counting only the solved-for
iterations makes sense for large problems, where the total runtime is
completely dominated by the performance of the linear solver, but smaller
problems exhibit the opposite behavior (i.e., for them, runtime is
typically dominated by the linearization procedure), so there one is more
interested in the number of linearizations, not the number of linear
solves.
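A minimal sketch of the distinction, with stub functions standing in for the real assembly and solver code (none of these names are the actual opm-simulators API):

```cpp
#include <iostream>

// Stubs standing in for the real assembly/solver code.
void linearizeMassBalanceEquations() {}
void solveLinearSystem() {}
void updateSolution() {}
bool converged(int numUpdates) { return numUpdates >= 3; } // pretend convergence

int main()
{
    const int maxIter = 10;
    int numLinearizations = 0;
    int numNonlinearIterations = 0;

    for (int iter = 0; iter < maxIter; ++iter) {
        linearizeMassBalanceEquations(); // assemble Jacobian and residual
        ++numLinearizations;

        if (converged(numNonlinearIterations)) {
            // The last Newton iteration is linearized only to decide
            // convergence: no linear solve, no solution update, hence
            // no "non linear iteration" is counted for it.
            break;
        }

        solveLinearSystem();
        updateSolution();
        ++numNonlinearIterations;
    }

    // On convergence: numLinearizations == numNonlinearIterations + 1.
    std::cout << numLinearizations << " linearizations, "
              << numNonlinearIterations << " nonlinear iterations\n";
}
```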
The iteration order of an unordered_map is quite unpredictable (the same
order only holds with the same hash function, comparison operator, and
insertion order). Therefore we cannot assume that the global
SimulationDataContainer uses the same order for the cell data as the
local one, which is what the code did before this commit. But we can
assume that the local one uses the same order on every process.
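As a minimal illustration of the failed assumption (the keys are made up, not actual SimulationDataContainer entries):

```cpp
#include <iostream>
#include <string>
#include <unordered_map>

int main()
{
    // Equal contents, different insertion order.
    std::unordered_map<std::string, int> a{
        {"PRESSURE", 0}, {"SWAT", 1}, {"GASOILRATIO", 2}};
    std::unordered_map<std::string, int> b{
        {"GASOILRATIO", 2}, {"PRESSURE", 0}, {"SWAT", 1}};

    std::cout << std::boolalpha << "a == b: " << (a == b) << '\n';
    for (const auto& ea : a) std::cout << "a: " << ea.first << '\n';
    for (const auto& eb : b) std::cout << "b: " << eb.first << '\n';
    // The key order printed for a and b need not match: pairing the two
    // containers positionally is therefore unsafe.
}
```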
Before this commit, data got mixed up (e.g. gasoilratio with surfacevol)
when gathering the local data for writing eclipse files on the master
process. This commit fixes that.
Instead of iterating over the cell data of the global state when
writing the received data, we now iterate over the cell data of the
local state and simply use each key to request the matching data for
writing from the global state, as sketched below.
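A sketch of that pattern, with an unordered_map standing in for the SimulationDataContainer cell data and a print standing in for the actual write (hypothetical names throughout):

```cpp
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

using CellData = std::unordered_map<std::string, std::vector<double>>;

void writeCellData(const CellData& localCellData, const CellData& globalCellData)
{
    // Iterate the *local* cell data and use each key to fetch the
    // matching entry from the *global* container, instead of assuming
    // that both containers iterate in the same order.
    for (const auto& entry : localCellData) {
        const std::string& key = entry.first;
        const std::vector<double>& globalValues = globalCellData.at(key);
        std::cout << "writing " << key << " ("
                  << globalValues.size() << " global cells)\n";
    }
}

int main()
{
    CellData local{{"PRESSURE", {1.0, 2.0}}, {"SWAT", {0.2, 0.3}}};
    CellData global{{"SWAT", {0.2, 0.3, 0.4, 0.5}},
                    {"PRESSURE", {1.0, 2.0, 3.0, 4.0}}};
    writeCellData(local, global);
}
```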
Previously, the call was made after the grid was distributed. This meant
that each process wrote the file, but with only its own cells active,
which covered just a part of the whole domain. With this commit we make
the writeInit call before distributing the grid and make sure that only
one process calls it.
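A sketch of the corrected call order (writeInit() and distributeGrid() are placeholders here, not the actual OPM signatures):

```cpp
#include <mpi.h>

// Placeholders for the actual output writer and load balancing calls.
void writeInit() {}
void distributeGrid() {}

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Write the init file while every process still sees the full,
    // undistributed grid -- and let only one process do the writing.
    if (rank == 0)
        writeInit();

    // Only now hand the cells out to the individual processes.
    distributeGrid();

    MPI_Finalize();
}
```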
Models may need a more detailed picture of where they are in the
simulation. Note that since the timer objects are available at every
call site, this is also not a very deep change.
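For illustration, a sketch of what passing the timer down enables (the SimulatorTimer stand-in and the prepareStep() hook are made up, not the actual interfaces):

```cpp
#include <iostream>

// Stand-in for the simulator's timer object.
struct SimulatorTimer {
    int reportStepNum() const { return 5; }
    double simulationTimeElapsed() const { return 42 * 86400.0; } // [s]
    double currentStepLength() const { return 86400.0; }          // [s]
};

struct Model {
    // With the timer passed in, the model knows where it is in the
    // simulation, e.g. to adapt its behavior per report step.
    void prepareStep(const SimulatorTimer& timer)
    {
        std::cout << "report step " << timer.reportStepNum()
                  << ", t = " << timer.simulationTimeElapsed() << " s"
                  << ", dt = " << timer.currentStepLength() << " s\n";
    }
};

int main()
{
    SimulatorTimer timer; // available at every call site anyway
    Model{}.prepareStep(timer);
}
```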