this code mostly used the Eigen vectors as plain arrays anyway, so let's
use `std::vector`.
also, this patch only "mostly eliminates" Eigen from these parts
of the code because the source files of the VFP code still use
AutoDiffBlock; unfortunately, this cannot easily be changed because
`flow_legacy` depends on these methods. (`flow_ebos` does not use the
incriminating methods.)
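A minimal sketch of the kind of change involved (hypothetical function,
assuming the vector is only ever used for element access):

```cpp
#include <cstddef>
#include <vector>
#include <Eigen/Dense>

// before: an Eigen vector used purely as a flat array
double totalBefore(const Eigen::VectorXd& rates)
{
    double sum = 0.0;
    for (Eigen::Index i = 0; i < rates.size(); ++i)
        sum += rates[i];          // element access only, no linear algebra
    return sum;
}

// after: std::vector carries the same data without the Eigen dependency
double totalAfter(const std::vector<double>& rates)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < rates.size(); ++i)
        sum += rates[i];
    return sum;
}
```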
* master: (42 commits)
Let only one rank write to step_timing.txt
Do not refer users to issue tracker if multiple procs log.
Remove unused variable.
Use vector instead of VLA, also add missing includes.
changed: bundle eigen3 in the original tarball for debian
update redhat6 packaging
Bugfix parallel computation of weighted pressure etc.
Fixed uninitialized bug, and added logging/comment
Removed superfluous std::move
Refactoring
Initial version of summary data
Do not store collective communication in the wells object.
Make sure that updateWellControls is called on each process.
Make WellSwitchingLogger work with DUNE 2.3
Schedule::getGroup returns reference, not pointer
Removed warning in WellSwitchLogger::calculateMessageSize
Correctly initialize MPI for multisegment wells test
Changed some names in WellSwitchingLogger
Use speaking name for bool in getCellData
Whitespace and other formatting changes
...
All ranks were still writing to step_timing.txt at the same time.
This made it unusable for parallel runs. With this commit, only
one process writes to this file.
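A minimal sketch of the pattern (hypothetical file content, assuming MPI
is the communication layer):

```cpp
#include <mpi.h>
#include <fstream>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Only the I/O rank opens the file; all other ranks skip writing,
    // so the output is no longer interleaved or overwritten.
    if (rank == 0) {
        std::ofstream timing("step_timing.txt", std::ios::app);
        timing << "step 0: 1.23 s\n"; // stand-in for the real report line
    }

    MPI_Finalize();
}
```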
Previously, zero was reported for all steps. With this commit
we set these numbers in the SimulatorReport, and now they end
up correctly in step_timings.txt.
Its first implementation computed wrong results in parallel. With this commit
the computations are completely parallelized, and the results seem correct
for parallel runs with Norne.
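A minimal sketch of a weighted average computed correctly in parallel
(hypothetical data layout, assuming MPI): each rank sums only the cells
it owns, then the partial sums are combined globally.

```cpp
#include <mpi.h>
#include <cstddef>
#include <vector>

double weightedAveragePressure(const std::vector<double>& pressure,
                               const std::vector<double>& weight,
                               const std::vector<bool>& ownedByMe,
                               MPI_Comm comm)
{
    double local[2] = {0.0, 0.0};              // {sum(w*p), sum(w)}
    for (std::size_t i = 0; i < pressure.size(); ++i) {
        if (!ownedByMe[i])
            continue;                          // skip overlap/ghost cells,
                                               // else they count twice
        local[0] += weight[i] * pressure[i];
        local[1] += weight[i];
    }
    double global[2];
    MPI_Allreduce(local, global, 2, MPI_DOUBLE, MPI_SUM, comm);
    return global[0] / global[1];
}
```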
almost all of them were caused by recent changes in the master
branch:
- there were methods added which depend on the types `V` and
`DataBlock`. These do not make much sense in the context of the
frankenstein simulator. Also, these types are defined globally for the
whole Opm namespace in `BlackoilModelBase_impl.hpp` (which should be
prosecuted as a felony IMO)! Besides this, their names are useless:
`V` is just the letter which comes after `U` in the alphabet, and when
it comes to computers, basically everything can be seen as a chunk of
data (i.e., a `DataBlock`). (See the first sketch after this list.)
- it seems like the new and shiny dense-AD-based well model was never
compiled with assertions enabled; at least some asserts referenced
non-existing variables. (See the second sketch after this list.)
- the recent output-related API changes were pretty unfortunate
because they had the effect of tying the (sub-optimal, IMO) internal
structure of the model even closer to the output code: as far as I can
see, `rq` only makes sense if the model works *exactly* like
BlackoilModelBase and friends. (For flow_ebos, this could be
replicated, but first, it would be another unnecessary conversion step,
and second, most of the quantities in `rq` are of type `ADB`, and much
of the "frankenstein" exercise is devoted to getting rid of these.) I
thus reverted back to an old version of the output code and created a
`frankenstein` branch in my personal `opm-output` GitHub fork.
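Two small sketches for the first two points above (hypothetical names,
not the actual OPM code). First, why namespace-scope aliases in a
header are harmful:

```cpp
#include <Eigen/Dense>

namespace Opm {
// Every translation unit that includes this header now has `V` and
// `DataBlock` injected into the entire Opm namespace, inviting
// collisions and silent reuse:
typedef Eigen::Array<double, Eigen::Dynamic, 1> V;
typedef Eigen::Array<double, Eigen::Dynamic, Eigen::Dynamic,
                     Eigen::RowMajor> DataBlock;
}

// A class-scoped alias keeps the (still badly named) type contained:
class SomeModel {
    using V = Eigen::Array<double, Eigen::Dynamic, 1>; // visible only here
};
```

Second, why asserts referencing non-existing variables can go unnoticed:

```cpp
#include <cassert>

double residualNorm() { return 0.5; }   // stand-in for real code

int main()
{
    // In a release build (-DNDEBUG) the preprocessor discards the whole
    // expression below, so even a reference to a variable that no
    // longer exists would still compile; only a build with assertions
    // enabled catches it.
    assert(residualNorm() < 1.0);
}
```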
Instead of the WellsManager guessing which wells are handled by other
processes, we now use the output of the load balancer to compute the
wells that are handled by other processes.
With the previous approach it was not possible to calculate this information
correctly. Wells with only one completion next to the border of a
process's partition were represented on multiple processes. In addition,
wells that the ECLIPSE SCHEDULE section defined with completions on
non-active cells were calculated in sequential runs but not at all in
parallel runs.
With the new approach, the CpGrid::loadBalance routine returns the set of
names of the wells that are not handled by this process when setting up the
simulation. This information is then used throughout the simulation.
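A hypothetical sketch of how the load balancer's output can be used
(names invented; the real CpGrid::loadBalance signature and well types
differ):

```cpp
#include <set>
#include <string>
#include <vector>

struct Well { std::string name; };   // stand-in for the real well type

// Keep only the wells that the load balancer did NOT mark as handled
// elsewhere; this replaces the old guessing in the WellsManager.
std::vector<Well> localWells(const std::vector<Well>& allWells,
                             const std::set<std::string>& handledElsewhere)
{
    std::vector<Well> mine;
    for (const auto& w : allWells)
        if (handledElsewhere.count(w.name) == 0)
            mine.push_back(w);
    return mine;
}
```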
while the printed number of "Non linear iterations" was correct in a
strict sense, it was very confusing if one was working on the
linearization code, because the last Newton iteration of each time step
is linearized but not solved for (the solution is thus not updated,
hence it does not count as a "non linear iteration"). This
makes sense for large problems where the total runtime is completely
dominated by the performance of the linear solver, but smaller
problems exhibit the opposite behavior (i.e., for them, runtime is
typically dominated by the linearization procedure), so one is more
interested in the number of linearizations than in the number of linear
solves.
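A sketch of why the two counts differ by one (hypothetical loop, not
the actual simulator code): the convergence check after the final
assembly exits the loop before a linear solve happens.

```cpp
#include <cstdio>

int main()
{
    int newtonIterations = 0;  // iterations that actually update the solution
    int linearizations   = 0;  // assemblies of the Jacobian/residual
    bool converged = false;
    int step = 0;
    while (!converged) {
        ++linearizations;          // linearize: assemble the system
        converged = (++step >= 4); // stand-in for the residual check
        if (converged)
            break;                 // last linearization is never solved for
        ++newtonIterations;        // a solve + update happened
    }
    // prints: linearizations=4 newton=3
    std::printf("linearizations=%d newton=%d\n",
                linearizations, newtonIterations);
}
```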
Previously, the call was made after the grid was distributed.
This meant that each process wrote the file, but with only its cells
active, which covered just a part of the whole domain.
With this commit we make the writeInit call before distributing the
grid and make sure that only one process calls it.
models may need a more detailed picture of where they are in the
simulation. Note that since the timer objects are available at every
call site, this is also not a very deep change.
At the moment, for the ParallelDebugOutput, we put in a dummy
dynamic_list_econ_limited; we are not sure how it will affect parallel runs.
The basic problem is that when initializing the globalWellState_, it is
unclear what happens if no state information can be found for a well in the
Wells*. If some defaulted values are used, then there is no big problem here.