The new WellSwitchingLogger within updateWellControls uses
collective communication involving all processes. Therefore all
of them need to enter the function, as otherwise flow_mpi will deadlock.
This commit therefore calls the method even when the active wells
are not local.
This completes f94459d5ed
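For context, a collective operation only completes once every rank of the
communicator has entered it. A minimal sketch of this constraint (the names
are illustrative, not the actual WellSwitchingLogger code):

    #include <mpi.h>
    #include <vector>

    // Illustrative only: every rank must reach the collective call, even a rank
    // with no local well switches to report, otherwise the other ranks block forever.
    void reportWellSwitches(MPI_Comm comm, const std::vector<int>& localSwitches)
    {
        int localCount = static_cast<int>(localSwitches.size());
        int commSize = 0;
        MPI_Comm_size(comm, &commSize);
        std::vector<int> countsPerRank(commSize, 0);
        // Collective: deadlocks if any rank skips this call.
        MPI_Allgather(&localCount, 1, MPI_INT,
                      countsPerRank.data(), 1, MPI_INT, comm);
        // ... the root rank would now gather and log the actual messages ...
    }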
Each process with rank > 0 will use .<deckname>.<rank>.DEBUG and
<deckname>-<rank>.PRT for logging (instead of <file>.<rank> as before).
After the simulator has finished running we append the content
of those files to the usual log files. If these files have a non-zero
size we emit a warning, as this should not happen if logging is
done right.
These files will be empty unless we fail to log messages
only on the root process. Currently that is the case for
the messages about switching the well controls.
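A minimal sketch of the naming scheme and the append step described above
(the helper names are hypothetical, not the actual flow_mpi code):

    #include <fstream>
    #include <sstream>
    #include <string>

    // Hypothetical helper: file used by a non-root rank for debug logging.
    std::string rankDebugFile(const std::string& deckname, int rank)
    {
        std::ostringstream name;
        name << "." << deckname << "." << rank << ".DEBUG";
        return name.str();
    }

    // Hypothetical helper: file used by a non-root rank for PRT logging.
    std::string rankPrtFile(const std::string& deckname, int rank)
    {
        std::ostringstream name;
        name << deckname << "-" << rank << ".PRT";
        return name.str();
    }

    // After the run: append a rank's log file to the usual log file.
    void appendToMainLog(const std::string& rankFile, const std::string& mainFile)
    {
        std::ifstream in(rankFile, std::ios::binary);
        if (!in || in.peek() == std::ifstream::traits_type::eof())
            return; // empty or missing: the expected case when logging is done right
        std::ofstream out(mainFile, std::ios::app | std::ios::binary);
        out << in.rdbuf();
    }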
Only the root process set the output_dir correctly; the others
used the default. Therefore all messages logged by non-root
processes ended up in the current directory even if an
output_dir was passed to flow_mpi.
Previously, zero was reported for every step. With this commit
we set these numbers in the SimulatorReport, and they now end
up correctly in step_timings.txt.
Its first implementation computed wrong results in parallel. With this commit
we have now completely parallelized the computations, and the results seem
correct for parallel runs with Norne.
Both hcpv and res only ever need to hold dims elements. As dims
will most likely be much smaller than the number of cells, we now
allocate containers of size dims with this commit.
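As an illustration of the sizing change (the names are only indicative of
hcpv/res, not the actual code):

    #include <cstddef>
    #include <vector>

    // Illustrative only: accumulate per-region values into containers of
    // size dims instead of keeping one entry per cell.
    void accumulateRegionValues(const std::vector<int>& regionOfCell,  // one entry per cell
                                const std::vector<double>& cellValue,  // one entry per cell
                                int dims,
                                std::vector<double>& hcpv)
    {
        hcpv.assign(dims, 0.0); // size dims, not the number of cells
        for (std::size_t c = 0; c < cellValue.size(); ++c) {
            hcpv[regionOfCell[c]] += cellValue[c];
        }
    }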
* remotes/totto82/frankenstein_mod:
Fix seg-fault for cases without wells
Some micro performance improvements and cleaning
Add THP support in the denseAD well model
Only solve the linear system when it is not converged.
Revert changes to NewtonIterationBlackoilInterleaved.cpp
add and use class wellModelMatrixAdapter
Remove unused code and remove Eigen vectors
New updateState
Some cleaning and small changes
almost all of them were caused by recent changes in the master
branch:
- there were methods added which depend on the types `V` and
`DataBlock`. These do not make much sense in the context of the
frankenstein simulator. Also, these types are defined globally for the
whole Opm namespace in `BlackoilModelBase_impl.hpp` (which should be
prosecuted as a felony IMO)! Besides this, their names are useless:
`V` is the letter which comes after `U` in the alphabet, and when it
comes to computers basically everything can be seen as a chunk of data
(i.e., a `DataBlock`).
- it seems like the new and shiny dense-AD based well model was never
compiled with assertions enabled; at least some asserts referenced
non-existing variables.
- the recent output-related API changes were pretty unfortunate
because they had the effect of tying the (sub-optimal, IMO) internal
structure of the model even closer to the output code: as far as I can
see, `rq` only makes sense if the model works *exactly* like
BlackoilModelBase and friends. (for flow_ebos, this could be
replicated, but first it would be another unnecessary conversion step
and second, most of the quantities in `rq` are of type `ADB` and much
of the "frankenstein" exercise is devoted to getting rid of these.) I
thus reverted back to an old version of the output code and created a
`frankenstein` branch in my personal `opm-output` GitHub fork.
With GCC version (Debian 4.9.2-10) 4.9.2 we get the following error
when compiling with -std=c++11 (default for dune 2.4):
converting to ‘const std::unordered_set<std::basic_string<char> >’ from initializer list would use explicit constructor
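This diagnostic typically appears when an empty braced list is used where a
const std::unordered_set<std::string> is expected, for example as a default
argument, because this libstdc++ version still declares the corresponding
defaulted constructor explicit. An illustrative reproduction and workaround
(not necessarily the exact code that failed):

    #include <string>
    #include <unordered_set>

    // Rejected by GCC 4.9 with -std=c++11, since copy-list-initialization from
    // the empty braced list would have to use an explicit constructor:
    //
    //   void process(const std::unordered_set<std::string>& wells = {});
    //
    // Workaround: spell out the construction of the default value.
    void process(const std::unordered_set<std::string>& wells
                     = std::unordered_set<std::string>())
    {
        (void) wells;
    }

    int main()
    {
        process();
        return 0;
    }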
Instead of the WellsManager guessing which wells are handled by other
processes, we now use the output of the load balancer to compute the wells
that are handled by other processes.
With the previous approach it was not possible to calculate this information
correctly. Wells with only one completion next to the border of a
process's partition were represented on multiple processes. In addition,
wells that the ECLIPSE schedule section defines with completions on cells
that are non-active in sequential runs were not calculated at all in
parallel runs.
With the new approach the CpGrid::loadBalance routine returns the set of
names of wells that are not handled by this process when setting up the
simulation. This information is then used throughout the simulation.
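A minimal sketch of how such a set of well names can be consulted during the
run (illustrative only; the actual CpGrid::loadBalance signature is not
reproduced here):

    #include <string>
    #include <unordered_set>

    // Illustrative only: the load balancer reports the names of the wells this
    // process does not handle, and that set is consulted for the rest of the run.
    bool isHandledLocally(const std::string& wellName,
                          const std::unordered_set<std::string>& defunctWellNames)
    {
        return defunctWellNames.count(wellName) == 0;
    }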
-- The Jacobian and residual in the reservoir are updated directly
-- The sparsity pattern is provided to the well matrices.
-- Some cleaning in updateWellState()
Check that we actually have data values for relative permeability
properties {WAT,OIL,GAS}KR before attempting to output the arrays.
While here, also correct an apparent misprint in the criterion for
whether or not to activate relperm output. We should check
'liquid_active' and 'vapour_active', not 'aqua_active', when
considering OILKR and GASKR properties respectively.
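A sketch of the corrected criterion (the function and flag names are
illustrative, not the actual output code):

    // OILKR depends on the liquid phase and GASKR on the vapour phase, so the
    // checks must use liquid_active and vapour_active, not aqua_active.
    struct RelpermOutputFlags { bool watkr; bool oilkr; bool gaskr; };

    RelpermOutputFlags relpermOutput(bool aqua_active, bool liquid_active,
                                     bool vapour_active,
                                     bool haveWatKr, bool haveOilKr, bool haveGasKr)
    {
        RelpermOutputFlags flags;
        flags.watkr = aqua_active   && haveWatKr;
        flags.oilkr = liquid_active && haveOilKr; // previously checked aqua_active
        flags.gaskr = vapour_active && haveGasKr; // previously checked aqua_active
        return flags;
    }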
This simplifies handling the pore volume by centralising the code,
i.e., moving it into opm-parser. (In particular, it makes the MINPV
handling consistent with the active cells which get removed by the
grid.) If Eclipse turns out to be inconsistent here, we would need to deal
with atrocities like the MULTREGP keyword on a case-by-case basis,
i.e., it would considerably uglify the code and be an additional
maintenance burden.
Note that besides supporting MULTREGP, the code should now also
handle explicitly setting the pore volume via the PORV keyword
correctly.
-- isRS and phaseCondition are removed and hydroCarbonState in the state
is used instead
-- input of pressurediffs to computeHydrostaticCorrection() is changed
to double from Vector in WellHelpers.hpp
-- a new updateState is implemented based on Dune vectors
-- the old one is kept for comparison in this PR
-- the new updateState is not identical to the old one.
Tested on SPE1, SPE9 and Norne; it improves the convergence compared
to the old one.
- unused code is removed
- the scaled norm is stored in residual_norm_history for usage in
stabilized Newton (see the sketch after this list)
- the number of linear iterations is output
- the linear solver tolerance is reduced to 0.01
- make computeWellFlux local
- rewrite ADB::V to std::vector<double>
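A minimal sketch of the residual_norm_history idea (illustrative only; the
oscillation test actually used by the stabilized Newton scheme may differ):

    #include <cstddef>
    #include <vector>

    // Record the scaled residual norm of each nonlinear iteration.
    void recordResidualNorm(double scaledNorm,
                            std::vector<double>& residual_norm_history)
    {
        residual_norm_history.push_back(scaledNorm);
    }

    // Illustrative check: the last step went up after going down, i.e. the
    // residual history oscillates and a relaxed (stabilized) update may help.
    bool isOscillating(const std::vector<double>& history)
    {
        const std::size_t n = history.size();
        return n >= 3
            && history[n - 1] > history[n - 2]
            && history[n - 2] < history[n - 3];
    }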
When running in parallel, a well state object with the well information
of the whole grid needs to be constructed to gather the information from all
processes. Previously, this was done with the report step exported by the
timer. This was wrong for the following reason:
The output occurs after solving the time step, when the timer has already
been incremented. This means that we already constructed the well state for
gathering the data of the next report step. Unfortunately, at that step some
wells that we have computed results for might have been shut. In that case
an exception with the message "global state does not contain well ..." was thrown.
This problem occurred for Model number 2 and might have been due to wells shut
because of banned cross flow.
With this commit we use the last report step if this is not an initial write
and not a substep.
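A sketch of that rule (the names are illustrative, not the actual simulator
API):

    // The timer has already been advanced when output happens, so gather the
    // well state for the previous report step unless this is the initial write
    // or a substep.
    int reportStepForGather(int currentReportStep, bool initialWrite, bool substep)
    {
        if (initialWrite || substep) {
            return currentReportStep;
        }
        return currentReportStep - 1;
    }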