Instead of the WellsManager guessing which wells are handled by other
processes, we now use the output of the load balancer to compute the wells
that are handled by other processes.
With the previous approach it was not possible to compute this information
correctly. Wells with only one completion next to the border of a
process's partition were represented on multiple processes. In addition,
wells that the eclipse schedule section defined with completions on non-active
cells in sequential runs were not computed at all in parallel runs.
With the new approach the CpGrid::loadBalance routine returns the names of
the wells that are not handled by this process when setting up the simulation.
This information is then used throughout the simulation.
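A minimal sketch of how such a list of "not handled here" well names could
be used to filter the wells a process sets up; the names and signature are
illustrative only, not the CpGrid or WellsManager API:

```cpp
#include <string>
#include <unordered_set>
#include <vector>

// Keep only the wells this process is responsible for, given the set of
// well names the load balancer reported as handled elsewhere.
std::vector<std::string>
localWells(const std::vector<std::string>& allWells,
           const std::unordered_set<std::string>& defunctWellNames)
{
    std::vector<std::string> result;
    for (const auto& name : allWells)
        if (defunctWellNames.count(name) == 0)
            result.push_back(name);
    return result;
}
```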
-- The jacobian and residual in the reservoir are updated directly
-- The sparsity pattern is provided to the well matrices (see the sketch below)
-- Some cleaning in updateWellState()
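A generic illustration of providing a sparsity pattern to a Dune BCRS matrix
in the "random" build mode; this is not the actual well-matrix code, and the
sizes and indices are made up:

```cpp
#include <dune/common/fmatrix.hh>
#include <dune/istl/bcrsmatrix.hh>

int main()
{
    using Block = Dune::FieldMatrix<double, 1, 1>;
    using Matrix = Dune::BCRSMatrix<Block>;

    const int rows = 3, cols = 3;
    Matrix m(rows, cols, Matrix::random);

    // First pass: announce how many entries each row will have.
    for (int i = 0; i < rows; ++i)
        m.setrowsize(i, 2);
    m.endrowsizes();

    // Second pass: announce the column index of every entry.
    for (int i = 0; i < rows; ++i) {
        m.addindex(i, i);
        m.addindex(i, (i + 1) % cols);
    }
    m.endindices();

    m = 0.0;   // pattern is fixed, values can now be assembled
}
```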
Check that we actually have data values for relative permeability
properties {WAT,OIL,GAS}KR before attempting to output the arrays.
While here, also correct an apparent misprint in the criterion for
whether or not to activate relperm output. We should check
'liquid_active' and 'vapour_active', not 'aqua_active', when
considering OILKR and GASKR properties respectively.
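A small sketch of the corrected criterion, using hypothetical flags and a
plain map as a stand-in for the real output container: WATKR is gated on the
aqueous phase, OILKR on the liquid phase and GASKR on the vapour phase, and
each array is only written if data is actually present:

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

void outputRelperm(bool aqua_active, bool liquid_active, bool vapour_active,
                   const std::map<std::string, std::vector<double>>& data)
{
    const auto writeIfPresent = [&](bool active, const std::string& key) {
        const auto it = data.find(key);
        if (active && it != data.end() && !it->second.empty())
            std::cout << "writing " << key << "\n";   // stand-in for the real writer
    };
    writeIfPresent(aqua_active,   "WATKR");
    writeIfPresent(liquid_active, "OILKR");
    writeIfPresent(vapour_active, "GASKR");
}
```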
this simplifies handling the pore volume by centralising the code,
i.e., moving it into opm-parser. (In particular, it makes the MINPV
handling consistent with the active cells which get removed by the
grid.) If eclipse turns out to be inconsistent here, we would need to deal
with atrocities like the MULTREGP keyword on a case-by-case basis,
i.e., it would considerably uglify the code and be an additional
maintenance burden.
note that besides supporting MULTREGP, the code should now also
handle explicitly setting the pore volume via the PORV keyword
correctly.
-- isRS and phaseCondition are removed and hydroCarbonState in the state
is used instead
-- the pressurediffs input to computeHydrostaticCorrection() in
WellHelpers.hpp is changed from Vector to double (see the sketch after this list)
-- a new updateState is implemented based on Dune vectors
-- the old one is kept for comparison in this PR
-- the new updateState is not identical to the old one.
Tested on SPE1, SPE9 and Norne; it improves the convergence compared
to the old one.
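For context, a minimal sketch of a per-connection hydrostatic correction
computed from a single density value and depth difference; this is not the
actual WellHelpers signature, and the sign convention here is an assumption:

```cpp
#include <iostream>

// Pressure increases with depth below the well's reference point.
double hydrostaticCorrection(double wellRefDepth, double perfDepth,
                             double rho, double gravity)
{
    return rho * gravity * (perfDepth - wellRefDepth);
}

int main()
{
    // Example: perforation 50 m below the reference depth, water-like density.
    std::cout << hydrostaticCorrection(2000.0, 2050.0, 1000.0, 9.81) << " Pa\n";
}
```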
- unused code is removed
- the scaled norm is stored in residual_norm_history for use in the
stabilized Newton
- the number of linear iterations is output
- the linear solver tolerance is reduced to 0.01
- make computeWellFlux local
- rewrite ADB::V to std::vector<double> (see the sketch below)
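A sketch of the ADB::V to std::vector<double> rewrite mentioned above;
ADB::V is assumed here to be an Eigen dense array of doubles:

```cpp
#include <Eigen/Core>
#include <vector>

std::vector<double> toStdVector(const Eigen::ArrayXd& v)
{
    return std::vector<double>(v.data(), v.data() + v.size());
}
```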
When running in parallel, a well state object with the well information
of the whole grid needs to be constructed to gather the information from all
processes. Previously, this was done with the report step exported by the
timer. This was wrong for the following reason:
the output occurs after solving the time step, when the timer has already
been incremented. This means that we constructed the well state for
gathering the data for the next report step already. Unfortunately, at that
step some wells that we have computed results for might have been shut. In
that case an exception with the message "global state does not contain well
..." was thrown.
This problem occurred for Model number 2 and might have been due to wells
shut because of banned cross flow.
With this commit we use the last report step if this is not an initial write
and not a substep.
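A minimal sketch of that choice, with a hypothetical interface rather than
the actual timer/output code:

```cpp
// For an ordinary write after a completed step the timer has already been
// incremented, so the gathered well state belongs to the previous report step.
int reportStepForOutput(int currentReportStep, bool isInitialWrite, bool isSubstep)
{
    if (isInitialWrite || isSubstep)
        return currentReportStep;
    return currentReportStep - 1;
}
```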
remove the now unnecessary inclusion of
"NewtonIterationBlackoilInterleaved.cpp" (mind the .cpp extension!)
and include "dune/istl/solvers.hh" instead.
1) The well solution is stored in wellVariables as a vector of DenseAd
objects. The wellVariables are computed from the well state in
setWellVariables() (see the sketch after this list).
2) BHP and well fluxes are not stored directly but are calculated from
the wellVariables.
3) The initial well accumulation term is stored as a vector of doubles.
4) addWellFluxEq() uses DenseAd flux and accumulation terms.
5) computePropertiesForWellConnectionPressures() no longer uses the state
as input. The well state is used to get the bhp pressure.
6) The current well control is set when updated in updateWellControls(),
in order to use well_controls_get_current_type(wc) instead of
well_controls_iget_type(wc, currentIdx).
Tested on SPE1, SPE9 and Norne.
Affects the convergence, but the results are identical.
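A toy stand-in (not the opm-material API) for the DenseAd idea in items 1)
and 2): each well quantity carries its value plus derivatives with respect
to the well's primary variables, so BHP and fluxes can be evaluated from
wellVariables instead of being stored separately:

```cpp
#include <array>
#include <vector>

template <int NumWellEq>
struct AdValue {
    double value = 0.0;
    std::array<double, NumWellEq> derivatives{};  // d(value)/d(primary variable i)
};

// wellVariables as described in item 1): one AD object per well equation.
template <int NumWellEq>
using WellVariables = std::vector<AdValue<NumWellEq>>;
```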
Start using the well model described in SPE 12259 "Enhancements to the
Strongly Coupled, Fully Implicit Well Model: Wellbore CrossFlow Modeling
and Collective Well Control"
The new well model uses three well primary variables (the old one had four):
1) bhp for rate-controlled wells, or total_rate for bhp-controlled wells
2) flowing fraction of water Fw = g_w * Q_w / (g_w * Q_w + g_o * Q_o + g_g * Q_g)
3) flowing fraction of gas Fg = g_g * Q_g / (g_w * Q_w + g_o * Q_o + g_g * Q_g)
where g_w = g_o = 1 and g_g = 0.01 (see the sketch below)
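A minimal sketch of the flowing-fraction definitions above with those
scaling factors; the Q_* values and names are illustrative only:

```cpp
#include <iostream>

struct Fractions { double Fw; double Fg; };

Fractions flowingFractions(double Qw, double Qo, double Qg)
{
    const double gw = 1.0, go = 1.0, gg = 0.01;
    const double denom = gw * Qw + go * Qo + gg * Qg;
    return { gw * Qw / denom, gg * Qg / denom };
}

int main()
{
    const auto f = flowingFractions(100.0, 500.0, 2000.0);
    std::cout << "Fw = " << f.Fw << ", Fg = " << f.Fg << "\n";
}
```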
Note 1:
This is the starting point of implementing a well model using DenseAd.
The plan is to gradually remove Eigen from the well model and instead
use the fluidState object provided by the Ebos simulator.
Note 2:
This is still work in progress; substantial cleaning and testing is
still needed.
Note 3: Runs SPE1, SPE9 and Norne until 950 days.
there was a screw-up with the output directory (to set it you need to
modify the EclipseState by means of the IOConfig object? WTF?) and it
seems that FlowMain.hpp was modified after FlowMainEbos.hpp was
added, so some terminal output was missing.
for the gas-only case with vaporized oil enabled, the oil pressure was
used, but ebos uses the gas pressure as a primary variable in that
case. (The reason is that it is faster to check whether oil appears if the
gas pressure is available in that case.) This patch corrects that and
no longer uses the PrimaryVariables::set$FOO_PV() methods, because
these methods are hard to keep in sync with the indices for the
primary-variables-as-a-Dune::FieldVector feature (which is actually
quite important for the linear solver).
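A hypothetical sketch (not the ebos code) of the variable choice described
above: when only gas is present and vaporized oil is enabled, the gas
pressure is the pressure primary variable; otherwise the oil pressure is used:

```cpp
enum class PressureVariable { OilPressure, GasPressure };

PressureVariable primaryPressure(bool gasOnly, bool vaporizedOilEnabled)
{
    if (gasOnly && vaporizedOilEnabled)
        return PressureVariable::GasPressure;
    return PressureVariable::OilPressure;
}
```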
it uses ebos for linearization of the mass balance equations and the
current flow code from opm-simulators for all the rest. Currently, the
results match the ones from plain `flow` for SPE1, SPE9 and Norne, but
performance is not optimal: on SPE9, converting from and to the legacy
data structures takes about a third of the time of the actual mass
balance assembly. Nevertheless, `flow_ebos` is almost as fast as plain
`flow` for SPE9. (For Norne, `flow_ebos` is about 15% slower, even
though the results match quite closely; the reason is that it requires
more iterations, and why is not yet clear.)
i.e., the contents of the Opm::details namespace, the IterationReport
and the DefaultBlackoilSolutionState classes. the purpose of this is
to share the code between the existing flow variants and flow_ebos.
while the printed number of "Non linear iterations" was correct in a
strict sense, it was very confusing if one was working on the
linearization code, because the last Newton iteration of each time step
was linearized but not solved for (and the solution was thus not
updated, hence it did not count as a "non linear iteration"). This
makes sense for large problems where the total runtime is completely
dominated by the performance of the linear solver, but smaller
problems exhibit the opposite behavior (i.e., for them, runtime is
typically dominated by the linearization procedure), so one is more
interested in the number of linearizations, not the number of linear
solves.
The order of an unordered_map is quite unpredictable. (The same
order will only hold with the same hash function, comparison
operator, and insertion order.) Therefore we cannot assume that
the global SimulationDataContainer uses the same order for the
cell data as the local one (which is what the code assumed before
this commit). But we can assume that the local one uses the same
order on every process.
Before this commit data got mixed up (e.g. gasoilratio with surfacevol)
when gathering local data for writing eclipse files on the master
process. This commit fixes this.
Instead of iterating over the cell data of the global state when
writing the data received, we again iterate over the cell data of
the local state and simply use the key to request the correct data
for writing from the global state.
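A minimal sketch of that gather-by-key approach, using a plain
unordered_map as a hypothetical stand-in for the cell data of
SimulationDataContainer:

```cpp
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

using CellData = std::unordered_map<std::string, std::vector<double>>;

// Iterate over the *local* keys and look each one up in the global
// container by name, instead of zipping the two containers positionally
// (their iteration orders are not guaranteed to match).
void writeGathered(const CellData& local, const CellData& global)
{
    for (const auto& entry : local) {
        const std::string& key = entry.first;          // e.g. "GASOILRATIO"
        const auto it = global.find(key);
        if (it == global.end())
            continue;                                   // key not present globally
        std::cout << key << ": " << it->second.size() << " global values\n";
    }
}
```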
Previously, the call was made after the grid was distributed.
This meant that each process wrote it, but with only its own cells
active, which is just a part of the whole domain.
With this commit we make the writeInit call before distributing the
grid and make sure that only one process calls it.
models may need a more detailed picture of where they are in the
simulation. Note that since the timer objects are available at every
call site, this is also not a very deep change.