When running in parallel, a well state object holding the well
information of the whole grid needs to be constructed to gather the
information from all processes. Previously, this was done using the
report step exported by the timer. This was wrong for the following
reason:
The output occurs after solving the time step, when the timer has
already been incremented. This means that we were constructing the well
state for gathering the data of the next report step. Unfortunately, at
that step some wells that we have computed results for might already
have been shut. In that case an exception with the message "global
state does not contain well ..." was thrown.
This problem occurred for Model number 2 and might have been due to
wells shut because of banned cross flow.
With this commit we use the last report step if this is not an initial write
and not a substep.
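A minimal sketch of the intended logic, with illustrative names (the
timer call shown is an assumption, not necessarily the exact API):

    // Gather the well state for the step that was just solved, not the
    // next one; the timer has already been advanced when output happens.
    int reportStep = timer.reportStepNum();
    if (!initialWrite && !substep) {
        reportStep -= 1;   // use the last report step
    }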
remove the now unnecessary inclusion of
"NewtonIterationBlackoilInterleaved.cpp" (mind the .cpp extension!)
and include "dune/istl/solvers.hh" instead.
1) The well solution is stored in wellVariables as a vector of DenseAD
objects. wellVariables is computed from the well state in
setWellVariables() (see the sketch after this list).
2) BHP and well fluxes are not stored directly but calculated based on
the wellVariables
3) The initial well accumulation term is stored as a vector of doubles.
4) addWellFluxEq() uses DenseAd flux and accumulation terms
5) computePropertiesForWellConnectionPressures() no longer uses state as
input. The well state is used to get the bhp instead.
6) The current well control is set when updated in updateWellControls(),
in order to use well_controls_get_current_type(wc) instead of
well_controls_iget_type(wc, currentIdx).
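A rough sketch of the storage described above; the EvalWell typedef,
the number of derivatives and the member names are illustrative, not
necessarily the actual class members:

    typedef Opm::DenseAd::Evaluation<double, /*numDerivs=*/6> EvalWell;
    std::vector<EvalWell> wellVariables_;  // well unknowns, filled from the well state in setWellVariables()
    std::vector<double>   F0_;             // initial well accumulation terms, stored as plain doubles
    // bhp and well fluxes are not stored; they are evaluated from wellVariables_ on demand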
Tested on SPE1, SPE9 and Norne.
Affects the convergence, but the results are identical.
Start using the well model described in SPE 12259 "Enhancements to the
Strongly Coupled, Fully Implicit Well Model: Wellbore CrossFlow Modeling
and Collective Well Control".
The new well model uses three well primary variables (the old one had
4):
1) bhp for rate controlled wells, or total_rate for bhp controlled wells
2) flowing fraction of water
   Fw = g_w * Q_w / (g_w * Q_w + g_o * Q_o + g_g * Q_g)
3) flowing fraction of gas
   Fg = g_g * Q_g / (g_w * Q_w + g_o * Q_o + g_g * Q_g)
where g_g = 0.01 and g_w = g_o = 1.
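For illustration only, the fractions for a single well could be
computed from its surface phase rates Q_w, Q_o, Q_g like this:

    const double g_w = 1.0, g_o = 1.0, g_g = 0.01;
    const double total = g_w*Q_w + g_o*Q_o + g_g*Q_g;
    const double Fw = g_w*Q_w / total;   // flowing fraction of water
    const double Fg = g_g*Q_g / total;   // flowing fraction of gas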
Note 1:
This is the starting point for implementing a well model using DenseAd.
The plan is to gradually remove Eigen from the well model and instead
use the fluidState object provided by the Ebos simulator.
Note 2:
This is still work in progress, and substantial cleaning and testing are
still needed.
Note 3: Runs SPE1, SPE9 and Norne until 950 days.
there was a screw-up with the output directory (to set it you need to
modify the EclipseState by means of the IOConfig object? WTF?), and it
seems that FlowMain.hpp was modified after FlowMainEbos.hpp was added,
so some terminal output was missing.
for the gas-only case with vaporized oil enabled, the oil pressure was
used, but ebos uses the gas pressure as a primary variable in that
case. (The reason is that checking whether oil appears is faster if the
gas pressure is available.) This patch corrects that and no longer uses
the PrimaryVariables::set$FOO_PV() methods, because these methods are
hard to keep in sync with the indices for the
primary-variables-as-a-Dune::FieldVector feature (which is actually
quite important for the linear solver).
it uses ebos for linearization of the mass balance equations and the
current flow code from opm-simulators for all the rest. Currently, the
results match the ones from plain `flow` for SPE1, SPE9 and Norne, but
performance is not optimal: on SPE9, converting from and to the legacy
data structures takes about a third of the time needed for the actual
mass balance assembly. Nevertheless, `flow_ebos` is almost as fast as
plain `flow` for SPE9. (For Norne, `flow_ebos` is about 15% slower,
even though the results match quite closely; the reason is that it
requires more iterations, although it is not yet clear why.)
i.e., the contents of the Opm::details namespace, the IterationReport
and the DefaultBlackoilSolutionState classes. The purpose of this is
to share the code between the existing flow variants and flow_ebos.
while the printed number of "Non linear iterations" was correct in a
strict sense, it was very confusing when working on the linearization
code, because the last Newton iteration of each time step was
linearized but not solved for (the solution was thus not updated and
hence did not count as a "non linear iteration"). That counting makes
sense for large problems where the total runtime is completely
dominated by the performance of the linear solver, but smaller
problems exhibit the opposite behavior (i.e., for them, runtime is
typically dominated by the linearization procedure), so one is more
interested in the number of linearizations, not the number of linear
solves.
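A sketch of the distinction, using an illustrative struct (not the
actual IterationReport type):

    struct IterationCounts {
        int linearizations      = 0; // Jacobian assemblies, incl. the final one that is not solved
        int nonlinearIterations = 0; // Newton updates actually applied
        int linearIterations    = 0; // iterations spent inside the linear solver
    };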
The order of an unordered_map is quite unpredictable (the same order
only results from the same hash function, equality predicate, and
insertion history). Therefore we cannot assume that the global
SimulationDataContainer uses the same order for the cell data as the
local one, which is what the code assumed before this commit. But we
can assume that the local one uses the same order on every process.
Before this commit, data got mixed up (e.g. gasoilratio with surfacevol)
when gathering local data for writing eclipse files on the master
process. This commit fixes that.
Instead of iterating over the cell data of the global state when
writing the data received, we again iterate over the cell data of
the local state and simply use the key to request the correct data
for writing from the global state.
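A sketch of the receiving side, assuming the SimulationDataContainer
offers key-based access (the method names are approximate):

    // Iterate the *local* state's cell data and fetch the matching
    // global data by key instead of relying on identical iteration order.
    for (const auto& entry : localState.cellData()) {
        const std::string& key = entry.first;
        const std::vector<double>& data = globalState.getCellData(key);
        // ... write 'data' under 'key' to the eclipse output ...
    }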
Previously, the call was made after the grid was distributed.
This meant that each process wrote it, but only with its own cells
active, which covered just a part of the whole domain.
With this commit we make the writeInit call before distributing the
grid and make sure that only one process calls it.
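A sketch of the new call order, assuming a Dune::CpGrid-like interface
(the writeInit signature is illustrative):

    if (grid.comm().rank() == 0) {
        eclipseWriter.writeInit(simTimer); // INIT data for the full, undistributed grid
    }
    grid.loadBalance();                    // distribute the grid afterwards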
models may need a more detailed picture of where they are in the
simulation. Note that since the timer objects are available at every
call site, this is also not a very deep change.
At the moment, for the ParallelDebugOutput, we put in a dummy
dynamic_list_econ_limited; it is not clear how this will affect
parallel runs. The basic problem is what happens when initializing the
globalWellState_ if no state information can be found for a well in the
Wells*. If some defaulted values are used, then there is no big problem
here.
This is used to compute the Euclidean product for the saturations.
These are ordered in an interleaved manner (all saturations for the
cell with index 0, then all for index 1, ...). Up to now the
implementation assumed a different ordering: blockwise (all saturations
for phase 0 first, then all saturations for phase 1, ...).
With this commit the computation uses the right assumption.
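A sketch of the assumed (interleaved) layout, where saturation i of
cell c sits at index c*np + i:

    double result = 0.0;
    for (int cell = 0; cell < numCells; ++cell) {
        for (int phase = 0; phase < np; ++phase) {
            const int idx = cell*np + phase;
            result += x[idx] * y[idx];   // contribution of this cell/phase pair
        }
    }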
Using &stdwells.wells() triggers an assertion for null pointers when
assertions are enabled (i.e. without -DNDEBUG), but it was used
nevertheless. That prevented running models without wells.
The wells pointer might be null, but the constructor accessed it to
store its number of phases. With this commit we avoid storing that
value and simply ask the well struct whenever we need the number of
phases. Of course the code using it needs to check that there are
wells, but that is done in most parts of opm-simulators currently
(MultiSegmentWells and Solvent are/might be exceptions).
If the pointer is null we cannot call numPhases() on the wells, as that
produces a floating point exception. As we do not use that information
in this case anyway, we simply use -1 instead to prevent the call.
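In code this boils down to a guard along the following lines (a sketch,
not the literal change):

    // Do not read Wells::number_of_phases through a null pointer.
    const int np = wells ? wells->number_of_phases : -1; // -1 is never used when there are no wells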
This commit adds sequential solvers, including a simulator variant
using them (flow_sequential.cpp) with an integration test (running
SPE1, same as for fully implicit).
The sequential code is capable of running several (but not all) test
cases without tuning or special parameters, but reducing ds_max a bit
(from the default 0.2 to, say, 0.1) helps with transport solver
convergence. The Norne model runs fine (especially with a little
tuning). A parameter iterate_to_fully_implicit (defaulting to false) is
available; when set, the simulator will iterate with alternating
pressure and transport solves towards the fully implicit solution.
Although that takes a lot of extra time, it serves as a correctness
check.
Performance is not competitive with fully implicit at this point:
essentially both the pressure and transport models inherit the fully
implicit model and do a lot of double (or triple) work. The point has
been to establish a proof of concept and baseline for further
experiments, without disturbing the base model too much (or at all, if
possible).
Changes to existing code have been minimized by merging most such
changes as smaller PRs already; the only remaining such change is to
NewtonIterationBlackoilInterleaved. Admittedly, that code (to solve
the pressure system with AMG) is not ideal, because it duplicates
similar code in CPRPreconditioner.hpp and is not parallel. I propose
to address this later by refactoring the "solve elliptic system" code
from CPRPreconditioner into a separate class that can be used also
from here.
The changes are:
- Make the WellOps struct public (needed by transport solver).
- Make it possible to store and retrieve total reservoir volume
perforation fluxes with getStoredWellPerforationFluxes(), controlled
by a flag set by setStoreWellPerforationFluxesFlag(), defaulting to
false (needed by the pressure solver; usage sketched below).
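A possible usage sketch of the new hooks (the owning object and the
call site are assumptions):

    stdWells.setStoreWellPerforationFluxesFlag(true);
    // ... after the well contributions have been assembled/solved ...
    const auto& perfFluxes = stdWells.getStoredWellPerforationFluxes(); // reservoir-volume flux per perforation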
As only one process is responsible for each well, the output process
does not see all wells, so some well switching information was never
printed in a parallel run.
Therefore with this commit the well switching message is printed
regardless of which process it appears on.
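A sketch of the idea, with illustrative condition and message names
(only the OpmLog call is the real logging facility):

    if (wellControlSwitched) {
        // Previously this was guarded by an "is output rank" check; now
        // every process that owns the well reports the switch.
        Opm::OpmLog::info(switchMessage);
    }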
Changes to BlackoilOutputWriter as mandated by the split and rewrite of
opm-output. Notable changes:
* BlackoilOutputWriter is no longer a child class of OutputWriter.
* Minor interface changes; writeTimeStep requires a Wells pointer
* restore requires a Wells* pointer
* VTK/Matlab support rewrites; no longer inherits OutputWriter
* WellStateFullyImplicitBlackoil::report added, to write its data in a
format understood by opm-output
Relies on utility/Compat.hpp for quick conversion to the formats
defined by opm-output.
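A rough sketch of the adapted call sites; the argument lists are
approximations, not the exact new signatures:

    // writeTimeStep now needs the Wells pointer as well.
    outputWriter.writeTimeStep(timer, state, wellState, wells);
    // WellStateFullyImplicitBlackoil::report converts to the opm-output types.
    const Opm::data::Wells wellData = wellState.report(phaseUsage);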
ParallelDebugOutput will always dereference its member variable
globalReservoirState_, even if it is not used for any output.
In g++ this throws when using -D_GLIBCXX_DEBUG -DDEBUG -D_GLIBCXX_FORCE_NEW.
In any case we would have a dangling reference in
PackUnPackSimulationDataContainer. This commit fixes this by always
initializing the pointer globalReservoirState_. On ranks that do not
perform any output its size will be zero.
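A sketch of the fix, assuming the
SimulationDataContainer(numCells, numFaces, numPhases) constructor
(the argument roles are assumptions):

    const int cells = isIORank ? globalNumCells : 0; // zero cells on non-output ranks
    globalReservoirState_.reset(new Opm::SimulationDataContainer(cells, /*numFaces=*/0, numPhases));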
The size is the number of cells and does not depend on the number of
phases.
This was a typical copy&paste mistake that wasted space.
Kudos to atgeirr for catching this.
This is needed because, since PR #687 (commit 9c771ab6), the
hydrocarbon state is used to get the primal variables. Therefore we
now distribute it along with the rest of the black oil state variables
at the start of a parallel simulation.
This closes issue #695.
The non-Cartesian connections are required by the output facilities,
while the truly non-neighbour connections (meaning those not in the grid)
should be used by all other code.
to handle the different interfaces of computeWellConnectionPressures
for StandardWells and MultisegmentWells, a computeWellConnectionPressures
function was introduced in the models.