When a group has wells both under individual control and under group
control, the rates of the individually controlled wells change each
iteration, so the well targets for this kind of group need to be updated
each iteration.
When we later switch to using implicit well potentials, which should be
more accurate, we should probably always update the well targets each
iteration (unless we explicitly decide not to).
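As a rough sketch of the idea (all names below are made up for illustration, not the simulator's actual code): each iteration, the rates of the individually controlled wells are subtracted from the group target and the remainder is redistributed over the group-controlled wells.

```cpp
#include <string>
#include <vector>

// Hypothetical, simplified well description used only for this sketch.
struct WellSketch {
    std::string name;
    bool under_individual_control; // well has its own control target
    double current_rate;           // rate from the last nonlinear iteration
    double target;                 // target assigned by the group logic
};

// The rates of individually controlled wells change every iteration, so the
// remaining target for the group-controlled wells must be recomputed as well.
void updateGroupTargets(std::vector<WellSketch>& wells, const double group_target)
{
    double individually_controlled_rate = 0.0;
    int num_group_controlled = 0;
    for (const auto& w : wells) {
        if (w.under_individual_control) {
            individually_controlled_rate += w.current_rate;
        } else {
            ++num_group_controlled;
        }
    }
    // Whatever the individually controlled wells do not deliver has to be
    // covered by the group-controlled wells (split evenly here for simplicity;
    // the real distribution would use guide rates).
    const double remaining = group_target - individually_controlled_rate;
    for (auto& w : wells) {
        if (!w.under_individual_control && num_group_controlled > 0) {
            w.target = remaining / num_group_controlled;
        }
    }
}
```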
Since the unit code within opm-parser is now a drop-in replacement,
this simplifies things and makes them less error-prone.
Unfortunately, this requires quite a few PRs (most are pretty
trivial, though).
This reverts commit 09205dfa074af24b381595d02c15e799523ddb2b.
We cannot use the index, as it might change for a well between report
steps. Unfortunately, the only persistent way to identify wells
across all report steps in the schedule seems to be the well name.
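A minimal sketch of the consequence (the container and struct here are illustrative): per-well data that must survive across report steps is keyed by the well name rather than by an index.

```cpp
#include <map>
#include <string>

// Hypothetical per-well bookkeeping that must persist across report steps.
struct PersistentWellData {
    double cumulative_production = 0.0;
};

// Keyed by well name: the index of a well may change between report steps,
// but its name stays the same throughout the schedule.
std::map<std::string, PersistentWellData> well_data;

void recordProduction(const std::string& well_name, const double produced)
{
    well_data[well_name].cumulative_production += produced;
}
```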
Before this commit we tried to compute whether a well is represented on
the processor using only the grid information. Due to the overlap region
and possible completions on deactivated cells of the global grid, this is
not even possible: e.g. we cannot distinguish whether a completion is
simply not represented on the domain of a process or whether the
corresponding cell is not active in the simulation.
With this commit we refactor to passing the well manager an explicit
list of names of wells that should be completely neglected. This information
can easily be computed after the load balancer has computed the partitions.
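A sketch of how that list could be computed once the load balancer has assigned cells to processes (types and names below are assumptions for illustration, not the actual API):

```cpp
#include <set>
#include <string>
#include <vector>

// Hypothetical description of a well and its completion cells
// (global cartesian indices), used only for this sketch.
struct WellCompletions {
    std::string name;
    std::vector<int> completion_cells;
};

// Collect the names of wells that have no completion cell in the local
// partition. This list is then handed to the wells manager, which simply
// ignores those wells instead of guessing from the grid information.
std::vector<std::string> computeDefunctWellNames(
    const std::vector<WellCompletions>& wells,
    const std::set<int>& cells_in_local_partition)
{
    std::vector<std::string> defunct;
    for (const auto& well : wells) {
        bool has_local_cell = false;
        for (const int cell : well.completion_cells) {
            if (cells_in_local_partition.count(cell)) {
                has_local_cell = true;
                break;
            }
        }
        if (!has_local_cell) {
            defunct.push_back(well.name);
        }
    }
    return defunct;
}
```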
When a well is closed due to rate economic limits, it is set to either
STOP or SHUT, depending on the auto shut-in configuration.
When a well is closed because all of its connections are closed, it
should be SHUT.
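Roughly, the decision can be sketched as follows (enum and parameter names are made up for illustration):

```cpp
// Hypothetical enums for this sketch.
enum class WellStatus { OPEN, STOP, SHUT };
enum class CloseReason { RATE_ECONOMIC_LIMIT, ALL_CONNECTIONS_CLOSED };

// Decide the status of a well that is being closed.
WellStatus statusWhenClosed(const CloseReason reason, const bool auto_shut_in)
{
    if (reason == CloseReason::ALL_CONNECTIONS_CLOSED) {
        // No open connections left: the well must be SHUT.
        return WellStatus::SHUT;
    }
    // Closed because of rate economic limits: the auto shut-in
    // configuration decides between SHUT and STOP.
    return auto_shut_in ? WellStatus::SHUT : WellStatus::STOP;
}
```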
The default guide rates are calculated using the well potentials.
The well potentials are calculated in the simulator and given as input
to the WellsManager.
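In its simplest form this amounts to falling back to the well potential when no guide rate is specified; a simplified sketch (the real default may combine or scale the phase potentials):

```cpp
#include <cmath>

// If no guide rate was specified in the input, fall back to the well
// potential that the simulator computed and passed in.
// (Illustrative only; a negative specified value here means "not set".)
double defaultGuideRate(const double specified_guide_rate,
                        const double well_potential)
{
    if (specified_guide_rate >= 0.0) {
        return specified_guide_rate;
    }
    return std::abs(well_potential);
}
```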
Several files stopped compiling because they relied on opm-parser headers
doing the includes for them. Since opm-parser PR-656
(https://github.com/OPM/opm-parser/pull/656) this assumption is no longer
valid.
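The fix in each affected file is simply to include what it uses directly instead of relying on opm-parser headers pulling it in transitively, e.g. (header paths shown only as an illustration):

```cpp
// These files compiled only because another opm-parser header happened to
// pull in what they needed. After opm-parser PR-656 each file must include
// what it actually uses, for example:
#include <opm/parser/eclipse/Deck/Deck.hpp>                  // illustrative path
#include <opm/parser/eclipse/EclipseState/EclipseState.hpp>  // illustrative path
```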
This should prevent misunderstandings about what well_index_on_proc is.
It is not the well index according to the eclipse state (where only open
wells are counted), but the index among the wells that are stored on this
process' domain.
In parallel runs there are cases where wells perforate cells
that are neighbors of overlap/halo cells. On other processes only
parts of such a well are seen as perforations, and the well should be
ignored there. While the well itself was indeed ignored, the perforations
already gathered for it were mistakenly added to the wells that were found,
because wellperf_data[well_index] was not cleared. This commit adds that
clearing and results in the right handling of wells for e.g. SPE9.
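In terms of the structures named above, the fix amounts to clearing the per-well perforation list whenever the well is skipped; a sketch (the surrounding types are assumptions for illustration):

```cpp
#include <vector>

// Hypothetical perforation record for this sketch.
struct PerfData {
    int cell;                 // grid cell of the perforation
    double connection_factor; // well connection transmissibility
};

// wellperf_data[well_index] collects the perforations of one well. When a
// well turns out to be only partially visible on this process and is
// therefore ignored, its entry must be cleared; otherwise the perforations
// already gathered would wrongly end up attached to a well that is kept.
void ignoreWell(std::vector<std::vector<PerfData>>& wellperf_data,
                const int well_index)
{
    wellperf_data[well_index].clear();
}
```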
This PR adds allow_cf to the wells structure, which determines whether
crossflow is allowed or not. An extra argument is added to addWell(..)
to specify the allow_cf flag.
While hopefully not a bug, it raises an exception in gcc's
libstdc++ debug mode. Therefore we resort to using C++11's
std::vector::data instead.
The exception was raised when running SPE9 in parallel.
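The problematic pattern is taking the address of the first element of a possibly empty vector, which libstdc++'s debug mode flags; std::vector::data is well defined in that case. A minimal illustration (sendBuffer is a stand-in for the real call):

```cpp
#include <cstddef>
#include <vector>

void sendBuffer(const double* /*buf*/, std::size_t /*n*/) {} // stand-in for the real call

int main()
{
    std::vector<double> values; // may legitimately be empty on some process

    // Problematic: &values[0] indexes an empty vector, which triggers an
    // assertion under -D_GLIBCXX_DEBUG (and is undefined behaviour anyway).
    // sendBuffer(&values[0], values.size());

    // Fine: data() may be called on an empty vector; with size() == 0 the
    // pointer is never dereferenced.
    sendBuffer(values.data(), values.size());
}
```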
The dz calculated in WellDetails::getCubeDim is not correct in cases
where the face centroid of the horizontal faces is located above or
below the face centroids of the vertical faces. The cell thickness from
EclipseGrid, calculated using the Z-coordinates, is therefore used
instead.
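Conceptually, the thickness is obtained from the corner depths themselves rather than from face centroid differences; a simplified sketch for one corner-point cell (the actual value comes from EclipseGrid):

```cpp
#include <array>

// Corner depths of one corner-point cell: corners 0-3 belong to the top
// face, corners 4-7 to the bottom face (ZCORN-style ordering).
double cellThicknessFromZ(const std::array<double, 8>& z)
{
    // Average the top and bottom corner depths; their difference is the
    // cell thickness. This is robust even when the horizontal face
    // centroids lie above or below the centroids of the vertical faces.
    const double top    = (z[0] + z[1] + z[2] + z[3]) / 4.0;
    const double bottom = (z[4] + z[5] + z[6] + z[7]) / 4.0;
    return bottom - top;
}
```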
If, on one process, a well completion is next to the border, it
might also be stored on the neighboring process. Still, not
all the completions of the well are known to the neighbor.
This breaks the previous assumption that for each well all
completions must belong to the partition of the process.
Therefore, with this commit we allow wells that have only a
part of their completions assigned to the partition of the process.
These wells are deactivated under the assumption that they must
exist completely on another process due to the partitioning.
Previously, a well with just some shut completions erroneously triggered an
exception in parallel runs. This is fixed with this commit.
Due to the logic, shut completions will always be marked as existing
on a process. (Initially all completions are marked as found. For
each open completion we check whether the cartesian index belongs to
the local grid; if that is not the case we mark it as not found.)
Therefore we now check whether the number of completions found
is either the number of shut completions or the number of all completions.
In the former case the well is not stored on this process, and in the latter
case it is. In any other case we throw an exception.
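Expressed as a sketch (names are illustrative), the per-well check is:

```cpp
#include <stdexcept>

// Decide whether a well is stored on this process, given how many of its
// completions were found locally. Shut completions always count as found,
// so only two outcomes are consistent:
//  - found == shut completions:  none of the open completions are local,
//                                the well lives entirely on another process;
//  - found == all completions:   the whole well is local.
// Anything in between means the well is split across processes, which is
// not supported, so we throw.
bool wellIsOnThisProcess(const int completions_found,
                         const int shut_completions,
                         const int total_completions)
{
    if (completions_found == shut_completions) {
        return false;
    }
    if (completions_found == total_completions) {
        return true;
    }
    throw std::runtime_error("Well is only partially present on this process.");
}
```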
Previously, we used the setStatus method to set wells that do not
exist on the local grid to SHUT. Or at least that is what I thought
```well.setStatus(timestep, SHUT)``` would do. Unfortunately, my
assumption was wrong. This was revealed while testing a parallel run
with SPE9 that threw an exception about "Elements must be added in
weakly increasing order" in Opm::DynamicState::add(int, T). It seems
the method name is a bit misleading.
As it turns out, the WellsManager has its own complete list of active
wells (shut wells are simply left out). Therefore we can use this
behaviour to our advantage: with this commit we not only exclude shut
wells from the list, but also the ones that do not exist on the local
grid. We even get rid of an ugly const_cast.
Currently, a parallel SPE9 test is running and has not yet aborted.
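Conceptually the change is just an extra skip condition when building the list of wells (a sketch with made-up types; the real code works on the parsed schedule objects):

```cpp
#include <set>
#include <string>
#include <vector>

// Hypothetical minimal well description for this sketch.
struct ScheduleWellSketch {
    std::string name;
    bool is_shut;
};

// Build the list of wells the WellsManager should know about: shut wells
// were already skipped before; now wells that do not exist on the local
// grid are skipped in the same way, instead of trying to set them to SHUT
// via setStatus afterwards.
std::vector<ScheduleWellSketch> activeLocalWells(
    const std::vector<ScheduleWellSketch>& all_wells,
    const std::set<std::string>& wells_on_local_grid)
{
    std::vector<ScheduleWellSketch> result;
    for (const auto& well : all_wells) {
        if (well.is_shut) {
            continue;                                 // shut wells are left out
        }
        if (wells_on_local_grid.count(well.name) == 0) {
            continue;                                 // not present on this process
        }
        result.push_back(well);
    }
    return result;
}
```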
In a parallel run each process only knows a part of the grid, yet it
holds the complete well information. The WellsManager must be able to
handle this case.
With this commit its constructor gets a flag indicating whether this is
a parallel run. If it is, the constructor does not throw if a well has cells
that are not present on the local part of the grid. Nevertheless, it checks
that either all or none of the cells of a well are stored in the local part
of the grid.
Wells with no perforated cells on the local grid will still be present but set to SHUT.