This should prevent misunderstandings about what well_index_on_proc is. It is not the well index according to the eclipse state (which counts only open wells), but the index among the wells that are stored on this process's domain.
In a parallel run there are cases where wells perforate cells that are neighbors of overlap/halo cells. On another process only a part of such a well is seen as perforations. These wells should be ignored there. While the well was indeed ignored, the perforations that were found were mistakenly added to the well because wellperf_data[well_index] was not cleared. This commit adds that clearing and results in the right handling of wells for e.g. SPE9.
This PR adds allow_cf to the wells structure, which determines whether crossflow is allowed or not. An extra argument is added to addWell(..) to specify the allow_cf flag.
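As a hedged illustration of the intended flow (the actual addWell signature and the layout of the wells structure differ; all names below are assumptions, not the OPM interface):
```cpp
#include <string>
#include <vector>

// Illustrative sketch only: the crossflow flag is passed alongside the
// existing well data and stored per well, mirroring the allow_cf member
// described above.
struct WellEntry {
    std::string name;
    bool allow_cf;                 // whether crossflow is permitted
    std::vector<int> perf_cells;   // perforated cells of this well
};

void addWell(const std::string& name,
             const std::vector<int>& perf_cells,
             bool allow_cf,        // new extra argument
             std::vector<WellEntry>& wells)
{
    WellEntry w;
    w.name = name;
    w.allow_cf = allow_cf;         // remember the crossflow setting
    w.perf_cells = perf_cells;
    wells.push_back(w);
}
```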
While hopefully not a bug, this raises an exception in GCC's library debug mode. Therefore we resort to using C++11's std::vector::data instead. The exception was raised when running SPE9 in parallel.
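The problem is easiest to see on an empty vector: &v[0] accesses a non-existent element, which the checked containers flag, whereas v.data() is well defined even when the vector is empty. A minimal sketch (the send_buffer consumer is hypothetical):
```cpp
#include <cstddef>
#include <vector>

void send_buffer(const double* buf, std::size_t n); // hypothetical consumer

void pass_to_c_api(const std::vector<double>& v)
{
    // In GCC's debug mode this aborts for an empty vector, because v[0]
    // is an out-of-range access even if the address is never dereferenced:
    //   send_buffer(&v[0], v.size());

    // std::vector::data() (C++11) is valid for empty vectors as well and
    // simply yields a pointer that must not be dereferenced:
    send_buffer(v.data(), v.size());
}
```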
The dz calculated in WellDetails::getCubeDim is not correct in cases where the face centroid of the horizontal faces is located above or below the face centroids of the vertical faces. The cell thickness in EclipseGrid, calculated using the Z-coordinates, is therefore used instead.
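For illustration, a sketch of deriving a corner-point cell thickness directly from the Z-coordinates (average bottom-corner depth minus average top-corner depth); this shows the idea only and is not the verbatim EclipseGrid implementation:
```cpp
#include <array>

// Illustrative only: zcorn holds the depths of the 8 corners of one cell,
// with the 4 top corners first and the 4 bottom corners last (an assumption
// made for this sketch).
double cellThickness(const std::array<double, 8>& zcorn)
{
    double top = 0.0, bottom = 0.0;
    for (int i = 0; i < 4; ++i) {
        top    += zcorn[i];      // top face corner depths
        bottom += zcorn[i + 4];  // bottom face corner depths
    }
    // dz is the difference of the averaged corner depths; this is
    // insensitive to where the lateral face centroids happen to lie.
    return (bottom - top) / 4.0;
}
```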
If on one process a well completion is next to the border, then it might also be stored on the neighboring process. Still, not all the completions of the well are known to that neighbor. This breaks the previous assumption that for each well all completions must belong to the partition of the process. Therefore, with this commit we allow wells that have only a part of their completions assigned to the partition of the process. These wells are deactivated under the assumption that they must exist completely on another process due to the partitioning.
Previously, wells with just some shut completions erroneously triggered an exception in parallel runs. This is fixed with this commit.
Due to the logic, shut completions will always be marked as existing on a process. (Initially all completions are marked as found. For each open completion we check whether its Cartesian index belongs to the local grid. If that is not the case we mark it as not found.) Therefore we now check whether the number of completions found is either the number of shut completions or the number of all completions. In the former case the well is not stored on this process, and in the latter case it is. In other cases we throw an exception.
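A sketch of that check with hypothetical names (the real test sits in the WellsManager well-setup code):
```cpp
#include <stdexcept>
#include <string>

// Hypothetical helper illustrating the all-or-none check described above.
// 'found' counts completions whose cells are on the local grid (shut
// completions always count as found), 'shut' is the number of shut
// completions, and 'total' the number of all completions of the well.
bool wellIsStoredLocally(int found, int shut, int total,
                         const std::string& well_name)
{
    if (found == total) {
        return true;    // every completion is local: keep the well here
    }
    if (found == shut) {
        return false;   // only the (always-found) shut ones are here:
                        // the well lives entirely on another process
    }
    throw std::runtime_error("Well " + well_name +
                             " has completions on several processes");
}
```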
Previously, we used the setStatus method to set wells that do not exist on the local grid to SHUT. Or at least that is what I thought ```well.setStatus(timestep, SHUT)``` did. Unfortunately, my assumption was wrong. This was revealed while testing a parallel run with SPE9 that threw an exception about "Elements must be added in weakly increasing order" in Opm::DynamicState::add(int, T). It seems the method name is a bit misleading.
As it turns out, the WellsManager has its own complete list of active wells (shut wells are simply left out). Therefore we can use this behaviour to our advantage: with this commit we exclude not only shut wells from the list, but also the ones that do not exist on the local grid. We even get rid of an ugly const_cast.
Currently, I have a parallel SPE9 test running that has not yet aborted.
In a parallel run each process only knows a part of the grid. Nevertheless, it holds the complete well information. The WellsManager must be able to handle this situation.
With this commit its constructor gets a flag indicating whether this is a parallel run. If it is, then it does not throw if a well has cells that are not present on the local part of the grid. Nevertheless, it will check that either all or none of the cells of a well are stored in the local part of the grid.
Wells with no perforated cells on the local grid will still be present but set to SHUT.
With this commit the WellsManager will check the status of completions before adding them to the internal struct Wells data structure. Completions can be in one of four states: OPEN, SHUT, AUTO, POPN.
Completions with state == SHUT will be ignored, whereas the WellsManager will throw if the states AUTO or POPN are encountered. The WELOPEN keyword can also have the value 'STOP'; for completions that is translated to 'SHUT' by the Schedule object.
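A hedged sketch of that filter; the state enumeration and accessors in the parser differ, so the names here are purely illustrative:
```cpp
#include <stdexcept>

// Illustrative completion states corresponding to the deck values above.
enum class CompletionState { OPEN, SHUT, AUTO, POPN };

// Returns true if the completion should be added to the wells structure,
// false if it should be silently skipped.
bool acceptCompletion(CompletionState state)
{
    switch (state) {
    case CompletionState::OPEN:
        return true;                       // added to the wells structure
    case CompletionState::SHUT:
        return false;                      // ignored; the well may still exist
    case CompletionState::AUTO:
    case CompletionState::POPN:
        throw std::runtime_error("Completion state AUTO/POPN not handled");
    }
    return false;                          // unreachable, silences warnings
}
```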
Shut wells are not added to the well list and thus not considered in the
simulator.
The shut well test in test_wellsmanager is modified to assert this
behaviour.
BUG: This change provokes an assert in the EclipseWriter as the number of wells in the well state differs from the number of wells in the schedule.
This commit extends the feature set of the WellsManager to support
horizontal ("X" and "Y") completions and include the net-to-gross
ratio in the Peaceman index ("Completion Transmissibility Factor,
CTF") of a well completion. The NTG factor is included if present
in the input deck represented by the "eclipseState".
There are two separate, though related, parts to this commit. The
first part splits the calculation of Peaceman's "effective radius"
out to a separate utility function, effectiveRadius(), and
generalises WellsManagerDetail::computeWellIndex() to account for
arbitrary directions and NTG factors. The second part uses
GridPropertyAccess::Compressed<> to extract the NTG vector from the
input if present while providing a fall-back value of 1.0 if no such
vector is available.
Note: We may wish to make the extraction policy configurable at some
point in the future.
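For orientation, a hedged sketch of the quantities involved: Peaceman's effective radius and the resulting completion transmissibility factor for a completion along one axis, using the permeabilities and cell extents in the two perpendicular directions. The function names and the exact placement of the NTG factor are illustrative assumptions, not the verbatim OPM implementation:
```cpp
#include <cmath>

// Peaceman's effective (pressure-equivalent) radius for a completion whose
// axis is perpendicular to the (1,2)-plane; k1, k2 are the permeabilities
// and d1, d2 the cell extents in those two directions.
double effectiveRadius(double k1, double k2, double d1, double d2)
{
    const double a = std::sqrt(k2 / k1);
    const double b = std::sqrt(k1 / k2);
    return 0.28 * std::sqrt(a * d1 * d1 + b * d2 * d2)
                / (std::sqrt(a) + std::sqrt(b));
}

// Completion transmissibility factor for a completion of length h (the cell
// extent along the completion axis), well bore radius rw and skin factor s.
// The net-to-gross ratio ntg scales the effective permeability-thickness and
// falls back to 1.0 when no NTG vector is present in the deck.
double completionWellIndex(double k1, double k2, double d1, double d2,
                           double h, double rw, double skin, double ntg = 1.0)
{
    const double pi = 3.14159265358979323846;
    const double re = effectiveRadius(k1, k2, d1, d2);
    const double kh = std::sqrt(k1 * k2) * h * ntg;  // effective perm * height
    return 2.0 * pi * kh / (std::log(re / rw) + skin);
}
```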
This commit tightens the function header of method
WellsManager::createWellsFromSpecs()
to accept a reference-to-const 'cartesian_to_compressed' map. It
used to be a complete, copy-constructed object, so this is a slight
performance enhancement as we no longer need to copy a (somewhat)
large object on every call to the method.
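In abbreviated, illustrative form (the real method takes several further parameters):
```cpp
#include <map>

// Before (copy-constructed a full map on every call):
//   void createWellsFromSpecs(std::map<int, int> cartesian_to_compressed, ...);

// After: a reference-to-const avoids copying the (somewhat) large object.
void createWellsFromSpecs(const std::map<int, int>& cartesian_to_compressed);
```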
This commit generalises the implementation of utility function
'getCubeDim' to support arbitrary number of space dimensions. In
actual practice there's no change in features as we only really use
a compile-time constant (= 3) to specify the number of space
dimensions.
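A hedged sketch of what a dimension-generic getCubeDim can look like: for each space dimension, take the extent of the cell's vertex coordinates. The flattened vertex layout below is a simplifying assumption, not the grid interface verbatim:
```cpp
#include <algorithm>
#include <limits>
#include <vector>

// Illustrative: 'vertices' holds the coordinates of every vertex of one
// cell, flattened as ndims values per vertex.  Returns one extent per
// space dimension (dx, dy, dz for ndims == 3).
std::vector<double> getCubeDim(const std::vector<double>& vertices, int ndims)
{
    std::vector<double> lo(ndims, std::numeric_limits<double>::max());
    std::vector<double> hi(ndims, std::numeric_limits<double>::lowest());

    const int nvertices = static_cast<int>(vertices.size()) / ndims;
    for (int v = 0; v < nvertices; ++v) {
        for (int d = 0; d < ndims; ++d) {
            const double c = vertices[v * ndims + d];
            lo[d] = std::min(lo[d], c);
            hi[d] = std::max(hi[d], c);
        }
    }

    std::vector<double> cube(ndims);
    for (int d = 0; d < ndims; ++d) {
        cube[d] = hi[d] - lo[d];     // extent of the cell in dimension d
    }
    return cube;
}
```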
Removed conflicts in
opm/core/wells/WellsManager.cpp
that were due to the change
```diff
- pd.well_index = WellsManagerDetail::computeWellIndex(radius, cubical, cell_perm, completion->getDiameter());
+ pd.well_index = WellsManagerDetail::computeWellIndex(radius, cubical, cell_perm, completion->getSkinFactor());
```
in WellsManager::createWellsFromSpecs, which moved from WellsManager.cpp to WellsManager_impl.hpp in a previous commit.