This adds a utility that creates a vector of the values at the
perforation above each local perforation. For distributed wells this is
needed because the perforation above might live on another process. We
use the parallel index sets together with the global indices of the
perforated cells.
Applying the B matrix is basically a component-wise multiplication
with a vector followed by a parallel reduction. We perform that
reduction on all ranks that compute for the well to save the
broadcast when applying C^T.
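A minimal sketch of that pattern for a single well equation, using
hypothetical names (bDiag, xLocal, wellComm) rather than the actual OPM
types:

    #include <cstddef>
    #include <vector>
    #include <mpi.h>

    // Apply B: multiply the local entries component-wise with the local
    // reservoir values and reduce the sum to all ranks of the well, so
    // that no extra broadcast is needed when C^T is applied afterwards.
    double applyB(const std::vector<double>& bDiag,   // local entries of B
                  const std::vector<double>& xLocal,  // local reservoir values
                  MPI_Comm wellComm)                  // ranks perforated by this well
    {
        double localSum = 0.0;
        for (std::size_t i = 0; i < bDiag.size(); ++i) {
            localSum += bDiag[i] * xLocal[i];
        }

        double globalSum = 0.0;
        MPI_Allreduce(&localSum, &globalSum, 1, MPI_DOUBLE, MPI_SUM, wellComm);
        return globalSum;
    }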
BlackoilWellModel now stores an instance of this class for each
well. Inside that class there is a custom communicator that only
contains ranks that will have local cells perforated by the well.
This will be used in the application of the distributed well operator.
This is another small step towards distributed wells, but it should be
safe to merge (note that creating the custom communicators is a
collective MPI operation, but it is done only once).
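As a sketch of how such a restricted communicator could be set up (the
actual class and member names are not shown here), MPI_Comm_split with
MPI_UNDEFINED for non-participating ranks yields exactly this kind of
per-well communicator:

    #include <mpi.h>

    // Collective on globalComm: ranks that perforate the well end up in a
    // shared communicator, all other ranks receive MPI_COMM_NULL.
    MPI_Comm createWellCommunicator(bool hasLocalPerforations, MPI_Comm globalComm)
    {
        int rank = 0;
        MPI_Comm_rank(globalComm, &rank);

        const int color = hasLocalPerforations ? 0 : MPI_UNDEFINED;

        MPI_Comm wellComm = MPI_COMM_NULL;
        MPI_Comm_split(globalComm, color, rank, &wellComm);
        return wellComm;
    }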
This commit adds a new helper function,
WellInterfacePtr createWellPointer(wellID, reportStep) const
which is responsible for creating an appropriately typed derived well
pointer depending on the well type (multi-segment vs. standard).
This, in turn, allows us to centralise this logic and use the same
factory function both when creating the 'well_container_' and when
forming the well-test objects.
Finally, this helper will become useful for calculating PI/II values
of shut/stopped wells in the context of WELPI.
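A minimal, self-contained sketch of the factory idea (placeholder
types; only the helper's name follows the actual declaration above):

    #include <memory>

    struct WellInterface { virtual ~WellInterface() = default; };
    struct StandardWell : WellInterface {};
    struct MultisegmentWell : WellInterface {};

    using WellInterfacePtr = std::shared_ptr<WellInterface>;

    struct WellSpec { bool isMultiSegment; };

    // One central place decides which derived well type to instantiate;
    // both the well container and the well-test objects can reuse it.
    WellInterfacePtr createWellPointer(const WellSpec& spec)
    {
        if (spec.isMultiSegment) {
            return std::make_shared<MultisegmentWell>();
        }
        return std::make_shared<StandardWell>();
    }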
IMHO this might happen if perf_data_ is empty
or if the last connection is closed. (Discovered while
working on distributed wells, but it might already happen
before that!)
that simplifies the code a bit and will work with
distributed wells. Previously, we assumed that all
non-shut perforations are stored locally. That does
not hold any more.
The original code assumed that
well_container_.size() == numLocalWells()
This assumption does not hold when wells open/shut dynamically in
the context of WECON and/or WTEST.
Switch to indexing into the 'prod_index_calc_' vector using the
well's own linear index instead of manually advancing iterators.
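A self-contained sketch of the new indexing scheme, with placeholder
types standing in for the real well and calculator classes:

    #include <cstddef>
    #include <memory>
    #include <vector>

    struct Well {
        std::size_t linearIndex;   // stable index among all local wells
    };

    struct PICalculator { /* placeholder for the real calculator */ };

    void updatePI(const std::vector<std::shared_ptr<Well>>& wellContainer,
                  const std::vector<PICalculator>& prodIndexCalc)
    {
        // wellContainer may be shorter than prodIndexCalc when wells are
        // shut (WECON/WTEST), so index by each well's own linear index
        // instead of advancing an iterator in lockstep.
        for (const auto& well : wellContainer) {
            const auto& calc = prodIndexCalc[well->linearIndex];
            (void)calc; // ... use calc for this well ...
        }
    }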
Pointy Hat: [at]bska
Only after rank zero has filtered the schedule are the well definitions
in it guaranteed to have no perforations to inactive cells. Therefore we
broadcast the schedule a second time to publish this to all processes.
Previously, we did the filtering locally on these processes, but that
also removed perforations to cells that are active globally but not
locally. That seems very hard to work with when allowing distributed
wells.
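A generic, self-contained sketch of the pattern (placeholder data
structures, not the actual Schedule API): rank zero filters against the
global active cells and then the result is broadcast again:

    #include <mpi.h>
    #include <vector>

    // Placeholder for a well's connections: the global cell indices it
    // perforates.
    using Connections = std::vector<int>;

    void filterAndRebroadcast(Connections& connections,
                              const std::vector<char>& globalCellActive, // 1 if active
                              MPI_Comm comm)
    {
        int rank = 0;
        MPI_Comm_rank(comm, &rank);

        if (rank == 0) {
            // Drop perforations to cells that are inactive in the global grid.
            Connections filtered;
            for (const int cell : connections) {
                if (globalCellActive[cell]) {
                    filtered.push_back(cell);
                }
            }
            connections.swap(filtered);
        }

        // Second broadcast: publish the filtered connections to all processes.
        int size = static_cast<int>(connections.size());
        MPI_Bcast(&size, 1, MPI_INT, 0, comm);
        connections.resize(size);
        MPI_Bcast(connections.data(), size, MPI_INT, 0, comm);
    }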
We don't need to do the calculations in terms of EvalWell when we're
going to reduce this to the .value() before calling the PI/II
calculation routine. We can also get by with a simpler approach to
computing the II by assuming that we always inject pure phases and
that there is no cross flow in injectors.
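A hedged sketch of the simplified calculation, using plain doubles in
place of EvalWell; the names and the exact formula are illustrative
only, not the actual calculation routine:

    #include <cstddef>
    #include <vector>

    // PI/II contribution of one connection: connection transmissibility
    // times mobility, evaluated from plain values rather than EvalWell.
    double connectionIndex(const double connTrans, const double mobility)
    {
        return connTrans * mobility;
    }

    // Injectivity index of an injector assuming pure-phase injection and
    // no cross flow: sum the per-connection contributions for that phase.
    double injectivityIndex(const std::vector<double>& connTrans,
                            const std::vector<double>& injectedPhaseMobility)
    {
        double ii = 0.0;
        for (std::size_t i = 0; i < connTrans.size(); ++i) {
            ii += connectionIndex(connTrans[i], injectedPhaseMobility[i]);
        }
        return ii;
    }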
Suggested by: [at]atgeirr