Well rates of distributed wells might be stored on multiple processes
but should be summed only once. Hence, with this commit, only the
owner process performs the summation.
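As a rough illustration (the ownership flag and names are made up for
this sketch, not the actual interface), the idea is:

    #include <mpi.h>

    #include <cstddef>
    #include <vector>

    // Sketch: the rate of a distributed well is replicated on every rank
    // that perforates it.  When summing rates over ranks, only the owner
    // rank contributes its copy so each value is counted exactly once.
    double sumRatesOverRanks(const std::vector<double>& wellRates, // rate per local well
                             const std::vector<bool>&   isOwner,   // ownership flag per local well
                             MPI_Comm                   comm)
    {
        double local = 0.0;
        for (std::size_t w = 0; w < wellRates.size(); ++w) {
            if (isOwner[w]) {          // non-owners skip their duplicate copy
                local += wellRates[w];
            }
        }

        double global = 0.0;
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, comm);
        return global;
    }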
This commit implements the WELPI feature. We calculate new PI/II
values for all wells in the event of a WELPI request and use those
values, for the wells named in the WELPI request, to calculate CTF
scaling factors. We then apply those factors to all subsequent
editions of the well, provided the connection factors are eligible
for WELPI-based rescaling.
If we trigger a rescaling event we also reset the WellState's
internal copies of the CTFs and reinitialize the Well PI calculators
to ensure the rescaling takes effect immediately. Since we rely on
PI values being available at the end of each time step we must also
take care to forward those values from the WellState of one report
step to the WellState of the next report step.
Finally, we take care not to redo a WELPI scaling if we've already
performed the scaling operation and then restart a report step.
This, in turn, happens if WELPI is requested on the first report
step.
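Roughly, the scaling step amounts to the following sketch (all names
are illustrative; the scaling factor is taken as the ratio of the
requested PI to the PI computed from the current CTFs):

    #include <cstddef>
    #include <stdexcept>
    #include <vector>

    // Sketch: derive a single CTF scaling factor for a well from a WELPI
    // request and apply it to every eligible connection of that well.
    // 'targetPI' is the productivity index requested via WELPI and
    // 'currentPI' the value computed from the current CTFs.
    double welpiScalingFactor(const double currentPI, const double targetPI)
    {
        if (currentPI <= 0.0) {
            throw std::invalid_argument("Cannot rescale well with non-positive PI");
        }
        return targetPI / currentPI;
    }

    void applyScaling(std::vector<double>&     connectionCTFs,
                      const std::vector<bool>& eligibleForWelpi,
                      const double             scalingFactor)
    {
        for (std::size_t c = 0; c < connectionCTFs.size(); ++c) {
            if (eligibleForWelpi[c]) {   // only WELPI-eligible connection factors
                connectionCTFs[c] *= scalingFactor;
            }
        }
    }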
This commit adds a new member function
WellState::resetConnectionTransFactors
which overwrites the transmissibility factor of 'well_perf_data_'
pertaining to a particular well. This is to keep the values in
sync following a rescaling operation such as WELPI.
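A minimal sketch of the shape of that function, assuming a per-well
container of perforation data with a CTF field (illustrative names,
not the actual WellState layout):

    #include <cstddef>
    #include <stdexcept>
    #include <vector>

    // Sketch of per-connection data and the reset operation.
    struct PerforationData
    {
        int    cell_index{};
        double connection_transmissibility_factor{};
    };

    class WellStateSketch
    {
    public:
        // Overwrite the stored CTFs of one well so they stay in sync with
        // a rescaling operation such as WELPI.
        void resetConnectionTransFactors(const std::size_t well_index,
                                         const std::vector<PerforationData>& new_perf_data)
        {
            auto& perf_data = well_perf_data_.at(well_index);
            if (perf_data.size() != new_perf_data.size()) {
                throw std::invalid_argument("Size mismatch in connection data");
            }
            for (std::size_t c = 0; c < perf_data.size(); ++c) {
                perf_data[c].connection_transmissibility_factor =
                    new_perf_data[c].connection_transmissibility_factor;
            }
        }

    private:
        std::vector<std::vector<PerforationData>> well_perf_data_;
    };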
This commit adds a new member function
WellProdIndexCalculator::reInit(const Well& well)
which reinitializes the internal arrays in the same way as the
constructor. This is needed to ensure that the PI calculation
device is synchronised in the case of CTF rescaling, e.g. as a
result of WELPI.
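The pattern is roughly the following sketch, where the constructor and
reInit() share a common initialization helper (simplified; the real
calculator holds more state than a single CTF array):

    #include <vector>

    // Stand-in for the deck-level well object; the real calculator takes
    // the Well object and extracts each connection's CTF.
    struct WellSketch
    {
        std::vector<double> connectionCTFs;
    };

    class ProdIndexCalculatorSketch
    {
    public:
        explicit ProdIndexCalculatorSketch(const WellSketch& well)
        {
            this->init(well);
        }

        // Rebuild the internal arrays exactly as the constructor does, so
        // the calculator picks up rescaled CTFs (e.g. after WELPI).
        void reInit(const WellSketch& well)
        {
            this->init(well);
        }

    private:
        void init(const WellSketch& well)
        {
            this->connFactors_ = well.connectionCTFs;
        }

        std::vector<double> connFactors_;
    };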
This adds a utility that creates a vector of the 'above' values for
all local perforations. For distributed wells this is needed because
the perforation above might live on another process. We use the
parallel index sets together with the global indices of the
perforated cells.
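A simplified sketch of the idea, using a plain all-gather of
(global cell index, value) pairs instead of the parallel index sets
of the actual implementation; all names are illustrative:

    #include <mpi.h>

    #include <cstddef>
    #include <unordered_map>
    #include <vector>

    // Sketch: collect, for every locally stored perforation, the value
    // attached to the perforation above it, even if that perforation
    // lives on another process.
    std::vector<double>
    aboveValues(const std::vector<int>&    localGlobalCell, // global cell per local perforation
                const std::vector<int>&    aboveGlobalCell, // global cell of the perforation above
                const std::vector<double>& localValue,      // value per local perforation
                MPI_Comm                   comm)
    {
        int nProcs = 0;
        MPI_Comm_size(comm, &nProcs);

        // 1) Exchange how many perforations each rank stores.
        const int nLocal = static_cast<int>(localValue.size());
        std::vector<int> counts(nProcs, 0);
        MPI_Allgather(&nLocal, 1, MPI_INT, counts.data(), 1, MPI_INT, comm);

        std::vector<int> displ(nProcs + 1, 0);
        for (int p = 0; p < nProcs; ++p) {
            displ[p + 1] = displ[p] + counts[p];
        }

        // 2) Make all (global cell, value) pairs known everywhere.
        std::vector<int>    allCells(displ[nProcs]);
        std::vector<double> allValues(displ[nProcs]);
        MPI_Allgatherv(localGlobalCell.data(), nLocal, MPI_INT,
                       allCells.data(), counts.data(), displ.data(), MPI_INT, comm);
        MPI_Allgatherv(localValue.data(), nLocal, MPI_DOUBLE,
                       allValues.data(), counts.data(), displ.data(), MPI_DOUBLE, comm);

        std::unordered_map<int, double> valueOfCell;
        for (std::size_t i = 0; i < allCells.size(); ++i) {
            valueOfCell.emplace(allCells[i], allValues[i]);
        }

        // 3) Look up the 'above' value for each local perforation; the top
        //    perforation (no cell above) falls back to its own value here.
        std::vector<double> above(localValue.size());
        for (std::size_t i = 0; i < above.size(); ++i) {
            const auto it = valueOfCell.find(aboveGlobalCell[i]);
            above[i] = (it != valueOfCell.end()) ? it->second : localValue[i];
        }
        return above;
    }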
Applying the B matrix is basically a component-wise multiplication
with a vector, followed by a parallel reduction. We perform that
reduction onto all ranks computing for the well, which saves the
broadcast when applying C^T.
BlackoilWellModel now stores an instance of this class for each
well. Inside that class there is a custom communicator that only
contains ranks that will have local cells perforated by the well.
This will be used in the application of the distributed well operator.
This is another small step in the direction of distributed wells,
but it should be safe to merge this (note that creating the custom
communicators is a collective operation in MPI, but it is done only
once).
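A rough sketch of both pieces, with illustrative names: the
once-per-well communicator creation and the component-wise multiply
followed by an Allreduce onto the participating ranks:

    #include <mpi.h>

    #include <cstddef>
    #include <vector>

    // Build, once per well, a communicator containing only the ranks that
    // perforate the well.  This is a collective call over 'world'.
    MPI_Comm makeWellComm(const bool hasLocalPerforations, MPI_Comm world)
    {
        int rank = 0;
        MPI_Comm_rank(world, &rank);

        MPI_Comm wellComm = MPI_COMM_NULL;
        const int color = hasLocalPerforations ? 0 : MPI_UNDEFINED;
        MPI_Comm_split(world, color, rank, &wellComm);
        return wellComm;   // MPI_COMM_NULL on non-participating ranks
    }

    // Apply the well's B row: component-wise multiply, then sum over the
    // participating ranks.  The Allreduce leaves the result on every rank
    // of wellComm, so no extra broadcast is needed when C^T is applied.
    // Call only on ranks where wellComm != MPI_COMM_NULL.
    double applyB(const std::vector<double>& bRow, // local entries of the B row
                  const std::vector<double>& x,    // matching local reservoir unknowns
                  MPI_Comm                   wellComm)
    {
        double local = 0.0;
        for (std::size_t i = 0; i < bRow.size(); ++i) {
            local += bRow[i] * x[i];
        }

        double global = 0.0;
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, wellComm);
        return global;
    }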
This commit adds a new helper function,
WellInterfacePtr createWellPointer(wellID, reportStep) const
which is responsible for creating appropriately typed derived well
pointers depending on the well type (multi-segment vs. standard).
This, in turn, allows us to centralise this logic and use the same
factory function both when creating the 'well_container_' and when
forming the well-test objects.
Finally, this helper will become useful for calculating PI/II values
of shut/stopped wells in the context of WELPI.
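A minimal sketch of the factory idea with stand-in types (the real
function takes the well ID and report step and consults the schedule
to decide which derived type to build):

    #include <memory>

    struct WellInterfaceSketch
    {
        virtual ~WellInterfaceSketch() = default;
    };

    struct StandardWellSketch     : WellInterfaceSketch {};
    struct MultisegmentWellSketch : WellInterfaceSketch {};

    using WellInterfacePtr = std::shared_ptr<WellInterfaceSketch>;

    // Single place that decides which derived type to instantiate; both
    // the 'well_container_' setup and the well-testing code can call it.
    WellInterfacePtr createWellPointer(const bool isMultisegment)
    {
        if (isMultisegment) {
            return std::make_shared<MultisegmentWellSketch>();
        }
        return std::make_shared<StandardWellSketch>();
    }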
IMHO this might have happened if perf_data_ is empty
or if the last connection is closed. (Discovered while
working on distributed wells, but it might already happen
before that!)
This simplifies the code a bit and will work with
distributed wells. Previously, we assumed that all
non-shut perforations are stored locally. That does
not hold any more.
The original code assumed that
well_container_.size() == numLocalWells()
This assumption does not hold when wells open/shut dynamically in
the context of WECON and/or WTEST.
Switch to indexing into the 'prod_index_calc_' vector using the
well's own linear index instead of manually advancing iterators.
Pointy Hat: [at]bska
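As a small sketch of the indexing change (illustrative names only):
look up each calculator by the well's own linear index instead of
walking the calculator vector in lock-step with the well container:

    #include <vector>

    struct CalculatorSketch { double pi{}; };

    // 'calculators' has one entry per well known to the schedule, while
    // 'activeWellIndices' lists the (possibly fewer) wells currently in
    // the well container.  Indexing by each well's own linear index stays
    // correct even when wells are shut or opened dynamically.
    void updateActiveWells(std::vector<CalculatorSketch>& calculators,
                           const std::vector<int>&        activeWellIndices)
    {
        for (const int wellIndex : activeWellIndices) {
            calculators[wellIndex].pi = 0.0;   // operate on the matching calculator
        }
    }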
Only after rank zero has filtered the schedule are the well
definitions in it guaranteed to have no perforations to inactive
cells. Therefore we broadcast the schedule another time to publish
this to all processes.
Previously, we did the filtering locally on these processes, but
that also removed perforations to cells that are active globally
but not locally. That seems very hard to work with when allowing
distributed wells.
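A rough sketch of the resulting flow, with a serialized string
standing in for the Schedule object and purely hypothetical helper
names:

    #include <mpi.h>

    #include <cstddef>
    #include <string>

    // Stub: would drop perforations to globally inactive cells.
    std::string filterWellsAgainstInactiveCells(const std::string& schedule)
    {
        return schedule;
    }

    // Publish rank zero's (serialized) schedule to all ranks.
    void broadcastSchedule(std::string& schedule, MPI_Comm comm)
    {
        int rank = 0;
        MPI_Comm_rank(comm, &rank);

        int size = (rank == 0) ? static_cast<int>(schedule.size()) : 0;
        MPI_Bcast(&size, 1, MPI_INT, 0, comm);

        schedule.resize(static_cast<std::size_t>(size));
        if (size > 0) {
            MPI_Bcast(&schedule[0], size, MPI_CHAR, 0, comm);
        }
    }

    void setupFilteredSchedule(std::string& schedule, MPI_Comm comm)
    {
        int rank = 0;
        MPI_Comm_rank(comm, &rank);

        if (rank == 0) {
            // Only rank zero's global view can tell which cells are truly
            // inactive; filtering on the other ranks would also drop
            // perforations to cells active globally but not locally.
            schedule = filterWellsAgainstInactiveCells(schedule);
        }

        // Broadcast a second time so every rank sees the filtered wells.
        broadcastSchedule(schedule, comm);
    }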
We don't need to do the calculations in terms of EvalWell when we're
going to reduce the result to its .value() before calling the PI/II
calculation routine. We can also get by with a simpler approach to
computing the II by assuming that we always inject pure phases and
that there is no cross flow in injectors.
Suggested by: [at]atgeirr
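A minimal sketch of the simplified injector II computation under
those assumptions (illustrative names; the mobilities are assumed to
already be plain double values, i.e. after taking .value()):

    #include <vector>

    // Sketch: for an injector with pure-phase injection and no cross
    // flow, the connection II is just the CTF times the total mobility,
    // computed with plain doubles rather than the AD type.
    double injectorConnectionII(const double               connTransFactor, // CTF of the connection
                                const std::vector<double>& phaseMobilities)
    {
        double totalMobility = 0.0;
        for (const double mob : phaseMobilities) {
            totalMobility += mob;   // pure-phase injection: sum phase mobilities
        }
        return connTransFactor * totalMobility;
    }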