This commit sets 'data::Well::dynamicStatus' based on the dynamically
updated 'Schedule' object (e.g., from ACTIONX and similar keywords)
and on the results of well/operability testing (WECON and/or WTEST).
If a well is closed due to economic limits (WECON), we still provide
summary-style data at the timestep that closed the well, but omit
this data at later steps until the well reopens.
We add a new parameter to WellState::report() to distinguish these
situations.
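A minimal sketch of the idea behind that parameter (hypothetical names
and signature; the real WellState::report() differs):

    #include <optional>

    struct WellRates { double oilRate; double gasRate; };

    // 'shutThisStep' is the extra piece of information: a well closed at
    // the current report step is still reported once; a well that was
    // already shut at an earlier step is omitted.
    std::optional<WellRates>
    reportWell(const WellRates& current, bool isOpen, bool shutThisStep)
    {
        if (isOpen || shutThisStep)
            return current;
        return std::nullopt;
    }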
This is in preparation for making the 'BlackoilWellModel' manage open
and shut wells alike.
Some of our computations are heavily serial and need a complete
representation of the data attached to all perforations, regardless
of whether a perforation lives on the local partition or not. This
commit adds a factory that makes it easy to create such a
representation and helps write the data back to the local
representation.
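A rough sketch of what such a factory could look like, assuming MPI
and hypothetical names (the actual OPM implementation differs):
per-perforation values are gathered into a dense, globally ordered
vector, and results are copied back to the locally owned perforations.

    #include <mpi.h>
    #include <cstddef>
    #include <vector>

    class GlobalPerfData   // hypothetical, not the OPM class
    {
    public:
        // localValues[i] belongs to the perforation with global index
        // localGlobalIdx[i] on this rank.
        GlobalPerfData(const std::vector<double>& localValues,
                       const std::vector<int>&    localGlobalIdx,
                       int                        numGlobalPerfs,
                       MPI_Comm                   comm)
            : comm_(comm), globalIdx_(localGlobalIdx)
        {
            // Gather every rank's (index, value) pairs and place them in a
            // dense global vector ordered by global perforation index.
            const int nLocal = static_cast<int>(localValues.size());
            int nRanks = 0;
            MPI_Comm_size(comm_, &nRanks);

            std::vector<int> counts(nRanks), displs(nRanks, 0);
            MPI_Allgather(&nLocal, 1, MPI_INT,
                          counts.data(), 1, MPI_INT, comm_);
            for (int r = 1; r < nRanks; ++r)
                displs[r] = displs[r-1] + counts[r-1];
            const int nTotal = displs.back() + counts.back();

            std::vector<double> allValues(nTotal);
            std::vector<int>    allIdx(nTotal);
            MPI_Allgatherv(localValues.data(), nLocal, MPI_DOUBLE,
                           allValues.data(), counts.data(), displs.data(),
                           MPI_DOUBLE, comm_);
            MPI_Allgatherv(localGlobalIdx.data(), nLocal, MPI_INT,
                           allIdx.data(), counts.data(), displs.data(),
                           MPI_INT, comm_);

            global_.assign(numGlobalPerfs, 0.0);
            for (int i = 0; i < nTotal; ++i)
                global_[allIdx[i]] = allValues[i];
        }

        // Complete representation, available on every participating rank.
        std::vector<double>& values() { return global_; }

        // Write (possibly modified) global data back to the local pieces.
        void copyToLocal(std::vector<double>& localValues) const
        {
            for (std::size_t i = 0; i < localValues.size(); ++i)
                localValues[i] = global_[globalIdx_[i]];
        }

    private:
        MPI_Comm comm_;
        std::vector<int> globalIdx_;
        std::vector<double> global_;
    };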
This commit adds a new member function
WellProdIndexCalculator::reInit(const Well& well)
which reinitializes the internal arrays in the same way as the
constructor. This is needed to ensure that the PI calculation device
stays synchronised when the connection transmissibility factors
(CTFs) are rescaled, e.g., as a result of WELPI.
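A minimal sketch of the delegation pattern, with hypothetical member
data standing in for the internal arrays:

    #include <vector>

    class ProdIndexCalculatorSketch   // hypothetical, not the OPM class
    {
    public:
        explicit ProdIndexCalculatorSketch(const std::vector<double>& connFactors)
        {
            // Constructor and reInit() share the same setup so that a
            // later CTF rescaling (e.g. WELPI) can simply call reInit().
            this->reInit(connFactors);
        }

        // Re-initialise the internal arrays exactly as the constructor does.
        void reInit(const std::vector<double>& connFactors)
        {
            standardConnFactors_ = connFactors;
        }

    private:
        std::vector<double> standardConnFactors_;
    };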
This adds a utility that creates a vector holding, for each local
perforation, the value associated with the perforation above it. For
distributed wells this is needed because the perforation above might
live on another process. We use the parallel index sets together with
the global indices of the perforated cells.
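As a minimal illustration (hypothetical names; the real code works
through the parallel index sets), once the values keyed by global cell
index are known on this rank, filling the "above" vector is a lookup:

    #include <vector>
    #include <unordered_map>

    std::vector<double>
    aboveValues(const std::vector<int>& aboveGlobalCell,   // -1: top perforation
                const std::unordered_map<int, double>& valueByGlobalCell,
                double valueAtTop)
    {
        std::vector<double> result;
        result.reserve(aboveGlobalCell.size());
        for (const int cell : aboveGlobalCell) {
            result.push_back(cell < 0 ? valueAtTop
                                      : valueByGlobalCell.at(cell));
        }
        return result;
    }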
Applying the B matrix is basically a component-wise multiplication
with a vector followed by a parallel reduction. We perform that
reduction to all ranks computing for the well to save the broadcast
when applying C^T.
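A sketch of that pattern, assuming MPI and a per-well communicator
(not the actual OPM operator):

    #include <mpi.h>
    #include <cstddef>
    #include <vector>

    double applyB(const std::vector<double>& bCoeffs,  // local rows of B
                  const std::vector<double>& x,        // local part of the vector
                  MPI_Comm wellComm)                   // ranks sharing this well
    {
        double local = 0.0;
        for (std::size_t i = 0; i < bCoeffs.size(); ++i)
            local += bCoeffs[i] * x[i];                // component-wise multiply

        double global = 0.0;
        // Allreduce instead of Reduce: every participating rank gets the
        // result, saving the broadcast that applying C^T would otherwise need.
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, wellComm);
        return global;
    }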
BlackoilWellModel now stores an instance of this class for each
well. Inside that class there is a custom communicator that contains
only the ranks with local cells perforated by the well. It will be
used when applying the distributed well operator.
This is another small step in the direction of distributed wells,
but it should be safe to merge (note: creating the custom
communicators is a collective MPI operation, but it is done only
once).
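For illustration, such a communicator could be created with
MPI_Comm_split (a collective call over the full communicator, hence
the note above); the actual implementation may differ:

    #include <mpi.h>

    MPI_Comm makeWellComm(MPI_Comm world, bool hasLocalPerforations)
    {
        int rank = 0;
        MPI_Comm_rank(world, &rank);

        // Ranks whose partition is not perforated by the well pass
        // MPI_UNDEFINED and receive MPI_COMM_NULL; the others form the
        // well's private communicator.
        const int color = hasLocalPerforations ? 1 : MPI_UNDEFINED;

        MPI_Comm wellComm = MPI_COMM_NULL;
        MPI_Comm_split(world, color, rank, &wellComm);
        return wellComm;
    }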