This computation is serial and needs a complete representation
of the data attached to all perforations (even those stored on
another process). This commit uses the newly created factory to
correctly compute the connection densities for distributed wells.
Some of our computations are heavily serial and need a complete
representation of the data attached to all perforations, no matter
whether a perforation lives on the local partition or not. This commit
adds a factory that makes it easy to create such a representation and
helps with writing the data back to the local representation.
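A minimal, self-contained sketch of what such a factory could provide, assuming plain MPI and purely hypothetical names (the actual code uses the project's communication classes, and restores the well's own perforation order rather than ordering by rank):

```cpp
// Hypothetical sketch: gather the per-perforation data of a distributed well
// on every participating rank and copy results back to the local perforations.
// For simplicity the global ordering here is "by rank".
#include <mpi.h>
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

class GlobalPerforationDataSketch {
public:
    GlobalPerforationDataSketch(MPI_Comm comm, int numLocalPerfs)
        : comm_(comm)
    {
        int numRanks = 1;
        MPI_Comm_size(comm_, &numRanks);
        sizes_.resize(numRanks);
        MPI_Allgather(&numLocalPerfs, 1, MPI_INT,
                      sizes_.data(), 1, MPI_INT, comm_);
        offsets_.assign(numRanks + 1, 0);
        std::partial_sum(sizes_.begin(), sizes_.end(), offsets_.begin() + 1);
        int rank = 0;
        MPI_Comm_rank(comm_, &rank);
        myOffset_ = offsets_[rank];
    }

    // Complete representation: the entries of all ranks, on every rank.
    std::vector<double> createGlobal(const std::vector<double>& local) const
    {
        std::vector<double> global(offsets_.back());
        MPI_Allgatherv(local.data(), static_cast<int>(local.size()), MPI_DOUBLE,
                       global.data(), sizes_.data(), offsets_.data(), MPI_DOUBLE,
                       comm_);
        return global;
    }

    // Write this rank's slice of the global representation back to the
    // local representation.
    void copyGlobalToLocal(const std::vector<double>& global,
                           std::vector<double>& local) const
    {
        std::copy(global.begin() + myOffset_,
                  global.begin() + myOffset_ + static_cast<std::ptrdiff_t>(local.size()),
                  local.begin());
    }

private:
    MPI_Comm comm_;
    std::vector<int> sizes_;
    std::vector<int> offsets_;
    int myOffset_ = 0;
};
```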
Initialize the well rates with well potentials when computing rates from bhp or bhp(thp)
The bhp was already initialized.
Scale the segment rates and pressures to adapt to changes in the well rate and bhp.
This improves convergence of the well potential calculations.
As this is as sequential (ordering matters!) as it can get, we need to
communicate all perforations, do the partial sum with them, and save
the result back to the local perforations.
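With a factory like the hypothetical one sketched earlier in this section, the ordering-sensitive part reduces to a few lines (`factory` and `localValues` are placeholders from that sketch, not the actual names):

```cpp
// Gather everything, run the running sum over the complete well, write back.
auto global = factory.createGlobal(localValues);           // all perforations
std::partial_sum(global.begin(), global.end(), global.begin());
factory.copyGlobalToLocal(global, localValues);             // local slice only
```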
Well rates of distributed wells might be stored on multiple processes
but should be summed only once. Hence, with this commit, only the
owner performs the summation.
This commit implements the WELPI feature. We calculate new PI/II
values for all wells in the event of a WELPI request and use those
values, for the wells named in the WELPI request, to calculate CTF
scaling factors. We then apply those factors to all subsequent
editions of the well, provided the connection factors are eligible
for WELPI-based rescaling.
If we trigger a rescaling event we also reset the WellState's
internal copies of the CTFs and reinitialize the Well PI calculators
to ensure the rescaling takes effect immediately. Since we rely on
PI values being available at the end of each time step we must also
take care to forward those values from the WellState of one report
step to the WellState of the next report step.
Finally, take care not to redo a WELPI scaling if we've already
performed the scaling operation and restart a report step. This,
in turn, happens if WELPI is requested on the first report step.
This commit adds a new member function
WellState::resetConnectionTransFactors
which overwrites the transmissibility factors stored in
'well_perf_data_' for a particular well. This keeps the values in
sync following a rescaling operation such as WELPI.
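A small, self-contained sketch of the idea; the types and the free-function form are simplifications for illustration, not the actual member function:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Simplified stand-in for the per-connection data held by the well state.
struct PerforationDataSketch {
    int cell_index = -1;
    double connection_transmissibility_factor = 0.0;
};

// Overwrite the stored CTFs of one well with freshly rescaled values so the
// WellState's copy stays in sync after a WELPI rescaling.
void resetConnectionTransFactors(std::vector<PerforationDataSketch>& well_perf_data,
                                 const std::vector<PerforationDataSketch>& rescaled)
{
    assert(well_perf_data.size() == rescaled.size());
    for (std::size_t conn = 0; conn < well_perf_data.size(); ++conn) {
        well_perf_data[conn].connection_transmissibility_factor =
            rescaled[conn].connection_transmissibility_factor;
    }
}
```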
This commit adds a new member function
WellProdIndexCalculator::reInit(const Well& well)
which reinitializes the internal arrays in the same way as the
constructor. This is needed to ensure that the PI calculation
device is synchronised in the case of CTF rescaling, e.g. as a
result of WELPI.
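A minimal sketch of the reinitialisation pattern described above, with hypothetical class and member names: the constructor and reInit() share one helper so the internal arrays are rebuilt identically.

```cpp
#include <vector>

// Constructor and reInit() share one helper, so the internal arrays are
// rebuilt identically after a CTF rescaling (e.g. WELPI).
class ProdIndexCalculatorSketch {
public:
    explicit ProdIndexCalculatorSketch(const std::vector<double>& ctfs)
    { this->init(ctfs); }

    void reInit(const std::vector<double>& ctfs)
    { this->init(ctfs); }

private:
    void init(const std::vector<double>& ctfs)
    {
        // Rebuild from scratch, exactly as the constructor does.
        connFactors_.assign(ctfs.begin(), ctfs.end());
    }

    std::vector<double> connFactors_;
};
```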
This adds a utility that creates a vector of the 'above' values
(i.e. the values attached to the perforation above) for all local
perforations. For distributed wells this is needed as the perforation
above might live on another process. We use the parallel index sets
together with the global indices of the perforated cells.
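A conceptual sketch of the lookup, not the actual parallel-index-set machinery; all names and the gathered map are assumptions made for illustration:

```cpp
#include <unordered_map>
#include <vector>

// For each local perforation, look up the value attached to the perforation
// above it via the global index of the cell above; the top perforation gets
// a caller-provided value instead.
std::vector<double>
aboveValues(const std::vector<int>& aboveGlobalCellIndex,             // one entry per local perforation
            const std::unordered_map<int, double>& valueByGlobalCell, // gathered from all ranks
            double topValue)
{
    std::vector<double> result;
    result.reserve(aboveGlobalCellIndex.size());
    for (const int globalCell : aboveGlobalCellIndex) {
        const auto it = valueByGlobalCell.find(globalCell);
        result.push_back(it == valueByGlobalCell.end() ? topValue : it->second);
    }
    return result;
}
```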
The B matrix is basically a component-wise multiplication
with a vector followed by a parallel reduction. We do that
reduction to all ranks computing for the well to save the
broadcast when applying C^T.
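A rough sketch of this view of applying B, with the well's contribution reduced to a single scalar for brevity (in reality B has one row per well equation); the function and parameter names are assumptions:

```cpp
#include <mpi.h>
#include <cstddef>
#include <vector>

double applyB(const std::vector<double>& b_entries,  // local entries of B
              const std::vector<double>& x_local,    // local reservoir unknowns
              MPI_Comm wellComm)
{
    double local = 0.0;
    for (std::size_t i = 0; i < b_entries.size(); ++i)
        local += b_entries[i] * x_local[i];           // component-wise product

    double global = 0.0;
    // Allreduce instead of Reduce: every rank computing for the well gets the
    // result, so no broadcast is needed before applying C^T.
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, wellComm);
    return global;
}
```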
BlackoilWellModel now stores an instance of this class for each
well. Inside that class there is a custom communicator that only
contains ranks that will have local cells perforated by the well.
This will be used in the application of the distributed well operator.
This is another small step in the direction of distributed wells,
but it should be safe to merge this (note that creating the custom
communicators is a collective MPI operation, but it is done only once).
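A minimal sketch of how such a per-well communicator could be set up with plain MPI (the actual code uses Dune's communication classes, so this is only an illustration of the idea):

```cpp
#include <mpi.h>

// Ranks with local cells perforated by the well join the communicator; all
// other ranks get MPI_COMM_NULL. MPI_Comm_split is collective, so this is
// done once, not per time step.
MPI_Comm createWellComm(MPI_Comm world, bool hasLocalPerforations)
{
    int rank = 0;
    MPI_Comm_rank(world, &rank);

    MPI_Comm wellComm = MPI_COMM_NULL;
    MPI_Comm_split(world,
                   hasLocalPerforations ? 0 : MPI_UNDEFINED, // colour
                   rank,                                     // keep rank order
                   &wellComm);
    return wellComm;
}
```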
This commit adds a new helper function,
WellInterfacePtr createWellPointer(wellID, reportStep) const
which is responsible for creating appropriately typed derived well
pointers depending on well types (multi-segment vs. standard).
This, in turn, allows us to centralise this logic and use the same
factory function both when creating the 'well_container_' and when
forming the well-test objects.
Finally, this helper will become useful for calculating PI/II values
of shut/stopped wells in the context of WELPI.
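A stripped-down sketch of the factory idea; the real classes are templates on a TypeTag and the real helper takes (wellID, reportStep), so all names and the boolean parameter here are simplifications:

```cpp
#include <memory>

// Placeholder hierarchy standing in for the actual well classes.
struct WellInterfaceSketch { virtual ~WellInterfaceSketch() = default; };
struct StandardWellSketch     : WellInterfaceSketch {};
struct MultisegmentWellSketch : WellInterfaceSketch {};

using WellInterfacePtr = std::shared_ptr<WellInterfaceSketch>;

// One place that decides which derived type to create; both the well
// container and the well-testing code would call this.
WellInterfacePtr createWellPointer(bool isMultisegmentWell)
{
    if (isMultisegmentWell)
        return std::make_shared<MultisegmentWellSketch>();

    return std::make_shared<StandardWellSketch>();
}
```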
IMHO this might have happened if perf_data_ is empty
or if the last connection is closed. (Discovered while
working on distributed wells, but it might already have
happened before!)
This simplifies the code a bit and will work with
distributed wells. Previously, we assumed that all
non-shut perforations are stored locally. That no
longer holds.
The original code assumed that
well_container_.size() == numLocalWells()
This assumption does not hold when wells open/shut dynamically in
the context of WECON and/or WTEST.
Switch to indexing into the 'prod_index_calc_' vector using the
well's own linear index instead of manually advancing iterators.
Pointy Hat: [at]bska
Only after rank zero has filtered the schedule are the well
definitions in there guaranteed to have no perforations to inactive
cells. Therefore we broadcast the schedule another time to publish
this to all processes.
Previously, we did the filtering locally on these processes, but that
also removed perforations to cells that are active globally but not
locally. That seems very hard to work with when allowing distributed
wells.
We don't need to do the calculations in terms of EvalWell when we're
going to reduce this to the .value() before calling the PI/II
calculation routine. We can also get by with a simpler approach to
computing the II by assuming we always inject pure phases and no
cross flow in injectors.
Suggested by: [at]atgeirr
This commit makes the PI/II calculation more closely mirror the
approach taken when computing connection flow rates. In particular,
we switch to using total mobility, mixing and volume ratios for
injecting connections while producing connections continue to use
the phase mobilities and formation volume factors derived from
conditions in the connecting cells. We also include dissolved
gas/oil ratios and vaporised oil/gas ratios in order to fully
capture the surface flow conditions.
We split the handling of producing/injecting connections out to
separate helper functions in order to make the overall logic in
updateProductivityIndex() more manageable.
This commit ensures that we calculate the well and connection level
per-phase steady-state productivity index (PI) at the end of a
completed time step (triggered from endTimeStep()).
We add a new data member,
BlackoilWellModel<>::prod_index_calc_
which holds one WellProdIndexCalculator for each of the process'
local wells and a new interface member function
WellInterface::updateProductivityIndex
which uses a per-well PI calculator to actually compute the PI
values and store those in the WellState. Implement this member
function for both StandardWell and MultisegmentWell. Were it not
for 'getMobility' existing only in the derived classes, the two
identical implementations could be merged and moved to the interface.
We also add a new data member to the WellStateFullyImplicitBlackoil
to hold the connection-level PI values. Finally, remove the
conditional PI calculation from StandardWell's well equation
assembly routine.
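A heavily simplified, self-contained sketch of the end-of-time-step flow described above; every type and signature here is a placeholder rather than the real interface, and the per-phase and per-connection details are omitted:

```cpp
#include <cstddef>
#include <vector>

// Placeholder calculator: the real one works on connection transmissibility
// factors and per-connection mobilities, per phase.
struct PICalculatorSketch {
    double ctfTotal = 0.0;
    double wellPI(double mobility) const { return ctfTotal * mobility; }
};

// Placeholder well state holding one PI value per well.
struct WellStateSketch {
    std::vector<double> productivity_index;
};

// End-of-time-step update: each local well uses its own calculator, indexed
// by the well's linear index.
void updateProductivityIndices(const std::vector<PICalculatorSketch>& prod_index_calc,
                               const std::vector<double>& mobility,
                               WellStateSketch& well_state)
{
    well_state.productivity_index.resize(prod_index_calc.size());
    for (std::size_t well = 0; well < prod_index_calc.size(); ++well)
        well_state.productivity_index[well] =
            prod_index_calc[well].wellPI(mobility[well]);
}
```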
As it was, the getALQ() call would insert injectors into the ALQ maps,
leading to trouble.
Also, this gets rid of the slightly weird thing that the output data
structure's producer/injector status was only set after creation,
in BlackoilWellModel::wellData().
A const well state was passed to functions that were modifying it by
calling setALQ(). Now the setALQ() method is made non-const, mutable
references to the well state are passed where sensible. The getALQ()
method uses map::at() instead of map::operator[] and no longer modifies
current_alq_. With this, it is now easy to see which methods modify the
well state and which don't. The alq-related members in the
WellStateFullyImplicitBlackoil class are no longer 'mutable'-qualified.
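A small sketch of the const-correctness pattern described above, with hypothetical member names; the commit uses map::at(), while find() with a default is shown here only to keep the sketch exception-free:

```cpp
#include <map>
#include <string>

// The getter never inserts into the map, so looking up an injector (or any
// well without an ALQ entry) cannot modify the state.
class AlqStateSketch {
public:
    void setALQ(const std::string& wellName, double value)   // non-const: modifies state
    { current_alq_[wellName] = value; }

    double getALQ(const std::string& wellName) const         // const: read-only
    {
        const auto it = current_alq_.find(wellName);
        return it == current_alq_.end() ? 0.0 : it->second;
    }

private:
    std::map<std::string, double> current_alq_;              // no 'mutable' needed
};
```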
In serial we use the first cell of the first well to determine the
pvt region index for a group. Previously, we used the first cell of
the first local well in a parallel run. Unfortunately, that may lead
to different pvt region indices being used for the same group on
different processes.
We fix this by using the same approach in parallel as we already use
in serial. For this we use Well::seqIndex() to determine the needed
ordering.
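One way this could be realised in parallel, shown here as an assumed sketch rather than the actual implementation: every rank proposes the pvt region of the first cell of its lowest-seqIndex local well (passing a very large seqIndex if it has no well of the group), and an MPI_MINLOC reduction picks the proposal of the globally first well so all ranks agree.

```cpp
#include <mpi.h>

int groupPvtRegionIndex(int lowestLocalSeqIndex, int localPvtRegionIndex,
                        MPI_Comm comm)
{
    struct IndexedRegion { int seqIndex; int pvtRegion; };
    IndexedRegion local{lowestLocalSeqIndex, localPvtRegionIndex};
    IndexedRegion global{0, 0};

    // MPI_2INT + MPI_MINLOC: minimise on the first int, carry the second along.
    MPI_Allreduce(&local, &global, 1, MPI_2INT, MPI_MINLOC, comm);
    return global.pvtRegion;
}
```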
and use it in the WellInterface instead of creating a vector
with these indices there. The original approach recreates
information in another part of the well and assumes that all
connections are in a process's local partition. That assumption
no longer holds for distributed wells.
Currently the simulator creates the polyhedral grid from an eclGrid from opm-common.
TODO
- make it possible to create the grid directly from DGF or MRST format
- fix issue on Norne.
1) Corrected phaseIsActive to take a PhaseIdx argument instead of a CompIdx argument in the "subtraction of dissolved gas from oil phase and vaporized oil from gas phase".
2) Fix for the well accumulation calculation in case oil is absent.
Restores the original cwd after each unit test in test_basic.py. Also
simplifies add_test() in python/simulators/CMakeLists.txt such that the
Bash script wrapper run-python-tests.sh is no longer needed to run the
tests.
In OPM the matrix graph might be unsymmetric as we do not store
the full sparsity pattern for copy rows but only the diagonal.
Unfortunately, DUNE assumes that matrices from finite elements and
finite volumes have a symmetric sparsity pattern for copy rows, too,
and uses this assumption to create the graphs for PTScotch/ParMETIS
more easily. But PTScotch/ParMETIS assume a symmetric graph.
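A small sketch of the kind of fix-up this implies before handing the graph to PTScotch/ParMETIS, using a simplified adjacency representation rather than the actual DUNE code:

```cpp
#include <cstddef>
#include <set>
#include <vector>

// Adjacency stored as one set of column indices per row (square pattern).
// Insert the reverse of every edge so the resulting graph is symmetric.
std::vector<std::set<int>> symmetrize(const std::vector<std::set<int>>& graph)
{
    auto result = graph;
    for (std::size_t row = 0; row < graph.size(); ++row)
        for (const int col : graph[row])
            result[col].insert(static_cast<int>(row));
    return result;
}
```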
The Polymer, Brine, and Solvent quantities would be extracted from
elements 0..#perf-1 of their pertinent container rather than from
the elements associated with the particular well.
It is only used within this context and produces a warning of the
form
ISTLSolverEbos.hpp:128:25: warning: unused variable ‘gridForConn’
unless the build configures accelerator support.
With this, a slightly more sophisticated procedure is used for well rate initialization.
Since it changes existing results, it defaults to false, giving the existing behaviour.
nvcc exits compilation if the header dune/istl/basearray.hh (from DUNE
2.6) is included, as it does not seem to understand the friend declaration
there (friend class for a struct).
```
/usr/include/dune/istl/basearray.hh:101:49: error: ‘typename Dune::Imp::base_array_unmanaged<B, A>::RealIterator’ names ‘template<class B, class A> template<class T> struct Dune::Imp::base_array_unmanaged<B, A>::RealIterator’, which is not a type
friend class RealIterator<const ValueType>;
^
```
Switches between using the logarithmic and unit scaling factor based
on whether or not the well has an explicit, positive drainage radius
(WELSPECS item 7). It presently does not include the D factor.
Add a set of unit tests to exercise the facility.
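A minimal sketch of the selection described above; the formula is simplified and all names are assumptions, not the project's actual routine:

```cpp
#include <cmath>

// Choose the logarithmic factor only when WELSPECS item 7 provides an
// explicit, positive drainage radius; otherwise fall back to a unit factor.
// The D factor is not included, matching the commit text.
double connectionScalingFactor(double drainageRadius, double wellboreRadius)
{
    if (drainageRadius > 0.0 && wellboreRadius > 0.0)
        return std::log(drainageRadius / wellboreRadius);

    return 1.0;
}
```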