If the guide rate is violated, change to group control.
Note that a factor of 1.01 is added to minimize oscillations.
Fix missing multiplication with group efficiency when accumulating guide rates.
Renames some methods and variables to reflect that the well is no
longer necessarily a StandardWell. It can be either a MultisegmentWell
or a StandardWell. This should avoid confusion about the nature of
the variable.
If a well is under a group that is limited by a target, it should use as
little gas lift as possible. The reduction algorithm will reduce the gas
lift of the well as long as the group's potential is above the group's
target.
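A minimal sketch of that loop, assuming a fixed ALQ increment and a simple
linear response of the group potential to removed lift gas (both are
assumptions for illustration, not the actual GasLiftStage2 logic):

```
#include <algorithm>

// Hedged sketch: strip ALQ from the well in fixed increments while the group
// still produces above its target, so the well ends up using as little gas
// lift as possible.
double reduceAlqTowardsGroupTarget(double alq,               // current lift gas rate
                                   double group_potential,   // group production potential
                                   double group_target,      // group production target
                                   double alq_increment,     // ALQ removed per step
                                   double potential_per_alq) // assumed potential lost per unit ALQ
{
    while (group_potential > group_target && alq > 0.0) {
        alq = std::max(0.0, alq - alq_increment);
        group_potential -= alq_increment * potential_per_alq;
    }
    return alq;
}
```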
Introduces two new data types BasicRates and LimitedRates to capture
oil, gas, and water rates, and whether they have been limited by well
or group targets. This reduces the number of variables that are passed
to and returned from various methods and thus makes the code easier to
read.
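The sketch below only illustrates the shape of such types; the actual
definitions in GasLiftSingleWellGeneric differ in their details and member
names:

```
// Hedged sketch only: rough shape of the two rate types described above.
struct BasicRates {
    double oil = 0.0;
    double gas = 0.0;
    double water = 0.0;
};

struct LimitedRates : BasicRates {
    // Whether each rate has been reduced to honour a well or group target.
    bool oil_is_limited = false;
    bool gas_is_limited = false;
    bool water_is_limited = false;
};
```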
Introduces a gaslift debugging variable in ALQState in WellState. This
variable will persist between timesteps in contrast to when debugging
variables are defined in GasLiftSingleWell, GasLiftGroupState, or GasLiftStage2.
Currently only an integer variable debug_counter is added to ALQState,
which can be used as follows: First debugging is switched on globally
for BlackOilWellModel, GasLiftSingleWell, GasLiftGroupState, and
GasLiftStage2 by setting glift_debug to a true value in BlackOilWellModelGeneric.
Then, the following debugging code can be added to e.g. one of
GasLiftSingleWell, GasLiftGroupState, or GasLiftStage2:

    auto count = debugUpdateGlobalCounter_();
    if (count == some_integer) {
        displayDebugMessage_("stop here");
    }
Here, the integer "some_integer" is typically determined by looking at the
debugging output of a previous run. This is possible because the call to
debugUpdateGlobalCounter_() prints the current value of the counter and then
increments it by one, and these values are easy to recognize in the debug
output. If you find a place in the output that looks suspect, take note of
the counter value around that point and insert it for "some_integer". After
recompiling the code with the desired value for "some_integer", it is easy
to set a breakpoint in GDB at the line

    displayDebugMessage_("stop here");

shown in the above snippet. This should make it quicker to set a breakpoint
in GDB at a given time and point in the simulation.
This is needed to get consistent estimates for the summary vectors
* {F,G,W}OP{R,T}{F,S} -- Free/Vaporized Oil Production
* {F,G,W}GP{R,T}{F,S} -- Free/Dissolved Gas Production
in the case of distributed wells.
Thanks to @blattms for the suggested fix.
Refactors getOilRateWithLimit_(), getGasRateWithLimit_(), and
getWaterRateWithLimit_() in GasLiftSingleWellGeneric.cpp. The
common part of the methods is split out into a new method called
getRateWithLimit_(). The purpose of the refactoring is to reduce
repetitive code and make the code easier to maintain.
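As an illustration of the shape of such a refactoring (names and signatures
here are hypothetical, not the actual GasLiftSingleWellGeneric interface),
the common limiting logic might look like this:

```
#include <optional>
#include <utility>

// Hedged sketch: one generic helper selected by a rate tag replaces the
// duplicated per-phase limiting code.
enum class Rate { oil, gas, water };

// Clamp a proposed rate to an optional target and report whether it was limited.
std::pair<double, bool> getRateWithLimit(Rate /*rate_type*/,
                                         double new_rate,
                                         std::optional<double> target)
{
    if (target && new_rate > *target) {
        return {*target, true};
    }
    return {new_rate, false};
}
```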
This is in agreement with the C++ Core Guidelines: a member function should
be marked const unless it changes the object's observable state. This
gives a more precise statement of design intent, better readability, more
errors caught by the compiler, and sometimes more optimization opportunities.
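A minimal, generic illustration of the guideline, unrelated to any specific
OPM class:

```
// A getter that does not change observable state is marked const; the
// compiler then rejects accidental modification of *this.
class WellRates {
public:
    double oilRate() const { return oil_rate_; }   // read-only access
    void setOilRate(double r) { oil_rate_ = r; }   // mutating, hence not const
private:
    double oil_rate_ = 0.0;
};
```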
Refactor getOilRateWithGroupLimit_(), getGasRateWithGroupLimit_(),
getWaterRateWithGroupLimit_(), and getLiquidRateWithGroupLimit_() into
a single generic method called getRateWithGroupLimit_().
Consider all groups when reducing oil rate to group limits.
The current code just checks the first group limit in the set.
But there might be groups later in the set with more restrictive
limits, which would require the oil rate to be reduced further than
the first limit does.
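A hedged sketch of the intended behaviour, using a plain vector of limits
instead of the actual group data structures:

```
#include <optional>
#include <vector>

// Walk all group limits and keep the most restrictive (smallest) one, instead
// of stopping at the first limit found.
std::optional<double> mostRestrictiveOilLimit(const std::vector<double>& group_limits)
{
    std::optional<double> limit;
    for (const double group_limit : group_limits) {
        if (!limit || group_limit < *limit) {
            limit = group_limit;
        }
    }
    return limit;   // empty if no group limits the well
}
```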
As discussed in PR #3728, it is better to move the two methods
reduceALQtoGroupTarget() and checkGroupTargetsViolated() from
OptimizeState to the parent class, so that we do not have to abuse
OptimizeState in maybeAdjustALQbeforeOptimizeLoop_() just to call
reduceALQtoGroupTarget().
Also fixes a typo (as discussed in PR #3729) in reduceALQtoGroupTarget()
where the water rate is updated with the gas flow rate instead of the
water flow rate. It should be

    water_rate = -potentials[this->parent.water_pos_];

instead of

    water_rate = -potentials[this->parent.gas_pos_];
The cell pressure is independent of the well model and belongs to the interface.
This should move the MSW model one step closer to supporting GasWater cases.
If a well is shut for physical or economic reasons, the well state status
is shut, so we cannot check the well state. Instead we check the well from
the schedule to make sure we don't test wells that are shut by the user.
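A hedged sketch of the check, using a hypothetical ScheduleWell type rather
than the real opm-common classes:

```
#include <string>

// Status as given by the user in the schedule (deck), not the dynamic state.
enum class WellStatus { Open, Stop, Shut };

struct ScheduleWell {
    std::string name;
    WellStatus status;
};

// A well shut for physical/economic reasons is shut only in the well state,
// not in the schedule, so this keeps it testable while skipping wells the
// user has shut explicitly.
bool shouldTestWell(const ScheduleWell& schedule_well)
{
    return schedule_well.status != WellStatus::Shut;
}
```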
The reduction rate is computed differently for cases without wells under GRUP.
For a well to check whether to switch to GRUP, it needs to use the reduction
rate that would have been computed if this particular well were under GRUP
control, and thus it has to recompute the reduction rate without entering the
no-GRUP path.
This commit extends the guide rate reporting to always extract
and report pertinent production guide rates at the well level
(i.e., WxPGR) if at least one of the groups in the well's upline has
an entry in GCONPROD. This is for increased compatibility with
ECLIPSE.
This commit uses the new GroupTreeWalker helper class to ensure that
we always extract and report pertinent injection guide rates at the
well level (i.e., WxIGR) if at least one of the groups in the well's
upline has an entry for the corresponding phase in GCONINJE. This
is for increased compatibility with ECLIPSE.
Prior to this change we would report zero-valued WWIGR vectors on a
real field case which made analysing simulation results needlessly
difficult.
This commit extracts a helper class, GroupTreeWalker, from the
current implementation of 'calculateAllGroupGuiderates()'. This is
in preparation for adding a similar approach to extracting WxIGR for
all wells for which at least one group in the upline has an entry in
GCONINJE.
The user can add visitor operations for wells and groups, typically
with side effects, and then choose whether to run a pre-order walk
(visit groups before their children) or a post-order walk (visit
children, i.e., wells, before their parents).
This guarantees, under the assumption that the group tree does not
have cycles, that we do not accumulate guide rate values into a group
until all of its children have been fully evaluated. We use an iterative
depth-first post-order tree traversal with an explicit stack instead
of a recursive implementation.
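A hedged sketch of that traversal (not the actual GroupTreeWalker interface):
an explicit stack holds each group twice, once to expand its children and
once, after they are done, to visit the group itself:

```
#include <functional>
#include <stack>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

struct GroupNode {
    std::vector<std::string> child_groups;
    std::vector<std::string> child_wells;
};

void postOrderWalk(const std::unordered_map<std::string, GroupNode>& tree,
                   const std::string& root,
                   const std::function<void(const std::string&)>& visitWell,
                   const std::function<void(const std::string&)>& visitGroup)
{
    // Each group is pushed twice: first to expand its children, then, once
    // they are done, to be visited itself (post-order).
    std::stack<std::pair<std::string, bool>> stack;
    stack.push({root, false});

    while (!stack.empty()) {
        auto [group, expanded] = stack.top();
        stack.pop();

        if (expanded) {
            visitGroup(group);   // all children handled, safe to accumulate here
            continue;
        }

        stack.push({group, true});

        const auto& node = tree.at(group);
        for (const auto& well : node.child_wells) {
            visitWell(well);     // wells are leaves, visit immediately
        }
        for (const auto& child : node.child_groups) {
            stack.push({child, false});
        }
    }
}
```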
The previous implementation, which tried to do the same kind of
child-to-parent accumulation, might visit a parent group multiple
times which in turn might lead to losing updates. This is a more
formalised approach to the value accumulation than was originally
employed.
This is needed for distributed wells to save most of the code
from checking whether a perforation is in the interior.
We add new compressedIndexForInterior methods that return -1
for non-interior cells and use them for the wells. This restores
the old behaviour from before commit 1cfe3e0aad.
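A hedged sketch of the idea, with a plain lookup table standing in for the
real grid interface:

```
#include <unordered_map>

// Map a Cartesian cell index to a compressed interior index, or -1 if the
// cell is not an interior cell of this process. Well code can then simply
// skip perforations that map to -1.
int compressedIndexForInterior(int cartesian_index,
                               const std::unordered_map<int, int>& interior_map)
{
    const auto it = interior_map.find(cartesian_index);
    return (it == interior_map.end()) ? -1 : it->second;
}
```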
We used to tie reporting these quantities to whether or not a group
had any guide rates for production (GuideRate::has()), but this is
clearly insufficient for injection guide rates.
This commit adds an overload set named
loadRestartGuideRates
which, collectively, initialises the contained GuideRate object
using guide rate values from the restart file. This is necessary,
but not quite sufficient, to restart models in prediction mode with
group-level constraints.
Mostly for readability in preparation for initialising the guide rate
object with values from the restart file. The new stages, each with
its own 'loadRestart*Data()' member function, are
- Connection
- Segment
- Well
- Group
While here, also switch to using std::any_of() in place of a raw
loop.
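For reference, the kind of replacement meant here (a generic example, not the
actual OPM code):

```
#include <algorithm>
#include <vector>

// Raw loop replaced by std::any_of(): is any value strictly positive?
bool anyPositive(const std::vector<double>& values)
{
    return std::any_of(values.begin(), values.end(),
                       [](const double v) { return v > 0.0; });
}
```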
The output layer expects its input values to be strictly SI, but we
know that the GuideRate container's values are in output units.
Forgotten in PR #3467 (commit 517db198f8).
As multisegment wells may throw in applyUMFPack, this is now needed, and
the exception needs to be communicated to all processes. We do this in
the linearize method of the well model.
Before this change, this is what could happen:
- The process with the exception would have chopped the time step
- The others would have successfully set up the systems and entered the
  linear solve

This produced a deadlock: the process with the exception was waiting in
OPM_END_PARALLEL_TRY during the setup of the shorter time step, while the
others were waiting in collective communication during the setup of the
linear solver for the unchopped time step.
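The general mechanism, sketched here with plain MPI rather than the OPM
parallel try-catch macros (names are illustrative): every rank turns
"did I throw?" into a flag, the flags are combined collectively, and all
ranks then fail together instead of some of them waiting forever:

```
#include <mpi.h>
#include <stdexcept>

// Hedged sketch of communicating an exception to all processes.
template <class Body>
void runAndSynchronizeExceptions(MPI_Comm comm, Body&& body)
{
    int local_failure = 0;
    try {
        body();
    } catch (const std::exception&) {
        local_failure = 1;
    }

    // Combine the per-rank failure flags so every rank sees the same result.
    int global_failure = 0;
    MPI_Allreduce(&local_failure, &global_failure, 1, MPI_INT, MPI_MAX, comm);

    if (global_failure) {
        throw std::runtime_error("A process failed during the well linearization");
    }
}
```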
This saves some (expensive?) lookups that have already been done
in the well model. We had to make the well_container accessible from
the well model for this.
Using the perforation data will automatically make sure that the
perforations are not shut and reside on this process in a parallel run.
No collective blocking communication may happen within a parallel
try-catch clause if exceptions might be thrown before the communication.
The communication has to be reached either by all processes or by none.
Although not declared as such, prepareTimeStep seems to be an internal
function (despite usage in a test) and hence error control can be done
in code calling it.
There was the following problem with the try-catch approach taken:
The calling site `BlackoilWellModel::assemble` looked like this:
```
OPM_BEGIN_PARALLEL_TRY_CATCH();
{
    if (iterationIdx == 0) {
        calculateExplicitQuantities(local_deferredLogger); // no parallel try-catch
        prepareTimeStep(local_deferredLogger);             // includes parallel try-catch
    }
    updateWellControls(local_deferredLogger, /* check group controls */ true);
    // Set the well primary variables based on the value of well solutions
    initPrimaryVariablesEvaluation();
    maybeDoGasLiftOptimize(local_deferredLogger);
    assembleWellEq(dt, local_deferredLogger);
}
OPM_END_PARALLEL_TRY_CATCH_LOG(local_deferredLogger, "assemble() failed: ",
                               terminal_output_);
```
calculateExplicitQuantities had no parallel-try-catch clause inside,
but prepareTimeStep had one.
Unfortunately, calculateExplicitQuantities might throw (on some
processes). In that case the non-throwing processes will try to trigger a
collective communication (to check for errors) in prepareTimeStep, while
the throwing one will move to the OPM_END_PARALLEL_TRY_CATCH_LOG macro at
the end and trigger a different collective communication. Boom, we have a
deadlock.
With this patch there is no (nested parallel)-try-catch clause in the
functions called. (And if an exception is thrown in prepareTimeStep, it
will be logged as being an assemble failure).
The other option would have been to add parallel-try-catch clauses
to all functions called. That would have created a lot more
synchronization points limiting scalability even further.
Not a big fan of macros, but here at least they seem to be the only
option. The problem is that the catch clauses must all catch the same
exceptions that have an entry in ExceptionType, because they might be
nested. In addition we did not have a catch-all clause, which is added
now and is needed in case a called method throws an unexpected exception.
Previously, exceptions happening at this stage have deadlocked
flow. E.g. UniformTabulated2DFunction in opm-material throws
a NumericalIssue if the values passed are outside the tabulated
region. This function is e.g. called in two-phase CO2 storage cases
during BlackoilModel::initializeWellState.
BTW: This is only a first step, as it is not very user friendly that
a simulation aborts at this (late) stage.