This is in preparation for adding support for outputting the network
node pressure quantity, GPR, to the summary file. In particular,
'GroupValues' is renamed to 'GroupAndNetworkValues' and has new
individual data members for the former group-level data and the new
node-level data.
Update BlackoilWellModel::groupData() and CollectToIORank
accordingly and bring the parallel restart facility in line with the
new layout.
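Roughly, the new layout can be thought of as in the following minimal
sketch (type and member names are illustrative only, not the actual OPM
declarations):

    // Illustrative sketch: the renamed container keeps the former
    // group-level data and the new node-level (network pressure) data
    // as separate members.
    #include <map>
    #include <string>

    struct GroupValueSketch { double oilRate = 0.0, gasRate = 0.0, waterRate = 0.0; };
    struct NodeValueSketch  { double pressure = 0.0; };  // what GPR would report

    struct GroupAndNetworkValuesSketch {
        std::map<std::string, GroupValueSketch> groupData; // former group-level data
        std::map<std::string, NodeValueSketch>  nodeData;  // new node-level data
    };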
This commit switches the helper function
WellGroupHelpers::updateGuideRateForGroups<>()
to include efficiency factors in the potential rates at group-tree
levels below a particular group. We furthermore switch the helper
function
WellGroupHelpers::updateGuideRatesForWells<>()
to not include efficiency factors at all.
The motivation for this change is that efficiency factors always
apply to the level we're accumulating rate values into rather than
to the rate values themselves.
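The intent can be illustrated with a small accumulation sketch (purely
illustrative, not the actual WellGroupHelpers code): a contribution is
scaled by its efficiency factor only when it is rolled up into the parent
level, never when its own potential is computed.

    #include <vector>

    struct NodeSketch {
        double ownPotential = 0.0;     // well/group potential, no efficiency applied here
        double efficiencyFactor = 1.0; // efficiency of this node within its parent
        std::vector<NodeSketch> children;
    };

    double accumulatedPotential(const NodeSketch& node)
    {
        double total = node.ownPotential;
        for (const auto& child : node.children) {
            // The efficiency factor applies at the level we accumulate into.
            total += child.efficiencyFactor * accumulatedPotential(child);
        }
        return total;
    }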
As the ErrorGuard also dumps warnings, we now always dump
it (previously only on error) so that these messages reach the
console.
If errors are encountered, we log a meaningful error
message (previously the real cause was missing) and do a
graceful exit after MPI_Finalize.
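The resulting control flow is roughly the following (simplified sketch;
the error-guard interface shown here is hypothetical and only stands in
for the real Opm::ErrorGuard):

    #include <mpi.h>
    #include <cstdlib>
    #include <iostream>

    struct ErrorGuardSketch {
        bool hasErrors() const { return errors; }
        void dump() const { /* print collected warnings and errors */ }
        bool errors = false;
    };

    void finishOrExit(ErrorGuardSketch& guard)
    {
        // Always dump, so warnings reach the console even on success.
        guard.dump();

        if (guard.hasErrors()) {
            std::cerr << "Unrecoverable errors encountered, exiting.\n";
            MPI_Finalize();          // graceful exit instead of an abort
            std::exit(EXIT_FAILURE);
        }
    }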
As part of supporting the RPR__xxx summary keywords, the ecloutputblackoilmodule.hh
file has been refactored significantly:
- std::optional<> is used to manage the calculate-once initial values (see the
  sketch after this list).
- several small functions are extracted from the outputFipLog() function.
- std::array<> is used instead of ScalarBuffer to manage containers over all
FipTypes.
- SummaryConfig nodes for the requested summary output are stored in the class.
- A small struct RegionSum is created to hold the region summed properties.
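To illustrate the calculate-once and std::array<> parts, a minimal sketch
(all names made up, not the actual ecloutputblackoilmodule.hh code) could
look like:

    #include <array>
    #include <cstddef>
    #include <optional>
    #include <vector>

    constexpr std::size_t numFipTypes = 3;  // placeholder count

    struct RegionSumSketch {
        // One vector of per-region sums per FIP type, held in a std::array
        // instead of nested ScalarBuffers.
        std::array<std::vector<double>, numFipTypes> regionValues{};
    };

    class FipLogSketch {
    public:
        const RegionSumSketch& initialValues()
        {
            // Computed once on first use, then reused by later reports.
            if (!initial_)
                initial_ = computeRegionSums();
            return *initial_;
        }

    private:
        RegionSumSketch computeRegionSums() const { return {}; }

        std::optional<RegionSumSketch> initial_;
    };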
For PINCH(5)==ALL, we take the minimum of MULTZ+ and ignore MULTZ-.
We also prepare for PINCH(5)==TOP, which takes only the top-level MULTZ+
value.
For non-vertical directions we use both MULTZ+ and MULTZ-.
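The multiplier choice can be sketched as follows (illustrative only, not
the actual transmissibility code):

    #include <algorithm>
    #include <vector>

    enum class PinchMultzMode { ALL, TOP };

    // multzPlusColumn holds the MULTZ+ values of the pinched-out column,
    // top cell first; MULTZ- is ignored in the vertical direction.
    double verticalMultiplier(const std::vector<double>& multzPlusColumn,
                              PinchMultzMode mode)
    {
        if (multzPlusColumn.empty())
            return 1.0;

        if (mode == PinchMultzMode::TOP)
            return multzPlusColumn.front();  // top-level MULTZ+ only

        // PINCH(5)==ALL: minimum of MULTZ+ over the column.
        return *std::min_element(multzPlusColumn.begin(), multzPlusColumn.end());
    }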
We used a method isGlobalIdxOnThisRank to determine whether to write
an entry for a summary keyword (like BPR). Unfortunately, this did
exactly what the name suggested, but we actually passed a cartesian
index to it. That meant that a low cartesian index might have been found
on many processes (each matching a cell with a different cartesian index
and hence yielding wrong values), while for higher cartesian indices no
process would have found a match (resulting in zeros being written).
With this commit we store a sorted list of cartesian indices and query
that in the renamed and restructured function isCartesianidxOnThisRank.
Most probably this broke during refactoring.
Closes #2665
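The fixed lookup boils down to something like the following sketch (names
are illustrative and differ from the actual CollectToIORank code):

    #include <algorithm>
    #include <utility>
    #include <vector>

    class CartesianLookupSketch {
    public:
        explicit CartesianLookupSketch(std::vector<int> cartesianIndices)
            : sorted_(std::move(cartesianIndices))
        {
            std::sort(sorted_.begin(), sorted_.end());
        }

        // Membership query against the sorted cartesian indices on this rank.
        bool isCartesianIdxOnThisRank(int cartesianIdx) const
        {
            return std::binary_search(sorted_.begin(), sorted_.end(), cartesianIdx);
        }

    private:
        std::vector<int> sorted_;
    };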
This will process the same faces in serial and parallel.
Hence the silent assumption that we only process faces
from low cartesian index to high cartesian index holds again.
This should fix PINCH MULTZ ALL in parallel.
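The ordering assumption amounts to something like this (hypothetical
helper, not the actual code):

    // Cartesian indices of the two cells adjacent to a face.
    struct FaceSketch { int insideCartIdx; int outsideCartIdx; };

    // Process a face only when seen from the cell with the lower cartesian
    // index, so every face is handled exactly once and in the same order
    // in serial and parallel runs.
    bool processFromLowSide(const FaceSketch& face)
    {
        return face.insideCartIdx < face.outsideCartIdx;
    }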