This commit ensures that we compute inter-region flow rates on all
ranks and collect those on the I/O rank using CollectDataToIORank.
We add a trivial EclInterRegFlowMap data member to the communication
object. This data member only knows the pertinent FIP region array
names, but uses existing read/write support to collect contributions
from all ranks into this "global" object. We then pass this global
object on to the summary evaluation routine.
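A minimal sketch of the collection pattern, with all type and function names hypothetical (the real code uses EclInterRegFlowMap and the existing CollectDataToIORank read/write support):

```cpp
#include <map>
#include <string>
#include <tuple>

// Hypothetical stand-ins: the key identifies a FIP region array and an
// ordered region pair; the value holds the accumulated flow rate.
using FlowKey = std::tuple<std::string, int, int>; // (array name, r1, r2)
using FlowMap = std::map<FlowKey, double>;

// On the I/O rank: merge one rank's local contributions into the
// "global" object. Rates are accumulated, not overwritten, since
// several ranks may contribute to the same region pair.
void collectContributions(FlowMap& global, const FlowMap& fromRank)
{
    for (const auto& [key, rate] : fromRank)
        global[key] += rate;
}
```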
Make it a template for grid types. This allows using
explicit template instantiation and compiling this code
only once per grid type, instead of once per simulator object.
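A sketch of the mechanism with a stand-in grid type (the real code would instantiate for concrete grids such as Dune::CpGrid):

```cpp
// header: the class is now a template over the grid type.
template <class Grid>
class CollectDataToIORank {
    // ...
};

struct SomeGrid {}; // stand-in for a concrete grid type

// header: suppress implicit instantiation in every simulator
// translation unit.
extern template class CollectDataToIORank<SomeGrid>;

// one .cpp file: the single explicit instantiation per grid type.
template class CollectDataToIORank<SomeGrid>;
```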
This is needed to properly communicate type-specific parameters to
other processes. In particular, outputting the Carter-Tracy summary
vectors AAQTD and AAQPD needs this in a parallel simulation run.
This is in preparation of adding support for outputting the network
node pressure quantity, GPR, to the summary file. In particular,
'GroupValues' is renamed to 'GroupAndNetworkValues' and has new
individual data members for the former group-level data and the new
node-level data.
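A sketch of the new layout; only the outer name is from this commit, the member and inner type names are assumptions:

```cpp
#include <map>
#include <string>

struct GroupConstraints { /* current group control values */ };
struct NodeData { double pressure = 0.0; }; // backs the GPR keyword

// Formerly 'GroupValues': one container with separate members for the
// group-level and the new node-level data.
struct GroupAndNetworkValues {
    std::map<std::string, GroupConstraints> groupData;
    std::map<std::string, NodeData>         nodeData;
};
```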
Update BlackoilWellModel::groupData() and CollectToIORank
accordingly and bring the parallel restart facility in line with the
new layout.
We used a method isGlobalIdxOnThisRank to determine whether to write
an entry for a summary keyword (like BPR). Unfortunately, this did
exactly what the name suggested, but we actually passed a cartesian
index to it. That meant that a lower cartesian index might have been
found on many processes (matching cells with different cartesian
indices, hence resulting in wrong values), while for higher ones no
process would have been found (resulting in writing zeros).
With this commit we store a sorted list of cartesian indices and query
that in the renamed and restructured function isCartesianidxOnThisRank.
Most probably this broke during refactoring.
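A minimal sketch of the fixed query, assuming the sorted list is built once from the interior cells of this rank:

```cpp
#include <algorithm>
#include <vector>

class RankCells {
    std::vector<int> sortedCartesianIdx_; // one entry per interior cell
public:
    explicit RankCells(std::vector<int> cartesianIndices)
        : sortedCartesianIdx_(std::move(cartesianIndices))
    {
        std::sort(sortedCartesianIdx_.begin(), sortedCartesianIdx_.end());
    }

    // Membership is now decided against cartesian indices, matching
    // what the callers actually pass in.
    bool isCartesianIdxOnThisRank(int cartesianIdx) const
    {
        return std::binary_search(sortedCartesianIdx_.begin(),
                                  sortedCartesianIdx_.end(),
                                  cartesianIdx);
    }
};
```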
Closes #2665
This commit makes the 'groupData()' function return a
map<string, Opm::data::GroupData>
object instead of a
map<string, Opm::data::GroupConstraints>
object. The 'GroupData' structure adds a level of indirection to
the current per-group summary quantities that are directly assigned
by the simulator. While here, also move the assignment of the
current group constraints/control values out to a separate helper
to reduce the body of the per-group loop in 'groupData()'.
This is in preparation of adding support for reporting group-level
production/injection guiderates (Gx[IP]GR) to the summary file.
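A sketch of the added indirection; the member names are assumptions:

```cpp
#include <map>
#include <string>

namespace Opm::data {
struct GroupConstraints { /* current control values */ };

// The extra level of indirection: per-group summary quantities now
// hang off 'GroupData', so group-level guiderates (Gx[IP]GR) can be
// added later without changing the map's value type again.
struct GroupData {
    GroupConstraints currentControl;
};
} // namespace Opm::data

// New signature: was map<string, GroupConstraints> before.
std::map<std::string, Opm::data::GroupData> groupData();
```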
This is needed to support dune-fem where the local mapper might
be
`MultipleCodimMultipleGeomTypeMapper<GridView<GridPart2GridViewTraits<AdaptiveLeafGridPart<CpGrid, (PartitionIteratorType)4,
false> > >, Dune::Impl::MCMGFailLayout>`
as opposed to the global one being
`MultipleCodimMultipleGeomTypeMapper<GridView<DefaultLeafGridViewTraits<CpGrid>>, Dune::Impl::MCMGFailLayout>`.
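One way to accommodate two unrelated mapper types is to take the mapper as a template parameter rather than hard-coding the global mapper's type; an illustrative sketch, not necessarily this commit's exact change:

```cpp
// Works for both the dune-fem local mapper and the global CpGrid leaf
// mapper, as long as both provide index(element).
template <class ElementMapper, class Element>
int mappedIndex(const ElementMapper& mapper, const Element& element)
{
    return mapper.index(element);
}
```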
Closes #2095.
Since the indexMaps no longer contain the global element index
(but the global id), the old code did not work anymore.
Unfortunately, we are using CpGrid-specific functions (scatterData)
to get the mapping. Therefore this might be broken if other grids are
used.
Previously, it was still assumed that all ranks knew the global grid
and each map in CollectDataToIORank::indexMaps_ was a mapping of
send/receive index to the index of the cell using the mapper of the
corresponding global grid.
With this patch, inside of CollectDataToIORank::DistributeIndexMapping,
indexMaps is a mapping from send/receive index to global cartesian
index until the destructor is run. Inside the destructor, on the I/O
rank, the remapping to the mapped index of the global grid happens and
the ranks array is computed.
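A sketch of the two-phase scheme, with hypothetical member names:

```cpp
#include <unordered_map>
#include <vector>

struct DistributeIndexMappingSketch {
    // While the object is alive: send/receive index -> global cartesian
    // index, valid on every rank without knowledge of the global grid.
    std::vector<std::vector<int>> indexMaps_;        // one map per rank
    std::unordered_map<int, int> cartesianToMapped_; // I/O rank only
    std::vector<int> ranks_;  // presized to the global number of cells
    bool isIORank_ = false;

    ~DistributeIndexMappingSketch()
    {
        if (!isIORank_)
            return;
        int rank = 0;
        for (auto& perRank : indexMaps_) {
            for (auto& idx : perRank) {
                idx = cartesianToMapped_.at(idx); // cartesian -> mapped
                ranks_[idx] = rank;               // owner of this cell
            }
            ++rank;
        }
    }
};
```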
IMO the term "vanguard" expresses better what these classes are
supposed to do: level the ground for the cavalry. Normally this simply
means to create and distribute a grid object, but it can become quite
a bit more complicated, as exemplified by the vanguard classes of
ebos.
The new eclwriter handles output and restart using the eclIO from
opm-output. All tests in opm-simulators pass and
MPI restart works.
Standard initialization is done prior to restart in order
to compute correct initial thpressure values. This is not
necessary if thpressures are written to the restart file.
TODO: Some trivial fields are written out in order to mimic legacy code,
this should be cleaned up
TODO: Output of wells, FIP, NNC is still done in opm-simulators.
This should be moved later.
Judging from ParallelDebugOutput.hh this is what it should be.
Before this commit it was empty as it had space reserved but never
any entries pushed (they were inserted with operator[]).
This reverts commit dde79daf4ec2004148a58250c4c8af5390251689.
Judging from ParallelDebugOutput.hh this should not be a map from element
index to interior element index, but a list of indices of all interior
elements. Therefore we need to reserve and later on push_back.
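The bug in miniature, with stand-in logic for "is interior": reserve() changes capacity but not size, so pairing it with operator[] leaves the container logically empty (and is undefined behaviour), while push_back() actually appends:

```cpp
#include <vector>

std::vector<int> interiorIndices(const std::vector<int>& elements,
                                 bool (*isInterior)(int))
{
    std::vector<int> interior;
    interior.reserve(elements.size()); // capacity only, size stays 0
    for (int idx : elements)
        if (isInterior(idx))
            interior.push_back(idx);   // grows the list as intended
    return interior;
}
```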
This reverts commit 09db2fd412abe4b8a2f52274bcc0041e4b20a94d.
Judging from ParallelDebugOutput.hh this should not be a map from element
index to interior element index, but a list of indices of all interior
elements.
Previously distributedCartesianIndex was resized and afterwards all
entries were added with push_back. Therefore the array was twice as
big as expected and contained wrong values in the front.
With this commit we insert the values using random access. Thus the
size is as expected and the indices of the entries do not depend on the
order of the grid traversal.
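The fix in miniature: after resize(n) the entries are assigned by position, so the array holds exactly n values and their placement no longer depends on traversal order (resize followed by push_back had yielded 2n entries with zeros in front):

```cpp
#include <utility>
#include <vector>

void fillDistributedCartesianIndex(
    std::vector<int>& distributedCartesianIndex,
    const std::vector<std::pair<int, int>>& localToCartesian)
{
    distributedCartesianIndex.resize(localToCartesian.size());
    for (const auto& [localIdx, cartesianIdx] : localToCartesian)
        distributedCartesianIndex[localIdx] = cartesianIdx; // not push_back
}
```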