- the diffusion one is basically done at runtime anyway
- the energy one gives some small code-elimination gains;
however, it complicates the writing of downstream templates.
That way we don't get "unused variable" warnings when someone
defines NDEBUG. While here, also switch to using Geometry
references where appropriate to avoid needlessly copying Geometry
objects.
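A hedged illustration of the pattern (the names are made up, not the
ones touched by the actual commit): assert-only values can be marked,
and geometries bound by reference, like this:

    #include <cassert>

    template <class Entity>
    void inspectGeometry(const Entity& entity)
    {
        // Bind by reference instead of copying the Geometry object.
        const auto& geometry = entity.geometry();

        // With NDEBUG defined the assert vanishes; [[maybe_unused]]
        // keeps the compiler from warning about the unused variable.
        [[maybe_unused]] const int corners = geometry.corners();
        assert(corners > 0);
    }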
You can use EclCpGridVanguard::setExternalLoadBalancer() to
set an external function that creates a vector of integers (containing
the partition for each cell) from the grid. If it is set, this
information will be used for load balancing; otherwise ZOLTAN is used.
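A minimal sketch of such an external balancer, assuming the callback
receives the Dune::CpGrid and returns one target rank per cell (the
exact callback signature and include path may differ):

    #include <vector>

    #include <opm/grid/CpGrid.hpp> // include path assumed

    // Trivial block partitioner; a real balancer would call a graph
    // partitioner instead. One entry per cell, value = target rank.
    std::vector<int> externalLoadBalancer(const Dune::CpGrid& grid)
    {
        const int numCells = grid.numCells();
        const int numRanks = grid.comm().size();
        std::vector<int> parts(numCells);
        for (int cell = 0; cell < numCells; ++cell)
            parts[cell] = static_cast<long>(cell) * numRanks / numCells;
        return parts;
    }

    // Registration before the grid is created (assumed static method):
    // EclCpGridVanguard<TypeTag>::setExternalLoadBalancer(externalLoadBalancer);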
We introduce a new parameter --enable-distributed-wells=<true|false>
for this. During startup we check that the model either only has
standard wells or that multisegment wells are actively interpreted
as standard wells (by way of passing --enable-multisegment-wells=false
as an option).
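A hypothetical sketch of that startup check (names are illustrative,
not the actual opm-simulators code):

    #include <stdexcept>

    // enableDistributedWells mirrors --enable-distributed-wells,
    // enableMultisegmentWells mirrors --enable-multisegment-wells.
    void checkDistributedWellSupport(bool enableDistributedWells,
                                     bool enableMultisegmentWells,
                                     bool modelHasMultisegmentWells)
    {
        if (enableDistributedWells && modelHasMultisegmentWells &&
            enableMultisegmentWells)
        {
            throw std::invalid_argument(
                "Distributed wells currently require standard wells. "
                "Pass --enable-multisegment-wells=false to interpret "
                "multisegment wells as standard wells.");
        }
    }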
Only after rank zero has filtered the schedule are the well
definitions in it guaranteed to have no perforations to inactive
cells. Therefore we broadcast the schedule another time to publish
this to all processes.
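Schematically (a sketch only; the broadcast callable is supplied by
the caller and stands in for whatever serialization is actually used):

    // Pseudocode-level sketch of the sequence described above.
    template <class Comm, class Schedule, class Grid, class Broadcast>
    void filterAndPublish(const Comm& comm, Schedule& schedule,
                          const Grid& grid, Broadcast broadcastSchedule)
    {
        if (comm.rank() == 0) {
            // Only rank 0 sees the complete grid, so only here can
            // perforations to inactive cells be removed reliably.
            schedule.filterConnections(grid);
        }
        // Second broadcast: publish the filtered well definitions.
        broadcastSchedule(schedule);
    }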
Previously, we did the filtering locally on these processes, but that
also removed perforations to cells that are active globally but
not locally. That seems very hard to work with when allowing
distributed wells.
It would remove perforated cells from wells that cross the local
domain's border. That would make it impossible to figure out the
first connection. In addition, we would not be able to check that
the connections exist (as only rank 0 would have the complete
information, leading to inconsistencies).
Previously, we exported an unordered map containing the names of all
wells that are not present in the local part of the grid.
As we envision having wells that are distributed across multiple
processors, this information does not seem to be enough. We need
to be able to set up communication for each well. To do this we need
to find out which ranks handle the perforations of each well.
We now export a full list of well names together with a boolean
indicating whether the well perforates local cells (a vector of pairs
of string and bool).
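Illustratively, the exported structure now looks like this (the well
names are invented):

    #include <string>
    #include <utility>
    #include <vector>

    // One entry per well in the schedule; the flag tells whether the
    // well perforates cells in this rank's part of the grid.
    std::vector<std::pair<std::string, bool>> wellsAndPerforationFlags = {
        {"INJ1",  true},   // perforates local cells
        {"PROD1", false},  // all perforations live on other ranks
    };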
There are field properties that can usually be queried even if they
are not explicitly specified in the input
file (e.g. PVTNUM). Unfortunately, the ParallelEclipseState cannot
foresee which of these will be queried at startup, and after the
load balancing only the master process is able to auto-create
these (easily). Hence this commit uses a fall-back if an unstored
keyword is queried: we use get_global_* to auto-create the
keyword and use functions of the cartesian mapper to extract the
relevant values on the process.
Of course this temporarily wastes space and we might want to resort to
a more memory-savvy approach later.
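A sketch of that fall-back; fieldProps is assumed to offer a
get_global_double() that auto-creates the keyword, and cartMapper to
offer compressedSize() and cartesianIndex() in the style of
Dune::CartesianIndexMapper:

    #include <cstddef>
    #include <string>
    #include <vector>

    template <class FieldProps, class CartMapper>
    std::vector<double> localFieldProps(const FieldProps& fieldProps,
                                        const CartMapper& cartMapper,
                                        const std::string& keyword)
    {
        // Auto-create the full global field on this rank ...
        const std::vector<double> global =
            fieldProps.get_global_double(keyword);

        // ... then keep only the values of the locally stored cells.
        std::vector<double> local(cartMapper.compressedSize());
        for (std::size_t c = 0; c < local.size(); ++c)
            local[c] = global[cartMapper.cartesianIndex(c)];
        return local;
    }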
They are attached to the cells as well and are now distributed
during CpGrid::loadBalance. Due to this change we also rename
FieldPropsDataHandle to PropsCentroidsDataHandle.
The data handle created for this communication could in theory be used
with other DUNE grids, too. In reality we will need to merge it with
the handle that ALUGrid already uses to communicate the cartesian
indices.
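As a rough skeleton, such a DUNE communication data handle could look
like this (simplified; the real PropsCentroidsDataHandle also packs
the field properties, and older DUNE releases spell the size query
'fixedsize'):

    #include <cstddef>

    #include <dune/grid/common/datahandleif.hh>

    // Sketch of a handle shipping one centroid (x, y, z) per cell.
    class CentroidsDataHandle
        : public Dune::CommDataHandleIF<CentroidsDataHandle, double>
    {
    public:
        bool contains(int /*dim*/, int codim) const { return codim == 0; }
        bool fixedSize(int /*dim*/, int /*codim*/) const { return true; }

        template <class Entity>
        std::size_t size(const Entity&) const { return 3; } // x, y, z

        template <class Buffer, class Entity>
        void gather(Buffer& buffer, const Entity& e) const
        {
            const auto center = e.geometry().center();
            for (int i = 0; i < 3; ++i)
                buffer.write(center[i]);
        }

        template <class Buffer, class Entity>
        void scatter(Buffer& buffer, const Entity& /*e*/, std::size_t n)
        {
            double coordinate;
            for (std::size_t i = 0; i < n; ++i)
                buffer.read(coordinate); // a real handle stores this
        }
    };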
This PR gets rid of using the get_global_(double|int) method in
ParallelEclipseState and reduces the amount of boilerplate code there.
... instead of a reference. This is needed as we only want to read the
full deck and construct the EclipseGrid on the master process
with rank 0. Hence all other ranks will pass a nullptr to this
function, which becomes possible with this move from a reference to a
pointer.
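Sketched with an invented function name:

    // Stands in for the function whose parameter changed from
    // 'const EclipseGrid&' to 'const EclipseGrid*'.
    template <class EclipseGrid>
    void createGrids(const EclipseGrid* eclGrid)
    {
        if (eclGrid) {
            // rank 0: the full deck was read here, use the real grid
        } else {
            // other ranks: no EclipseGrid exists, nullptr is expected
        }
    }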
When filterConnections_ is called the grid is not load
balanced yet. Currently that means that grid() will also return the
unbalanced grid and all processes will see the whole global grid.
We will soon change the semantics of the unbalanced grid: only the
root process will see the whole grid and the others will see an empty
partition of it. Hence filtering on this partition would remove all
connections of all wells in the schedule for non-root processes and
produce wrong results.
For non-root processes the filtering needs to be done on the load-
balanced grid. This is accomplished by this commit.
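Schematically, with all pieces supplied by the caller (illustrative
only):

    template <class Comm, class Grid, class Filter>
    void filterSchedule(const Comm& comm, const Grid& globalGrid,
                        const Grid& localGrid, Filter filterConnections)
    {
        if (comm.rank() == 0)
            filterConnections(globalGrid); // root sees the whole grid
        else
            filterConnections(localGrid);  // others see their partition
    }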
This removes a deadlock experienced for some models
where we have specified connections to non-active cells.
On non-IO ranks we are using the local grid since in the
future there will be no global grid available. Wells connecting
cells not on these processors are neglected anyway.
Closes #2101