This commit adds a new (hidden) debugging option,
DebugEmitCellPartition (--debug-emit-cell-partition)
which, when set, will cause each rank to write a three-column text
file of the form
MPI_Rank Cartesian_Index NLDD_Domain_ID
into the directory
partition/CaseName
of the run's output directory. That file will be named according to
the process' MPI rank, so the first column will be the same as the
file name.
The option is primarily intended for debugging the NLDD partitioning
scheme and is therefore mostly useful for runs with small MPI sizes
(e.g., fewer than 20 ranks).
While here, also make the MPIPartitionFromFile helper class aware of
this format so that we can use concatenated output files as an input
to the MPI partitioning algorithm for repeatability.
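For illustration, the file written by rank 2 in a small run might
contain lines such as the following (all values invented):

    2  1734  0
    2  1735  0
    2  1801  1

Concatenating the per-rank files then yields a single file in the
format that MPIPartitionFromFile accepts.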
This commit introduces new, experimental support for loading a
partitioning of the cells from a text file. The name of the file is
passed to the simulator using the new, hidden command-line option
--external-partition=filename
and we perform some basic checking that the number of elements in the
partition matches the number of cells in the CpGrid object.
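A minimal sketch of that check (readPartitionFile() is a hypothetical
helper; the real error handling differs):

    // Hypothetical sketch only; readPartitionFile() is an invented helper.
    const std::vector<int> partition = readPartitionFile(filename);

    if (partition.size() != static_cast<std::size_t>(grid.size(0))) {
        throw std::runtime_error("External partition has " +
                                 std::to_string(partition.size()) +
                                 " entries, but the grid has " +
                                 std::to_string(grid.size(0)) + " cells");
    }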
also:
-- fixed diffusion (not yet checked)
-- fixed a bug for zero thermal diffusion (NaN in derivatives)
-- added checking of consistency between input and tagged phases
-- added a new function needed by the TPFA linearizer for thermal
-- set the TPFA linearizer for blackoil with energy
-- set the TPFA linearizer for gasoil with energy, which includes co2store
-- NB: diffusion is disabled for these simulators
Mostly to reduce the number of nested scopes in doLoadBalance_() and
to make the data flow a little easier to track for human readers.
While here, also use down-casts to pointers instead of references.
Doing so replaces try/catch blocks with simpler if/else blocks.
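For context, the general idiom looks roughly like this (Base and
Derived are purely illustrative types, not the classes touched by this
change):

    #include <typeinfo>  // std::bad_cast

    struct Base    { virtual ~Base() = default; };
    struct Derived : Base {};

    void withReference(Base& base)
    {
        // Reference down-cast: failure throws std::bad_cast, forcing try/catch.
        try {
            auto& d = dynamic_cast<Derived&>(base);
            // ... use d ...
        }
        catch (const std::bad_cast&) {
            // ... handle the non-Derived case ...
        }
    }

    void withPointer(Base& base)
    {
        // Pointer down-cast: failure yields nullptr, allowing plain if/else.
        if (auto* d = dynamic_cast<Derived*>(&base)) {
            // ... use *d ...
        }
        else {
            // ... handle the non-Derived case ...
        }
    }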
Reimplement face transmissibility calculation/extraction loop in
terms of helper functions elements() and intersections().
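The resulting loop has roughly the following shape (a sketch only;
gridView and the loop body are placeholders):

    // Iterate faces via the Dune range generators elements() and
    // intersections(); the body is a placeholder.
    for (const auto& element : elements(gridView)) {
        for (const auto& intersection : intersections(gridView, element)) {
            if (!intersection.neighbor())
                continue;  // skip boundary faces

            // compute/extract the transmissibility of the face between
            // the inside and outside elements here
        }
    }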
- the diffusion one is basically decided at runtime anyway
- the energy one gives some small code elimination gains;
however, it complicates the writing of downstream templates.
That way we don't get "unused variable" warnings when someone
defines NDEBUG. While here, also switch to using Geometry
references where appropriate to avoid needlessly copying Geometry
objects.
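One common way to silence that kind of warning, shown here only as a
generic illustration and not necessarily the exact change made
(geometry stands in for any Dune Geometry object):

    // A value used only inside assert() becomes "unused" once NDEBUG
    // compiles the assertion away, unless it is marked accordingly.
    [[maybe_unused]] const auto numCorners = geometry.corners();
    assert(numCorners == 8);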
You can use EclCpGridVanguard::setExternalLoadBalancer() to
set an external function that creates a vector of integers (containing
the partition for each cell) from the grid. If it is set, then this
information will be used for load balancing; otherwise Zoltan is used.
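A rough sketch of registering such a function (the callback signature
and the static-member call shown here are assumptions rather than the
documented interface):

    // Sketch only: assumes the callback receives the grid and returns one
    // domain index per cell; putting every cell in domain 0 is just a
    // placeholder "partition".
    Opm::EclCpGridVanguard<TypeTag>::setExternalLoadBalancer(
        [](const Dune::CpGrid& grid)
        {
            return std::vector<int>(grid.size(0), 0);
        });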
We introduce a new parameter --enable-distributed-wells=<true|false>
for this. During startup we check that the model either has only
standard wells or that multisegment wells are actively interpreted
as standard wells (by way of passing --enable-multisegment-wells=false
as an option).
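For example, opting in on the command line might look like this (the
deck name is purely illustrative):

    flow CASE.DATA --enable-distributed-wells=true --enable-multisegment-wells=false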
Only after rank zero has filtered the schedule are the well
definitions in it guaranteed to have no perforations to inactive
cells. Therefore we broadcast the schedule another time to publish
this to all processes.
Previously, we did the filtering locally on these processes, but that
also removed perforations to cells that are active globally but
not locally. That seems very hard to work with when allowing
distributed wells.
It would remove perforated cells from wells that cross the local
domain's border. That would make it impossible to figure out the
first connection. In addition, we would not be able to check that
the connections exist (as only rank 0 would have the complete
information, leading to inconsistency).
Previously, we exported an unordered map containing all names of
wells that are not present in the local part of the grid.
As we envision having wells that are distributed across multiple
processors, this information does not seem to be enough. We need
to be able to set up communication for each well. To do this we need
to find out who handles perforations of each well.
We now export a full list of well names together with a boolean
indicating whether each well perforates local cells (a vector of pairs
of string and bool).
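Roughly, the exported data now has the following shape (well names and
the variable name are illustrative):

    // Illustrative only: each well appears once, paired with a flag saying
    // whether it perforates cells in the local part of the grid.
    std::vector<std::pair<std::string, bool>> wellPerforatesLocalCells {
        {"PROD1", true},   // perforates locally owned cells
        {"INJ1",  false}   // in the schedule, but not perforated locally
    };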