Invokes the Zoltan library and requires MPI. Client code constructs an
abstract connectivity graph by defining connections/edges through the
'registerConnection()' member function, and may additionally require
that certain cells/vertices be placed in the same domain/block of the
resulting partition. Client code must also supply a callback function
that returns globally unique cell/vertex/object IDs, across all MPI
ranks, for each vertex in the connectivity graph.
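As a rough sketch, client code driving such an interface might look like
the following (the class name and the callback setter are assumptions for
illustration; only registerConnection() is named above):

    #include <functional>
    #include <utility>
    #include <vector>

    // Hypothetical shape of the partitioner interface described above.
    class GraphPartitioner {
    public:
        // Record an edge of the abstract connectivity graph between two
        // locally indexed cells/vertices.
        void registerConnection(int c1, int c2)
        { edges_.emplace_back(c1, c2); }

        // Callback mapping a local index to an ID that is globally unique
        // across all MPI ranks, as required by the Zoltan back-end.
        void setGlobalIdCallback(std::function<long long(int)> cb)
        { globalId_ = std::move(cb); }

    private:
        std::vector<std::pair<int,int>> edges_;
        std::function<long long(int)> globalId_;
    };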
Member function 'partitionElement()' forms the resulting partition
vector, whose size is the total number of objects visible to the local
rank: typically the number of cells owned by the rank plus the number
of overlap cells, i.e., the size of the local grid view.
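A minimal sketch of how a caller might consume the resulting partition
vector, assuming one target-domain index per visible cell (the helper
name is hypothetical):

    #include <cstddef>
    #include <vector>

    // Group local cell indices by the domain/block assigned to them in
    // the partition vector returned by partitionElement().
    std::vector<std::vector<int>>
    splitByDomain(const std::vector<int>& part, int numDomains)
    {
        std::vector<std::vector<int>> cells(numDomains);
        for (std::size_t c = 0; c < part.size(); ++c)
            cells[part[c]].push_back(static_cast<int>(c));
        return cells;
    }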
Implement calls to cuBLAS and cuSPARSE, along with the necessary CUDA
kernels, to perform a single iteration of the Jacobi preconditioner.
Add tests that verify the new kernels as well as the preconditioner in
its entirety. The preconditioner is verified on 2x2 and 3x3 blocks,
which are currently the only supported sizes; 1x1 blocks are not
supported because cuSPARSE does not support them.
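The heavy lifting is delegated to cuBLAS/cuSPARSE; as a minimal
illustration of what applying one block-Jacobi step amounts to, a
hand-written kernel could look roughly like this (data layout and names
are assumptions, not the actual implementation):

    // Apply the pre-inverted diagonal blocks to a residual vector,
    // one thread per block row, for a fixed block size BS (2 or 3).
    template <int BS>
    __global__ void blockJacobiApply(const double* invDiag, // numRows*BS*BS
                                     const double* r,       // numRows*BS
                                     double* x,             // numRows*BS
                                     int numRows)
    {
        const int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row >= numRows)
            return;
        const double* D  = invDiag + row * BS * BS;
        const double* rr = r + row * BS;
        double* xx       = x + row * BS;
        for (int i = 0; i < BS; ++i) {
            double s = 0.0;
            for (int j = 0; j < BS; ++j)
                s += D[i * BS + j] * rr[j];
            xx[i] = s; // x = D^{-1} r for this block row
        }
    }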
Step one of moving Damaris calls out of the EclWriter class and into its own DamarisWriter class:
- EclProblem now calls both writeOutput() methods and passes in the data::Solution object.
- Fixed the first writeOutput() call not having PRESSURE data available.
- data::Solution is now passed by rvalue reference into EclWriter::writeOutput().
- Added a preprocessor guard to prevent inclusion of damariswriter.hh.
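As a rough sketch of the rvalue-reference hand-off (the Solution type
below is a hypothetical stand-in for data::Solution, not the real
class):

    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    // Hypothetical stand-in for data::Solution: named per-cell arrays.
    using Solution = std::map<std::string, std::vector<double>>;

    struct DamarisWriter {
        // Taking the argument by rvalue reference lets the caller move
        // the solution in rather than copy it.
        void writeOutput(Solution&& sol) { cellData_ = std::move(sol); }
        Solution cellData_;
    };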
This commit adds a parallel calculation object derived from the serial
PAvgCalculator class. This parallel version is aware of MPI
communicators and knows how to aggregate contributions from wells that
might be distributed across ranks.
We also add a wrapper class, ParallelWBPCalculation, which knows how to
exchange information between PAvgCalculatorCollection objects on
different ranks and, in particular, how to properly prune inactive
cells/connections.
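The aggregation step boils down to a reduction over ranks; a minimal
sketch of that idea, with assumed names and a plain MPI_Allreduce
standing in for the actual communicator machinery:

    #include <mpi.h>
    #include <vector>

    // Sum per-connection contributions of a distributed well across all
    // ranks; entries for connections not owned locally must be zero.
    std::vector<double>
    aggregateContributions(const std::vector<double>& local, MPI_Comm comm)
    {
        std::vector<double> global(local.size(), 0.0);
        MPI_Allreduce(local.data(), global.data(),
                      static_cast<int>(local.size()),
                      MPI_DOUBLE, MPI_SUM, comm);
        return global;
    }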
This commit adds a new container class, ParallelPAvgDynamicSourceData,
which inherits from PAvgDynamicSourceData and provides a parallel view
of source contributions. Member function collectLocalSources() will
call the user-provided source term evaluation function for each source
location in its purview, typically those locations owned by the current
MPI rank. Those values are then distributed to the other MPI ranks
through member function synchroniseSources(), which fills the base
class's 'src_' data member and makes the values available to clients
through read-only item spans.
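A condensed sketch of the collect/synchronise flow described above
(names and signatures are illustrative, not the actual class
interface):

    #include <mpi.h>
    #include <algorithm>
    #include <functional>
    #include <vector>

    struct SourceDataSketch {
        std::vector<double> src_;         // one value per source location
        std::vector<int>    myLocations_; // locations owned by this rank

        // Evaluate the user-provided source term at each owned location;
        // entries for locations owned elsewhere stay zero.
        void collectLocalSources(const std::function<double(int)>& eval) {
            std::fill(src_.begin(), src_.end(), 0.0);
            for (const int loc : myLocations_)
                src_[loc] = eval(loc);
        }

        // Make every rank's contributions visible on all ranks.
        void synchroniseSources(MPI_Comm comm) {
            MPI_Allreduce(MPI_IN_PLACE, src_.data(),
                          static_cast<int>(src_.size()),
                          MPI_DOUBLE, MPI_SUM, comm);
        }
    };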