depending on the grid implementation, the grid view / grid part object
does not necessarily follow the change. For some reason, the grid part
still does not work in the parallel case (tested with dune-fem 2.4),
but that seems to be an issue on the dune-fem side.
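A minimal sketch of the resulting usage pattern, assuming the standard Dune grid interface (`loadBalance()`, `leafGridView()`); the point is simply to re-acquire the view after the grid has changed:

```cpp
// Whether a view obtained before a change (e.g. load balancing) stays
// valid depends on the grid implementation, so re-acquire it afterwards.
template <class Grid>
void redistribute(Grid& grid)
{
    auto view = grid.leafGridView();      // view of the sequential grid
    grid.loadBalance();                   // redistribute cells among processes
    auto freshView = grid.leafGridView(); // re-acquire; `view` need not
                                          // follow the change
}
```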
this was found using GCC-7's address sanitizer. I suspect that this
did not surface earlier (i.e., with Valgrind) because newly allocated
memory gets initialized to zero by the operating system, so the value
of the pointer was zero and the delete operator did the right thing by
coincidence. The new ASan seems to fill fresh memory with a non-zero
pattern, though.
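A hypothetical reduction of this bug class (not the actual code): deleting a pointer that was never initialized is only harmless if the underlying memory happens to be zero:

```cpp
struct Holder
{
    int* data;                 // never initialized
    Holder() {}                // `data` holds whatever was in that memory
    ~Holder() { delete data; } // no-op if `data` happens to be nullptr,
                               // undefined behavior otherwise
};
// The fix is to initialize the pointer: int* data = nullptr;
```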
This makes these transmissibilities available only for parallel
computations. The reason is that in the sequential case they do not
need to be computed during grid creation and they are also accessible
via the problem object.
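A sketch of the guard, with hypothetical names (`setupTransmissibilities` and `computeGlobalTrans` are stand-ins, not the actual functions):

```cpp
#include <mpi.h>

void computeGlobalTrans(); // stand-in for the actual computation

// Compute the transmissibilities during grid creation only when running
// in parallel; sequentially the problem object provides them anyway.
void setupTransmissibilities(MPI_Comm comm)
{
    int size = 1;
    MPI_Comm_size(comm, &size);
    if (size > 1)
        computeGlobalTrans();
}
```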
Due to the nature of CpGrid (it manages one shared pointer to the equil
grid and one to the distributed grid), this should be faster, as it only
adjusts one shared pointer.
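A minimal sketch of that layout, with hypothetical class and member names; the point is that switching between the two grids only re-seats a `shared_ptr` and copies no grid data:

```cpp
#include <memory>

struct GridData; // stand-in for the actual grid representation

class GridHandle
{
    std::shared_ptr<GridData> equilGrid_;       // global (equil) grid
    std::shared_ptr<GridData> distributedGrid_; // grid after loadBalance
    std::shared_ptr<GridData> current_;         // the one currently in use
public:
    void switchToDistributed() { current_ = distributedGrid_; } // cheap
    void switchToGlobal()      { current_ = equilGrid_; }       // cheap
};
```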
this makes creating the grid a bit slower because the
transmissibilities need to be calculated twice: once for the
sequential grid and once for the distributed one (see the sketch
below). While this corresponds to the way `flow_legacy` does the load
balancing and should allow better results, this does not seem to be
the case for the Norne deck if ZOLTAN is not available:
After loadbalancing process 3 has 4413 cells.
After loadbalancing process 2 has 12390 cells.
After loadbalancing process 0 has 13629 cells.
After loadbalancing process 1 has 21253 cells.
i.e., process 1 is responsible for almost 5 times as many cells as
process 3.
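In outline, the two-pass sequence looks like this (generic sketch; `computeTrans` is a hypothetical helper, and a `loadBalance` overload accepting the transmissibilities as edge weights is assumed):

```cpp
// 1st pass computes transmissibilities on the whole sequential grid so
// they can serve as graph edge weights for the partitioner; the 2nd
// pass recomputes them on the cells local to each process.
template <class Grid, class ComputeTrans>
auto buildDistributedGrid(Grid& grid, ComputeTrans computeTrans)
{
    auto seqTrans = computeTrans(grid); // on the sequential grid
    grid.loadBalance(seqTrans);         // weights guide ZOLTAN if available
    return computeTrans(grid);          // on the distributed grid
}
```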