Using cachedIntensiveQuantities on parallel grids causes a null pointer to be
dereferenced here. Therefore we resort to iterating over the grid and using
the element context (see the sketch below).
If this turns out to be a performance regression, @andlaus owes me a beer!
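Roughly, the change amounts to replacing the cache lookup with an element
loop like the following (a minimal sketch in the style of the ewoms element
context API; the calls and names are illustrative assumptions, not the
actual patch):

    // sketch only: iterate the local grid and evaluate intensive quantities
    // via an element context instead of model.cachedIntensiveQuantities(),
    // which may return a null pointer for some cells on parallel grids
    template <class Simulator, class ElementContext>
    void loopWithElementContext(const Simulator& simulator)
    {
        ElementContext elemCtx(simulator);
        for (const auto& elem : elements(simulator.gridView())) {
            elemCtx.updatePrimaryStencil(elem);
            elemCtx.updatePrimaryIntensiveQuantities(/*timeIdx=*/0);

            const auto& intQuants =
                elemCtx.intensiveQuantities(/*spaceIdx=*/0, /*timeIdx=*/0);
            // ... use intQuants here instead of the cached quantities ...
        }
    }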
Closes #1110
This used to be done in solveJacobianSystem(), but this method is only
supposed to solve the linearized system of equations, not to modify it
IMO.
I tested this patch with Norne: It did not change anything.
The cat must have dragged that in during some of the various rebases of this branch.
This introduced a segmentation fault because, for the second setup, eclIO was already null.
Currently, all parallel DUNE grids store some cells in addition to the
interior cells. Therefore the global number of cells (i.e. the number of
cells a sequential grid would need to cover the same domain with identical
cells) is not the sum of the numbers of cells of the local grids. Previously,
the latter was used. A sketch of one way to obtain the correct count follows.
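One way to get the correct global count is to sum only the interior cells of
each process (a minimal sketch using the Dune grid view API; whether the
actual patch does it exactly this way is not stated here):

    #include <cstddef>
    #include <dune/grid/common/partitionset.hh>
    #include <dune/grid/common/rangegenerators.hh>

    // count only interior cells locally, then sum over all processes so that
    // overlap/ghost cells are not counted twice
    template <class GridView>
    std::size_t globalNumInteriorCells(const GridView& gridView)
    {
        std::size_t localInterior = 0;
        for (const auto& elem : elements(gridView, Dune::Partitions::interior)) {
            (void)elem;
            ++localInterior;
        }
        return gridView.comm().sum(localInterior);
    }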
Before this commit, only the solution of process 0 was written. To fix this,
we make the equilGrid of Ebos available and use it for the output writer. The
properties written initially are first gathered from all processes using the
new gather/scatter utility (the sketch below illustrates the idea).
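The gathering step is conceptually similar to the following (a hedged
illustration using raw MPI; the actual gather/scatter utility has its own
interface and also has to reorder values by global cell index, which is
omitted here):

    #include <mpi.h>
    #include <vector>

    // gather a per-cell vector from all ranks onto rank 0 (illustration only)
    std::vector<double> gatherOnRoot(const std::vector<double>& localData, MPI_Comm comm)
    {
        int rank = 0, size = 1;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        // first collect how many values each rank contributes
        int localCount = static_cast<int>(localData.size());
        std::vector<int> counts(size), offsets(size, 0);
        MPI_Gather(&localCount, 1, MPI_INT, counts.data(), 1, MPI_INT, 0, comm);

        std::vector<double> globalData;
        if (rank == 0) {
            for (int i = 1; i < size; ++i)
                offsets[i] = offsets[i - 1] + counts[i - 1];
            globalData.resize(offsets[size - 1] + counts[size - 1]);
        }

        // then concatenate all contributions on the root process
        MPI_Gatherv(localData.data(), localCount, MPI_DOUBLE,
                    globalData.data(), counts.data(), offsets.data(), MPI_DOUBLE,
                    0, comm);
        return globalData;
    }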
For cells with swat == 1, Ecl outputs rs = rsSat and rv = rvSat in all but
the initial step, where it outputs the rs and rv values calculated by the
initialization. To be compatible, we overwrite rs and rv with the values
passed by the localState. Volume factors and densities need to be
recalculated with the updated rs and rv values (see the sketch below).
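In pseudo-code, the overwrite boils down to something like this (all names
are hypothetical; the recalculation of volume factors and densities with the
updated rs/rv would follow afterwards):

    #include <cstddef>
    #include <vector>

    // illustration only: for fully water-saturated cells, replace rs/rv by
    // the values coming from localState instead of keeping rsSat/rvSat
    void overwriteRsRv(const std::vector<double>& swat,
                       const std::vector<double>& localStateRs,
                       const std::vector<double>& localStateRv,
                       std::vector<double>& rs,
                       std::vector<double>& rv)
    {
        for (std::size_t cellIdx = 0; cellIdx < swat.size(); ++cellIdx) {
            if (swat[cellIdx] >= 1.0) {
                rs[cellIdx] = localStateRs[cellIdx];
                rv[cellIdx] = localStateRv[cellIdx];
                // volume factors and densities depend on rs/rv, so they must
                // be recomputed for these cells with the updated values
            }
        }
    }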
this unifies the code paths that calculate the FIP field totals for the
parallel and the sequential cases and makes the code more robust because it
no longer hard-codes the presence of an intensive quantities cache. in
addition, rock compressibility is now also included in the field totals
instead of just in the FIP regions; this was forgotten in the last FIP PR
because the region values are calculated in a different class using
completely different code. (i.e., regions are done by the model, field
totals by the simulator. that design should win an award, IMO.) a sketch of
the field-total accumulation is below.
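the field totals are conceptually an accumulation over interior cells along
these lines (a hedged sketch of the general fluid-in-place formula only;
dissolved gas, vaporized oil and the rock compressibility term are omitted,
and the names are not those of the actual code):

    #include <cstddef>
    #include <vector>

    struct FipTotals { double oilInPlace = 0.0; double gasInPlace = 0.0; };

    // accumulate surface volumes over interior cells; the result still needs
    // a global reduction (sum over all processes), which is not shown here
    FipTotals accumulateFieldTotals(const std::vector<double>& poreVolume,
                                    const std::vector<double>& so,     // oil saturation
                                    const std::vector<double>& sg,     // gas saturation
                                    const std::vector<double>& invBo,  // 1/Bo
                                    const std::vector<double>& invBg,  // 1/Bg
                                    const std::vector<bool>& isInterior)
    {
        FipTotals totals;
        for (std::size_t c = 0; c < poreVolume.size(); ++c) {
            if (!isInterior[c])
                continue; // do not count overlap/ghost cells twice
            totals.oilInPlace += poreVolume[c] * so[c] * invBo[c];
            totals.gasInPlace += poreVolume[c] * sg[c] * invBg[c];
        }
        return totals;
    }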
with this patch, the field totals for Norne and SPE1 seem to match those
produced by E100 _very_ closely, and parallel and sequential runs of
flow_ebos for Norne and SPE1 produce exactly the same numbers. (This is
probably the case for all decks, but I haven't tested anything else.)
mainly this should now work properly in parallel, because non-interior
cells are not counted multiple times anymore. also, the number of
loops over the global arrays has been reduced, some variables have
been renamed and some comments were added.
finally, this fixes the average pressure for regions that do not contain
hydrocarbons (or at least it unifies it with the approach used for regions
that do contain hydrocarbons); the weighting is sketched below.
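a pore-volume weighted region average with a fallback for hydrocarbon-free
regions could look like this (a hedged sketch only; which pore volume
flow_ebos actually weights by in each case is an assumption here, and the
global reduction over processes is again omitted):

    #include <cstddef>
    #include <vector>

    // average pressure of one region, weighted by hydrocarbon pore volume if
    // the region contains hydrocarbons, otherwise by total pore volume
    double regionAveragePressure(const std::vector<double>& pressure,
                                 const std::vector<double>& poreVolume,
                                 const std::vector<double>& hydrocarbonSaturation,
                                 const std::vector<bool>& inRegion)
    {
        double pHcPv = 0.0, hcPv = 0.0, pPv = 0.0, pv = 0.0;
        for (std::size_t c = 0; c < pressure.size(); ++c) {
            if (!inRegion[c])
                continue;
            const double cellHcPv = poreVolume[c] * hydrocarbonSaturation[c];
            pHcPv += pressure[c] * cellHcPv;
            hcPv  += cellHcPv;
            pPv   += pressure[c] * poreVolume[c];
            pv    += poreVolume[c];
        }
        if (hcPv > 0.0)
            return pHcPv / hcPv;          // hydrocarbon pore-volume weighted
        return pv > 0.0 ? pPv / pv : 0.0; // fallback: total pore volume
    }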