The performance summary printed by `flow_ebos` at the end of a Norne run
now looks like this on my machine:
```
Total time (seconds): 773.757
Solver time (seconds): 753.349
Assembly time (seconds): 377.218 (Failed: 23.537; 6.23965%)
Linear solve time (seconds): 352.022 (Failed: 23.2757; 6.61201%)
Update time (seconds): 16.3658 (Failed: 1.13149; 6.91375%)
Output write time (seconds): 22.5991
Overall Well Iterations: 870 (Failed: 35; 4.02299%)
Overall Linearizations: 2098 (Failed: 136; 6.48236%)
Overall Newton Iterations: 1756 (Failed: 136; 7.74487%)
Overall Linear Iterations: 26572 (Failed: 1786; 6.72136%)
```
For the flow_legacy family, nothing changes.
Previously, the substep summary reports were cumulative, which was misleading to the user.
Also, the output has been made a little more compact and readable: the numbers line up
unless unusually many digits are needed for the times or iteration counts.
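Below is a minimal, self-contained sketch of how such aligned reporting can be
produced with fixed precision and padded field widths; it is only an
illustration, not the actual reporting code of `flow_ebos`.
```
#include <iomanip>
#include <iostream>

int main()
{
    // fixed precision and a padded field width keep the columns aligned
    // unless a value needs unusually many digits
    std::cout << std::fixed << std::setprecision(3)
              << "Total time (seconds):  " << std::setw(8) << 773.757 << "\n"
              << "Solver time (seconds): " << std::setw(8) << 753.349 << "\n";
}
```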
Previously, the problem was effectively hard-coded via the TypeTag system.
Instead, we now simply pass the only additional thing needed, the
ElementContext, as an extra template parameter.
Removes the include of the removed header BlackoilModelEbosTypeTags.hpp.
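As an illustration of the change, here is a minimal, compilable sketch; the
type and member function used (ToyElementContext, pressure()) are made up for
this example and are not the simulator's actual API.
```
#include <iostream>

// Toy stand-in for the simulator's ElementContext (hypothetical, for
// illustration only).
struct ToyElementContext
{
    double pressure(int /*dofIdx*/, int /*timeIdx*/) const { return 200.0e5; }
};

// New style: the helper is templated directly on the ElementContext type it
// needs instead of deducing everything from a TypeTag.
template <class ElementContext>
double cellPressure(const ElementContext& elemCtx)
{
    return elemCtx.pressure(/*dofIdx=*/0, /*timeIdx=*/0);
}

int main()
{
    ToyElementContext elemCtx;
    std::cout << "pressure = " << cellPressure(elemCtx) << " Pa\n";
}
```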
We will also need the TypeTag information for the wells.
If it is not in a separate header, we get problems
with recursive inclusion of the headers (BlackoilEbos.hpp
includes the header that also needs the TypeTag information).
Using cachedIntensiveQuantities on parallel grids causes a null pointer to be
dereferenced here. Therefore, we resort to iterating over
the grid and using the ElementContext.
If this turns out to be a performance regression, @andlaus owes me a beer!
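A minimal sketch of the iteration pattern is shown below. The method names
follow the ewoms-style ElementContext interface, but they should be read as
assumptions for illustration, not as a verbatim excerpt of the change.
```
// Sketch: instead of asking the model for cached intensive quantities
// (which may be a null pointer for non-interior cells on parallel grids),
// walk the grid and evaluate them through an ElementContext.
template <class Simulator, class ElementContext>
void forEachIntensiveQuantities(const Simulator& simulator)
{
    const auto& gridView = simulator.gridView();
    ElementContext elemCtx(simulator);

    for (const auto& elem : elements(gridView)) {
        elemCtx.updatePrimaryStencil(elem);
        elemCtx.updatePrimaryIntensiveQuantities(/*timeIdx=*/0);

        // intensive quantities of the element's first degree of freedom
        const auto& intQuants = elemCtx.intensiveQuantities(/*dofIdx=*/0, /*timeIdx=*/0);
        static_cast<void>(intQuants); // ... use intQuants.fluidState() etc. ...
    }
}
```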
Closes #1110
This used to be done in solveJacobianSystem(), but that method is only
supposed to solve the linearized system of equations, not to modify it,
IMO.
I tested this patch with Norne: it did not change anything.
Currently, all parallel DUNE grids store some cells in addition to the
interior cells. Therefore, the global number of cells
(i.e. the number of cells a sequential grid would need to cover the same
domain with identical cells) is not the sum of the numbers of
cells of the local grids. Previously, the latter was used.
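A minimal sketch of the corrected counting, assuming a Dune grid view: only
interior cells are counted locally, and the contributions of all ranks are
then summed.
```
#include <cstddef>

#include <dune/grid/common/partitionset.hh>
#include <dune/grid/common/rangegenerators.hh>

// Count interior cells only, so overlap/ghost cells are not counted twice,
// then sum the local counts over all ranks.
template <class GridView>
std::size_t globalNumCells(const GridView& gridView)
{
    std::size_t interiorCells = 0;
    for (const auto& elem : elements(gridView, Dune::Partitions::interior)) {
        static_cast<void>(elem);
        ++interiorCells;
    }
    return gridView.comm().sum(interiorCells);
}
```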
For cells with swat == 1, ECL outputs rs = rsSat and rv = rvSat in all
but the initial step, where it outputs the rs and rv values calculated by the
initialization. To be compatible, we overwrite rs and rv with the values
passed by the localState. Volume factors and densities need to be
recalculated with the updated rs and rv values.
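A sketch of the overwrite, assuming hypothetical array names (swat, rs, rv)
and a placeholder callback for the PVT recalculation; none of these are the
actual simulator API.
```
#include <cstddef>
#include <vector>

// For water-filled cells, take rs/rv from the state passed in and recompute
// the quantities that depend on them (volume factors, densities).
template <class LocalState, class RecomputePvt>
void overwriteRsRv(const LocalState& localState,
                   const std::vector<double>& swat,
                   std::vector<double>& rs,
                   std::vector<double>& rv,
                   RecomputePvt recomputePvt)
{
    for (std::size_t c = 0; c < swat.size(); ++c) {
        if (swat[c] >= 1.0) {
            rs[c] = localState.rs[c];
            rv[c] = localState.rv[c];
            recomputePvt(c, rs[c], rv[c]); // update volume factors and densities
        }
    }
}
```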
Mainly, this should now work properly in parallel, because non-interior
cells are not counted multiple times anymore. Also, the number of
loops over the global arrays has been reduced, some variables have
been renamed, and some comments were added.
Finally, this fixes the average pressure for regions that do not
contain hydrocarbons (or at least it unifies it with the approach used for
regions that do contain hydrocarbons).
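The sketch below illustrates the counting and averaging idea under simple
assumptions (a pore-volume weighted average over the interior cells of one
region); it is not the actual implementation, and in a parallel run the two
sums would additionally be reduced over all ranks before dividing.
```
#include <cstddef>
#include <vector>

// Pore-volume weighted average pressure of one region, counting each cell
// exactly once by skipping non-interior (overlap/ghost) cells.
double regionAveragePressure(const std::vector<double>& pressure,
                             const std::vector<double>& poreVolume,
                             const std::vector<bool>& isInterior)
{
    double weightedSum = 0.0;
    double totalPoreVolume = 0.0;
    for (std::size_t c = 0; c < pressure.size(); ++c) {
        if (!isInterior[c])
            continue;
        weightedSum += pressure[c] * poreVolume[c];
        totalPoreVolume += poreVolume[c];
    }
    // in parallel: sum weightedSum and totalPoreVolume over all ranks here
    return totalPoreVolume > 0.0 ? weightedSum / totalPoreVolume : 0.0;
}
```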
for now "all pore volume multipliers" means compressibility. the
storage term of the simulator includes them, so they need to be
considered when calculating the fluid in place as well.
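As a sketch of the relation (with placeholder names, not the simulator's
API): the fluid in place uses the same effective pore volume as the storage
term, i.e. the reference pore volume times the pore volume multiplier.
```
#include <cstddef>
#include <vector>

// Surface-volume oil in place of a region, including the pore volume
// multiplier so that it matches the storage term of the simulator.
double oilInPlace(const std::vector<double>& oilSaturation,
                  const std::vector<double>& refPoreVolume,
                  const std::vector<double>& pvMult,
                  const std::vector<double>& bOil) // oil formation volume factor
{
    double fip = 0.0;
    for (std::size_t c = 0; c < oilSaturation.size(); ++c)
        fip += oilSaturation[c] * refPoreVolume[c] * pvMult[c] / bOil[c];
    return fip;
}
```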
In particular, rock compressibility effects are no longer considered in
the FIP numbers. While I'm not sure whether this is correct,
it at least makes the results consistent with those produced by
'flow_legacy'.