While the printed number of "Non linear iterations" was correct in a
strict sense, it was very confusing if one was working on the
linearization code, because the last Newton iteration of each time step
was linearized but not solved for (the solution was thus not updated,
so it did not count as a "non linear iteration"). This makes sense for
large problems where the total runtime is completely dominated by the
performance of the linear solver, but smaller problems exhibit the
opposite behavior (i.e., for them, runtime is typically dominated by
the linearization procedure), so one is more interested in the number
of linearizations, not the number of linear solves.
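A minimal, self-contained sketch of the counting issue (the names and the
convergence test are illustrative, not the simulator's actual code):

```cpp
// Sketch: count linearizations separately from linear solves in a Newton loop.
#include <iostream>

int main()
{
    const int maxIter = 10;
    int linearizations = 0;  // one per assembly of residual/Jacobian
    int linearSolves   = 0;  // one per actual solve + solution update

    for (int it = 0; it < maxIter; ++it) {
        // assemble();  -- linearize around the current solution
        ++linearizations;

        const bool converged = (it == 3);  // stand-in for a convergence test
        if (converged) {
            // The converged iteration was linearized but never solved for,
            // so linearizations == linearSolves + 1 for this time step.
            break;
        }
        // solveAndUpdate();  -- only now does a linear solve happen
        ++linearSolves;
    }
    std::cout << linearizations << " linearizations, "
              << linearSolves << " linear solves\n";
}
```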
Models may need a more detailed picture of where they are in the
simulation. Note that since the timer objects are available at every
call site, this is also not a very deep change.
At the moment, for the ParallelDebugOutput, we pass a dummy
dynamic_list_econ_limited; it is not yet clear how this will affect
parallel runs. The basic problem is what happens, when initializing the
globalWellState_, if no state information can be found for a well in the Wells*.
If some defaulted values are used, then there is no big problem here.
Changes to BlackoilOutputWriter as mandated by the split and rewrite of
opm-output. Notable changes:
* BlackoilOutputWriter is no longer a child class of OutputWriter.
* Minor interface changes; writeTimeStep requires a Wells pointer
* restore requires a Wells* pointer
* VTK/Matlab support rewrites; no longer inherits OutputWriter
* WellStateFullyImplicitBlackoil::report added, to write its data to a
format understood by opm-output
Relies on utility/Compat.hpp for quick conversion to the opm-output
defined formats.
The non-cartesian connections are required by the output facilities,
while the truly non-neighbour connections (meaning those not in the grid)
should be used by all other code.
* Compute NNC from the face2cell information of a grid
* Pass NNC information to EclipseWriter
* Made NNC respect ECLIPSE input data
* nncStructure() is now exportNncStructure()
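A minimal sketch of how non-Cartesian connections can be detected from the
grid's face-to-cell mapping, assuming an UnstructuredGrid-like layout
(face_cells, global_cell, cartdims); this is illustrative only, not the
actual code behind exportNncStructure():

```cpp
// Illustrative sketch: a connection is "non-Cartesian" if the two cells a
// face joins are not axis-aligned neighbours in the logically Cartesian grid.
#include <cstdlib>
#include <utility>
#include <vector>

std::vector<std::pair<int,int>>
findNonCartesianConnections(const int* face_cells,   // 2 entries per face
                            int num_faces,
                            const int* global_cell,  // active -> Cartesian index
                            const int* cartdims)     // nx, ny, nz
{
    std::vector<std::pair<int,int>> nnc;
    const int nx = cartdims[0], ny = cartdims[1];
    for (int f = 0; f < num_faces; ++f) {
        const int c1 = face_cells[2*f];
        const int c2 = face_cells[2*f + 1];
        if (c1 < 0 || c2 < 0) continue;              // boundary face
        const int g1 = global_cell ? global_cell[c1] : c1;
        const int g2 = global_cell ? global_cell[c2] : c2;
        const int i1 = g1 % nx, j1 = (g1 / nx) % ny, k1 = g1 / (nx*ny);
        const int i2 = g2 % nx, j2 = (g2 / nx) % ny, k2 = g2 / (nx*ny);
        const int dist = std::abs(i1-i2) + std::abs(j1-j2) + std::abs(k1-k2);
        if (dist != 1) nnc.emplace_back(c1, c2);     // not a direct neighbour
    }
    return nnc;
}
```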
A boolean user parameter is added to control the computation of well
potentials.
This is a temporary fix to ensure that no extra computation time is spent
on the well potential calculation if it is not needed. The long term fix
will require a more thorough revision of the well group implementation.
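A minimal sketch of how such a switch can be read, assuming opm-core's
ParameterGroup; the parameter name used here is illustrative, not
necessarily the one added by this change:

```cpp
// Sketch: reading an on/off switch for the well potential computation.
#include <opm/core/utility/parameters/ParameterGroup.hpp>

bool computeWellPotentials(const Opm::parameter::ParameterGroup& param)
{
    // Defaults to false so no extra work is done unless explicitly requested.
    return param.getDefault("compute_well_potentials", false);
}
```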
- the computation of well potentials in the model class calculates the
well potentials using computeWellFlux()
- in this way the well potential calculations also handle wells where
some perforations are closed by the simulator due to cross-flow.
- the well potentials per perforation and phase are stored in the well
state.
The well potentials are calculated based on the well rates and pressure
drawdown at every time step. They are used to calculate the default guide
rates for group controlled wells.
well_perforation_pressure_diffs is stored in
WellStateFullyImplicitBlackoil as it is needed in the well potential
calculations.
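A rough sketch of the drawdown-based, per-perforation idea described above;
the quantities and names (well_index, mobility, perf_pressure_diff) are
assumptions made for illustration, not the actual computeWellFlux() code:

```cpp
// Illustrative sketch: well potential of one perforation as the flow the
// well could deliver at the current drawdown, accumulated per phase.
#include <vector>

std::vector<double>
perforationPotential(double well_index,                   // connection transmissibility
                     const std::vector<double>& mobility, // phase mobilities at the perforation
                     double cell_pressure,                 // reservoir pressure in the cell
                     double bhp,                           // well bottom-hole pressure
                     double perf_pressure_diff)            // hydrostatic diff along the well
{
    // Drawdown seen by this perforation.
    const double drawdown = cell_pressure - (bhp + perf_pressure_diff);

    std::vector<double> potential(mobility.size(), 0.0);
    if (drawdown > 0.0) {                  // only inflow counts as potential
        for (std::size_t p = 0; p < mobility.size(); ++p) {
            potential[p] = well_index * mobility[p] * drawdown;
        }
    }
    return potential;
}
```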
Several files stopped compiling because they relied on opm-parser headers
transitively including other headers. Since opm-parser PR-656
https://github.com/OPM/opm-parser/pull/656 this assumption is no longer valid.
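The fix is to include what you use directly; a typical (illustrative)
example, assuming a file that uses EclipseState but previously got its
header only transitively:

```cpp
// Before: EclipseState was only available through some other opm-parser
// header that happened to include it. After PR-656 the include must be
// explicit in every file that uses the class.
#include <opm/parser/eclipse/EclipseState/EclipseState.hpp>
```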
I.e., it now supports stuff like MULTFLT in the schedule
section. Possibly, the MPI-parallel code paths need some fixes (but if
the geology is not changed during the simulation, the parallel code
will do the same as before).
The most fundamental change of this patch is that the
reference/pointer to the DerivedGeology object is made
non-constant. IMO that's okay, though, because the geology can no
longer be assumed to be constant over the whole simulation run.
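In code terms the change boils down to something like the following sketch
(class and member names are illustrative):

```cpp
class DerivedGeology;  // from opm-simulators

// Sketch: the simulator now holds a mutable reference, since geology
// properties (e.g. transmissibilities affected by MULTFLT) may be updated
// between report steps.
class SimulatorSketch {
public:
    explicit SimulatorSketch(DerivedGeology& geo) : geo_(geo) {}
private:
    DerivedGeology& geo_;   // was: const DerivedGeology& geo_;
};
```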
Previously, local averages were calculated and used in the
well equations. With this commit we add versions of defineState and
calcAverages that take into account the parallel domain decomposition
and calculate correct averages.
The function calcAverages has a boolean template parameter
indicating whether this is a parallel run. Additionally, we introduce
AverageIncrementCalculator with the same boolean template parameter.
In a parallel run we check whether the cell is owned by the process and
only in that case return a non-zero increment. In a sequential run
(no MPI or just one process -> empty boost::any parameter) no overhead is
introduced.
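A minimal sketch of the template-dispatch idea, assuming a simplified
interface (the real code works on the simulator's state and the parallel
information carried in the boost::any; OwnerMask here is a stand-in):

```cpp
// Illustrative sketch of compile-time dispatch on "is this a parallel run?".
#include <vector>

template <bool IsParallel>
struct AverageIncrementCalculator;

// Parallel run: only cells owned by this process contribute, so values in
// overlap/ghost cells are not counted twice when the averages are summed up.
template <>
struct AverageIncrementCalculator<true>
{
    double operator()(const std::vector<double>& values,
                      const std::vector<int>& ownerMask,   // 1 = owned, 0 = ghost
                      int cell) const
    {
        return ownerMask[cell] ? values[cell] : 0.0;
    }
};

// Sequential run: every cell contributes; no mask lookup, no overhead.
template <>
struct AverageIncrementCalculator<false>
{
    double operator()(const std::vector<double>& values,
                      const std::vector<int>& /*ownerMask*/,
                      int cell) const
    {
        return values[cell];
    }
};
```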