while the printed number of "Non linear iterations" was correct in a
strict sense, it was very confusing when working on the linearization
code, because the last Newton iteration of each time step was
linearized but not solved for (the solution was thus not updated, so
it did not count as a "non linear iteration"). This convention makes
sense for large problems, where the total runtime is completely
dominated by the performance of the linear solver, but smaller
problems exhibit the opposite behavior (i.e., their runtime is
typically dominated by the linearization procedure), so there one is
more interested in the number of linearizations, not the number of
linear solves.
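
A minimal sketch of the loop structure in question (all names here are
illustrative stand-ins, not the actual OPM interfaces); it shows why
the two counters always differ by one:

    #include <utility>

    // Illustrative Newton skeleton; Assembler/LinearSolver/State are
    // hypothetical stand-ins for the real simulator classes.
    template <class Assembler, class LinearSolver, class State>
    std::pair<int, int> newtonLoop(Assembler& assembler,
                                   LinearSolver& solver, State& state)
    {
        int linearizations = 0;
        int newtonIterations = 0;
        for (;;) {
            assembler.linearize(state);  // the last pass happens here...
            ++linearizations;
            if (assembler.converged())
                break;                   // ...but is never solved for
            state -= solver.solve(assembler.jacobian(),
                                  assembler.residual());
            ++newtonIterations;          // counted only when state updates
        }
        // On exit: linearizations == newtonIterations + 1.
        return {linearizations, newtonIterations};
    }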
Previously, the call was made after the grid was distributed.
This meant that each process wrote it, but with only its own cells
active, covering just a part of the whole domain.
With this commit we make the writeInit call before distributing the
grid and make sure that only one process calls it.
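
A rough sketch of the reordering, using hypothetical helper names
(writeInit, distributeGrid):

    // Before, distribution happened first, so every rank wrote a file
    // covering only its own subdomain. After (sketch): write once from
    // the full grid, then distribute.
    if (grid.comm().rank() == 0) {
        writer.writeInit(grid);   // whole domain, exactly one writer
    }
    distributeGrid(grid);         // ranks now own only parts of the domain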
models may need a more detailed picture of where they are in the
simulation. Note that since the timer objects are available at every
call site, this is also not a very deep change.
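
For illustration, the change amounts to threading the already-available
timer through the model calls; the exact signatures below are
assumptions:

    // The call site already has the timer, so passing it down is mechanical.
    model.prepareStep(timer, reservoir_state, well_state);
    // ... nonlinear solve ...
    model.afterStep(timer, reservoir_state, well_state);
    // Inside the model, timer.currentStepNum() and
    // timer.simulationTimeElapsed() now tell it where it is.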
At the moment, for the ParallelDebugOutput, we pass a dummy
dynamic_list_econ_limited; it is not clear yet how this will affect
parallel runs. The basic problem is what happens when initializing the
globalWellState_ if no state information can be found for a well in
the Wells*. If some defaulted values are used, there is no big problem
here.
Changes to BlackoilOutputWriter as mandated by the split and rewrite of
opm-output. Notable changes:
* BlackoilOutputWriter is no longer a child class of OutputWriter.
* Minor interface changes; writeTimeStep requires a Wells pointer
* restore requires a Wells* pointer
* VTK/Matlab support rewrites; no longer inherits OutputWriter
* WellStateFullyImplicitBlackoil::report added, to write its data to a
format understood by opm-output
Relies on utility/Compat.hpp for quick conversion to the opm-output
defined formats.
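
The intended call pattern, sketched with assumed variable names (the
conversion details live in utility/Compat.hpp):

    // WellStateFullyImplicitBlackoil::report() produces the opm-output
    // representation of the well data; the writer consumes it together
    // with the now-mandatory Wells pointer.
    const Opm::data::Wells wellData = well_state.report();
    outputWriter.writeTimeStep(timer, state, well_state, wells);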
The non-cartesian connections are required by the output facilities,
while the truly non-neighbour connections (meaning those not in the grid)
should be used by all other code.
* Compute NNC by face2cell information from a grid
* Pass NNC information to EclipseWriter
* Made NNC respect ECLIPSE input data
* nncStructure() is now exportNncStructure()
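
A sketch of the face2cell-based detection mentioned in the first item
(cartesianNeighbours is a hypothetical helper; the UnstructuredGrid
fields and Opm::NNC::addNNC are used as in opm-core/opm-parser):

    // Collect as NNC every interior face whose two cells are not
    // Cartesian neighbours.
    void computeNnc(const UnstructuredGrid& grid,
                    const std::vector<double>& trans,
                    Opm::NNC& nnc)
    {
        for (int face = 0; face < grid.number_of_faces; ++face) {
            const int c1 = grid.face_cells[2*face + 0];
            const int c2 = grid.face_cells[2*face + 1];
            if (c1 < 0 || c2 < 0)
                continue;                             // boundary face
            if (!cartesianNeighbours(grid, c1, c2))   // hypothetical helper
                nnc.addNNC(grid.global_cell[c1], grid.global_cell[c2],
                           trans[face]);
        }
    }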
A boolean user parameter is added to control the computation of well
potentials.
This is a temporary fix to ensure that no extra computation time is
spent on well potential calculations if they are not needed. The long
term fix will require a more thorough revision of the well group
implementation.
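
Sketched with the opm-core parameter system (the parameter name is an
assumption):

    // Skip the potentially expensive computation unless requested.
    const bool compute_well_potentials =
        param.getDefault("compute_well_potentials", false);
    if (compute_well_potentials) {
        // ... evaluate well potentials as part of the step ...
    }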
- the computation of well potentials in the model class calculates the
well potentials using computeWellFlux()
- in this way the well potential calculations also handle wells where
some perforations are closed by the simulator due to cross-flow.
- the well potentials per perforation and phase are stored in the well
state.
The well potentials are calculated based on the well rates and pressure
drawdown at every time step. They are used to calculate default guide
rates used in group controlled wells.
well_perforation_pressure_diffs is stored in
WellStateFullyImplicitBlackoil as it is needed in the well potential
calculations.
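
A sketch of the resulting storage layout in the well state (np phases,
nperf perforations; potential_of() is a hypothetical stand-in for the
computeWellFlux()-based evaluation):

    // One potential per perforation and phase, flat-indexed like the
    // other per-perforation quantities in the well state.
    std::vector<double> well_potentials(np * nperf, 0.0);
    for (int perf = 0; perf < nperf; ++perf) {
        for (int phase = 0; phase < np; ++phase) {
            well_potentials[np*perf + phase] = potential_of(perf, phase);
        }
    }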
Several files stopped compiling because they relied on opm-parser
headers transitively including their dependencies. Since opm-parser
PR-656 https://github.com/OPM/opm-parser/pull/656 this assumption is
no longer valid.
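
The fix pattern is simply to include what is used directly, e.g. (one
plausible example):

    // Previously pulled in transitively through other opm-parser headers:
    #include <opm/parser/eclipse/EclipseState/Schedule/Schedule.hpp>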
i.e., it now supports keywords like MULTFLT in the SCHEDULE
section. Possibly, the MPI-parallel code paths need some fixes, but
if the geology is not changed during the simulation, the parallel code
will do the same as before.
the most fundamental change of this patch is that the
reference/pointer to the DerivedGeology object is made
non-constant. IMO that's okay, though, because the geology can no
longer be assumed to be constant over the whole simulation run.
Previously, local averages were calculated and used in the
well equations. With this commit we add versions of defineState and
calcAverages that take into account the parallel domain decomposition
and calculate correct averages.
Function calcAverages has a boolean template parameter
indicating whether this is a parallel run. Additionally we introduce
AverageIncrementCalculator with the same boolean template parameter.
In a parallel run we check whether the cell is owned by the process,
and only in that case return a non-zero increment. In a sequential run
(no MPI or just one process, i.e., an empty boost::any parameter) no
overhead is introduced.
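
A condensed sketch of the dispatch (the real class takes more context;
ownership is represented here by a simple mask):

    #include <vector>

    // Primary template: parallel case. Only owner cells contribute, so
    // each value enters the global average exactly once.
    template <bool is_parallel>
    struct AverageIncrementCalculator
    {
        double operator()(const std::vector<double>& quantity,
                          const std::vector<int>& owner_mask,
                          const int cell) const
        {
            return owner_mask[cell] ? quantity[cell] : 0.0;
        }
    };

    // Sequential specialization: no ownership test, hence no overhead.
    template <>
    struct AverageIncrementCalculator<false>
    {
        double operator()(const std::vector<double>& quantity,
                          const std::vector<int>& /*owner_mask*/,
                          const int cell) const
        {
            return quantity[cell];
        }
    };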
On the rare occasion that there are no wells in the model, the wells_
pointer in the BlackoilModelBase class is a null pointer. Therefore we
need to test whether it is null and only process well information if
it is not.
This problem was reintroduced with PR #460 and gets fixed by this
one. Now we can run the equilibrium examples without wells again.
Sorry for the inconvenience.
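
The guard itself is a one-liner, sketched here:

    // wells_ may legitimately be null when the deck contains no wells.
    if (wells_ != nullptr) {
        // ... assemble well contributions and update the well state ...
    }
    // Without the guard, any access like wells_->number_of_wells
    // dereferences a null pointer in well-free runs.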
Explicitly adds bhp control on rate-controlled history matching
injectors. The default bhp limit is a large number to make sure that
the well does not switch. Alternatively, the bhp limit can be
specified using WELTARG. This is typically done to make sure the bhp
limit stays within the pressure limits of the PVT tables. Support for
WELTARG is also added to the history matching producers (WCONHIST).
1) NNCs are added to the grad, div and average operators
2) NNCs are added to the upwind selector
3) NNC transmissibilities are added to the face transmissibilities
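
A sketch of step 3 (names approximate; each NNC entry carries its two
cells and a transmissibility): the NNC transmissibilities are simply
appended after the internal-face transmissibilities, so the operators
in 1) and 2) treat every NNC as one extra face.

    #include <vector>

    // face_trans holds one value per internal grid face; extend it
    // with one extra "face" per non-neighbour connection.
    std::vector<double> trans_all(face_trans);
    for (const auto& c : nnc.nncdata())   // cell1, cell2, trans per NNC
        trans_all.push_back(c.trans);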
in particular, where to put empty lines and spaces. Also added a
copyright statement for myself to a few files and added a comment. The
new comment was requested by [at]bska, the rest was requested by
[at]atgeirr.
this is necessary because some older simulators only provide the
full-fledged solver class but no physical model.
(also, this allows using something other than the standard Newton
solver.)
so far, it is just a copy of the old "SimulatorFullyImplicitBlackoil"
class (which became a simple forward to the base class). The intention
is to unify the common simulator code in this class to avoid excessive
copy-and-pasting.