further, this cleans up the code of the parameter system and the
startup routines a bit. finally, it adds support for positional
parameters to ebos as well as brief descriptions to ebos and the lens
problem.
The energy conservation is enabled by specifying either TEMP or
THERMAL in the deck. The deck also needs to contain the relevant fluid
and rock heat properties.
The blackoil + energy equations are solved fully implicitly.
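As a rough illustration (this assumes the `Opm::Deck` API of the OPM
parser; the helper function below is made up and not necessarily how
ebos wires the switch into its model), enabling the energy equation
boils down to a keyword check:

```cpp
#include <opm/parser/eclipse/Deck/Deck.hpp>

// illustration only: solve the energy equation if the deck requests a
// temperature-dependent run via either of the two keywords
bool deckEnablesEnergy(const Opm::Deck& deck)
{
    return deck.hasKeyword("TEMP") || deck.hasKeyword("THERMAL");
}
```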
Dune::set_singularity_limit() was removed and the ILU preconditioners
seem to have been refactored. The ILU refactoring included making the
order of the preconditioner a template parameter of the preconditioner
class, i.e., it can no longer be specified at runtime.
Note that the AMG code in the dune master currently produces quite a
few warnings because of the latter point, but as far as I can see,
there is nothing which can be done about this from outside of
dune-istl.
IMO the term "vanguard" expresses better what these classes are
supposed to do: level the ground for the cavalry. Normally this simply
means to create and distribute a grid object, but it can become quite
a bit more complicated, as exemplified by the vanguard classes of
ebos.
instead of passing a "minimal" fluid state that defines the
thermodynamic conditions on the domain boundary and letting the models
calculate everything they need from it, it is now assumed that all
quantities needed by the code that computes the boundary fluxes are
defined. This simplifies the boundary flux computation code, allows
getting rid of the `paramCache` argument for these methods, and
potentially speeds things up because quantities no longer get
re-calculated unconditionally.
on the flip side, this requires slightly more effort to define the
conditions at the boundary on the problem level, and it makes it less
obvious which quantities are actually used. It also gives one more
freedom to shoot oneself in the foot when specifying boundary
conditions, but tools like valgrind or ASAN will normally complain
about undefined quantities if this happens.
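To make this concrete, here is a sketch of what specifying boundary
conditions on the problem level could look like under the new
convention (the setter names follow opm-material's fluid state API,
but the function itself and the concrete values are made up for
illustration):

```cpp
// everything the boundary flux code may touch is set explicitly instead of
// being derived on the fly via a parameter cache
template <class FluidState, class Scalar>
void setBoundaryFluidState(FluidState& fs, Scalar temperature, Scalar pressure)
{
    fs.setTemperature(temperature);

    for (int phaseIdx = 0; phaseIdx < FluidState::numPhases; ++phaseIdx) {
        fs.setPressure(phaseIdx, pressure);
        fs.setSaturation(phaseIdx, phaseIdx == 0 ? 1.0 : 0.0);

        // previously these would have been recomputed by the model from the
        // "minimal" fluid state; now they must be provided up front
        // (placeholder values below)
        fs.setDensity(phaseIdx, 1000.0);
        fs.setViscosity(phaseIdx, 1e-3);
        fs.setEnthalpy(phaseIdx, 0.0);
    }
}
```

If a quantity that is actually used is forgotten here, valgrind or
ASAN will usually flag the resulting use of an undefined value.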
according to Wikipedia, the term "heat" refers to the energy
transferred due to a temperature gradient, i.e., it only makes sense
if such a gradient is present, which is not the case for the storage
term. This means that the term "heat conductivity" is technically
meaningful, but "thermal conductivity" is IMO more consistent.
this has partially been done in opm-material already, while in eWoms
the usage was pretty inconsistent; it thus also requires a patch in
opm-material.
it broke because of the recent refactoring of the energy material laws
in opm-material. The reason why nobody noticed is that this test is
only compiled if dune-alugrid is available.
it seems like most build systems pass a -DHAVE_CONFIG_H flag to the
compiler that still causes `#if HAVE_CONFIG_H` to evaluate to false,
even though it clearly is supposed to be triggered.
That said, I do not really see a good reason why the inclusion of the
`config.h` file should be guarded in the first place: the file is
guaranteed to always be available with proper build systems, and if it
were not included, the build would either break at the linking stage
or -- at the very least -- the runtime behavior of the resulting
libraries would be very awkward.
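For reference, the two variants in question look roughly like this;
the guarded form is the one this change argues against:

```cpp
// guarded inclusion: as described above, the -DHAVE_CONFIG_H flag passed by
// most build systems can still leave the condition below false, so config.h
// gets silently skipped even though it clearly was meant to be included
#if HAVE_CONFIG_H
#include "config.h"
#endif
```

versus the unconditional form:

```cpp
// unguarded inclusion: config.h is guaranteed to be available with any
// proper build system, so there is nothing to check for
#include "config.h"
```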
It has been replaced with the faster local-ad-based code, which is now
part of the integrated flow.cpp application.
We do not remove the old sequential implicit polymer simulators.
Note 1: The initialization code now always considers 3 phases.
For 2-phase cases a trivial (0) state is returned.
Note 2: The initialization code does not compute a BlackoilState,
but instead passes the initialization object along with the initial state.
The motivation for this PR is that currently the build fails on my
Ubuntu 17.10 laptop when using two parallel build processes because
that machine "only" has 8 GB of RAM (granted, the optimization options
may have been a bit excessive). under the new scheme (sketched below),
each specialization of the simulator is put into a separate compile
unit which is part of libopmsimulators. this has the advantages that
the specialized simulators and the main binary automatically stay
consistent, that compilation is faster (2m25s vs. 4m16s on my machine)
because all compile units can be built in parallel, and that
compilation takes up less RAM because there is no need to instantiate
all specializations in a single compile unit.
on the minus side, all specializations must now always be compiled,
the approach means slightly more work for the maintainers, and the
flow_* startup code gets even more complicated.
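The scheme itself can be sketched as follows; all names here are
hypothetical and only illustrate the structure, not the actual
opm-simulators sources:

```cpp
// ---- simulator.hpp: stand-in for the heavily templated simulator ----
template <class TypeTag>
class Simulator
{
public:
    int run(int /*argc*/, char** /*argv*/)
    { /* assemble and solve the specialized equations */ return 0; }
};

// ---- flow_gasoil.cpp: one compile unit inside libopmsimulators ----
// it instantiates exactly one specialization, so only this translation unit
// pays the compile time and RAM for it, and all such units build in parallel
struct GasOilTypeTag {}; // hypothetical type tag for the gas/oil case

int flowGasOilMain(int argc, char** argv)
{
    Simulator<GasOilTypeTag> simulator;
    return simulator.run(argc, argv);
}

// ---- flow.cpp: the main binary ----
// it merely dispatches to the specialization selected at runtime (e.g. based
// on the phases present in the deck) and stays consistent with the library
// automatically because both are built from the same sources
int main(int argc, char** argv)
{ return flowGasOilMain(argc, argv); }
```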
in particular, this implied some changes to the MPI initialization
code. since dune-fem's GridPart class currently has issues with
CpGrid's implementation of loadBalance(), parallel computations still
do not work if dune-fem is around, but at least sequential ones now
do even if MPI is enabled.
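The usual way to handle this in Dune-based codes is to funnel all MPI
setup through `Dune::MPIHelper`; the snippet below is only a sketch of
that idiom, not necessarily the exact code touched by this change:

```cpp
#include <iostream>
#include <dune/common/parallel/mpihelper.hh>

int main(int argc, char** argv)
{
    // initializes MPI (if the binary was built with MPI support) exactly once
    // and finalizes it automatically at program exit
    const auto& mpiHelper = Dune::MPIHelper::instance(argc, argv);

    if (mpiHelper.rank() == 0)
        std::cout << "running with " << mpiHelper.size() << " process(es)\n";

    // ... create the grid, load-balance it, run the simulator ...
    return 0;
}
```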
so far, the linker bailed out due to duplicate definitions of
variables if multiple compile units used the same type tag. This is
problematic if the sources are split into separate compile units that
use the same type tag; in particular, this applies to traditional
libraries.
Due to various C++ peculiarities, this patch complicates the internal
implementation of the property system quite a bit, but given that its
usage (as well as the compile time) stays unchanged, I do not consider
this to be a big problem. Note that the introspection code is
particularly problematic because it needs static initializers that do
not cause the linker to choke in the case of multiple compile units.
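The underlying C++ issue and one standard way around it can be
sketched like this (an illustration, not the actual property system
code):

```cpp
#include <iostream>

// stand-in for the property system's registration routine; in the real,
// multi-compile-unit setting it is defined in exactly one .cpp file
bool registerTypeTag(const char* name)
{
    std::cout << "registering type tag " << name << "\n";
    return true;
}

// a plain namespace-scope variable like
//
//     bool lensTypeTagRegistered = registerTypeTag("LensProblem");
//
// breaks as soon as the header containing it is pulled into two compile
// units: the linker sees two definitions of the same symbol and bails out.
// tying the definition to a class template avoids this, because static data
// members of templates get weak (COMDAT) linkage: every compile unit may
// emit them, the linker keeps a single copy, and the initializer runs once.
template <class TypeTag>
struct TypeTagRegistry
{ static const bool registered; };

template <class TypeTag>
const bool TypeTagRegistry<TypeTag>::registered =
    registerTypeTag(TypeTag::name());

// hypothetical type tag for the lens problem
struct LensProblemTypeTag
{ static const char* name() { return "LensProblem"; } };

int main()
{
    // odr-using the member from any compile unit triggers the registration
    return TypeTagRegistry<LensProblemTypeTag>::registered ? 0 : 1;
}
```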
Finally, to prevent future regressions, this patch adds a unit test
for the lens problem which uses multiple compile units. (This test is
called lens_immiscible_ecfv_ad_mcu and is basically identical to the
existing lens_immiscible_ecfv_ad test; I thus think that it is
pretty unimaginative -- improvement proposals are welcome.)
there seems to be only a *very* limited amount of interest, the code
of the model is quite complex, and there are currently no suitable
discretizations for free-flow equations in eWoms (i.e., the model
tends to be very unstable and oscillates a lot). Combined, all of this
makes maintaining the model a pain in the neck, so let's remove it
until some interest in these kinds of problems surfaces and
appropriate discretizations -- like staggered grid methods -- are
available.
This works by having a "focus degree of freedom" during
linearization: when evaluating the local residual, all derivatives of
the residual/fluxes are taken with respect to the primary variables of
that DOF.
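The seeding of the automatic differentiation objects can be sketched
with opm-material's dense-AD `Evaluation` class (how the eWoms
linearizer actually wires this up is not shown; the helper below is
made up for illustration):

```cpp
#include <array>
#include <opm/material/densead/Evaluation.hpp>

template <int numEq>
using Evaluation = Opm::DenseAd::Evaluation<double, numEq>;

// convert the primary variables of one degree of freedom to AD objects: only
// the focus DOF is seeded with derivatives, everything else is a constant,
// so the local residual yields exactly the derivatives w.r.t. the primary
// variables of the focus DOF
template <int numEq>
std::array<Evaluation<numEq>, numEq>
adPrimaryVars(const std::array<double, numEq>& priVars, bool isFocusDof)
{
    std::array<Evaluation<numEq>, numEq> result;
    for (int pvIdx = 0; pvIdx < numEq; ++pvIdx) {
        if (isFocusDof)
            // derivative w.r.t. its own primary variable is 1, all others 0
            result[pvIdx] = Evaluation<numEq>::createVariable(priVars[pvIdx], pvIdx);
        else
            // not the focus DOF: carry the value but no derivatives
            result[pvIdx] = Evaluation<numEq>::createConstant(priVars[pvIdx]);
    }
    return result;
}
```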
The two main offenders were the Forchheimer velocity model and the
model for the Stokes equations. To ensure that they continue to work,
the "powerinjection" and the "stokestest2c" problems are now compiled
and tested with both automatic differentiation and finite differences,
and the results of these tests are compared against the same reference
solution.
The majority of the time required to develop this patch was actually
spent on testing: all tests compile and pass with debugging and
aggressive optimization flags with at least GCC 5, GCC 7 and clang
3.8, as well as Dune 2.3 and 2.4. Also, the results of flow_ebos stay
identical for Norne, whilst the performance difference is below the
measurement noise on my machine. (The version with this patch applied
was actually about 1% faster.)