Energy conservation is enabled by specifying either TEMP or
THERMAL in the deck. The deck also needs to contain the relevant fluid and rock
heat properties.
The blackoil + energy equations are solved fully implicitly.
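As a rough illustration, detecting the option could look like the sketch below; `energyIsEnabled()` is a hypothetical name, and only the `hasKeyword()` call mirrors the opm-common Deck interface.

```cpp
// Minimal sketch, assuming a deck class with a hasKeyword() member as
// in opm-common's Opm::Deck; energyIsEnabled() is a hypothetical name.
template <class Deck>
bool energyIsEnabled(const Deck& deck)
{
    // TEMP or THERMAL in the deck switches on the energy equation
    return deck.hasKeyword("TEMP") || deck.hasKeyword("THERMAL");
}
```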
It seems like most build systems pass a -DHAVE_CONFIG_H flag to the
compiler, which still causes `#if HAVE_CONFIG_H` to evaluate to false
even though it is clearly supposed to be triggered.
That said, I do not really see a good reason why the inclusion of the
`config.h` file should be guarded in the first place: the file is
guaranteed to always be available by proper build systems, and if it were
not included, the build would either break at the linking stage or -- at the
very least -- the runtime behavior of the resulting libraries would be
very awkward.
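For reference, the two patterns in question look roughly like this (a minimal sketch, not a quote from the affected headers):

```cpp
// Guarded form discussed above: the include is only compiled when the
// preprocessor evaluates HAVE_CONFIG_H to a non-zero value.
#if HAVE_CONFIG_H
#include "config.h"
#endif

// Unconditional form advocated here (the guard is simply dropped):
//     #include "config.h"
```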
It has been replaced with the faster local-ad-based code, which is now
part of the integrated flow.cpp application.
The old sequential implicit polymer simulators are not removed.
Note 1: The initialization code now always considers 3 phases.
For 2-phase cases a trivial (0) state is returned.
Note 2: The initialization code does not compute a BlackoilStats,
but instead passes the initialization object with the initial state.
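As an illustration of the first note, the convention could be sketched like this; all names below are hypothetical and not taken from the actual initialization code:

```cpp
#include <array>

// Hypothetical sketch: the initialization always returns three phase
// values; phases that are not active in a 2-phase run get a trivial
// (zero) entry.
std::array<double, 3> padToThreePhases(const std::array<bool, 3>& phaseActive,
                                       const std::array<double, 3>& computed)
{
    std::array<double, 3> result{};
    for (int phase = 0; phase < 3; ++phase)
        result[phase] = phaseActive[phase] ? computed[phase] : 0.0;
    return result;
}
```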
The motivation for this PR is that the build currently fails on my
Ubuntu 17.10 laptop with two processes because that machine "only" has
8 GB of RAM (granted, the optimization options may have been a bit too
excessive). Under the new scheme, each specialization of the simulator
is put into a separate compile unit which is part of
libopmsimulators. This has the advantages that the specialized
simulators and the main binary automatically stay consistent, that
compilation is faster (2m25s vs. 4m16s on my machine) because all
compile units can be built in parallel, and that compilation takes up
less RAM because there is no need to instantiate all specializations
in a single compile unit.
On the minus side, all specializations must now always be compiled,
the approach means slightly more work for the maintainers, and the
flow_* startup code gets even more complicated.
In particular, this implied some changes to the MPI initialization
code. Since dune-fem's GridPart class currently has issues with
CpGrid's implementation of loadBalance(), parallel computations still
do not work if dune-fem is around, but at least sequential ones now
do even if MPI is enabled.
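The scheme roughly amounts to moving the explicit instantiation of each simulator specialization into its own .cpp file inside libopmsimulators; a sketch with hypothetical, stand-in names:

```cpp
// flow_blackoil_example.cpp -- hypothetical compile unit inside
// libopmsimulators; only this file instantiates the blackoil variant.
#include <vector>

// Stand-in for the real simulator class template.
template <class TypeTag>
class FlowMainExample
{
public:
    int execute(int argc, char** argv) { return 0; }
private:
    std::vector<double> state_;
};

struct BlackoilTypeTagExample {};  // stand-in for the real type tag

// Explicit instantiation: the template is compiled exactly once, here,
// and the main binary merely links against the resulting object file.
template class FlowMainExample<BlackoilTypeTagExample>;
```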
No extra equation is added for polymer in the well equations.
Separate executables are added for polymer (flow_ebos_polymer)
and solvent (flow_ebos_solvent).
Tested and verified on the test cases in polymer_test_suite.
This PR should not affect the performance or results of the blackoil
simulator.
The actual pressure and transport model classes are now no longer fixed,
but taken as template template parameters; the grid and well model
are also template parameters for both the sequential model and the simulator class,
although at this point only StandardWells is expected to work with
the sequential model.
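A minimal sketch of the parameterization described above, with hypothetical class names:

```cpp
// Hypothetical sketch: the pressure and transport models are passed as
// template template parameters, and the grid and well model (e.g.
// StandardWells) as ordinary type parameters.
template <template <class, class> class PressureModel,
          template <class, class> class TransportModel,
          class Grid,
          class WellModel>
class SequentialModelExample
{
public:
    using Pressure  = PressureModel<Grid, WellModel>;
    using Transport = TransportModel<Grid, WellModel>;
};
```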
inconsistent and unnecessary.
This is purely a cosmetic change; the only exception was a function with
the generic name 'split', which was renamed to splitParam to avoid confusion.
Motivated by
- proliferation of identical code
- the need to avoid strange behaviour with the "." directory on some boost versions
- the potential for further refactoring to avoid boost entirely
This information is already part of the EclipseState. The reason why
this should IMO be avoided is that it enforces an implementation
detail of the simulator (the ordering of the permeability matrices) on the
well model. If this needs to be done for performance reasons, IMO it
would be smarter to pass an array of matrices instead of passing a raw
array of doubles. I doubt that this is necessary, though: completing
the full Norne deck takes about 0.25 seconds longer on my machine,
which is substantially less than 0.1% of the total runtime.
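To make the suggestion concrete, the two interface variants could look roughly like this; the function names and the use of Dune::FieldMatrix are assumptions, not the actual well-model API:

```cpp
#include <cstddef>
#include <vector>
#include <dune/common/fmatrix.hh>

using PermMatrix = Dune::FieldMatrix<double, 3, 3>;

// Variant discussed above: a raw array of doubles, which forces the
// well model to know how the simulator orders the matrix entries.
void setPermeability(const double* permValues, std::size_t numCells);

// Suggested variant: pass the matrices themselves, so the ordering
// stays an implementation detail of the matrix type.
void setPermeability(const std::vector<PermMatrix>& perm);
```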