- Change the brief description slightly
- Do not print anything anymore if there are no unused parameters
- Change the boilerplate text for printing the parameters to the PRT/DBG files
In part, this has been requested by [at]atgeirr.
Note: The FlowLinearSolverVerbosity, FlowNewtonMaxIterations, and
FlowNewtonMinIterations parameters are still prefixed because they
clash with parameters registered deeply within eWoms.
While they no longer appear in the help message, they are still
present in the code and can be specified and used as normal.
Also, this patch makes `--print-parameters=1` and `--print-properties=1`
work.
This has several advantages:
- A consistent and complete help message is now printed when passing the `-h` or `--help` command line parameters. Most notably, this makes it possible to generically implement tab completion of parameters for bash.
- The full list of runtime parameters can now be printed before the simulator has been run.
- All runtime parameters understood by ebos can be specified.
- No more hacks to marry the two parameter systems.
- Command line parameters now follow the standard Unix convention, i.e., `--param-name=value` instead of `param_name=value`.

On the negative side, some parameters have been renamed and the syntax
has changed, so calls to `flow` that specify parameters must be adapted.
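As an illustration of the syntax change only (the parameter name below is hypothetical and merely shows the convention; consult `flow --help` for the parameters that are actually registered), a call that previously might have looked like `flow CASE.DATA max_iterations=20` would now be written as `flow CASE.DATA --max-iterations=20`.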
Energy conservation is enabled by specifying either TEMP or
THERMAL in the deck. The deck also needs to contain the relevant fluid and rock
heat properties.
The blackoil + energy equations are solved fully implicitly.
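For orientation, here is one schematic way to write the kind of energy balance that gets added to the blackoil system; this is the standard porous-media form, and its notation is an assumption of this sketch, not taken from the code:

$$
\frac{\partial}{\partial t}\Big(\phi \sum_{\alpha} S_\alpha \rho_\alpha u_\alpha + (1-\phi)\,\rho_r c_r T\Big)
+ \nabla\cdot\Big(\sum_{\alpha} \rho_\alpha h_\alpha \mathbf{v}_\alpha\Big)
- \nabla\cdot\big(\lambda_T \nabla T\big) = q_E,
$$

where $S_\alpha$, $\rho_\alpha$, $u_\alpha$, $h_\alpha$ and $\mathbf{v}_\alpha$ are the saturation, density, specific internal energy, specific enthalpy and Darcy velocity of phase $\alpha$, $\rho_r c_r$ is the volumetric heat capacity of the rock, $\lambda_T$ the total heat conductivity and $q_E$ an energy source term. This balance is assembled and solved together with the mass balances in the same fully implicit system.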
It seems like most build systems pass a `-DHAVE_CONFIG_H` flag to the
compiler, yet `#if HAVE_CONFIG_H` still ends up being false while it
clearly is supposed to be triggered.
That said, I do not really see a good reason why the inclusion of the
`config.h` file should be guarded in the first place: the file is
guaranteed to always be available by proper build systems, and if it was
not included, the build either breaks at the linking stage or, at the
very least, the runtime behavior of the resulting libraries will be
very awkward.
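A minimal sketch of the change this implies for the sources (illustrative only, not a literal excerpt from the patch):

```cpp
// Before: the include was guarded, so a build system that defines
// HAVE_CONFIG_H in an unexpected way silently skips config.h.
#if HAVE_CONFIG_H
#include "config.h"
#endif

// After: config.h is included unconditionally, since a proper build
// system always generates it; a missing include would at best fail at
// link time and at worst change the runtime behavior.
#include "config.h"
```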
It has been replaced with the faster local-AD-based code that is now
part of the integrated flow.cpp application.
We do not remove the old sequential implicit polymer simulators.
Note 1: The initialization code now always considers 3 phases.
For 2-phase cases a trivial (0) state is returned.
Note 2: The initialization code does not compute a BlackoilStats,
but instead passes the initialization object with the initial state.
The motivation for this PR is that currently the build fails on my
Ubuntu 17.10 laptop with two processes because that machine "only" has
8 GB of RAM (granted, the optimization options may have been a bit too
excessive). Under the new scheme, each specialization of the simulator
is put into a separate compile unit which is part of
libopmsimulators. This has the advantages that the specialized
simulators and the main binary automatically stay consistent, that
compilation is faster (2m25s vs 4m16s on my machine) because all
compile units can be built in parallel, and that compilation takes up
less RAM because there is no need to instantiate all specializations
in a single compile unit.
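A minimal sketch of the idea, with all names invented for illustration (the actual headers, type tags and file layout in opm-simulators differ):

```cpp
// flow_main.hpp (hypothetical): the simulator entry point is a
// template whose definition is visible to every compile unit that
// wants to instantiate it.
template <class TypeTag>
int flowMain(int argc, char** argv)
{
    // ... set up and run the simulator variant selected by TypeTag ...
    return 0;
}

// flow_blackoil.cpp (hypothetical): one compile unit per
// specialization, compiled into libopmsimulators. Each unit only
// instantiates a single specialization, so the units can be built in
// parallel and none of them needs the RAM for all variants at once.
struct BlackoilTypeTag {}; // stand-in for the real type tag
template int flowMain<BlackoilTypeTag>(int, char**);

// flow.cpp (hypothetical): the main binary merely selects one of the
// pre-built specializations at startup and calls it.
```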
On the minus side, all specializations must now always be compiled,
the approach means slightly more work for the maintainers, and the
flow_* startup code gets even more complicated.
In particular, this implied some changes to the MPI initialization
code. Since dune-fem's GridPart class currently has issues with
CpGrid's implementation of loadBalance(), parallel computations still
do not work if dune-fem is around, but at least sequential ones now
do, even if MPI is enabled.